**Dataset columns**

| Column | Type | Range / classes |
|---|---|---|
| text | string | lengths 87 – 777k |
| meta.hexsha | string | lengths 40 – 40 |
| meta.size | int64 | 682 – 1.05M |
| meta.ext | string | 1 class |
| meta.lang | string | 1 class |
| meta.max_stars_repo_path | string | lengths 8 – 226 |
| meta.max_stars_repo_name | string | lengths 8 – 109 |
| meta.max_stars_repo_head_hexsha | string | lengths 40 – 40 |
| meta.max_stars_repo_licenses | list | lengths 1 – 5 |
| meta.max_stars_count | int64 | 1 – 23.9k (nullable) |
| meta.max_stars_repo_stars_event_min_datetime | string | lengths 24 – 24 (nullable) |
| meta.max_stars_repo_stars_event_max_datetime | string | lengths 24 – 24 (nullable) |
| meta.max_issues_repo_path | string | lengths 8 – 226 |
| meta.max_issues_repo_name | string | lengths 8 – 109 |
| meta.max_issues_repo_head_hexsha | string | lengths 40 – 40 |
| meta.max_issues_repo_licenses | list | lengths 1 – 5 |
| meta.max_issues_count | int64 | 1 – 15.1k (nullable) |
| meta.max_issues_repo_issues_event_min_datetime | string | lengths 24 – 24 (nullable) |
| meta.max_issues_repo_issues_event_max_datetime | string | lengths 24 – 24 (nullable) |
| meta.max_forks_repo_path | string | lengths 8 – 226 |
| meta.max_forks_repo_name | string | lengths 8 – 109 |
| meta.max_forks_repo_head_hexsha | string | lengths 40 – 40 |
| meta.max_forks_repo_licenses | list | lengths 1 – 5 |
| meta.max_forks_count | int64 | 1 – 6.05k (nullable) |
| meta.max_forks_repo_forks_event_min_datetime | string | lengths 24 – 24 (nullable) |
| meta.max_forks_repo_forks_event_max_datetime | string | lengths 24 – 24 (nullable) |
| meta.avg_line_length | float64 | 15.5 – 967k |
| meta.max_line_length | int64 | 42 – 993k |
| meta.alphanum_fraction | float64 | 0.08 – 0.97 |
| meta.converted | bool | 1 class |
| meta.num_tokens | int64 | 33 – 431k |
| meta.lm_name | string | 1 class |
| meta.lm_label | string | 3 classes |
| meta.lm_q1_score | float64 | 0.56 – 0.98 |
| meta.lm_q2_score | float64 | 0.55 – 0.97 |
| meta.lm_q1q2_score | float64 | 0.5 – 0.93 |
| text_lang | string | 53 classes |
| text_lang_conf | float64 | 0.03 – 1 |
| label | float64 | 0 – 1 |
```python
import sympy as sm
import sympy.physics.mechanics as me
sm.init_printing()
```
In the video I incorrectly typed `q1, q2, q3, q4 = sm.symbols('q1:5')`. It is corrected below:
```python
q1, q2, q3, q4, q5 = sm.symbols('q1:6')
l1, l2, l3, l4 = sm.symbols('l1:5')
```
```python
N, A, B, C = sm.symbols('N, A, B, C', cls=me.ReferenceFrame)
```
```python
A.orient_body_fixed(N, (q1, q2, 0), 'ZXZ')
A.dcm(N)
```
```python
B.orient_axis(A, q3, A.x)
```
In the video I incorrectly typed `C.orient_body_fixed(B, (q3, q4, 0), 'XZX')`. It is corrected below:
```python
C.orient_body_fixed(B, (q4, q5, 0), 'XZX')
```
```python
r_P1_P2 = l1*A.z
r_P1_P2
```
```python
r_P2_P3 = l2*B.z
r_P2_P3
```
```python
r_P3_P4 = l3*C.z - l4*C.y
r_P3_P4
```
```python
r_P1_P4 = r_P1_P2 + r_P2_P3 + r_P3_P4
r_P1_P4
```
```python
r_P1_P4.express(N)
```
```python
r_P1_P4.express(N).simplify()
```
```python
r_P1_P4.express(B)
```
```python
r_P1_P4.free_symbols(B)
```
```python
r_P1_P4.free_symbols(N)
```
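As a quick numeric sanity check, the hypothetical cell below substitutes arbitrary illustrative values for the angles and lengths (these numbers are not from the video) and reads off the measure numbers of the position vector in the N frame:
```python
# Arbitrary illustrative values (not from the video), just to check the expressions evaluate.
repl = {q1: 0.1, q2: 0.2, q3: 0.3, q4: 0.4, q5: 0.5,
        l1: 1.0, l2: 1.5, l3: 2.0, l4: 0.5}
r_P1_P4.express(N).subs(repl).to_matrix(N)
```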
```python
```
*[row metadata] file: content/notebooks/vectors.ipynb · repo: moorepants/me41055 · license: CC-BY-4.0 · size: 3,806 bytes · tokens: 434 · text_lang: eng_Latn*
```python
# 그래프, 수학 기능 추가
# Add graph and math features
import pylab as py
import numpy as np
import numpy.linalg as nl
# 기호 연산 기능 추가
# Add symbolic operation capability
import sympy as sy
```
```python
sy.init_printing()
```
# 2차 적분<br>Second Order Numerical Integral
다시 면적 1인 반원을 생각해 보자.<br>
Again, let's think about a half circle with area of 1.
```python
import plot_num_int as pi
```
```python
r = pi.radius_of_half_circle_area(1)
```
```python
pi.plot_a_half_circle_of_area(1)
pi.axis_equal_grid_True()
```
이번에는 3 지점에서의 함수값을 이용하는 심슨규칙을 이용해서 구해 보기로 하자.<br>
This time, let's integrate by the Simpson's rule using function values at three points.
## 심슨 규칙<br>Simpson's Rule
마찬가지로 일정 간격으로 $x$ 좌표를 나누어 보자.<br>
Same as before, let's divide $x$ coordinates in a constant interval.
```python
n = 10
pi.plot_half_circle_with_stems(n, 1)
```
마지막 두 구간을 생각해 보기로 하자.<br>
Let's just think about the last two segments.
```python
n = 10
pi.plot_half_circle_with_stems(n, 1)
x_array, y_plus = pi.get_half_circle_xy_theta_space(1)
x_array_bar, y_array_bar = pi.get_half_circle_xy_linspace(n, 1)
# 마지막 두 구간에 해당하는 x y 값을 선택
# Choose x y values of the last two intervals
x_last_two_array = x_array[x_array_bar[-3] < x_array]
y_last_two_array = y_plus[x_array_bar[-3] < x_array]
py.fill_between(x_last_two_array, y_last_two_array)
py.axis('equal')
py.grid(True)
```
해당 넓이를 구하기 위해, 이 세 점을 지나는 2차 다항식을 찾아서 적분할 수 있을 것이다<br>
To get the area, we would be able to find a second order polynomal passing through these three points and integrate.
문제를 좀 더 쉽게 만들기 위해 해당 면적을 원점 주위로 평행 이동 시켜 보자.<br>
To make the problem simpler, let's translate the area around the origin.
```python
delta_x = x_array_bar[1]-x_array_bar[0]
py.plot(x_array, y_plus, alpha=0.0)
py.plot(x_array_bar[-3:], y_array_bar[-3:], '.')
# 마지막 두 구간을 표시
# Indicate last two intervals
py.fill_between(x_last_two_array, y_last_two_array)
# x 좌표 표시
# Indicate x coordinates
py.text(x_last_two_array[0], -0.1, '$x_{n-2}$', horizontalalignment='center')
py.text(x_last_two_array[-1], -0.1, '$x_{n}$', horizontalalignment='center')
# y 좌표 표시
# Indicate x coordinates
py.text(x_array_bar[-3], y_array_bar[-3], '$f(x_{n-2})$', horizontalalignment='center', verticalalignment='bottom')
py.text(x_array_bar[-2], y_array_bar[-2], '$f(x_{n-1})$', horizontalalignment='center', verticalalignment='bottom')
py.text(x_array_bar[-1], y_array_bar[-1], '$f(x_{n})$', horizontalalignment='center', verticalalignment='bottom')
# 평행이동한 면적
# Translated Area
py.plot(x_array_bar[-3:]-x_array_bar[-2], y_array_bar[-3:], '.')
py.fill_between(x_last_two_array-x_array_bar[-2], y_last_two_array)
# x 좌표 표시
# Indicate x coordinates
py.text(-delta_x, -0.1, r'$-\Delta x$', horizontalalignment='center')
py.text(delta_x, -0.1, r'$+\Delta x$', horizontalalignment='center')
# y 좌표 표시
# Indicate x coordinates
py.text(-delta_x, y_array_bar[-3], '$y_0$', horizontalalignment='center', verticalalignment='bottom')
py.text( 0, y_array_bar[-2], '$y_1$', horizontalalignment='center', verticalalignment='bottom')
py.text(+delta_x, y_array_bar[-1], '$y_2$', horizontalalignment='center', verticalalignment='bottom')
py.axis('equal')
py.grid(True)
```
$$
y=a_0 x^2 + a_1 x + a_2
$$
원래 위치의 면적과 평행이동한 면적은 같다.<br>The translated area and the original area are equal.
평행이동한 면적의 세 점을 살펴 보자.<br>Let's take a look at the three points of the translated area.
$$
\begin{align}
p_0&=\left(-\Delta x, y_0\right) \\
p_1&=\left(0, y_1\right) \\
p_2&=\left(\Delta x, y_2\right)
\end{align}
$$
```python
delta_x, y_m, y_0, y_p = sy.symbols('Delta_x, y_0, y_1, y_2', real=True)
```
```python
points = (-delta_x, y_m), (0, y_0), (delta_x, y_p)
```
```python
points
```
2차 다항식은 다음과 같은 형태를 가진다.<br>
A second order polynomial would take following form.
```python
a0, a1, a2, x = sy.symbols('a0, a1, a2, x', real=True)
f = a0 * x**2 + a1 * x + a2
```
```python
f
```
위 세 점을 모두 지나는 2차 곡선을 생각해 보자.<br>Let's think about a second order polynomal passing all three points above.
$$
\begin{align}
y_0&=a_0 \left(-\Delta x\right)^2 + a_1 \left(-\Delta x\right) + a_2 \\
y_1&=a_2 \\
y_2&=a_0 \left(\Delta x\right)^2 + a_1 \left(\Delta x\right) + a_2
\end{align}
$$
```python
eq_points = [sy.Eq(p[-1], f.subs(x, p[0])) for p in points]
```
```python
eq_points
```
계수 $a_i$에 관하여 풀어 보자.<br>Let's try to solve for the coefficients $a_i$.
```python
a_sol = sy.solve(eq_points, (a0, a1, a2))
```
```python
a_sol
```
## 2차 다항식의 정적분<br>Definite Integral of a Second Order Polynomial
이제 $f(x)$를 $x$에 관하여 $-\Delta x$ 부터 $\Delta x$까지 적분해 보자.<br>Now let's integrate $f(x)$ about $x$ from $-\Delta x$ to $\Delta x$.
```python
integral = sy.integrate(f, (x, -delta_x, delta_x))
```
```python
integral
```
계수를 대입하고 정리해 보자.<br>Let's substitute the coefficients and simplify.
```python
simpson = sy.simplify(integral.subs(a_sol))
```
```python
simpson
```
예를 들어 C 언어 코드로는 다음과 같이 가능하다<br>For example, in C programming language, following expression would be possible.
```python
sy.ccode(simpson)
```
## 심슨 규칙 구현<br>Implementing Simpson's Rule
한번에 두 구간의 면적을 계산한다.<br>
In one iteration, calculate the area of two intervals.
$$
Area = F_0 + F_2 + \ldots + F_{n-2}
$$
$$
F_k = \frac{\Delta x}{3}\left[f(x_k)+4 \cdot f(x_{k+1}) + f(x_{k+2})\right]
$$
```python
def get_delta_x(xi, xe, n):
return (xe - xi)/n
```
```python
def num_int_2(f, xi, xe, n_partition, b_verbose=False):
"""
f : function to indegrate f(x)
xi : start of integration
xe : end of integration
n_partition : number of partitions within the interval
"""
# 구간의 갯수를 항상 짝수로 한다.
# Always use even number of intervals
if n_partition % 2:
n_partition += 1
delta_x = get_delta_x(xi, xe, n_partition)
# delta_x 값이 너무 작은 경우
# if delta_x is too small
if 1e-7 > abs(delta_x):
        raise ValueError(f'delta_x({delta_x:g}) too small')
x_array = py.linspace(xi, xe, n_partition+1)
assert 1e-3 > abs((abs(x_array[1] - x_array[0]) - delta_x)/delta_x), (
f"\ndelta_x = {delta_x} "
f"\nx_array[1] - x_array[0] = {x_array[1] - x_array[0]}"
)
delta_x_third = delta_x / 3.0
integration_result = 0.0
xp = x_array[0]
y0 = f(xp)
for i in range(1, n_partition, 2):
x1 = x_array[i]
x2 = x_array[i+1]
y1 = f(x1)
y2 = f(x2)
area_i = delta_x_third * (y0 + 4*y1 + y2)
if b_verbose: print('i = %2d, area_i = %g' % (i-1, area_i))
xp, y0 = x2, y2
integration_result += area_i
return integration_result
```
```python
n = 10
result = num_int_2(pi.half_circle, -r, r, n, b_verbose=True)
```
```python
n = 100
result = num_int_2(pi.half_circle, -r, r, n)
print('result =', result)
```
```python
%timeit -n 100 result = num_int_2(pi.half_circle, -r, r, n)
```
```python
n = 2**8
result_256 = num_int_2(pi.half_circle, -r, r, n)
print('result =', result_256)
```
```python
%timeit -n 100 result = num_int_2(pi.half_circle, -r, r, n)
```
### $cos \theta$의 반 주기<br>Half period of $cos \theta$
```python
n = 10
result_cos = num_int_2(py.cos, 0, py.pi, n, b_verbose=True)
print('result =', result_cos)
```
```python
n = 100
result_cos = num_int_2(py.cos, 0, py.pi, n)
print('result =', result_cos)
```
### 1/4 원<br>A quarter circle
```python
n = 10
result_quarter = num_int_2(pi.half_circle, -r, 0, n, b_verbose=True)
print('result =', result_quarter)
```
```python
n = 100
result_quarter = num_int_2(pi.half_circle, -r, 0, n)
print('result =', result_quarter)
```
## 연습문제<br>Exercises
도전 과제 1 : 넓이 1인 반원의 예로 0차, 1차 적분과의 오차를 비교하시오.<br>Using the example of half circle with area 1, compare errors with zeroth and first order integrations.
```python
```
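One possible sketch for Challenge 1 follows. Since the zeroth- and first-order routines from the earlier notebooks are not defined in this notebook, simple rectangle and trapezoid rules are written here under that assumption:
```python
# A possible sketch for Challenge 1 (assumption: the earlier notebooks' 0th/1st order
# routines are not available here, so simple rectangle and trapezoid rules are used).
def num_int_0(f, xi, xe, n):
    # zeroth order: left-endpoint rectangles
    delta_x = get_delta_x(xi, xe, n)
    x_array = py.linspace(xi, xe, n+1)
    return sum(f(x) * delta_x for x in x_array[:-1])

def num_int_1(f, xi, xe, n):
    # first order: trapezoids
    delta_x = get_delta_x(xi, xe, n)
    x_array = py.linspace(xi, xe, n+1)
    y_array = py.array([f(x) for x in x_array])
    return (y_array[0] + 2.0 * y_array[1:-1].sum() + y_array[-1]) * delta_x * 0.5

n = 100
exact = 1.0  # the half circle was constructed to have area 1
for name, func in (('0th', num_int_0), ('1st', num_int_1), ('2nd', num_int_2)):
    approx = func(pi.half_circle, -r, r, n)
    print(f'{name} order: result = {approx:.6f}, error = {abs(approx - exact):.2e}')
```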
도전 과제 2 : 긴 지름 4, 짧은 지름 2인 타원의 면적의 절반을 심슨법으로 계산하시오. [[위키피디아](https://ko.wikipedia.org/wiki/%ED%83%80%EC%9B%90)]<br>Try this 2 : Calculate the half of area of an ellipse with long diameter 4 and short diameter 2 using the Simpson's rule. [[wikipedia](https://en.wikipedia.org/wiki/Ellipse)]
$$
\frac{x^2}{4^2} + \frac{y^2}{2^2} = 1
$$
```python
```
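A sketch for Challenge 2 is given below. It follows the equation above literally (semi-axes $a=4$ and $b=2$), so the expected half-area is $\pi a b / 2$:
```python
# A possible sketch for Challenge 2, following the ellipse equation above
# (semi-axes a = 4, b = 2): the upper half is y(x) = b*sqrt(1 - (x/a)**2) on [-a, a].
a_ellipse, b_ellipse = 4.0, 2.0

def half_ellipse(x):
    return b_ellipse * py.sqrt(1.0 - (x / a_ellipse) ** 2)

n = 100
result_ellipse = num_int_2(half_ellipse, -a_ellipse, a_ellipse, n)
print('result =', result_ellipse)
print('exact  =', py.pi * a_ellipse * b_ellipse / 2)
```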
## 함수형 프로그래밍<br>Functional programming
$n$ 개의 간격에 대해 심슨 규칙 적용을 생각해 보자.<br>
Let's think about applying Simpson's rule over $n$ intervals.
$$
Area = F_0 + F_2 + \ldots + F_{n-2}
$$
$$
F_k = \frac{\Delta x}{3}\left[f(x_k)+4 \cdot f(x_{k+1}) + f(x_{k+2})\right]
$$
$$
\begin{align}
Area &= \frac{\Delta x}{3}\left[f(x_0)+4 \cdot f(x_{1}) + f(x_{2})\right] \\
&+ \frac{\Delta x}{3}\left[f(x_2)+4 \cdot f(x_{3}) + f(x_{4})\right] \\
&+ \frac{\Delta x}{3}\left[f(x_4)+4 \cdot f(x_{5}) + f(x_{6})\right] \\
& \ldots \\
&+ \frac{\Delta x}{3}\left[f(x_{n-4})+4 \cdot f(x_{n-3}) + f(x_{n-2})\right] \\
&+ \frac{\Delta x}{3}\left[f(x_{n-2})+4 \cdot f(x_{n-1}) + f(x_{n})\right] \\
\end{align}
$$
$$
\begin{align}
Area &= \frac{\Delta x}{3}\left[f(x_0)+f(x_{n})\right] \\
&+ \frac{4}{3}\Delta x \left[f(x_{1}) + f(x_{3}) + f(x_{5}) + \ldots + f(x_{n-3}) + f(x_{n-1})\right] \\
&+ \frac{2}{3}\Delta x \left[f(x_{2}) + f(x_{4}) + f(x_{6}) + \ldots + f(x_{n-4}) + f(x_{n-2})\right] \\
\end{align}
$$
```python
def even_sum_func(f, xi, xe, delta_x):
return sum(
map(
f,
py.arange(xi+delta_x, xe-delta_x*0.5, delta_x*2),
)
)
```
```python
def odd_sum_func(f, xi, xe, delta_x):
return sum(
map(
f,
py.arange(xi+(delta_x*2) , xe-delta_x*0.5, delta_x*2),
)
)
```
```python
def num_int_2_functional(f, xi, xe, n):
return (
(get_delta_x(xi, xe, n) * (1.0/3)) * (
f(xi) + f(xe)
+ 4 * even_sum_func(f, xi, xe, get_delta_x(xi, xe, n))
+ 2 * odd_sum_func(f, xi, xe, get_delta_x(xi, xe, n))
)
)
```
```python
n = 100
result_func = num_int_2_functional(pi.half_circle, -r, r, n)
print('result_func =', result_func)
```
```python
assert 1e-7 > abs(result - result_func), f"result = {result}, result_func = {result_func}"
```
```python
%timeit -n 100 result_func = num_int_2_functional(pi.half_circle, -r, r, n)
```
## 시험<br>Test
아래는 함수가 맞게 작동하는지 확인함<br>
Following cells verify whether the functions work correctly.
```python
import pylab as py
r = py.sqrt(1.0 / py.pi)
n = 10
delta_x = r/n
def half_circle(x):
return py.sqrt(r**2 - x ** 2)
assert 0.25 > num_int_2(half_circle, -r, 0, n)
assert 0.25 > num_int_2(half_circle, 0, r, n)
assert 0.25 > num_int_2_functional(half_circle, -r, 0, n)
assert 0.25 > num_int_2_functional(half_circle, 0, r, n)
```
```python
assert 0.1 > (abs(num_int_2(half_circle, -r, 0, n) - 0.25) * 4)
assert 0.1 > (abs(num_int_2(half_circle, 0, r, n) - 0.25) * 4)
assert 0.1 > (abs(num_int_2_functional(half_circle, -r, 0, n) - 0.25) * 4)
assert 0.1 > (abs(num_int_2_functional(half_circle, 0, r, n) - 0.25) * 4)
```
## Final Bell<br>마지막 종
```python
# stackoverfow.com/a/24634221
import os
os.system("printf '\a'");
```
```python
```
*[row metadata] file: 20_second_order.ipynb · repo: kangwonlee/19ECA-30-num-int · license: BSD-3-Clause · size: 22,588 bytes · tokens: 4,225 · text_lang: kor_Hang*
# Cerebellar Model Articulation Controller(CMAC)
## Overview
The original idea behind the cerebellar model is simple: we want to design a model that is
1. **fast enough**
2. **able to fit functions**
### Fast enough
On the first point, traditional neural networks compute entirely with floating-point numbers. Floats have two clear drawbacks — they take up a lot of space and the arithmetic is not particularly fast — but also one clear advantage: high numerical precision.
To raise a model's computational efficiency without giving up much precision, there are two obvious angles: improve the model itself, or change how the numbers are stored.
Quantization is a family of techniques that improves efficiency by changing the numerical storage format. Modern neural networks are usually trained with 32-bit floats, may switch to 16-bit floats at inference time, and can go further with **int8 quantization** when deployed on edge devices.
In the same spirit, CMAC quantizes the input data as it enters the model to speed up computation. To go further, CMAC also introduces **hashing**, mapping nearby inputs to similar addresses through a table lookup.
Hashing also introduces an element of uncertainty. A hash is a compressive mapping, so different inputs may well be mapped to the same address, i.e. a **collision**. Seen another way, this is also a nonlinear transformation: two inputs that are far apart in the original space may end up very close after the mapping, or even collide.
### Fitting
To perform fitting, CMAC places an adaptive linear layer after the table lookup, producing a linear estimate from addresses to outputs.
Because hashing is used, a mapping table from inputs to addresses is effectively built. Ignoring collisions, a given input activates specific addresses, each address activates specific input units of the adaptive linear layer, and those units connect to the output.
Unlike a traditional neural network, where every neuron in a layer takes part in inference, in CMAC only the activated input units participate in the computation, which clearly speeds CMAC up as well.
## Notation
### Spaces
|Symbol|Meaning|
|:-:|:-:|
|$S$|input space|
|$M$|expanded address space|
|$MC$|length of the expanded address space|
|$A_c$|virtual memory space|
|$A_p$|physical memory space|
|$F$|output space|
### Data
|Symbol|Meaning|
|:-:|:-:|
|$\bm{s}$|input vector|
|$\bm{m}$|expanded matrix|
|$\bm{a}$|virtual memory-space vector|
|$\bm{d}$|physical memory-space vector|
|$\hat{y}$|predicted output|
|$y$|true output|
### Parameters
|Symbol|Meaning|
|:-:|:-:|
|s|input dimension|
|q|number of quantization levels|
|c|expanded address dimension|
|$N_p$|prime number used for hashing|
|$\bm{W}$|weight matrix|
## Forward pass
The overall CMAC pipeline is:
* discretize the input space $S$
* input space $S$ $\rightarrow$ expanded address space $M$
* expanded address space $M$ $\rightarrow$ virtual memory space $A_c$
* virtual memory space $A_c$ $\rightarrow$ physical memory space $A_p$
* physical memory space $A_p$ $\rightarrow$ output space $F$
Step zero is discretization, partly to speed up computation and partly to prepare for the hashing that follows.
Step one lifts each input component into a higher-dimensional representation.
Step two combines the lifted components from step one into a single vector.
Step three is the hash.
Step four is the adaptive linear fit.
### Discretization
The input is $\bm{s}=[s_1, s_2, \cdots, s_s]$.
Suppose the $n$-th dimension takes values in $[n_{min}, n_{max}]$ with quantization level $q_n$.
The discretized value of the $n$-th dimension is then
$$
\begin{equation}
s_n = \lceil\frac{(s_n-n_{min})}{(n_{max}-n_{min})}*(q_n-1)\rceil + 1
\end{equation}
$$
### Input space to expanded address space
Each input component is expanded to $c$ dimensions.
For a particular input component $s_n$ there is a corresponding expanded vector $\bm{m_n}$.
The expanded vector is computed as follows.
Define the following modulo operation
$$
\begin{equation}
\Psi(s_n) = \operatorname{mod}(s_n-1,\ c)+1
\end{equation}
$$
Then the $\Psi(s_n)$-th entry of $\bm{m_n}$ is $s_n$, and the remaining entries follow in sequence:
|$\bm{m_{n1}}$|$\bm{m_{n2}}$|$\cdots$|$\bm{m_{n\Psi(s_n)}}$|$\cdots$|$\bm{m_{nc}}$|
|:-:|:-:|:-:|:-:|:-:|:-:|
|$s_n+(c-\Psi(s_n)+1)$|$s_n+(c-\Psi(s_n)+2)$|$\cdots$|$s_n$|$\cdots$|$s_n+(c-\Psi(s_n))$|
### Expanded address space to virtual memory space
The expanded address space is an $s\times c$ matrix; to move to the virtual memory space, the entries of each column are combined into a single address, i.e. we perform the following operation
$$
\begin{equation}
\bm{m} =
\left[\begin{array}{cc}
m_{11}&m_{12}&\cdots&m_{1c} \\
\vdots&\ddots&\ddots&\vdots \\
m_{s1}&m_{s2}&\cdots&m_{sc}
\end{array}\right]
\end{equation}
$$
$$
\begin{equation}
\begin{split}
\bm{a}
&= [a_1, a_2, \cdots, a_c]^T \\
&= [m_{11}m_{21}\cdots m_{s1}, m_{12}m_{22}\cdots m_{s2}, \cdots, m_{1c}m_{2c}\cdots m_{sc}]^T
\end{split}
\end{equation}
$$
### Virtual memory space to physical memory space
This step is the hash; here it is implemented with a modulo operation, analogous to Eq. (2):
$$
\begin{equation}
\Psi(a_n) = \operatorname{mod}(a_n-1,\ N_p)+1
\end{equation}
$$
$$
\begin{equation}
\begin{split}
\bm{d}
&= [d_1, d_2, \cdots, d_c]^T \\
&= [\Psi(a_1), \Psi(a_2), \cdots, \Psi(a_c)]^T
\end{split}
\end{equation}
$$
### Physical memory space to output space
This step is a simple linear map.
What we obtained above are addresses; in CMAC, to speed things up, this final step is implemented as a table lookup.
The weight matrix is $\bm{W}\in\mathcal{R}^{c \times N_p}$,
where $c$ is the address dimension and $N_p$ is the modulo divisor; for division by $N_p$ the remainder obviously cannot exceed $N_p$.
The output is the sum of the weights at the corresponding address positions
$$
\begin{equation}
\hat{y} = \sum_{i=0}^{c}W[i, \Psi(a_i)]
\end{equation}
$$
The derivation above uses 1-based indexing; the implementation below counts from 0.
## Parameter learning
Define the loss function
$$
\begin{equation}
\mathcal{L} = ||\hat{y}-y||_2^2
\end{equation}
$$
Taking the partial derivative with respect to the weights,
$$
\begin{equation}
\begin{split}
\frac{\partial \mathcal{L}}{\partial \bm{W}}
&= \frac{||\hat{y}-y||_2^2}{\partial \bm{W}} \\
&= \frac{(\sum_{i=0}^{c}W[i, \Psi(a_i)]-y)^2}{\partial \bm{W}} \\
&= 2(\sum_{i=0}^{c}W[i, \Psi(a_i)]-y)\frac{\sum_{i=0}^{c}W[i, \Psi(a_i)]}{\partial \bm{W}}
\end{split}
\end{equation}
$$
$$
\begin{equation}
\frac{\partial \mathcal{L}}{W_{ij}} =
\left\{
\begin{array}{cc}
0,&d_i \neq j+1 \\
2(\sum_{i=0}^{c}W[i, \Psi(a_i)]-y),&else
\end{array}
\right.
\end{equation}
$$
which yields the parameter update rule
$$
\begin{equation}
\bm{W}(t+1) = \bm{W}(t) - \eta\frac{\partial \mathcal{L}}{\bm{W}(t)}
\end{equation}
$$
```python
import os
import random
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d.axes3d import Axes3D
import seaborn as sn
import imageio
%matplotlib inline
```
```python
class CMAC(object):
def __init__(self,
input_dim,
input_range,
q_level,
expend_dim,
hash_number):
"""
        input_dim: input dimension
        input_range: value range of each input, expected as a list
        q_level: quantization levels, expected as a list
        expend_dim: expanded address dimension
        hash_number: prime used for hashing
"""
self.input_dim = input_dim
self.input_range = np.array(input_range)
self.q_level = q_level
self.expend_dim = expend_dim
self.hash_number = hash_number
        # output weight matrix
random.seed(1024)
np.random.seed(1024)
self.weight_matrix = np.zeros((self.expend_dim, self.hash_number))
        # memory-space vectors from the most recent forward pass
self.real_vector = None
self.output_vector = None
def forward(self, x):
"""
        x: input vector
        """
        # ------------- discretization ---------------
input_q = self._quantification(np.array(x))
        # ------------- input space to expanded address space ------------------
# (self.expend_dim, self.input_dim)
expend_matrix = self._input2expend(input_q)
        # ------------- expanded address space to virtual memory space -----------------
# (self.expend_dim, 1)
virtual_vector = self._expend2virtual(expend_matrix)
        # ------------- virtual memory space to physical memory space ------------------
# (self.expend_dim, 1)
self.real_vector = self._virtual2real(virtual_vector)
        # ------------- physical memory space to output space --------------------
# (self.output_dim, 1)
self.output_value = self._real2output(self.real_vector)
return self.output_value
def optim(self, true_output, lr):
"""
        true_output: target output
        """
        # update mask
partial_matrix = np.zeros((self.expend_dim, self.hash_number))
partial_matrix[range(0, self.expend_dim), self.real_vector.reshape(-1)] = 1
self.weight_matrix -= lr * (self.output_value - true_output) * partial_matrix
def _quantification(self, input_vector):
"""
        input_vector: input vector
"""
input_q = np.zeros(self.input_dim, dtype=np.int32)
# (self.input_dim)
for i in range(self.input_dim):
_input_range = self.input_range[i, :]
input_q[i] = np.math.ceil((input_vector[i]-_input_range[0])*(self.q_level[i]-1)/(_input_range[1]-_input_range[0]))
return input_q
def _input2expend(self, input_q):
"""
        input_q: quantized input vector
"""
expend_matrix = np.zeros((self.input_dim, self.expend_dim))
for i in range(self.input_dim):
            # compute the modulo
phi_ = input_q[i] % self.expend_dim
if phi_ != 0:
# index < phi_
add_number_list_1 = np.array(range(0, phi_))
# index > phi_
add_number_list_2 = np.array(range(0, self.expend_dim-phi_))
expend_matrix[i, :phi_] = input_q[i] + self.expend_dim - phi_ + add_number_list_1
expend_matrix[i, phi_:] = input_q[i] + add_number_list_2
else:
expend_matrix[i] = input_q[i] + np.array(range(0, self.expend_dim))
return expend_matrix.astype(np.int32).T
def _expend2virtual(self, expend_matrix):
"""
        expend_matrix: expanded address-space matrix
"""
virtual_vector = np.zeros((self.expend_dim, 1), dtype=np.int32)
        # combine the components into a single address per column
mul_num = 1
for i in range(self.input_dim):
virtual_vector += expend_matrix[:, i:i+1] * mul_num
mul_num *= self.q_level[i] + self.expend_dim - 1
return virtual_vector
def _virtual2real(self, virtual_vector):
"""
        virtual_vector: virtual memory-space vector
"""
real_vector = np.zeros((self.expend_dim, 1), dtype=np.int32)
for i in range(self.expend_dim):
real_vector[i] = virtual_vector[i]%self.hash_number
return real_vector
def _real2output(self, real_vector):
"""
        real_vector: physical memory-space vector
"""
output_value = np.sum(self.weight_matrix[range(0, self.expend_dim), real_vector.reshape(-1)])
return output_value
```
```python
# test data
# input dimension
input_dim = 2
# input range
input_range = [[0, 6.2], [0, 6.2]]
# quantization levels
q_levle = [62, 62]
# expanded address dimension
expend_dim = 7
# hash prime
hash_number = 5001
# learning rate
lr = 0.1
# output dimension
output_dim = 1
# test input
input_vector = [0.2, 0.2]
# test (target) output
output_ = 1
my_cmac = CMAC(input_dim=input_dim,
               input_range=input_range,
               q_level=q_levle,
               expend_dim=expend_dim,
               hash_number=hash_number)
# quantized input
input_q = my_cmac._quantification(input_vector)
print("离散\n", input_q)
# input space to expanded address space
expend_matrix = my_cmac._input2expend(input_q)
print("扩充地址空间\n", expend_matrix)
# expanded address space to virtual memory space
virtual_vector = my_cmac._expend2virtual(expend_matrix)
print("虚拟存储空间\n", virtual_vector)
# virtual memory space to physical memory space
real_vector = my_cmac._virtual2real(virtual_vector)
print("实际存储空间\n", real_vector)
# physical memory space to output space
output_value = my_cmac._real2output(real_vector)
print("输出\t", output_value)
# parameter optimization
print("参数学习")
for step in range(10):
output_value_2 = my_cmac.forward(input_vector)
my_cmac.optim(output_, lr)
print("step:{}\t output_value:{:.6f}".format(step, output_value_2))
```
离散
[2 2]
扩充地址空间
[[7 7]
[8 8]
[2 2]
[3 3]
[4 4]
[5 5]
[6 6]]
虚拟存储空间
[[483]
[552]
[138]
[207]
[276]
[345]
[414]]
实际存储空间
[[483]
[552]
[138]
[207]
[276]
[345]
[414]]
输出 0.0
参数学习
step:0 output_value:0.000000
step:1 output_value:0.700000
step:2 output_value:0.910000
step:3 output_value:0.973000
step:4 output_value:0.991900
step:5 output_value:0.997570
step:6 output_value:0.999271
step:7 output_value:0.999781
step:8 output_value:0.999934
step:9 output_value:0.999980
```python
# Fit a function with the CMAC
# target function: f(x1, x2) = sin(x1)cos(x2)
# x1 range [-3, 3], 100 sample points
# x2 range [-3, 3], 100 sample points
# ------------------- hyperparameters -------------------
# input dimension
input_dim = 2
# input range
input_range = [[-3, 3], [-3, 3]]
# quantization levels
q_levle = [100, 100]
# expanded address dimension
expend_dim = 13
# hash prime
hash_number = 101
# learning rate
lr = 0.005
# maximum number of iterations
iteration = 40000
# ------------------- model -----------------------
my_cmac = CMAC(input_dim=input_dim,
               input_range=input_range,
               q_level=q_levle,
               expend_dim=expend_dim,
               hash_number=hash_number)
# ------------------- data -----------------------
# resolution of the generated data grid
sample_resolution = 100
x1, x2 = np.meshgrid(np.linspace(-3, 3, sample_resolution), np.linspace(-3, 3, sample_resolution))
train_x_list = np.concatenate((x1.reshape(-1, 1), x2.reshape(-1, 1)), axis=1)
train_y_list = np.sin(train_x_list[:, 0:1]) * np.cos(train_x_list[:, 1:])
# ------------------- training ------------------------
print("参数学习")
random.seed(1024)
print_frequency = 1
train_loss = 0
train_loss_dict = dict()
train_pred_recoder = list()
params_recoder = list()
# evaluate the prediction over the whole grid
pred_matrix = np.zeros((sample_resolution, sample_resolution))
for _x1 in range(sample_resolution):
for _x2 in range(sample_resolution):
pred_y = my_cmac.forward([x1[_x1, _x2], x2[_x1, _x2]])
pred_matrix[_x1, _x2] = pred_y
train_loss += (pred_y - np.sin(x1[_x1, _x2]) * np.cos(x2[_x1, _x2]))**2
train_loss /= sample_resolution ** 2
train_loss_dict[0] = train_loss
print("iterations:[{}/{}] avg loss:{:.6f}".format(0, iteration, train_loss))
train_loss = 0
train_pred_recoder.append(pred_matrix)
params_recoder.append(my_cmac.weight_matrix.copy())
# start training
for i in range(1, iteration+1):
sample_index = random.randint(0, len(train_x_list)-1)
sample_x = train_x_list[sample_index]
sample_y = train_y_list[sample_index]
pred_y = my_cmac.forward(sample_x)
my_cmac.optim(sample_y, lr)
if i % print_frequency == 0 or i % iteration == 0:
        # evaluate the prediction over the whole grid
pred_matrix = np.zeros((sample_resolution, sample_resolution))
for _x1 in range(sample_resolution):
for _x2 in range(sample_resolution):
pred_y = my_cmac.forward([x1[_x1, _x2], x2[_x1, _x2]])
pred_matrix[_x1, _x2] = pred_y
train_loss += (pred_y - np.sin(x1[_x1, _x2]) * np.cos(x2[_x1, _x2]))**2
train_loss /= sample_resolution * sample_resolution
train_loss_dict[i] = train_loss
print("iterations:[{}/{}] avg loss:{:.6f}".format(i, iteration, train_loss))
train_loss = 0
train_pred_recoder.append(pred_matrix)
params_recoder.append(my_cmac.weight_matrix.copy())
print_frequency = int(pow(1.1, len(train_loss_dict)))
```
参数学习
iterations:[0/40000] avg loss:0.249668
iterations:[1/40000] avg loss:0.248866
iterations:[2/40000] avg loss:0.248780
iterations:[3/40000] avg loss:0.247917
iterations:[4/40000] avg loss:0.247636
iterations:[5/40000] avg loss:0.247176
iterations:[6/40000] avg loss:0.247146
iterations:[7/40000] avg loss:0.247127
iterations:[8/40000] avg loss:0.247081
iterations:[10/40000] avg loss:0.246616
iterations:[12/40000] avg loss:0.245713
iterations:[14/40000] avg loss:0.244969
iterations:[15/40000] avg loss:0.244593
iterations:[18/40000] avg loss:0.244322
iterations:[21/40000] avg loss:0.244281
iterations:[24/40000] avg loss:0.243114
iterations:[28/40000] avg loss:0.241051
iterations:[30/40000] avg loss:0.238231
iterations:[35/40000] avg loss:0.235232
iterations:[36/40000] avg loss:0.235204
iterations:[42/40000] avg loss:0.232649
iterations:[49/40000] avg loss:0.231421
iterations:[56/40000] avg loss:0.229389
iterations:[64/40000] avg loss:0.226473
iterations:[72/40000] avg loss:0.222563
iterations:[80/40000] avg loss:0.219286
iterations:[88/40000] avg loss:0.217603
iterations:[91/40000] avg loss:0.215672
iterations:[98/40000] avg loss:0.213108
iterations:[105/40000] avg loss:0.209429
iterations:[119/40000] avg loss:0.203817
iterations:[133/40000] avg loss:0.198528
iterations:[147/40000] avg loss:0.193617
iterations:[161/40000] avg loss:0.187870
iterations:[175/40000] avg loss:0.183861
iterations:[196/40000] avg loss:0.176829
iterations:[210/40000] avg loss:0.171412
iterations:[238/40000] avg loss:0.163909
iterations:[259/40000] avg loss:0.157559
iterations:[287/40000] avg loss:0.151061
iterations:[315/40000] avg loss:0.141405
iterations:[343/40000] avg loss:0.134490
iterations:[378/40000] avg loss:0.126971
iterations:[420/40000] avg loss:0.119266
iterations:[462/40000] avg loss:0.111286
iterations:[504/40000] avg loss:0.104249
iterations:[560/40000] avg loss:0.094525
iterations:[616/40000] avg loss:0.085898
iterations:[679/40000] avg loss:0.077194
iterations:[742/40000] avg loss:0.069259
iterations:[819/40000] avg loss:0.062141
iterations:[903/40000] avg loss:0.054239
iterations:[994/40000] avg loss:0.047911
iterations:[1092/40000] avg loss:0.041417
iterations:[1197/40000] avg loss:0.035435
iterations:[1323/40000] avg loss:0.029391
iterations:[1449/40000] avg loss:0.025351
iterations:[1596/40000] avg loss:0.019751
iterations:[1757/40000] avg loss:0.016564
iterations:[1932/40000] avg loss:0.013696
iterations:[2128/40000] avg loss:0.011093
iterations:[2338/40000] avg loss:0.008763
iterations:[2576/40000] avg loss:0.006438
iterations:[2835/40000] avg loss:0.005285
iterations:[3115/40000] avg loss:0.004406
iterations:[3430/40000] avg loss:0.003611
iterations:[3773/40000] avg loss:0.003010
iterations:[4151/40000] avg loss:0.002522
iterations:[4564/40000] avg loss:0.002071
iterations:[5019/40000] avg loss:0.001858
iterations:[5523/40000] avg loss:0.001614
iterations:[6076/40000] avg loss:0.001406
iterations:[6685/40000] avg loss:0.001228
iterations:[7357/40000] avg loss:0.001124
iterations:[8092/40000] avg loss:0.001018
iterations:[8897/40000] avg loss:0.000920
iterations:[9793/40000] avg loss:0.000854
iterations:[10766/40000] avg loss:0.000783
iterations:[11844/40000] avg loss:0.000728
iterations:[13034/40000] avg loss:0.000680
iterations:[14336/40000] avg loss:0.000637
iterations:[15771/40000] avg loss:0.000600
iterations:[17346/40000] avg loss:0.000563
iterations:[19082/40000] avg loss:0.000526
iterations:[20993/40000] avg loss:0.000504
iterations:[23086/40000] avg loss:0.000477
iterations:[25396/40000] avg loss:0.000455
iterations:[27937/40000] avg loss:0.000442
iterations:[30730/40000] avg loss:0.000421
iterations:[33810/40000] avg loss:0.000406
iterations:[37191/40000] avg loss:0.000392
iterations:[40000/40000] avg loss:0.000380
```python
# ---------------------------- plotting ------------------------
# probe for collisions
real_address_heatmap = np.zeros((expend_dim, hash_number))
for train_sample in train_x_list:
pred_y = my_cmac.forward(train_sample)
real_address_heatmap[range(0, my_cmac.expend_dim), my_cmac.real_vector.reshape(-1)-1] += 1
plt.figure(figsize=(30, 10))
plt.title("Collisions")
sn.heatmap(real_address_heatmap)
plt.show()
# plot the training progress frames
fig = plt.figure(figsize=(20, 20))
for i in range(len(train_loss_dict)):
plt.cla()
plt.clf()
ax1 = fig.add_subplot(2, 2, 1, projection='3d')
ax1.set_title("Ground True", fontsize=20)
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
ax1.tick_params(axis='z', labelsize=20)
ax1.set_zlim(-1, 1)
ax1.plot_surface(x1, x2, np.sin(x1) * np.cos(x2), cmap="rainbow")
ax2 = fig.add_subplot(2, 2, 2, projection='3d')
ax2.set_title("Pred", fontsize=20)
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
ax2.tick_params(axis='z', labelsize=20)
ax2.set_zlim(-1, 1)
ax2.plot_surface(x1, x2, train_pred_recoder[i].reshape((sample_resolution, sample_resolution)), cmap="rainbow")
ax3 = fig.add_subplot(2, 2, 3)
ax3.set_title("loss & iteration:{}".format(list(train_loss_dict.keys())[i]), fontsize=20)
ax3.set_xlabel("iterations", fontsize=20)
ax3.set_ylabel("loss", fontsize=20)
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
ax3.plot(list(train_loss_dict.keys())[:i], list(train_loss_dict.values())[:i])
ax3.set_aspect(1.0/ax3.get_data_ratio(), adjustable="box")
ax4 = fig.add_subplot(2, 2, 4)
ax4.set_title("weight params & iteration:{}".format(list(train_loss_dict.keys())[i]), fontsize=20)
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
sn.heatmap(params_recoder[i], vmax=0.1, vmin=-0.1, cmap="YlGnBu")
plt.savefig("./images/temp/{}.png".format(i), dpi=70)
plt.show()
with imageio.get_writer("./images/{}.gif".format("CMAC_sin_cos"), mode="I", fps=10) as Writer:
for ind in range(len(train_loss_dict)):
image = imageio.imread("./images/temp/{}.png".format(ind))
os.remove("./images/temp/{}.png".format(ind))
Writer.append_data(image)
```
*[row metadata] file: CMAC.ipynb · repo: koolo233/NeuralNetworks · license: MIT · size: 522,139 bytes · tokens: 7,693 · text_lang: yue_Hant*
#Snippets and Programs from Chapter 4: Algebra and Symbolic Math with SymPy
```python
%matplotlib inline
```
```python
#P96/97: Basic factorization and expansion
from sympy import Symbol, factor, expand
x = Symbol('x')
y = Symbol('y')
expr = x**2 - y**2
f = factor(expr)
print(f)
# Expand
print(expand(f))
```
```python
#P97: Factorizing and expanding a complicated identity
from sympy import Symbol, factor, expand
x = Symbol('x')
y = Symbol('y')
expr = x**3 + 3*x**2*y + 3*x*y**2 + y**3
print('Original expression: {0}'.format(expr))
factors = factor(expr)
print('Factors: {0}'.format(factors))
expanded = expand(factors)
print('Expansion: {0}'.format(expanded))
```
```python
#P97: Pretty printing
from sympy import Symbol, pprint, init_printing
x = Symbol('x')
expr = x*x + 2*x*y + y*y
pprint(expr)
# Reverse order lexicographical
init_printing(order='rev-lex')
expr = 1 + 2*x + 2*x**2
pprint(expr)
```
*Since we have initialized pretty printing above, it will be active for all the output below this.*
```python
#P99: Print a series
'''
Print the series:
x + x**2 + x**3 + ... + x**n
____ _____ ____
2 3 n
'''
from sympy import Symbol, pprint, init_printing
def print_series(n):
# initialize printing system with
# reverse order
init_printing(order='rev-lex')
x = Symbol('x')
series = x
for i in range(2, n+1):
series = series + (x**i)/i
pprint(series)
if __name__ == '__main__':
n = input('Enter the number of terms you want in the series: ')
print_series(int(n))
```
```python
#P100: Substituting in values
from sympy import Symbol
x = Symbol('x')
y = Symbol('y')
expr = x*x + x*y + x*y + y*y
res = expr.subs({x:1, y:2})
res
```
```python
#P102: Print a series and also calculate its value at a certain point
'''
Print the series:
x + x**2 + x**3 + ... + x**n
____ _____ ____
2 3 n
and calculate its value at a certain value of x.
'''
from sympy import Symbol, pprint, init_printing
def print_series(n, x_value):
# initialize printing system with
# reverse order
init_printing(order='rev-lex')
x = Symbol('x')
series = x
for i in range(2, n+1):
series = series + (x**i)/i
pprint(series)
# evaluate the series at x_value
series_value = series.subs({x:x_value})
print('Value of the series at {0}: {1}'.format(x_value, series_value))
if __name__ == '__main__':
n = input('Enter the number of terms you want in the series: ')
x_value = input('Enter the value of x at which you want to evaluate the series: ')
print_series(int(n), float(x_value))
```
```python
# P104: Expression multiplier
'''
Product of two expressions
'''
from sympy import expand, sympify
from sympy.core.sympify import SympifyError
def product(expr1, expr2):
prod = expand(expr1*expr2)
print(prod)
if __name__=='__main__':
expr1 = input('Enter the first expression: ')
expr2 = input('Enter the second expression: ')
try:
expr1 = sympify(expr1)
expr2 = sympify(expr2)
except SympifyError:
print('Invalid input')
else:
product(expr1, expr2)
```
```python
#P105: Solving a linear equation
>>> from sympy import Symbol, solve
>>> x = Symbol('x')
>>> expr = x - 5 - 7
>>> solve(expr)
```
```python
#P106: Solving a quadratic equation
>>> from sympy import solve
>>> x = Symbol('x')
>>> expr = x**2 + 5*x + 4
>>> solve(expr, dict=True)
```
```python
#P106: Quadratic equation with imaginary roots
>>> from sympy import Symbol
>>> x=Symbol('x')
>>> expr = x**2 + x + 1
>>> solve(expr, dict=True)
```
```python
#P106/107: Solving for one variable in terms of others
>>> from sympy import Symbol, solve
>>> x = Symbol('x')
>>> a = Symbol('a')
>>> b = Symbol('b')
>>> c = Symbol('c')
>>> expr = a*x*x + b*x + c
>>> solve(expr, x, dict=True)
```
```python
#P107: Express s in terms of u, a, t
>>> from sympy import Symbol, solve, pprint
>>> s = Symbol('s')
>>> u = Symbol('u')
>>> t = Symbol('t')
>>> a = Symbol('a')
>>> expr = u*t + (1/2)*a*t*t - s
>>> t_expr = solve(expr,t, dict=True)
>>> t_expr
```
```python
#P108: Solve a system of Linear equations
>>> from sympy import Symbol
>>> x = Symbol('x')
>>> y = Symbol('y')
>>> expr1 = 2*x + 3*y - 6
>>> expr2 = 3*x + 2*y - 12
>>> solve((expr1, expr2), dict=True)
```
```python
#P109: Simple plot with SymPy
>>> from sympy.plotting import plot
>>> from sympy import Symbol
>>> x = Symbol('x')
>>> plot(2*x+3)
```
```python
#P110: Plot in SymPy with range of x as well as other attributes specified
>>> from sympy import plot, Symbol
>>> x = Symbol('x')
>>> plot(2*x + 3, (x, -5, 5), title='A Line', xlabel='x', ylabel='2x+3')
```
```python
#P112: Plot the graph of an input expression
'''
Plot the graph of an input expression
'''
from sympy import Symbol, sympify, solve
from sympy.core.sympify import SympifyError
from sympy.plotting import plot
def plot_expression(expr):
y = Symbol('y')
solutions = solve(expr, y)
expr_y = solutions[0]
plot(expr_y)
if __name__=='__main__':
expr = input('Enter your expression in terms of x and y: ')
try:
expr = sympify(expr)
except SympifyError:
print('Invalid input')
else:
plot_expression(expr)
```
```python
#P113: Plotting multiple functions
>>> from sympy.plotting import plot
>>> from sympy import Symbol
>>> x = Symbol('x')
>>> plot(2*x+3, 3*x+1)
```
```python
#P114: Plot of the two lines drawn in a different color
>>> from sympy.plotting import plot
>>> from sympy import Symbol
>>> x = Symbol('x')
>>> p = plot(2*x+3, 3*x+1, legend=True, show=False)
>>> p[0].line_color = 'b'
>>> p[1].line_color = 'r'
>>> p.show()
```
```python
#P116: Example of summing a series
>>> from sympy import Symbol, summation, pprint
>>> x = Symbol('x')
>>> n = Symbol('n')
>>> s = summation(x**n/n, (n, 1, 5))
>>> s.subs({x:1.2})
```
3.51206400000000
```python
#P117: Example of solving a polynomial inequality
>>> from sympy import Poly, Symbol, solve_poly_inequality
>>> x = Symbol('x')
>>> ineq_obj = -x**2 + 4 < 0
>>> lhs = ineq_obj.lhs
>>> p = Poly(lhs, x)
>>> rel = ineq_obj.rel_op
>>> solve_poly_inequality(p, rel)
```
[(-oo, -2), (2, oo)]
```python
#P118: Example of solving a rational inequality
>>> from sympy import Symbol, Poly, solve_rational_inequalities
>>> x = Symbol('x')
>>> ineq_obj = ((x-1)/(x+2)) > 0
>>> lhs = ineq_obj.lhs
>>> numer, denom = lhs.as_numer_denom()
>>> p1 = Poly(numer)
>>> p2 = Poly(denom)
>>> rel = ineq_obj.rel_op
>>> solve_rational_inequalities([[((p1, p2), rel)]])
```
(-oo, -2) U (1, oo)
```python
#P118: Solve a non-polynomial inequality
>>> from sympy import Symbol, solve, solve_univariate_inequality, sin
>>> x = Symbol('x')
>>> ineq_obj = sin(x) - 0.6 > 0
>>> solve_univariate_inequality(ineq_obj, x, relational=False)
```
(0.643501108793284, 2.49809154479651)
*[row metadata] file: chapter4/Chapter4.ipynb · repo: hexu1985/Doing.Math.With.Python · license: MIT · size: 72,623 bytes · tokens: 2,128 · text_lang: eng_Latn*
### Example 5: Laplace equation
In this tutorial we will look constructing the steady-state heat example using the Laplace equation. In contrast to the previous tutorials this example is entirely driven by the prescribed Dirichlet and Neumann boundary conditions, instead of an initial condition. We will also demonstrate how to use Devito to solve a steady-state problem without time derivatives and how to switch buffers explicitly without having to re-compile the kernel.
First, we again define our governing equation:
$$\frac{\partial ^2 p}{\partial x^2} + \frac{\partial ^2 p}{\partial y^2} = 0$$
We are again discretizing second-order derivatives using a central difference scheme to construct a diffusion problem (see tutorial 3). This time we have no time-dependent term in our equation though, since there is no term $p_{i,j}^{n+1}$. This means that we are simply updating our field variable $p$ over and over again, until we have reached an equilibrium state. In a discretised form, after rearranging to update the central point $p_{i,j}^n$ we have
$$p_{i,j}^n = \frac{\Delta y^2(p_{i+1,j}^n+p_{i-1,j}^n)+\Delta x^2(p_{i,j+1}^n + p_{i,j-1}^n)}{2(\Delta x^2 + \Delta y^2)}$$
And, as always, we first re-create the original implementation to see what we are aiming for. Here we initialise the field $p$ to $0$ and apply the following bounday conditions:
$p=0$ at $x=0$
$p=y$ at $x=2$
$\frac{\partial p}{\partial y}=0$ at $y=0, \ 1$
**Developer note:**
The original tutorial stores the field data in the layout `(ny, nx)`. Until now we have used `(x, y)` notation for creating our Devito examples, but for this one we will adopt the `(y, x)` layout for compatibility reasons.
```python
from examples.cfd import plot_field
import numpy as np
%matplotlib inline
# Some variable declarations
nx = 31
ny = 31
c = 1
dx = 2. / (nx - 1)
dy = 1. / (ny - 1)
```
```python
def laplace2d(p, bc_right, dx, dy, l1norm_target):
l1norm = 1
pn = np.empty_like(p)
while l1norm > l1norm_target:
pn = p.copy()
p[1:-1, 1:-1] = ((dy**2 * (pn[1:-1, 2:] + pn[1:-1, 0:-2]) +
dx**2 * (pn[2:, 1:-1] + pn[0:-2, 1:-1])) /
(2 * (dx**2 + dy**2)))
p[:, 0] = 0 # p = 0 @ x = 0
p[:, -1] = bc_right # p = y @ x = 2
p[0, :] = p[1, :] # dp/dy = 0 @ y = 0
p[-1, :] = p[-2, :] # dp/dy = 0 @ y = 1
l1norm = (np.sum(np.abs(p[:]) - np.abs(pn[:])) /
np.sum(np.abs(pn[:])))
return p
```
```python
#NBVAL_IGNORE_OUTPUT
# Out initial condition is 0 everywhere,except at the boundary
p = np.zeros((ny, nx))
# Boundary conditions
bc_right = np.linspace(0, 1, ny)
p[:, 0] = 0 # p = 0 @ x = 0
p[:, -1] = bc_right # p = y @ x = 2
p[0, :] = p[1, :] # dp/dy = 0 @ y = 0
p[-1, :] = p[-2, :] # dp/dy = 0 @ y = 1
plot_field(p, ymax=1.0, view=(30, 225))
```
```python
#NBVAL_IGNORE_OUTPUT
p = laplace2d(p, bc_right, dx, dy, 1e-4)
plot_field(p, ymax=1.0, view=(30, 225))
```
Ok, nice. Now, to re-create this example in Devito we need to look a little bit further under the hood. There are two things that make this different from the examples we covered so far:
* We have no time dependence in the `p` field, but we still need to advance the state of p in between buffers. So, instead of using `TimeFunction` objects that provide multiple data buffers for timestepping schemes, we will use `Function` objects that have no time dimension and only allocate a single buffer according to the space dimensions. However, since we are still implementing a pseudo-timestepping loop, we will need two objects, say `p` and `pn`, to act as alternating buffers.
* If we're using two different symbols to denote our buffers, any operator we create will only perform a single timestep. This is desired though, since we need to check a convergence criterion outside of the main stencil update to determine when we stop iterating. As a result we will need to call the operator repeatedly after instantiating it outside the convergence loop.
So, how do we make sure our operator doesn't accidentally overwrite values in the same buffer? Well, we can again let SymPy reorganise our Laplace equation based on `pn` to generate the stencil, but when we create the update expression, we set the LHS to our second buffer variable `p`.
```python
from devito import Grid, Function, Eq, INTERIOR
from sympy import solve
# Create two explicit buffers for pseudo-timestepping
grid = Grid(shape=(nx, ny), extent=(1., 2.))
p = Function(name='p', grid=grid, space_order=2)
pn = Function(name='pn', grid=grid, space_order=2)
# Create Laplace equation base on `pn`
eqn = Eq(pn.laplace, region=INTERIOR)
# Let SymPy solve for the central stencil point
stencil = solve(eqn, pn)[0]
# Now we let our stencil populate our second buffer `p`
eq_stencil = Eq(p, stencil)
# In the resulting stencil `pn` is exclusively used on the RHS
# and `p` on the LHS is the grid the kernel will update
print("Update stencil:\n%s\n" % eq_stencil)
```
Update stencil:
Eq(p(x, y), 0.5*(h_x**2*pn(x, y - h_y) + h_x**2*pn(x, y + h_y) + h_y**2*pn(x - h_x, y) + h_y**2*pn(x + h_x, y))/(h_x**2 + h_y**2))
Now we can add our boundary conditions. We have already seen how to prescribe constant Dirichlet BCs by simply setting values using the low-level notation. This time we will go a little further by setting a prescribed profile, which we create first as a custom 1D symbol and supply with the BC values. For this we need to create a `Function` object that has a different shape than our general `grid`, so instead of the grid we provide an explicit pair of dimension symbols and the corresponding shape for the data.
```python
x, y = grid.dimensions
bc_right = Function(name='bc_right', shape=(nx, ), dimensions=(x, ))
bc_right.data[:] = np.linspace(0, 1, nx)
```
Now we can create a set of expressions for the BCs again, where we set prescribed values on the right and left of our grid. For the Neumann BCs along the top and bottom boundaries we simply copy the second row from the outside into the outermost row, just as the original tutorial did. Using these expressions and our stencil update we can now create an operator.
```python
#NBVAL_IGNORE_OUTPUT
from devito import Operator
# Create boundary condition expressions
bc = [Eq(p.indexed[x, 0], 0.)] # p = 0 @ x = 0
bc += [Eq(p.indexed[x, ny-1], bc_right.indexed[x])] # p = y @ x = 2
bc += [Eq(p.indexed[0, y], p.indexed[1, y])] # dp/dy = 0 @ y = 0
bc += [Eq(p.indexed[nx-1, y], p.indexed[nx-2, y])] # dp/dy = 0 @ y = 1
# Now we can build the operator that we need
op = Operator(expressions=[eq_stencil] + bc)
```
We can now use this single-step operator repeatedly in a Python loop, where we can arbitrarily execute other code in between invocations. This allows us to update our L1 norm and check for convergence. Using our pre-compiled operator now comes down to a single function call that supplies the relevant data symbols. One thing to note is that we now do exactly the same thing as the original NumPy loop, in that we deep-copy the data between each iteration of the loop, which we will look at after this.
```python
#NBVAL_IGNORE_OUTPUT
# Silence the runtime performance logging
from devito import configuration
configuration['log_level'] = 'ERROR'
# Initialise the two buffer fields
p.data[:] = 0.
p.data[:, -1] = np.linspace(0, 1, ny)
pn.data[:] = 0.
pn.data[:, -1] = np.linspace(0, 1, ny)
# Visualize the initial condition
plot_field(p.data, ymax=1.0, view=(30, 225))
# Run the convergence loop with deep data copies
l1norm_target = 1.e-4
l1norm = 1
while l1norm > l1norm_target:
# This call implies a deep data copy
pn.data[:] = p.data[:]
op(p=p, pn=pn)
l1norm = (np.sum(np.abs(p.data[:]) - np.abs(pn.data[:])) /
np.sum(np.abs(pn.data[:])))
# Visualize the converged steady-state
plot_field(p.data, ymax=1.0, view=(30, 225))
```
One crucial detail about the code above is that the deep data copy between iterations will really hurt performance if we were to run this on a large grid. However, we have already seen how we can match data symbols to symbolic names when calling the pre-compiled operator, which we can now use to actually switch the roles of `pn` and `p` between iterations, e.g. `op(p=pn, pn=p)`. Thus, we can implement a simple buffer-switching scheme by simply testing for odd and even time-steps, without ever having to shuffle data around.
```python
#NBVAL_IGNORE_OUTPUT
# Initialise the two buffer fields
p.data[:] = 0.
p.data[:, -1] = np.linspace(0, 1, ny)
pn.data[:] = 0.
pn.data[:, -1] = np.linspace(0, 1, ny)
# Visualize the initial condition
plot_field(p.data, ymax=1.0, view=(30, 225))
# Run the convergence loop by explicitly flipping buffers
l1norm_target = 1.e-4
l1norm = 1
counter = 0
while l1norm > l1norm_target:
# Determine buffer order
if counter % 2 == 0:
_p = p
_pn = pn
else:
_p = pn
_pn = p
# Apply operator
op(p=_p, pn=_pn)
# Compute L1 norm
l1norm = (np.sum(np.abs(_p.data[:]) - np.abs(_pn.data[:])) /
np.sum(np.abs(_pn.data[:])))
counter += 1
plot_field(p.data, ymax=1.0, view=(30, 225))
```
*[row metadata] file: examples/cfd/05_laplace.ipynb · repo: RajatRasal/devito · license: MIT · size: 608,087 bytes · tokens: 2,685 · text_lang: eng_Latn*
```python
# import packages
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sympy import *
from scipy.optimize import fsolve
%matplotlib inline
```
```python
# set up constants
length = 25 #(nm)
dx = length/12 #(nm)
dt = 0.016667*332/12 #(s)
M = 2.75*10**(-15) #(m mol/J s) it should be determined by further aproach but assumed as a constant here
Cmax = 0.02119 #(mol/cm3)
F = 96500 #(C/mol)
k2 = -4.8
b2 = 7.57
CeaCmax = 0.041 #(Cea/Cmax)
CebCmax = 0.006 #(Ceb/Cmax)
Eeq = 3.4276
```
```python
# set up constant for f(xi)
a = 218414
b = 288001
c = 122230
d = 12466
```
```python
# import two-phase data
df = pd.read_excel('D.xlsx', sheet_name='4')
C = df['Li Fraction'] # the concentration in two-phase region
A = df['SymbolA']
```
```python
A
```
0 A0
1 A1
2 A2
3 A3
4 A4
5 A5
6 A6
7 A7
8 A8
9 A9
10 A10
11 A11
Name: SymbolA, dtype: object
```python
def myFunctionA(AA):
    # store the current guess in the Series A (used as working storage below)
    for i in range(0, 12):
        A[i] = AA[i]
    # residual vector; named res so it does not shadow the Faraday constant F (96500 C/mol) used below
    res = np.empty(12)
    res[0] = A[0] - C[0]
    res[11] = A[11] - 0
    for i in range(1, 11):
        res[i] = (C[i]-2*A[i]) * ( M*( (C[i]-2*A[i])/Cmax*F*(k2*(C[i]-A[i])/Cmax+b2) - (CebCmax-CeaCmax)*F*Eeq + a*(C[i]/Cmax)**3 - b*(C[i]/Cmax)**2 + c*(C[i]/Cmax) + d ) ) - (A[i+1]-A[i])**2/(A[i+1]-2*A[i]+A[i-1])*dx/dt - (C[i+1]-A[i+1]-C[i]+A[i])**2/(C[i+1]-A[i+1]-2*C[i]+2*A[i]+C[i-1]-A[i-1])*dx/dt
    return res

AAGuess = np.linspace(0, 1, 12)
AA = fsolve(myFunctionA, AAGuess)
print(AA)
```
```python
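# NOTE: this alternative residual formulation references coefficients e and f that are
# never defined in this notebook; they must be set before this cell can run.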
def myFunctionA(AA):
for i in range(0,12):
A[i] = AA[i]
F = np.empty((12))
F[0] = A[0] - C[0]
F[11] = A[11] - 0
for i in range(1,11):
F[i] = (C[i]-2*A[i]) * ( a*(C[i]-A[i])**2 + b*(C[i]-A[i]) - a*(C[i]-A[i])*C[i] - b*C[i] + c*C[i]**3 - d*C[i]**2 + e*C[i] - f) - (A[i+1]-A[i])**2/(A[i+1]-2*A[i]+A[i-1])*dx/dt - (C[i+1]-A[i+1]-C[i]+A[i])**2/(C[i+1]-A[i+1]-2*C[i]+2*A[i]+C[i-1]-A[i-1])*dx/dt
return F
AAGuess = np.linspace(0, 1, 12)
AA = fsolve(myFunctionA,AAGuess)
print(AA)
```
```python
```
*[row metadata] file: ode_pve/GITT.ipynb · repo: ode-pve/ODE_PVE · license: MIT · size: 28,995 bytes · tokens: 934 · text_lang: eng_Latn*
# Source-free RL circuits
Jupyter Notebook developed by [Gustavo S.S.](https://github.com/GSimas)
Consider the series connection of a resistor and an inductor, as shown in
Figure 7.11. At t = 0, we assume that the inductor carries an
initial current I0.
\begin{align}
I(0) = I_0
\end{align}
Thus, the corresponding energy stored in the inductor is:
\begin{align}
w(0) = \frac{1}{2} LI_0²
\end{align}
Exponentiating (base e), we obtain:
\begin{align}
i(t) = I_0 e^{-t \frac{R}{L}}
\end{align}
This shows that the natural response of an RL circuit is an exponential decay
of the initial current. The current response is shown in Figure 7.12. It is
evident from the equation that the time constant of the RL circuit is:
\begin{align}
τ = \frac{L}{R}
\end{align}
The voltage across the resistor is:
\begin{align}
v_R(t) = I_0 R e^{-t/τ}
\end{align}
The power dissipated in the resistor is:
\begin{align}
p = v_R i = I_0^2 R e^{-2t/τ}
\end{align}
The energy absorbed by the resistor is:
\begin{align}
w_R(t) = \int_{0}^{t} p(t)dt = \frac{1}{2} L I_0^2 (1 - e^{-2t/τ})
\end{align}
**As t → ∞, wR(∞) → 1/2 L I0², which is the same as wL(0), the energy initially stored in the inductor**
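As a quick symbolic check of the statement above, the sketch below (self-contained, with freshly named symbols so it does not clash with variables used later) integrates the dissipated power and confirms that the resistor absorbs $\frac{1}{2}LI_0^2$ as $t \rightarrow \infty$:
```python
# Symbolic check that the total energy absorbed by the resistor is (1/2) L I0^2
from sympy import symbols, exp, integrate, oo

t_, R_, L_, I0_ = symbols('t R L I_0', positive=True)
tau_ = L_ / R_                    # time constant
i_ = I0_ * exp(-t_ / tau_)        # natural response i(t) = I0 e^(-t/tau)
p_ = R_ * i_**2                   # instantaneous power dissipated in the resistor
w_total = integrate(p_, (t_, 0, oo))
print(w_total)                    # expected: I_0**2*L/2
```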
Thus, the procedure is:
1. Determine the initial current i(0) = I0 through the inductor.
2. Determine the time constant τ = L/R.
**Example 7.3**
Assuming that i(0) = 10 A, calculate i(t) and ix(t) in the circuit of Figure 7.13.
```python
print("Exemplo 7.3")
import numpy as np
from sympy import *
I0 = 10
L = 0.5
R1 = 2
R2 = 4
t = symbols('t')
#Determine Req = Rth
#hypothetical Io = 1 A
#Mesh analysis
#4i2 + 2(i2 - i0) = -3i0
#6i2 = 5
#i2 = 5/6
#ix' = i2 - i1 = 5/6 - 1 = -1/6
#Vr1 = ix' * R1 = -1/6 * 2 = -1/3
#Rth = Vr1/i0 = (-1/3)/(-1) = 1/3
Rth = 1/3
tau = L/Rth
i = I0*exp(-t/tau)
print("Corrente i(t):",i,"A")
vl = L*diff(i,t)
ix = vl/R1
print("Corrente ix(t):",ix,"A")
```
Exemplo 7.3
Corrente i(t): 10*exp(-0.666666666666667*t) A
Corrente ix(t): -1.66666666666667*exp(-0.666666666666667*t) A
**Practice Problem 7.3**
Determine i and vx in the circuit of Figure 7.15. Let i(0) = 12 A.
```python
print("Problema Prático 7.3")
L = 2
I0 = 12
R1 = 1
#Determine Req = Rth
#hypothetical i0 = 1 A
#vx = 4 V
#vx + 2(i0 - i1) + 2vx - v0 = 0
#-2i1 - v0 = -14
#-2vx + 2(i1 - i0) + 6i1 = 0
#8i1 = 10
#i1 = 10/8 = 5/4
#v0 = vx + 2(i0 - i1) + 2vx
#v0 = 4 + 2 - 5/2 + 8 = 11.5
#Rth = v0/i0 = 11.5/1 = 11.5
Rth = 11.5
tau = L/Rth
i = I0*exp(-t/tau)
print("Corrente i(t):",i,"A")
vx = -R1*i
print("Tensão vx(t):",vx,"V")
```
Problema Prático 7.3
Corrente i(t): 12*exp(-5.75*t) A
Tensão vx(t): -12*exp(-5.75*t) V
**Example 7.4**
The switch in the circuit of Figure 7.16 had been closed for a long time. At t = 0, the switch
is opened. Calculate i(t) for t > 0.
```python
print("Exemplo 7.4")
Vs = 40
L = 2
def Req(x,y): #function to compute the equivalent resistance of two resistors in parallel
res = (x*y)/(x + y)
return res
Req1 = Req(4,12)
V1 = Vs*Req1/(Req1 + 2)
I0 = V1/4
Req2 = 12 + 4
Rth = Req(Req2, 16)
tau = L/Rth
i = I0*exp(-t/tau)
print("Corrente i(t):",i,"A")
```
Exemplo 7.4
Corrente i(t): 6.0*exp(-4.0*t) A
**Practice Problem 7.4**
For the circuit in Figure 7.18, determine i(t) for t > 0.
```python
print("Problema Prático 7.4")
L = 2
Cs = 15
R1 = 24
Req1 = Req(12,8)
i1 = Cs*R1/(R1 + Req1)
I0 = i1*8/(8 + 12)
Rth = Req(12+8,5)
tau = L/Rth
i = I0*exp(-t/tau)
print("Corrente i(t):",i,"A")
```
Problema Prático 7.4
Corrente i(t): 5.0*exp(-2.0*t) A
**Example 7.5**
In the circuit shown in Figure 7.19, find io, vo, and i for all time, assuming
that the switch had been open for a long time.
```python
print("Exemplo 7.5")
Vs = 10
L = 2
print("Para t < 0, i0:",0,"A")
I0 = Vs/(2 + 3)
v0 = 3*I0
print("Para t < 0, i:",I0,"A")
print("Para t < 0, v0:",v0,"V")
Rth = Req(3,6)
tau = L/Rth
i = I0*exp(-t/tau)
v0 = -L*diff(i,t)
i0 = -i*3/(3 + 6)
print("Para t > 0, i0:",i0,"A")
print("Para t > 0, v0:",v0,"V")
print("Para t > 0 i:",i,"A")
```
Exemplo 7.5
Para t < 0, i0: 0 A
Para t < 0, i: 2.0 A
Para t < 0, v0: 6.0 V
Para t > 0, i0: -0.666666666666667*exp(-1.0*t) A
Para t > 0, v0: 4.0*exp(-1.0*t) V
Para t > 0 i: 2.0*exp(-1.0*t) A
**Practice Problem 7.5**
Determine i, io, and vo for all t in the circuit shown in Figure 7.22.
```python
print("Problema Prático 7.5")
Cs = 24
L = 1
#For t < 0
i = Cs*4/(4 + 2)
i0 = Cs*2/(2 + 4)
v0 = 2*i
print("Para t < 0, i =",i,"A")
print("Para t < 0, i0 =",i0,"A")
print("Para t < 0, v0 =",v0,"V")
#For t > 0
R = Req(4 + 2,3)
tau = L/R
I0 = i
i = I0*exp(-t/tau)
i0 = -i*3/(3 + 4 + 2)
v0 = -i0*2
print("Para t < 0, i =",i,"A")
print("Para t < 0, i0 =",i0,"A")
print("Para t < 0, v0 =",v0,"V")
```
Problema Prático 7.5
Para t < 0, i = 16.0 A
Para t < 0, i0 = 8.0 A
Para t < 0, v0 = 32.0 V
Para t > 0, i = 16.0*exp(-2.0*t) A
Para t > 0, i0 = -5.33333333333333*exp(-2.0*t) A
Para t > 0, v0 = 10.6666666666667*exp(-2.0*t) V
*[row metadata] file: Aula 10 - Circuitos RL.ipynb · repo: ofgod2/Circuitos-electricos-Boylestad-12ed-Portugues · license: MIT · size: 9,828 bytes · tokens: 2,329 · text_lang: por_Latn*
# Automation's impact on economic growth
## Importing modules
```python
import numpy as np
import scipy as sp
from scipy import linalg
from scipy import optimize
from scipy import interpolate
import sympy as sm
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D
import ipywidgets as widgets
from ipywidgets import interact, interactive, fixed, interact_manual
from IPython import display
sm.init_printing(use_unicode=True)
import time
```
## Introduction
This notebook considers a model that Klaus Prettner [Prettner, K. (2019)] uses to analyze optimal growth in the presence of automation, understood as AI, robots, 3D printers, and so forth.
The model can be viewed as an extension of the model of Robert Solow, where labor ($L_t$) and automation ($P_t$) are perfect substitutes.
**Framework:**
"The robots are coming - Do they take our jobs" - A topic has got more and more publicity the daily press now a days. The answer is difficult because it is ambiguous - New technology, can solve problems better than maintream labor, but the new technology also creates new opputuneties and jobs. In short, it is clear that new technology has a large impact on some given industries.
*For example, in the car industry, robots perform many production steps in an autonomous way; 3D printers are capable of producing customized products with minimal labor input (Abeliansky et al., 2015); driverless cars and lorries could soon transport goods from location A to location B without any involvement of labor; and devices based on machine learning are already able to diagnose some forms of diseases, to translate texts from one language to another with an acceptable quality, and even to write simple newsflashes.
This implies that stock of automation capital, $P_t$, installed in the form of robots, 3D printers, driverless cars, and devices based on machine learning is close to being a perfect substitute for labor, $L_t$.* - **Klaus Prettner**
That said, we will not go deeper into the discussion of the substitution between labor and automation, but simply analyse the specific model. We will investigate which savings rate for automation is optimal for the overall growth of the economy.
**Model:**
Consider the following extension to the **Standard Solow-model** where:
1. $K(t)$ is capital
2. $P(t)$ is the stock of automation capital
3. $L(t)$ is labor (growing with a constant rate of $n$)
4. $A(t)$ is technology (which we deliberately normalize to 1)
5. $Y(t) = F[K(t),A(t),(L(t)+P(t))]$ is GDP
The production function is a Cobb-Douglas given by:
\\[ Y(t)=A(t)K(t)^{\alpha}(L(t)+P(t))^{1-\alpha}\\]
The economy is closed, and the savings rate, s, is given exogenously. This setup implies that investment equals savings, such that $I(t)=S(t)=sY(t)$. The investments can be made in two different forms of capital: machines and automation, which implies that a share $s_m$ of savings is diverted to investment in machines and a share $(1−s_m)$ is diverted to investment in automation.
Putting it all together, the accumulation equations for machines and automation are given by, respectively:
\\[\dot{K}(t) =s_mI(t)-\delta K(t) \quad \quad \dot{P}(t) =(1-s_m)I(t)-\delta P(t)\\]
This model setup gives us the following equations for the accumulation of capital and automation capital per worker:
\\[\dot{k}(t)=s_ms[1+p(t)]^{1-\alpha}k(t)^\alpha-\delta k(t)-nk(t) \\]
\\[\dot{p}(t)=(1-s_m)s[1+p(t)]^{1-\alpha}k(t)^{\alpha}-\delta p(t)-np(t) \\]
Above we have a problem which can be turned into a problem of two equations with two unknown parameters.
We divide the equations above by k(t) and p(t), respectively, and impose constant growth along the balanced growth path (BGP), which gives us the equations below (for the steps in between, see the Appendix):
\\[g = s_m s C^{1-\alpha}-\delta-n \\]
\\[g = (1-s_m) s [\frac{1}{C}]^\alpha-\delta-n \\]
In such a manner it can be shown that the economy converges to a situation, in which traditional capital per worker, automation capital per worker, and GDP per worker all grow at the same constant rate. The equation below is also calculated in section 1.3 using sympy.
\\[g = s\cdot s_m^\alpha(1-s_m)^{1-\alpha}-\delta-n \\]
In this model project (as written in the beginning), we would like to investigate the parameter $s_m$, to figure out which share channeled into traditional capital investment would maximize economic growth.
## Analytical
We solve the problem of the two equation with two unknown parameter by using sympy:
First we equalize the RHS of the two equations to isolate C.
```python
#In order to use sympy, we define our parameters as sympy-symbols
alpha = sm.symbols('alpha')
delta = sm.symbols('delta')
s = sm.symbols('s')
s_m=sm.symbols('s_m')
n=sm.symbols('n')
C=sm.symbols('C')
#We make use of the equalize and solve-function
eq3 = sm.Eq(s_m * s * C **(1-alpha)-delta-n-(1-s_m) * s * (1/C)**alpha+delta+n, 0)
Cstar = sm.solve(eq3,C)
Cstar
```
Afterwards we substitute C into the equation for g:
```python
g = s_m * s * ((-s_m+1)/s_m) **(1-alpha)-delta-n
g1 = (1-s_m)*s*(1/(((-s_m+1)/s_m)))**alpha-delta-n
g
```
If we rearrange the expression above, it can easily be seen that it is equal to the expression for $g$ in the introduction.
The expression implies that the growth rate depends positively on the general savings rate and on the shares channeled into traditional and automation capital, respectively. Additionally, the growth rate depends negatively on the depreciation of capital and on the growth of the population. For large values of $\delta$ and $n$ the growth rate would be negative, but for now we focus on the solutions for which the growth rate is positive. This is also the case for the plausible parameter specifications by Prettner (see table 1), which is why he uses those.
Furthermore, it is obvious that there is a trade-off in the expression for g, since the optimal investment rates in traditional and automation capital are related to each other.
This is exactly what we aim to figure out in this model project: which value of the parameter $s_m$ maximizes economic growth.
## Optimizing the parameter of the share invested in machines.
We start by using sympy yet again to find an analytical expression for the value of $s_m$ which maximizes g.
```python
g = s*(s_m**alpha)*((1-s_m)**(1-alpha))-delta-n
gdiff=sm.diff(g,s_m)
gdiff
```
```python
#this expression is too muddy, so we simplify it using the simplify-function.
sm.simplify(gdiff)
```
```python
#Then we solve this equation, isolating s_m
sm.solve(gdiff,s_m)[0]
```
The result shows that $s_m=\alpha$ would maximize economic growth.
This means that, with $\alpha \approx 0.3$, the economy would be best served by channeling about 30 percent of savings into traditional capital and about 70 percent into automation capital.
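As a quick numerical illustration (a sketch only, using the parameter values from Table 1 below), we can evaluate the growth rate at the analytical optimum and at nearby shares. The variable names carry a `_chk` suffix so that they do not overwrite the symbols used elsewhere in the notebook:
```python
# Quick check that s_m = alpha gives the highest growth rate g
# (parameter values follow Table 1; illustration only).
import numpy as np

s_chk, alpha_chk, delta_chk, n_chk = 0.21, 0.3, 0.04, 0.009
g_num = lambda sm: s_chk*sm**alpha_chk*(1 - sm)**(1 - alpha_chk) - delta_chk - n_chk

for sm in [0.2, alpha_chk, 0.4]:
    print(f"s_m = {sm:.2f}: g = {g_num(sm):.5f}")  # g is largest at s_m = alpha
```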
## Numerical optimization
Even though this problem can be solved analytically, we will also show how to solve it numerically, since that is the core of this course. We already know what the solution is, but we think that this procedure is relevant because it provides an intuitive basis for understanding how numerical optimization works.
```python
#We assign the parameters with the values given by Prettner (see table 1)
alpha = 0.3
delta = 0.04
s = 0.21
n = 0.009
# We turn the sympy derivative dg/ds_m into a python-function (its root gives the optimal s_m)
f=lambda s_m: s*(s_m**(alpha-1))*((1-s_m)**(-alpha))*(alpha-s_m)
```
Before optimizing, we plot our function of balanced growth, to see how it looks like.
```python
# Data for plotting
s_m = np.arange(0.01, 1.0, 0.01) # the share s_m must lie between 0 and 1
g = 0.21*(s_m**0.3)*((1-s_m)**(1-0.3))-0.04-0.009
fig, ax = plt.subplots()
ax.plot(s_m, g, label='Growth path')
plt.plot([0.3], [0.06500510506486332], marker='o', markersize=4, color="red", label='Optimum')
ax.set(xlabel='Saving rate for automation', ylabel='Growth in GDP per worker',
title='Balanced growth')
ax.grid()
plt.legend(bbox_to_anchor=(0.5, -0.3), loc=8, ncol=4)
plt.show()
```
We optimize with bisection, since we think that it provides the users and readers with a fine understanding of how the optimal value of $s_m$ is found. We supplement this with a widget that describes the steps of the bisection method. Overall, we supply our derivative function with two values $a$ and $b$ which we know give the derivative a positive and a negative value, respectively. We thereby know that the optimum lies between these two points (where $f=0$). Then we find the midpoint of these two values. If the function value at the midpoint is not (close to) zero, we move the bounds $a$ and $b$ closer to each other. This process is repeated until the midpoint equals the root within the chosen tolerance. If the product of the function values at $a$ and $b$ is positive, the bisection will not work, because our bounds are wrongly set.
```python
def bisection(f,a,b,max_iter=50,tol=1e-6,full_info=True):
""" bisection
Solve equation f(x) = 0 for a <= x <= b.
Args:
f (function): function
a (float): left bound
b (float): right bound
tol (float): tolerance on solution
Returns:
m (float): root
"""
# test inputs
if f(a)*f(b) >= 0:
print("bisection method fails.")
return None
# step 1: initialize
_a = a
_b = b
a = np.zeros(max_iter)
b = np.zeros(max_iter)
m = np.zeros(max_iter)
fm = np.zeros(max_iter)
a[0] = _a
b[0] = _b
# step 2-4: main
i = 0
while i < max_iter:
# step 2: midpoint and associated value
m[i] = (a[i]+b[i])/2
fm[i] = f(m[i])
# step 3: determine sub-interval
if abs(fm[i]) < tol:
break
elif f(a[i])*fm[i] < 0:
a[i+1] = a[i]
b[i+1] = m[i]
elif f(b[i])*fm[i] < 0:
a[i+1] = m[i]
b[i+1] = b[i]
else:
print("bisection method fails.")
return None
i += 1
if full_info == True:
return m,i,a,b,fm
else:
return m[i],i
```
```python
#We assign the values of "a" and "b" - and we know it has to be between [0,1].
m,i,a,b,fm = bisection(f,0.2,0.9)
#Next, we print the value of the midpoint and the number of iterations.
#The number of iterations is interesting because it shows how many loops the bisection has been through.
print(i, m[i])
```
17 0.2999996185302734
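For comparison, a library root finder applied to the same derivative function gives the same answer; this cross-check is a small addition and not part of the original workflow:
```python
# Cross-check of the bisection result with scipy's brentq root finder.
from scipy import optimize
print(optimize.brentq(f, 0.2, 0.9))  # expected to be close to 0.3
```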
So with 17 iterations, we find the optimal value of $s_m$ equal to 0.3, which is in line with the value of $\alpha$ given by Prettner. To get a better grasp of how the bisection method works, we plot a widget which illustrates how we find our optimal point.
```python
def plot_bisection(f,a,b,xmin=0.2,xmax=0.9,xn=100):
# a. find root and return all information
m,max_iter,a,b,fm = bisection(f,a,b,full_info=True)
# b. compute function on grid
xvec = np.linspace(xmin,xmax,xn)
fxvec = f(xvec)
# c. figure
def _figure(i):
# ii. figure
fig = plt.figure(dpi=100)
ax = fig.add_subplot(1,1,1)
ax.plot(xvec,fxvec) # on grid
ax.plot(m[i],fm[i],'o',color='black',label='current') # mid
ax.plot([a[i],b[i]],[fm[i],fm[i]],'--',color='black',label='range') # range
ax.axvline(a[i],ls='--',color='black')
ax.axvline(b[i],ls='--',color='black')
ax.legend(loc='lower right')
ax.grid(True)
ax.set_ylim([fxvec[-1],fxvec[0]])
widgets.interact(_figure,
i=widgets.IntSlider(description="iterations", min=0, max=max_iter, step=1, value=0)
);
plot_bisection(f,0.2,0.9) #The command gives a figure which is interactive manually.
```
interactive(children=(IntSlider(value=0, description='iterations', max=17), Output()), _dom_classes=('widget-i…
We also think it could be fun to make a figure which moves automatically, without scrolling on the interactive bar. We therefore rewrite the code so that it just plots the data of one chosen number of iterations.
```python
#transforming widget to plot - to be able to loop the different plots (with increasing iterations)
def plot_bi(f,a,b,i,xmin=0.2,xmax=0.9,xn=100):
# a. find root and return all information
m,max_iter,a,b,fm = bisection(f,a,b,full_info=True)
# b. compute function on grid
xvec = np.linspace(xmin,xmax,xn)
fxvec = f(xvec)
# c. figure
def _figure(i):
# ii. figure
# fig = plt.figure(dpi=100)
# ax = fig.add_subplot(1,1,1)
fig, ax = plt.subplots(ncols=1, sharey=False)
ax.plot(xvec,fxvec) # on grid
ax.plot(m[i],fm[i],'o',color='black',label='current') # mid
ax.plot([a[i],b[i]],[fm[i],fm[i]],'--',color='black',label='range') # range
ax.axvline(a[i],ls='--',color='black')
ax.axvline(b[i],ls='--',color='black')
ax.legend(loc='lower right')
ax.grid(True)
ax.set_ylim([fxvec[-1],fxvec[0]])
plt.show()
return
fig = _figure(i)
return fig
plot_bi(f,0.2,0.9, 2) # The function makes it possible to choose which iterations to plot
```
Hereafter, we make a loop over the display of increasing numbers of iterations - and we make sure to clear the previous output. In this way, it looks automatic.
```python
#loop delay set to 1 second per step
def loop():
for i in range(0,9,1):
display.display(plot_bi(f,a[i],b[i],i,xmin=0.2,xmax=0.9,xn=100))
display.clear_output(wait=True);
time.sleep(1)
loop()
```
We have now found the optimal parameter value of $s_m$, both analytically (sympy) and via numerical optimization.
## Analysis: Using our results to plot the accumulation
We will now show how the optimal parameter value of $s_m$ expresses itself in the accumulation of capital and the respective growth rates.
For doing so, we use the following parameter values shown in table 1, which is taken from the paper of Klaus Prettner.
The table summarizes the parameter values we will use for our numerical analysis and provides a valuation of them.
\\[Table \ 1: Parameter \ values\\]
| Parameter | Value | Comment |
|-----------|:-----:|-------------------------------------------------------:|
| s | 0.21 | Average gross investment rate (2000-2013) for the US |
| $s_m$ | 0.2, 0.3, 0.4 | Arbitrary value |
| $\alpha$ | 1/3 | Jones (1995), Acemoglu (2009), Grossmann et al. (2013) |
| $\delta$ | 0.04 | Grossmann et al. (2013) |
| n | 0.009 | Average rate (2000-2014) for the US (World Bank, 2015) |
**Plotting the accumulation:**
```python
# Our model
# Allocate memory for time series
series_length=100
k = np.empty(series_length)
p = np.empty(series_length)
g_k = np.empty(series_length)
g_p = np.empty(series_length)
fig, axes = plt.subplots(2, 2, figsize=(12, 15))
# Trajectories with different s_m
alpha = 1/3
delta = 0.04
s = 0.21
#s = (0.21, 0.25) # Would generate the same figures as Klaus Prettner fig. 1. b
s_m = (0.2, 0.3, 0.4)
n = 0.009
for j in range(3):
k[0] = 1
p[0] = 1
for t in range(series_length-1):
k[t+1] = s_m[j] * s * (1 + p[t])**(1 - alpha) * k[t]**(alpha) + (1- delta) * k[t] - n * k[t]
p[t+1] = (1-s_m[j]) * s * (1 + p[t])**(1 - alpha) * k[t]**(alpha) + (1- delta) * p[t] - n * p[t]
#k[t+1] = s_m * s[j] * (1 + p[t])**(1 - alpha) * k[t]**(alpha) +(1- delta) * k[t] - n * k[t] #Klaus Prettner-Copy
#p[t+1] = (1-s_m) * s[j] * (1 + p[t])**(1 - alpha) * k[t]**(alpha) + (1- delta) * p[t] - n * p[t] #Klaus Prettner-Copy
axes[0,0].plot(k, '-', label=rf"$\alpha = {alpha:.3f},\; s_m = {s_m[j]},\; \delta={delta}$")
axes[1,0].plot(p, '-', label=rf"$\alpha = {alpha:.3f},\; s_m = {s_m[j]},\; \delta={delta}$")
for j in range(3):
for t in range(series_length-1):
g_k[t+1] = s_m[j] * s * ((1+p[t])/(k[t]))**(1-alpha) - delta - n
g_p[t+1] = (1-s_m[j]) * s * ((1+p[t])/p[t])**(1-alpha) * (k[t]/p[t])**alpha - delta - n
#g_k[t+1] = s_m * s[j] * ((1+p[t])/(k[t]))**(1-alpha) - delta - n #Klaus Prettner-Copy
#g_p[t+1] = (1-s_m) * s[j] * ((1+p[t])/p[t])**(1-alpha) * (k[t]/p[t])**alpha - delta - n #Klaus Prettner-Copy
axes[0,1].plot(g_k, '-', label=rf"$\alpha = {alpha:.3f},\; s_m = {s_m[j]},\; \delta={delta}$")
axes[1,1].plot(g_p, '-', label=rf"$\alpha = {alpha:.3f},\; s_m = {s_m[j]},\; \delta={delta}$")
axes[0,0].grid(lw=0.2)
axes[1,0].grid(lw=0.2)
axes[0,1].grid(lw=0.2)
axes[1,1].grid(lw=0.2)
#ajust limmit on the y and x axis.
axes[0,0].set_ylim(0, 600)
axes[0,1].set_xlim(1,series_length)
axes[1,1].set_xlim(1,series_length)
axes[0,1].set_ylim(0.0,0.3)
axes[1,1].set_ylim(0.0,0.3)
axes[0,0].set_xlabel('time')
axes[0,0].set_ylabel('capital')
axes[1,0].set_xlabel('time')
axes[1,0].set_ylabel('automation')
axes[0,1].set_xlabel('time')
axes[0,1].set_ylabel('growth of k')
axes[1,1].set_xlabel('time')
axes[1,1].set_ylabel('growth of p')
axes[0,0].legend(loc='upper left', frameon=True, fontsize=12)
axes[1,0].legend(loc='upper left', frameon=True, fontsize=12)
print(g_k[99])
print(g_p[99])
```
We observe that the different values of $s_m$ give a trade-off problem. Looking at the upper graphs, we see that the run with $s_m=0.4$ accumulates the largest amount of capital. The reason is simple, since it is the run with the largest investment share in capital. We also observe that all three examples converge to a constant growth rate.
Looking at the lower graphs, where we observe automation capital, the opposite is the situation - that is the case for the growth rates, but not for the accumulation. The largest investment share in automation does not give the largest accumulation. The reason is simple; automation requires investments in "normal" capital, otherwise it is useless. The growth in automation will be affected negatively if we end up in a situation where there is more automation capital than normal capital (see the growth rate equation above).
Our result of $s_m=0.3$ can be viewed as the compromise (or share) between investments in automation and capital.
# Conclusion
In this project we have investigated the relationship between automation capital and "normal" capital. We have solved the model using ``sympy``. Furthermore, we have performed numerical optimization using bisection, with the parameter values given by Prettner. Finally, we use our results to plot the capital accumulation of automation and normal capital, and analyse the relationship between the two forms.
## Appendix
First we divide $\dot k(t)$ and $\dot p(t)$ by k(t) and p(t), respectively. $g_x$ denotes the growth rate of k and p, respectively.
(1) \\[ g_k = \frac {\dot k(t)}{k(t)} = s_ms\bigg[\frac{1+p(t)}{k(t)}\bigg]^{1-\alpha}-\delta-n\\]
(2) \\[ g_p=\frac{\dot p(t)}{p(t)} =(1-s_m)s\bigg[\frac{1+p(t)}{p(t)}\bigg]^{1-\alpha}\bigg[\frac{k(t)}{p(t)}\bigg]-\delta-n \\]
We then calculate the growth rate of each growth rate, denoted by $g_{g_x}$, by taking logs and differentiating:
\\[
log(g_k+\delta+n)=log(s_m)+log(s)+(1-\alpha)log[1+p(t)]-(1-\alpha)log[k(t)] \\
log(g_p+\delta+n)=log(1-s_m)+log(s)+(1-\alpha)log[1+p(t)]-(1-\alpha)log[p(t)]+\alpha log[k(t)]-\alpha log[p(t)] \\
\\]
\\[g_{({g_k}+\delta+n)}=\frac{\partial log(g_k)}{\partial t}=(1-\alpha)\frac{\dot p(t)}{1+p(t)}-(1-\alpha)g_k\\]
\\[g_{({g_p}+\delta+n)}=\frac{\partial log(g_p)}{\partial t}= (1-\alpha)\frac{\dot p(t)}{1+p(t)}-(1-\alpha)g_p+\alpha g_k-\alpha g_p\\]
We then impose constant growth along the balanced growth path, which means that we can equalize $g_{({g_k}+\delta+n)}$ and $g_{({g_p}+\delta+n)}$. By doing so, it is *easily* seen by reducing the equation that $g_k=g_p$. This implies that the economy converges to a long-run growth rate with $g_p ≈ g_k ≡ g$
Note that, for large $p(t)$ and large $k(t)$, we have:
\\[ (\frac{1+p(t)}{p(t)} )^{1-\alpha} ≈ 1, \quad and \quad \frac{p(t)}{k(t)}≈\frac{1+p(t)}{k(t)}:=C\\]
With these approximations, equations (1) and (2), respectively the growth rates of $k$ and $p$, are rewritten as:
(1.a) \\[g = s_m s C^{1-\alpha}-\delta-n \\]
(2.a) \\[g = (1-s_m) s [\frac{1}{C}]^\alpha-\delta-n \\]
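As a quick check of the appendix algebra (a sketch, with Table 1 values used purely for illustration), substituting $C=(1-s_m)/s_m$ into (1.a) reproduces the closed-form growth rate used in the main text:
```python
# Sketch: check that substituting C = (1 - s_m)/s_m into (1.a) reproduces
# g = s*s_m**alpha*(1 - s_m)**(1 - alpha) - delta - n.
import sympy as sm

s, s_m, alpha, delta, n = sm.symbols('s s_m alpha delta n', positive=True)
C = (1 - s_m)/s_m
g_1a = s_m*s*C**(1 - alpha) - delta - n                        # equation (1.a)
g_closed = s*s_m**alpha*(1 - s_m)**(1 - alpha) - delta - n     # closed form from the main text
subs = {s: 0.21, s_m: 0.3, alpha: 0.3, delta: 0.04, n: 0.009}  # Table 1 values (illustrative)
print(g_1a.subs(subs).evalf(), g_closed.subs(subs).evalf())    # both should print the same number
```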
## Reference
Prettner, K. (2019). A NOTE ON THE IMPLICATIONS OF AUTOMATION FOR ECONOMIC GROWTH AND THE LABOR SHARE. Macroeconomic Dynamics, 23(3), 1294-1301. doi:10.1017/S1365100517000098
## Graphs of Klaus Prettner
To be sure that our results were correct, we have reproduced his graphs with our python code.
```python
#Extra code - Copy of Klaus Prettner's graphs turned into python code.
# Allocate memory for time series
series_length=100
k = np.empty(series_length)
p = np.empty(series_length)
y = np.empty(series_length)
g_k = np.empty(series_length)
g_p = np.empty(series_length)
g_y = np.empty(series_length)
fig, axes = plt.subplots(3, 2, figsize=(12, 15))
# Trajectories with different s_m
alpha = 1/3
delta = 0.04
s = (0.21, 0.25)
s_m = 0.7
#s_m = (0.3, 0.7, 0.9)
n = 0.009
for j in range(2):
k[0] = 1
p[0] = 1
y[0] = 1
for t in range(series_length-1):
    #k[t+1] = s_m[j] * s * (1 + p[t])**(1 - alpha) * k[t]**(alpha) - delta * k[t] - n * k[t]+k[t] #Plot for different s_m values
k[t+1] = s_m * s[j] * (1 + p[t])**(1 - alpha) * k[t]**(alpha) +(1- delta) * k[t] - n * k[t]
p[t+1] = (1-s_m) * s[j] * (1 + p[t])**(1 - alpha) * k[t]**(alpha) + (1- delta) * p[t] - n * p[t]
y[t+1] = (1+p[t])**(1-alpha)*k[t]**alpha+y[t]
axes[0,0].plot(k, '-', label=rf"$\alpha = {alpha:.3f},\; s = {s[j]},\; \delta={delta}$")
axes[1,0].plot(p, '-', label=rf"$\alpha = {alpha:.3f},\; s = {s[j]},\; \delta={delta}$")
axes[2,0].plot(y, '-', label=rf"$\alpha = {alpha:.3f},\; s = {s[j]},\; \delta={delta}$")
for j in range(2):
g_k[0] = 1
g_p[0] = 1
g_y[0] = 1
for t in range(series_length-1):
g_k[t+1] = s_m * s[j] * ((1+p[t])/(k[t]))**(1-alpha) - delta - n
g_p[t+1] = (1-s_m) * s[j] * ((1+p[t])/p[t])**(1-alpha) * (k[t]/p[t])**alpha - delta - n
g_y[t+1] = (1-alpha)*g_p[t]+alpha*g_k[t]
axes[0,1].plot(g_k, '-', label=rf"$\alpha = {alpha:.3f},\; s = {s[j]},\; \delta={delta}$")
axes[1,1].plot(g_p, '-', label=rf"$\alpha = {alpha:.3f},\; s = {s[j]},\; \delta={delta}$")
axes[2,1].plot(g_y, '-', label=rf"$\alpha = {alpha:.3f},\; s = {s[j]},\; \delta={delta}$")
axes[0,0].grid(lw=0.2)
axes[1,0].grid(lw=0.2)
axes[0,1].grid(lw=0.2)
axes[1,1].grid(lw=0.2)
#ajust limmit on the y and x axis.
axes[0,0].set_ylim(0, 600)
axes[0,1].set_xlim(1,series_length)
axes[1,1].set_xlim(1,series_length)
axes[2,1].set_xlim(1,series_length)
axes[0,1].set_ylim(0.0,0.3)
axes[1,1].set_ylim(0.0,0.3)
axes[0,0].set_xlabel('time')
axes[0,0].set_ylabel('capital')
axes[1,0].set_xlabel('time')
axes[1,0].set_ylabel('automation')
axes[0,1].set_xlabel('time')
axes[0,1].set_ylabel('growth of k')
axes[1,1].set_xlabel('time')
axes[1,1].set_ylabel('growth of p')
axes[0,0].legend(loc='upper left', frameon=True, fontsize=12)
axes[1,0].legend(loc='upper left', frameon=True, fontsize=12)
print(g_k[99])
print(g_p[99])
```
```python
```
| 0d3f4f9b45a0e60f552b5a77420c38b39f75149c | 291,293 | ipynb | Jupyter Notebook | modelproject/modelproject-Final.ipynb | NumEconCopenhagen/projects-2019-ob4ever | d2027137e69e71f09a4a0fca7a597810cff08c0d | [
"MIT"
]
| null | null | null | modelproject/modelproject-Final.ipynb | NumEconCopenhagen/projects-2019-ob4ever | d2027137e69e71f09a4a0fca7a597810cff08c0d | [
"MIT"
]
| 8 | 2019-04-09T12:42:45.000Z | 2019-05-14T12:44:28.000Z | modelproject/modelproject-Final.ipynb | NumEconCopenhagen/projects-2019-ob4ever | d2027137e69e71f09a4a0fca7a597810cff08c0d | [
"MIT"
]
| null | null | null | 288.981151 | 94,128 | 0.911385 | true | 7,151 | Qwen/Qwen-72B | 1. YES
2. YES | 0.746139 | 0.795658 | 0.593672 | __label__eng_Latn | 0.974298 | 0.217628 |
## Interactive Variogram Calculation Demonstration
### Michael Pyrcz, Associate Professor, University of Texas at Austin
##### [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1)
### The Interactive Workflow
Here's an interactive workflow for calculating directional experimental variograms in 2D.
* setting the variogram calculation parameters for identifying spatial data pairs
This approach is essential for quantifying spatial continuity with sparsely sampled, irregular spatial data.
I have more comprehensive workflows for variogram calculation:
* [Experimental Variogram Calculation in Python with GeostatsPy](https://github.com/GeostatsGuy/PythonNumericalDemos/blob/master/GeostatsPy_variogram_calculation.ipynb)
* [Determination of Major and Minor Spatial Continuity Directions in Python with GeostatsPy](https://github.com/GeostatsGuy/PythonNumericalDemos/blob/master/GeostatsPy_spatial_continuity_directions.ipynb)
#### Spatial Continuity
**Spatial Continuity** is the correlation between values over distance.
* No spatial continuity – no correlation between values over distance, random values at each location in space regardless of separation distance.
* Homogeneous phenomena have perfect spatial continuity; since all values are the same (or very similar), they are correlated.
We need a statistic to quantify spatial continuity! A convenient method is the Semivariogram.
#### The Semivariogram
Function of difference over distance.
* The expected (average) squared difference between values separated by a lag distance vector (distance and direction), $h$:
\begin{equation}
\gamma(\bf{h}) = \frac{1}{2 N(\bf{h})} \sum^{N(\bf{h})}_{\alpha=1} (z(\bf{u}_\alpha) - z(\bf{u}_\alpha + \bf{h}))^2
\end{equation}
where $z(\bf{u}_\alpha)$ and $z(\bf{u}_\alpha + \bf{h})$ are the spatial sample values at tail and head locations of the lag vector respectively.
* Calculated over a suite of lag distances to obtain a continuous function.
* the $\frac{1}{2}$ term converts a variogram into a semivariogram, but in practice the term variogram is used instead of semivariogram.
* We prefer the semivariogram because it relates directly to the covariance function, $C_x(\bf{h})$ and univariate variance, $\sigma^2_x$:
\begin{equation}
C_x(\bf{h}) = \sigma^2_x - \gamma(\bf{h})
\end{equation}
Note the correlogram is related to the covariance function as:
\begin{equation}
\rho_x(\bf{h}) = \frac{C_x(\bf{h})}{\sigma^2_x}
\end{equation}
The correlogram provides a function of the $\bf{h}-\bf{h}$ scatter plot correlation vs. lag offset $\bf{h}$.
\begin{equation}
-1.0 \le \rho_x(\bf{h}) \le 1.0
\end{equation}
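To make the definition concrete, here is a small brute-force sketch for a 1D, regularly spaced series (this is only an illustration of the formula; the irregular 2D data below are handled with GeostatsPy's `gamv` function):
```python
# Brute-force experimental semivariogram for a 1D, regularly spaced series (sketch only).
import numpy as np

def semivariogram_1d(z, nlag):
    gamma = np.zeros(nlag)
    for h in range(1, nlag + 1):
        d = z[h:] - z[:-h]                  # all pairs separated by lag h
        gamma[h - 1] = 0.5*np.mean(d**2)    # one half the average squared difference
    return gamma

z = np.random.normal(size=500)              # uncorrelated values -> gamma(h) ~ variance = 1
print(np.round(semivariogram_1d(z, 5), 2))
```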
#### Variogram Observations
The following are common observations for variograms that should assist with their practical use.
##### Observation \#1 - As distance increases, variability increase (in general).
This is common since in general, over greater distance offsets, there is often more difference between the head and tail samples.
In some cases, such as with the spatial cyclicity of the hole effect variogram model, the variogram may have a negative slope over some lag distance intervals.
Negative slopes at lag distances greater than half the data extent are often caused by too few pairs for a reliable variogram calculation.
##### Observation \#2 - Calculated over all possible pairs separated by lag vector, $\bf{𝐡}$.
We scan through the entire data set, searching for all possible pair combinations with all other data. We then calculate the variogram as one half the expectation of squared difference between all pairs.
More pairs results in a more reliable measure.
##### Observation \#3 - Need to plot the sill to know the degree of correlation.
**Sill** is the variance, $\sigma^2_x$
Given stationarity of the variance, $\sigma^2_x$, and variogram $\gamma(\bf{h})$:
we can define the covariance function:
\begin{equation}
C_x(\bf{h}) = \sigma^2_x - \gamma(\bf{h})
\end{equation}
The covariance measure is a measure of similarity over distance (the mirror image of the variogram as shown by the equation above).
Given a standardized distribution $\sigma^2_x = 1.0$, the covariance, $C_x(\bf{h})$, is equal to the correlogram, $\rho_x(\bf{h})$:
\begin{equation}
\rho_x(\bf{h}) = \sigma^2_x - \gamma(\bf{h})
\end{equation}
##### Observation \#4 - The lag distance at which the variogram reaches the sill is know as the range.
At the range, knowing the data value at the tail location provides no information about a value at the head location of the lag distance vector.
##### Observation \#5 - The nugget effect, a discontinuity at the origin
Sometimes there is a discontinuity in the variogram at distances less than the minimum data spacing. This is known as **nugget effect**.
The ratio nugget / sill is known as the relative nugget effect (%). It is modeled as a discontinuity with no correlation structure at lags $h \gt \epsilon$, an infinitesimal lag distance, and perfect correlation at $\bf{h} = 0$.
Use caution when including a nugget effect in the variogram model, as measurement error and mixing of populations can cause an apparent nugget effect.
This exercise demonstrates the semivariogram calculation with GeostatsPy. The steps include:
1. generate a 2D model with sequential Gaussian simulation
2. sample from the simulation
3. calculate and visualize experimental semivariograms
#### Variogram Calculation Parameters
The variogram calculation parameters include:
* **azimuth** is the azimuth of the lag vector
* **azimuth tolerance** is the maximum allowable departure from the azimuth (isotropic variograms are calculated with an azimuth tolerance of 90.0)
* **unit lag distance** is the size of the bins in lag distance, usually set to the minimum data spacing
* **lag distance tolerance** - the allowable tolerance in lag distance, commonly set to 50% of the unit lag distance for additional smoothing
* **number of lags** - set based on the spatial extent of the dataset, we can typically calculate reliable variograms up to 1/2 the extent of the dataset
* **bandwidth** is the maximum offset allowable from the lag vector
#### Objective
In the PGE 383: Stochastic Subsurface Modeling class I want to provide hands-on experience with building subsurface modeling workflows. Python provides an excellent vehicle to accomplish this. I have coded a package called GeostatsPy with GSLIB: Geostatistical Library (Deutsch and Journel, 1998) functionality that provides basic building blocks for building subsurface modeling workflows.
The objective is to remove the hurdles of subsurface modeling workflow construction by providing building blocks and sufficient examples. This is not a coding class per se, but we need the ability to 'script' workflows working with numerical methods.
#### Getting Started
Here's the steps to get setup in Python with the GeostatsPy package:
1. Install Anaconda 3 on your machine (https://www.anaconda.com/download/).
2. From Anaconda Navigator (within Anaconda3 group), go to the environment tab, click on base (root) green arrow and open a terminal.
3. In the terminal type: pip install geostatspy.
4. Open Jupyter and in the top block get started by copy and pasting the code block below from this Jupyter Notebook to start using the geostatspy functionality.
You will need to copy the data file to your working directory. They are available here:
* Tabular data - sample_data.csv at https://git.io/fh4gm.
There are examples below with these functions. You can go here to see a list of the available functions, https://git.io/fh4eX, other example workflows and source code.
#### Load the required libraries
The following code loads the required libraries.
```python
import geostatspy.GSLIB as GSLIB # GSLIB utilies, visualization and wrapper
import geostatspy.geostats as geostats # GSLIB methods convert to Python
```
We will also need some standard packages. These should have been installed with Anaconda 3.
```python
%matplotlib inline
import os # to set current working directory
import sys # supress output to screen for interactive variogram modeling
import io
import numpy as np # arrays and matrix math
import pandas as pd # DataFrames
import matplotlib.pyplot as plt # plotting
from matplotlib.pyplot import cm # color maps
from ipywidgets import interactive # widgets and interactivity
from ipywidgets import widgets
from ipywidgets import Layout
from ipywidgets import Label
from ipywidgets import VBox, HBox
```
If you get a package import error, you may have to first install some of these packages. This can usually be accomplished by opening up a command window on Windows and then typing 'python -m pip install [package-name]'. More assistance is available with the respective package docs.
#### Set the working directory
I always like to do this so I don't lose files and to simplify subsequent read and writes (avoid including the full address each time). Also, in this case make sure to place the required (see above) GSLIB executables in this directory or a location identified in the environmental variable *Path*.
```python
#os.chdir("d:/PGE383") # set the working directory
```
#### Loading Tabular Data
Here's the command to load our comma delimited data file in to a Pandas' DataFrame object.
```python
#df = pd.read_csv("sample_data_MV_biased.csv") # read a .csv file in as a DataFrame
df = pd.read_csv("https://raw.githubusercontent.com/GeostatsGuy/GeoDataSets/master/sample_data_MV_biased.csv")
#print(df.iloc[0:5,:]) # display first 4 samples in the table as a preview
df.head() # we could also use this command for a table preview
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Unnamed: 0</th>
<th>X</th>
<th>Y</th>
<th>Facies</th>
<th>Porosity</th>
<th>Perm</th>
<th>AI</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0</td>
<td>100.0</td>
<td>900.0</td>
<td>0.0</td>
<td>0.101319</td>
<td>1.996868</td>
<td>5590.417154</td>
</tr>
<tr>
<th>1</th>
<td>1</td>
<td>100.0</td>
<td>800.0</td>
<td>1.0</td>
<td>0.147676</td>
<td>10.711789</td>
<td>3470.845666</td>
</tr>
<tr>
<th>2</th>
<td>2</td>
<td>100.0</td>
<td>700.0</td>
<td>1.0</td>
<td>0.145912</td>
<td>17.818143</td>
<td>3586.988513</td>
</tr>
<tr>
<th>3</th>
<td>3</td>
<td>100.0</td>
<td>600.0</td>
<td>1.0</td>
<td>0.186167</td>
<td>217.109365</td>
<td>3732.114787</td>
</tr>
<tr>
<th>4</th>
<td>4</td>
<td>100.0</td>
<td>500.0</td>
<td>1.0</td>
<td>0.146088</td>
<td>16.717367</td>
<td>2534.551236</td>
</tr>
</tbody>
</table>
</div>
We will work with all facies pooled together. I wanted to simplify this workflow and focus more on spatial continuity direction detection. Finally, by not using facies we do have more samples to support our statistical inference. Most often facies are essential in the subsurface model. Don't worry we will check if this is reasonable in a bit.
You are welcome to repeat this workflow on a by-facies basis. The following code could be used to build DataFrames ('df_sand' and 'df_shale') for each facies.
```p
df_sand = pd.DataFrame.copy(df[df['Facies'] == 1]).reset_index() # copy only 'Facies' = sand records
df_shale = pd.DataFrame.copy(df[df['Facies'] == 0]).reset_index() # copy only 'Facies' = shale records
```
Let's look at summary statistics for all facies combined:
```python
df.describe().transpose() # summary table of sand only DataFrame statistics
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>count</th>
<th>mean</th>
<th>std</th>
<th>min</th>
<th>25%</th>
<th>50%</th>
<th>75%</th>
<th>max</th>
</tr>
</thead>
<tbody>
<tr>
<th>Unnamed: 0</th>
<td>368.0</td>
<td>293.260870</td>
<td>169.058258</td>
<td>0.000000</td>
<td>150.500000</td>
<td>296.000000</td>
<td>439.500000</td>
<td>586.000000</td>
</tr>
<tr>
<th>X</th>
<td>368.0</td>
<td>499.565217</td>
<td>289.770794</td>
<td>0.000000</td>
<td>240.000000</td>
<td>500.000000</td>
<td>762.500000</td>
<td>990.000000</td>
</tr>
<tr>
<th>Y</th>
<td>368.0</td>
<td>520.644022</td>
<td>277.412187</td>
<td>9.000000</td>
<td>269.000000</td>
<td>539.000000</td>
<td>769.000000</td>
<td>999.000000</td>
</tr>
<tr>
<th>Facies</th>
<td>368.0</td>
<td>0.597826</td>
<td>0.491004</td>
<td>0.000000</td>
<td>0.000000</td>
<td>1.000000</td>
<td>1.000000</td>
<td>1.000000</td>
</tr>
<tr>
<th>Porosity</th>
<td>368.0</td>
<td>0.127026</td>
<td>0.030642</td>
<td>0.041122</td>
<td>0.103412</td>
<td>0.125842</td>
<td>0.148623</td>
<td>0.210258</td>
</tr>
<tr>
<th>Perm</th>
<td>368.0</td>
<td>85.617362</td>
<td>228.362654</td>
<td>0.094627</td>
<td>2.297348</td>
<td>10.377292</td>
<td>50.581288</td>
<td>1991.097723</td>
</tr>
<tr>
<th>AI</th>
<td>368.0</td>
<td>4791.736646</td>
<td>974.560569</td>
<td>1981.177309</td>
<td>4110.728374</td>
<td>4713.325533</td>
<td>5464.043562</td>
<td>7561.250336</td>
</tr>
</tbody>
</table>
</div>
Let's transform the porosity and permeability data to standard normal (mean = 0.0, standard deviation = 1.0, Gaussian shape). This is required for sequential Gaussian simulation (a common target for our variogram models) and the Gaussian transform assists with outliers and provides more interpretable variograms.
Let's look at the inputs for the GeostatsPy nscore program. Note the outputs include an ndarray with the transformed values (in the same order as the input data in DataFrame 'df' and column 'vcol'), and the transformation table in original values and also in normal score values.
```python
geostats.nscore # see the input parameters required by the nscore function
```
<function geostatspy.geostats.nscore(df, vcol, wcol=None, ismooth=False, dfsmooth=None, smcol=0, smwcol=0)>
The following command will transform the Porosity and Permeabilty to standard normal.
```python
#Transform to Gaussian by Facies
df['NPor'], tvPor, tnsPor = geostats.nscore(df, 'Porosity') # nscore transform for all facies porosity
df['NPerm'], tvPermSand, tnsPermSand = geostats.nscore(df, 'Perm') # nscore transform for all facies permeability
```
Let's look at the updated DataFrame to make sure that we now have the normal score porosity and permeability.
```python
df.head() # preview sand DataFrame with nscore transforms
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Unnamed: 0</th>
<th>X</th>
<th>Y</th>
<th>Facies</th>
<th>Porosity</th>
<th>Perm</th>
<th>AI</th>
<th>NPor</th>
<th>NPerm</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0</td>
<td>100.0</td>
<td>900.0</td>
<td>0.0</td>
<td>0.101319</td>
<td>1.996868</td>
<td>5590.417154</td>
<td>-0.749088</td>
<td>-0.767247</td>
</tr>
<tr>
<th>1</th>
<td>1</td>
<td>100.0</td>
<td>800.0</td>
<td>1.0</td>
<td>0.147676</td>
<td>10.711789</td>
<td>3470.845666</td>
<td>0.653263</td>
<td>0.017030</td>
</tr>
<tr>
<th>2</th>
<td>2</td>
<td>100.0</td>
<td>700.0</td>
<td>1.0</td>
<td>0.145912</td>
<td>17.818143</td>
<td>3586.988513</td>
<td>0.611663</td>
<td>0.336607</td>
</tr>
<tr>
<th>3</th>
<td>3</td>
<td>100.0</td>
<td>600.0</td>
<td>1.0</td>
<td>0.186167</td>
<td>217.109365</td>
<td>3732.114787</td>
<td>1.993601</td>
<td>1.211919</td>
</tr>
<tr>
<th>4</th>
<td>4</td>
<td>100.0</td>
<td>500.0</td>
<td>1.0</td>
<td>0.146088</td>
<td>16.717367</td>
<td>2534.551236</td>
<td>0.628172</td>
<td>0.279461</td>
</tr>
</tbody>
</table>
</div>
That looks good! One way to check is to see if the relative magnitudes of the normal score transformed values match the original values, e.g. that the normal score transform of 0.10 porosity is less than the normal score transform of 0.14 porosity. Also, the normal score transform of values close to the mean value should be close to 0.0.
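A quick check (a small addition, not part of the original workflow) is to confirm that the transform preserves the ranks of the data and maps values near the median to roughly 0.0:
```python
# Small check: the normal score transform should be rank preserving,
# and values near the median should map close to 0.0.
print(df[['Porosity', 'NPor']].corr(method='spearman'))   # rank correlation ~ 1.0

imed = (df['Porosity'] - df['Porosity'].median()).abs().idxmin()
print(df.loc[imed, ['Porosity', 'NPor']])                  # NPor should be near 0.0
```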
Let's also check the original and transformed porosity and permeability distributions.
```python
plt.subplot(221) # plot original sand and shale porosity histograms
plt.hist(df['Porosity'], facecolor='red',bins=np.linspace(0.0,0.25,1000),histtype="stepfilled",alpha=0.2,density=True,cumulative=True,edgecolor='black',label='Original')
plt.xlim([0.05,0.25]); plt.ylim([0,1.0])
plt.xlabel('Porosity (fraction)'); plt.ylabel('Frequency'); plt.title('Porosity')
plt.legend(loc='upper left')
plt.grid(True)
plt.subplot(222)
plt.hist(df['NPor'], facecolor='blue',bins=np.linspace(-3.0,3.0,1000),histtype="stepfilled",alpha=0.2,density=True,cumulative=True,edgecolor='black',label = 'Trans')
plt.xlim([-3.0,3.0]); plt.ylim([0,1.0])
plt.xlabel('Porosity (fraction)'); plt.ylabel('Frequency'); plt.title('Nscore Porosity')
plt.legend(loc='upper left')
plt.grid(True)
plt.subplot(223) # plot nscore transformed sand and shale histograms
plt.hist(df['Perm'], facecolor='red',bins=np.linspace(0.0,1000.0,100000),histtype="stepfilled",alpha=0.2,density=True,cumulative=True,edgecolor='black',label='Original')
plt.xlim([0.0,1000.0]); plt.ylim([0,1.0])
plt.xlabel('Permeability (mD)'); plt.ylabel('Frequency'); plt.title('Permeability')
plt.legend(loc='upper left')
plt.grid(True)
plt.subplot(224) # plot nscore transformed sand and shale histograms
plt.hist(df['NPerm'], facecolor='blue',bins=np.linspace(-3.0,3.0,100000),histtype="stepfilled",alpha=0.2,density=True,cumulative=True,edgecolor='black',label = 'Trans')
plt.xlim([-3.0,3.0]); plt.ylim([0,1.0])
plt.xlabel('Permeability (mD)'); plt.ylabel('Frequency'); plt.title('Nscore Permeability')
plt.legend(loc='upper left')
plt.grid(True)
plt.subplots_adjust(left=0.0, bottom=0.0, right=2.0, top=2.2, wspace=0.2, hspace=0.3)
plt.show()
```
The normal score transform has correctly transformed the porosity and permeability to standard normal.
#### Inspection of Posted Data
Data visualization is very useful to detect patterns. Our brains are very good at pattern detection. I promote quantitative methods and recognize issues with cognitive bias, but it is important to recognize the value of expert interpretation based on data visualization.
* This data visualization will also be important to assist with parameter selection for the quantitative methods later.
Let's plot the location maps of normal score transforms of porosity and permeability for all facies. We will also include a cross plot of the nscore permeability vs. porosity colored by facies to aid with comparison in spatial features between the porosity and permeability data.
```python
cmap = plt.cm.plasma # set the color map
plt.subplot(131) # location map of normal score transform of porosity
GSLIB.locmap_st(df,'X','Y','NPor',0,1000,0,1000,-3,3,'Nscore Porosity - All Facies','X (m)','Y (m)','Nscore Porosity',cmap)
plt.subplot(132) # location map of normal score transform of permeability
GSLIB.locmap_st(df,'X','Y','NPerm',0,1000,0,1000,-3,3,'Nscore Permeability - All Facies','X (m)','Y (m)','Nscore Permeability',cmap)
plt.subplot(133)
facies = df['Facies'].values +0.01 # normal score porosity / permeability scatter plot color coded by facies
plt.scatter(df['NPor'],df['NPerm'],c = facies,edgecolor = 'black',cmap = plt.cm.inferno)
#plt.plot([-3,3],[-3,3],color = 'black')
plt.xlabel(r'Nscore Porosity')
plt.ylabel(r'Nscore Permeability')
plt.title('Nscore Permeability vs. Porosity')
plt.xlim([-3,3])
plt.ylim([-3,3])
plt.grid(True)
plt.subplots_adjust(left=0.0, bottom=0.0, right=2.0, top=0.8, wspace=0.5, hspace=0.3)
plt.show()
```
What do you see? Here's my observations:
* there is a high degree of spatial agreement between porosity and permeability, this is supported by the high correlation evident in the cross plot.
* there are no discontinuities that could suggest that facies represent a distinct change; rather the porosity and permeability seem continuous and the assigned facies are a truncation of their continuous behavior, so we are doing 'ok' with no facies
* suspect a 045 azimuth major direction of continuity (up - right)
* there may be cycles in the 135 azimuth
* there will likely not be a nugget effect, but there is a hint of some short scale discontinuity?
**Do you agree?** If you have different observations, drop me a line at mpyrcz@austin.utexas.edu and I'll add to this lesson with credit.
#### Experimental Variograms
We can use the location maps to help determine good variogram calculation parameters. For example:
```p
tmin = -9999.; tmax = 9999.;
lag_dist = 100.0; lag_tol = 50.0; nlag = 7; bandh = 9999.9; azi = azi; atol = 22.5; isill = 1
```
* **tmin**, **tmax** are trimming limits - set to have no impact, no need to filter the data
* **lag_dist**, **lag_tol** are the lag distance and lag tolerance - set based on the common data spacing (100m), with the tolerance set to 50% of the lag distance for additional smoothing
* **nlag** is the number of lags - set to extend just past 50% of the data extent
* **bandh** is the horizontal band width - set to have no effect
* **azi** is the azimuth - it has no effect when atol, the azimuth tolerance, is set to 90.0
* **isill** is a boolean to standardize the distribution to a variance of 1 - it has no effect since the previous nscore transform sets the variance to 1.0
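Before wrapping the calculation in an interactive dashboard, the same parameters can be passed directly to `geostats.gamv`; here is a sketch for a single direction (the 045 azimuth is just an example choice, not a conclusion from the data):
```python
# Sketch: one directional experimental variogram computed directly with gamv.
tmin = -9999.; tmax = 9999.
lag_dist = 100.0; lag_tol = 50.0; nlag = 7; azi = 45.0; atol = 22.5; bandh = 9999.9

lags, gammas, npps = geostats.gamv(df,"X","Y","NPor",tmin,tmax,lag_dist,lag_tol,nlag,azi,atol,bandh,isill=1.0)

plt.scatter(lags, gammas, s=npps*0.1, color='black')   # marker size ~ number of pairs
plt.plot([0,800],[1.0,1.0], color='black')             # the sill (variance = 1 after nscore)
plt.xlabel(r'Lag Distance $\bf(h)$, (m)'); plt.ylabel(r'$\gamma \bf(h)$')
plt.title('Directional NSCORE Porosity Variogram - Azi 045')
plt.show()
```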
#### Dashboard for Interactive Variogram Calculation
Below we make a dashboard with the ipywidgets and matplotlib Python packages for calculating experimental variograms.
```python
# interactive calculation of the experimental variogram
l = widgets.Text(value=' Variogram Calculation Interactive Demonstration, Michael Pyrcz, Associate Professor, The University of Texas at Austin',layout=Layout(width='950px', height='30px'))
lag = widgets.FloatSlider(min = 20, max = 500, value = 100, step = 10, description = 'lag',orientation='vertical',layout=Layout(width='90px', height='200px'))
lag.style.handle_color = 'gray'
lag_tol = widgets.FloatSlider(min = 20, max = 500, value = 50, step = 10, description = 'lag tolerance',orientation='vertical',layout=Layout(width='90px', height='200px'))
lag_tol.style.handle_color = 'gray'
nlag = widgets.IntSlider(min = 1, max = 100, value = 10, step = 1, description = 'number of lags',orientation='vertical',layout=Layout(width='90px', height='200px'))
nlag.style.handle_color = 'gray'
azi = widgets.FloatSlider(min = 0, max = 360, value = 0, step = 5, description = 'azimuth',orientation='vertical',layout=Layout(width='90px', height='200px'))
azi.style.handle_color = 'gray'
azi_tol = widgets.FloatSlider(min = 10, max = 90, value = 20, step = 5, description = 'azimuth tolerance',orientation='vertical',layout=Layout(width='120px', height='200px'))
azi_tol.style.handle_color = 'gray'
bandwidth = widgets.FloatSlider(min = 100, max = 2000, value = 2000, step = 100, description = 'bandwidth',orientation='vertical',layout=Layout(width='90px', height='200px'))
azi_tol.style.handle_color = 'gray'
ui1 = widgets.HBox([lag,lag_tol,nlag,azi,azi_tol,bandwidth],) # basic widget formatting
ui = widgets.VBox([l,ui1],)
def f_make(lag,lag_tol,nlag,azi,azi_tol,bandwidth): # function to take parameters, calculate variogram and plot
# text_trap = io.StringIO()
# sys.stdout = text_trap
tmin = -9999.9; tmax = 9999.9
lags, gammas, npps = geostats.gamv(df,"X","Y","NPor",tmin,tmax,lag,lag_tol,nlag,azi,azi_tol,bandwidth,isill=1.0)
plt.subplot(111) # plot experimental variogram
plt.scatter(lags,gammas,color = 'black',s = npps*0.1,label = 'Azimuth ' +str(azi))
plt.plot([0,2000],[1.0,1.0],color = 'black')
plt.xlabel(r'Lag Distance $\bf(h)$, (m)')
plt.ylabel(r'$\gamma \bf(h)$')
plt.title('Directional NSCORE Porosity Variogram - Azi ' + str(azi))
plt.xlim([0,1000]); plt.ylim([0,1.8])
plt.grid(True)
plt.subplots_adjust(left=0.0, bottom=0.0, right=1.5, top=1.0, wspace=0.3, hspace=0.3)
plt.show()
# connect the function to make the samples and plot to the widgets
interactive_plot = widgets.interactive_output(f_make, {'lag':lag,'lag_tol':lag_tol,'nlag':nlag,'azi':azi,'azi_tol':azi_tol,'bandwidth':bandwidth})
interactive_plot.clear_output(wait = True) # reduce flickering by delaying plot updating
```
### Interactive Variogram Calculation Demonstration
* calculate omnidirectional and directional experimental variograms
#### Michael Pyrcz, Associate Professor, University of Texas at Austin
##### [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1) | [GeostatsPy](https://github.com/GeostatsGuy/GeostatsPy)
### The Problem
Calculate interpretable experimental variograms for sparse, irregularly-spaced spatial data.
* **azimuth** is the azimuth of the lag vector
* **azimuth tolerance** is the maximum allowable departure from the azimuth
* **unit lag distance** is the size of the bins in lag distance
* **lag distance tolerance** - the allowable tolerance in lag distance
* **number of lags** - number of lags in the experimental variogram
* **bandwidth** - maximum departure from the lag vector
```python
display(ui, interactive_plot) # display the interactive plot
```
VBox(children=(Text(value=' Variogram Calculation Interactive Demonstration, Mich…
Output(outputs=({'output_type': 'display_data', 'data': {'text/plain': '<Figure size 432x288 with 1 Axes>', 'i…
#### Comments
This was a basic demonstration of variogram calculation for spatial continuity analysis. Much more could be done; I have other demonstrations on the basics of working with DataFrames, ndarrays, univariate statistics, plotting data, declustering, data transformations and many other workflows available at https://github.com/GeostatsGuy/PythonNumericalDemos and https://github.com/GeostatsGuy/GeostatsPy.
#### The Author:
### Michael Pyrcz, Associate Professor, University of Texas at Austin
*Novel Data Analytics, Geostatistics and Machine Learning Subsurface Solutions*
With over 17 years of experience in subsurface consulting, research and development, Michael has returned to academia driven by his passion for teaching and enthusiasm for enhancing engineers' and geoscientists' impact in subsurface resource development.
For more about Michael check out these links:
#### [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1)
#### Want to Work Together?
I hope this content is helpful to those that want to learn more about subsurface modeling, data analytics and machine learning. Students and working professionals are welcome to participate.
* Want to invite me to visit your company for training, mentoring, project review, workflow design and / or consulting? I'd be happy to drop by and work with you!
* Interested in partnering, supporting my graduate student research or my Subsurface Data Analytics and Machine Learning consortium (co-PIs including Profs. Foster, Torres-Verdin and van Oort)? My research combines data analytics, stochastic modeling and machine learning theory with practice to develop novel methods and workflows to add value. We are solving challenging subsurface problems!
* I can be reached at mpyrcz@austin.utexas.edu.
I'm always happy to discuss,
*Michael*
Michael Pyrcz, Ph.D., P.Eng. Associate Professor The Hildebrand Department of Petroleum and Geosystems Engineering, Bureau of Economic Geology, The Jackson School of Geosciences, The University of Texas at Austin
#### More Resources Available at: [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1)
```python
```
| 45168b11a9f94894cf4be0231b95f390af138f6c | 298,185 | ipynb | Jupyter Notebook | Interactive_Variogram_Calculation.ipynb | caf3676/PythonNumericalDemos | 206a3d876f79e137af88b85ba98aff171e8d8e06 | [
"MIT"
]
| 403 | 2017-10-15T02:07:38.000Z | 2022-03-30T15:27:14.000Z | Interactive_Variogram_Calculation.ipynb | caf3676/PythonNumericalDemos | 206a3d876f79e137af88b85ba98aff171e8d8e06 | [
"MIT"
]
| 4 | 2019-08-21T10:35:09.000Z | 2021-02-04T04:57:13.000Z | Interactive_Variogram_Calculation.ipynb | caf3676/PythonNumericalDemos | 206a3d876f79e137af88b85ba98aff171e8d8e06 | [
"MIT"
]
| 276 | 2018-06-27T11:20:30.000Z | 2022-03-25T16:04:24.000Z | 284.799427 | 195,280 | 0.902996 | true | 8,789 | Qwen/Qwen-72B | 1. YES
2. YES | 0.749087 | 0.740174 | 0.554455 | __label__eng_Latn | 0.914899 | 0.126515 |
# Introduction to `esys.escript`
## Outline
This unit gives an introduction to solving partial differential equations (PDEs) in python. This section assumes that you have a basic understanding of how to work with python.
We are particularly looking at PDEs as they arise in geophysical problems. Of course it would take some work to build appropriate PDE solvers from scratch. To save this effort we use the python PDE solver module `esys.escript`. This section will give an introduction into working with `esys.escript` to solve 2D problems. To illustrate the use we will look at the calculation of gravity anomaly fields from subsurface density anomalies. The following section will then present some geophysical applications of PDEs. The third section will discuss the use of `esys.escript` to solve seismic wave equations in time and in frequency domain.
We first present some details of the PDEs for modeling gravity field anomalies before we start to discuss how to use `esys.escript` to solve this PDE.
## Useful links:
- [esys.escript home page](https://launchpad.net/escript-finley)
- [Researchgate](https://www.researchgate.net/project/esys-escript)
- [user's guide](https://launchpad.net/escript-finley/3.0+/5.4/+download/docs.zip)
- [API documentation](https://esys-escript.readthedocs.io/en/latest/index.html)
## An example problem: Gravity Field Anomalies
The gravitational field $\mathbf{g}$ (also called gravitational acceleration) is a vector field defining the gravitational force experienced by a particle as its mass multiplied by the gravitational field at that point.
In Cartesian coordinates the gravitational field $\mathbf{g}$ is expressed in the form
\begin{equation}
\mathbf{g}=(g_x, g_y, g_z)
\end{equation}
The vector $\mathbf{g}$ needs to fulfill Gauss's law, which is a generalization of Newton's law. Gauss's law is stated in the following way:
\begin{equation}\label{EQGAUSSLAW}
\frac{\partial g_x}{\partial x} + \frac{\partial g_y}{\partial y} + \frac{\partial g_z}{\partial z} = - 4\pi G\rho
\end{equation}
where $G=6.67 \cdot 10^{-11} \frac{m^3}{kg s^2}$ is the gravitational constant and $\rho$ is the density distribution. Gauss's law can also be stated using the divergence operator $\mathbf{\nabla}^t$:
\begin{equation}
\mathbf{\nabla}^t \; \mathbf {g} =-4\pi G\rho
\end{equation}
The gravitational field $\mathbf{g}$ is obtained from its scalar potential $U$
using the *Grad* operator $\mathbf{\nabla}$:
\begin{equation}
\mathbf{g} = -
\mathbf{\nabla} U
\end{equation}
with the gravity accelerations
\begin{equation}\label{EQGRADRULE}
g_x = - \frac{\partial U}{\partial x}, g_y = - \frac{\partial U}{ \partial y} \mbox{ and } g_z = - \frac{\partial U}{\partial z},
\end{equation}
If $\rho$ in \eqref{EQGAUSSLAW} is selected as a density anomaly, i.e. as the deviation from a constant background
density, then Gauss's law \eqref{EQGAUSSLAW} and the scalar potential definition \eqref{EQGRADRULE}
define a PDE for the potential whose gradient gives the gravity field anomaly $\mathbf{g}$
due to the density anomaly $\rho$. To model field observations we are interested in the
vertical component $g_z$ of $\mathbf{g}$.
## The PDE template
In `esys.escript` the PDE to be solved is defined through a generic PDE template in $x_0x_1$-coordinates, which is provided through the `LinearSinglePDE` class. The template fits nicely with the problem of finding the gravity potential. We will work in the $x_0=x$ and $x_1=z$ coordinate system.
First we write the `LinearSinglePDE` template down in an abstract formulation: when $u$ is the unknown
we define the so-called `flux` vector $\mathbf{F}$, which is in essence the negative gradient of the solution
times some matrix $\mathbf{A}$ plus some vector $\mathbf{X}$:
\begin{equation} \label{EQFLUX}
\mathbf{F} = - \mathbf{A} \mathbf{\nabla} u +\mathbf{X}
\end{equation}
Ignoring the matrix $\mathbf{A}$ and setting $\mathbf{X}=0$
this already looks like the scalar potential definition \eqref{EQGRADRULE} when $u=U$ is the gravity potential and
$\mathbf{F}=\mathbf{g}$ is the gravity acceleration.
The flux vector $\mathbf{F}$ needs to fulfill the conservation equation :
\begin{equation}\label{EQCONSERVATION}
\mathbf{\nabla}^t \; \mathbf{F} + D \; u = Y
\end{equation}
where $D$ is a scalar and $Y$ is the right hand side. We can easily identify Gauss's law \eqref{EQGAUSSLAW}
when we choose $D=0$ and $Y=- 4\pi G\rho$.
Before we set this up in python we look at these equations in a bit more detail. For the 2D case the flux definition \eqref{EQFLUX} reads as
\begin{equation}\label{EQFLUX2}
\mathbf{F} =
\begin{bmatrix}
F_0 \\
F_1
\end{bmatrix}
= -
\begin{bmatrix}
A_{00} \frac{\partial u}{\partial x_0} & + & A_{01} \frac{\partial u}{\partial x_1}\\
A_{10} \frac{\partial u}{\partial x_0} & + & A_{11} \frac{\partial u}{\partial x_1}
\end{bmatrix}
\end{equation}
Comparison with the grad rule \eqref{EQGRADRULE} shows that we need to choose
\begin{equation}
\begin{bmatrix}
A_{00} & A_{01} \\
A_{10} & A_{11}
\end{bmatrix}=
\begin{bmatrix}
1 & 0 \\
0 & 1
\end{bmatrix}
\end{equation}
To define the PDE we want to solve we need to set the coefficients of the PDE template
through the `LinearSinglePDE` instance. Getting the solution $u$ (in fact a numerical approximation is calculated)
is then as easy as calling the `getSolution` method.
Before we can define the PDE we need to set up the region over which we would like to solve the PDE. In the
terminology of `esys.escript` this is called a `Domain`.
## How to create a domain
The first step to set-up the problem is to define the region over which the problem is solved. Here we use
a rectangular grid of `NEx` elements in the horizontal x-direction and `NEz` elements in the vertical z-direction.
We use a grid line spacing of `dx`.
```python
#%matplotlib notebook
```
```python
NEx=100 # number of cells
NEz=100
dx=10. # in meter [m] grid spacing
Lx=dx*NEx
Lz=dx*NEz
print("domain extension = %s x %s"%(Lx,Lz))
print("grid = %s x %s"%(NEx,NEz))
```
    domain extension = 1000.0 x 1000.0
grid = 100 x 100
We use the package `finley` which is part of the `esys.escript` distribution to set up the domain:
```python
from esys.escript import *
from esys.finley import Rectangle
domain=Rectangle(l0=Lx, l1=Lz, n0=NEx, n1=NEz)
print(type(domain))
```
<class 'esys.finley.finleycpp.FinleyDomain'>
**Note** the following rules for `Rectangle`:
- the axes are labeled `x0` and `x1`
- the lower, left corner has the coordinates `(0.,0.)`.
Use `Brick` from `esys.finley` for 3D domains.
There are other domain packages available:
- `esys.finley` - general FEM solver; also supports unstructured meshes and contact elements
- `esys.ripley` - special solver for rectangular grids
- `esys.speckley` - spectral element solver for wave problems (will be discussed later)
- `esys.dudley` - FEM solver for tetrahedral and triangular meshes
## Setting up and solving a PDE
The first step to set up a PDE is to create an instance of the `LinearSinglePDE` class.
Here we call this instance `model` and attach it to the domain `domain` we have already created:
```python
from esys.escript.linearPDEs import LinearSinglePDE
model=LinearSinglePDE(domain)
print(model)
```
<LinearPDE 140375848399536>
Now we need to set the coefficients $\mathbf{A}$ and $Y$. `esys-escript` will automatically assume that the other coefficients $\mathbf{X}$ and $D$ are zero. Let's start with coefficient $\mathbf{A}$:
Recall that for the Gauss' Law $\mathbf{A}$ is the identity matrix. `esys-escript`
provides the convenience function `identityTensor` that sets up the identity matrix.
Its return value can directly be passed on to `model` as coefficient `A`:
```python
model.setValue(A=identityTensor(domain))
```
**Note:**
`identityTensor` returns a $d \times d$ matrix where $d$ is chosen as the spatial dimension
of its argument. As `domain` is two dimensional a $2 \times 2$ identity matrix is returned.
If the domain becomes 3D at a later point the identity matrix will be defined as $3 \times 3$. This allows writing code that is independent of the spatial dimension of the domain.
Next we define the PDE coefficient $Y$ which is given as $Y=- 4\pi G\rho$ and requires us
to define the density $\rho$.
As an example we assume that the density anomaly is a circle centered
at $\mathbf{c}=(c_0, c_1)=(\frac{L_x}{2}, \frac{L_z}{3})$ with radius $R_C=100$ m.
To define $\rho$ we first need to calculate the distance of any point $\mathbf{x}$ in the
domain from $\mathbf{c}$. Then $\rho$ is set to $\rho_0=1000 \frac{kg}{m^3}$
at those points whose distance is smaller than $R_C$. For the others $\rho$ is set to zero.
First we get the coordinates of the points in the domain:
```python
x=domain.getX()
print(type(x))
```
<class 'esys.escriptcore.escriptcpp.Data'>
`x` is a `Data` object which gives the locations of points in the domain. So `x` has two components:
```python
print("shape of data object `x`",x.getShape())
```
    shape of data object `x` (2,)
Components of `x` can be accessed by slicing:
```python
print("x0 coordinates = ",x[0])
print("x1 coordinates = ",x[1])
```
x0 coordinates = Summary: inf=0 sup=1000 data points=10201
x1 coordinates = Summary: inf=0 sup=1000 data points=10201
The statement `inf=0` gives the smallest value - in our case of the $x_0$ coordinate.
The maximal value is shown as `sup=1000` which gives the value of `Lx`. The number of
points being used is `10201`. The reason for this is that there is one value for each grid point, which gives '(n0+1) x (n1+1)' values (Why?). In our case this would be
'(n0+1) x (n1+1) = (NEx+1) x (NEz+1) = 201 x 201 = 10201' values.
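A quick check of this count (plain Python, using the grid sizes defined above):
```python
# one value per grid node: (NEx+1) x (NEz+1) nodes
print((NEx+1)*(NEz+1))  # 10201
```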
The distance of point $\mathbf{x}$ to point $\mathbf{c}$ can be calculated by
\begin{equation}\label{eq:distance}
d=\sqrt{ (x_0-c_0)^2 + (x_1-c_1)^2}
\end{equation}
```python
c=[Lx/2., Lz/3.]
RC=100.
```
```python
d=sqrt((x[0]-c[0])**2+(x[1]-c[1])**2)
print("distance to c = %s :"%c,d)
```
distance to c = [500.0, 333.3333333333333] : Summary: inf=3.33333 sup=833.333 data points=10201
Notice that `d` is also a `Data` object as it has been derived from
the `Data` object `x`. It can be seen as a function of the location `x`.
There is a more compact form to calculate `d` using the `length` function:
```python
d=length(x-c)
print("distance to c = %s :"%c,d)
```
distance to c = [500.0, 333.3333333333333] : Summary: inf=3.33333 sup=833.333 data points=10201
We want the anomaly distribution `rho` to be `rho0` where the distance `d` is smaller than `RC`
(or `d-RC<0`) and zero otherwise. We use the `whereNegative` function which returns one where its argument is negative and zero elsewhere:
```python
rho0=1000.
rho=rho0*whereNegative(d-RC)
```
Now we can set the right hand side `Y` of the PDE `model`:
```python
import numpy as np
G=6.67e-11 # m^3/kg/sec^2 gravity constant
model.setValue(Y=-4*G*np.pi*rho)
```
We expect the gravity to vanish at large distances away from the circle. For the bounded domain used in the gravity model here, we enforce this condition on the parts of the boundary formed by the bottom and top faces
$x_1=0$ and $x_1=L_z$. To tell this to `LinearSinglePDE` we need to set a mask `q` which marks the locations on the surface where we want the solution $u$ to be zero:
\begin{equation}\label{EQQ}
q(\mathbf{x}) = \begin{cases}
>0 & \mbox{ set } u(\mathbf{x})=0 \\
=0 & \mbox{ no constraint at } x
\end{cases}
\end{equation}
We use the `whereZero` function which takes a `Data` object. It returns
a `Data` object which has the value
**one** at locations where the argument has the value **zero**. Otherwise zero is used.
It is applied to define a `q` for each face we want to constrain. These `q`s are then combined
by addition to define the final `q`, which has a positive value at any point on the
bottom and top faces of the domain but is zero everywhere else.
```python
x=domain.getX()
q_bottom=whereZero(x[1]) # 1 for face x_1=0
q_top=whereZero(x[1]-Lz) # 1 for face x_1=Lz
model.setValue(q=q_bottom+q_top)
```
The question is: what boundary conditions are applied on the other two faces?
By default the so-called *weak* boundary conditions
\begin{equation}\label{eq:weakBC}
\mathbf{F} \cdot \mathbf{n} = F_0 n_0 + F_1 n_1 = 0
\end{equation}
with outer normal field $\mathbf{n}=(n_0, n_1)$ are assumed.
As $\mathbf{n}=(-1,0)$ on the left face and
$\mathbf{n}=(1,0)$ on the right face
of the domain, the boundary condition there becomes
\begin{equation}\label{eq:weakBC1}
F_0= -\frac{\partial u}{\partial x_0} = 0
\end{equation}
Now we are ready to get the solution of the PDE:
```python
u=model.getSolution()
```
**Note**
The `getSolution` call involves the solution of a system of linear equations. Its dimension
is the number of grid points. For large grids and for 3D problems this solve can take some time.
The `grad` function returns the gradient of the argument. The negative of the gradient
gives us the gravitational acceleration `g`:
```python
g=-grad(u)
print(g)
```
Summary: inf=-3.85762e-05 sup=4.6591e-05 data points=40000
The gravitational acceleration is a vector:
```python
g.getShape()
```
(2,)
Before we start looking into postprocessing techniques we need to understand better
what `esys.escript.Data` objects such as `d`, `u` and `g` actually represent.
## What do the values in `Data` objects actually mean?
When we print `Data` objects this shows the minimum (`inf`) and maximum (`sup`) value but also
the number of `data points` being used. Let's have a look at `x`, `d`, `u` and `g` again:
```python
print("x :", x)
print("d :", d)
print("u :", u)
print("g :", g)
```
x : Summary: inf=0 sup=1000 data points=10201
d : Summary: inf=3.33333 sup=833.333 data points=10201
u : Summary: inf=-0.00974452 sup=0 data points=10201
g : Summary: inf=-3.85762e-05 sup=4.6591e-05 data points=40000
The number of data points for `x`, `d` and `u` is the same, namely `10201`. As already explained this
comes from the fact that they hold their data on the grid nodes (intersections of grid lines).
The gradient `g` is stored cell (or element) based, where four integration points are used in each cell.
This explains the number of data points of `40000` as
\begin{equation}
NEx \times NEz \times \mbox{ # integration points } = 100 \times 100 \times 4 = 40000
\end{equation}
When handling `Data` objects it is obviously important to know which representation locations
a particular `Data` object is
using. For instance a `Data` object using grid nodes and a `Data` object using cells cannot easily be added together. In order to handle this the `Data` object has an attribute that defines the way its values are held.
This location attribute can be obtained by the `getFunctionSpace` method:
```python
print("d is stored at ", d.getFunctionSpace())
print("g is stored at ", g.getFunctionSpace())
```
d is stored at Finley_Nodes [ContinuousFunction(domain)] on FinleyMesh
g is stored at Finley_Elements [Function(domain)] on FinleyMesh
The following location attributes are available:
- `ContinuousFunction(domain)`: mesh nodes
- `Solution(domain)`: solution of a PDE, typically also on mesh nodes
- `Function(domain)` : on integration points per element
- `ReducedFunction(domain)`: on element centers
- `DiracDeltaFunction(domain)`: point sources and sinks which we will use later
The locations where the values are held can be obtained by the `getX` method of a `FunctionSpace`:
```python
X=g.getFunctionSpace().getX()
```
One can test if the `FunctionSpace` attribute of two `Data` objects is the same:
```python
print("g and d on the same location?", g.getFunctionSpace() == d.getFunctionSpace())
print("X and g on the same location?",X.getFunctionSpace() == g.getFunctionSpace())
print("g is on integration points?",X.getFunctionSpace() == Function(domain))
```
g and d on the same location? False
X and g on the same location? True
g is on integration points? True
Interpolation can be used to change the data location. We would like to have the
vertical gravity $g_z$ at element centers. Here we interpolate `g[1]`, which is the
vertical gravity at the integration points, to the element centers:
```python
gz=interpolate(g[1], ReducedFunction(domain))
print(g)
print(gz)
```
Summary: inf=-3.85762e-05 sup=4.6591e-05 data points=40000
Summary: inf=-3.69606e-05 sup=4.65409e-05 data points=10000
**WARNING:** Not all interpolations work. This depends on the PDE model being used.
For instance interpolation from element centers to nodes is not supported for a `finley` domain:
```python
interpolate(gz, ContinuousFunction(domain))
```
## Visualization using `matplotlib`
`matplotlib` is a powerful and very versatile tool for plotting data in python.
Here we want to use it to plot the distribution of the vertical gravity over the domain.
To hand `Data` objects to `matplotlib` they first need to be converted into `numpy` arrays.
With `convertToNumpy`, `esys.escript` provides an easy mechanism to do this.
To plot data we need not only the values but also the locations at which they are positioned in order
to plot their distribution. So we not only convert the `Data` objects but also the corresponding
locations, which we can get through the `FunctionSpace` attribute:
```python
gz_np=convertToNumpy(gz)
x_np=convertToNumpy(gz.getFunctionSpace().getX())
print(gz_np)
print(x_np)
```
We can now use `tricontourf` to plot filled contours and `tricontour` to plot contour lines of
the spatial distribution of the vertical gravity `gz`, using its `numpy` version `gz_np` and the corresponding data locations `x_np`.
**Note:** For more options see the documentation of [tricontour](https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.tricontour.html) and [tricontourf](https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.tricontourf.html).
```python
tomGal = 100000. # conversion from m/s**2 to mGal
```
```python
import matplotlib.pyplot as plt
tomGal = 100000
plt.figure(figsize=(7,7))
plt.clf()
contour=plt.tricontourf(x_np[0], x_np[1], gz_np[0]*tomGal, 15)
plt.tricontour(x_np[0], x_np[1], gz_np[0]*tomGal, 15, linewidths=0.8, colors='k')
plt.xlabel('$x_0$ [m]')
plt.ylabel('$x_1$ [m]')
plt.title("Vertical gravity $g_z$ [mGal] due to a circular anomaly")
plt.colorbar(contour)
plt.gca().set_aspect('equal')
```
We can also do this in 3D:
```python
if True:
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
plt.clf()
ax = fig.gca(projection='3d')
contour=ax.plot_trisurf(x_np[0], x_np[1], gz_np[0]*tomGal , linewidth=0.2, antialiased=True, cmap=plt.cm.CMRmap)
plt.xlabel('$x_0$ [m]')
plt.ylabel('$x_1$ [m]')
plt.title("Vertical gravity $g_z$ [mGal]")
plt.colorbar(contour)
plt.show()
```
**Note**:
For larger meshes with millions of elements and also for 3D domains `matplotlib` is not really suitable.
Firstly, it is just too slow, but it also does not provide appropriate functionality to deal with these kinds of data. Alternative packages are
- [mayavi](https://docs.enthought.com/mayavi/mayavi/)
- [paraview](https://www.paraview.org/)
- [visit](https://wci.llnl.gov/simulation/computer-codes/visit/)
Although these packages can also be used from python it is more appropriate to use them separately through their respective GUIs. To hand data to them, `Data` objects are written to external files, preferably in the `VTK` format.
This can be done using the `saveVTK` function from the `esys.weipa` module, see the users guide for details.
## Value Picking
Visualization of the distribution is often not sufficient, for instance if one wants to apply a more quantitative
analysis of the result such as a comparison with observations. There is a mechanism to pick values of `Data` objects at specific locations.
### Single Value
Here we want to obtain the vertical gravity at a point at a height $h=300$ m above the center of the circular anomaly located at $c$. The location of this point $\mathbf{p}$ is then
\begin{equation}\label{eq:P}
\mathbf{p}=(c_0, c_1+300)
\end{equation}
We define a so-called `Locator` that provides a mechanism to extract the value at this point $\mathbf{p}$.
The `Locator` is built for use with all `Data` objects with a specific `FunctionSpace` attribute; in our case
that should be the same as for `gz`:
```python
from esys.escript.pdetools import Locator
p=[c[0], c[1]+300]
point_locator=Locator(where=ReducedFunction(domain), x=p)
```
Now we can easily get the value of `gz` at `p`:
```python
v=point_locator.getValue(gz)
print("value of gz @ %s is %s mGal. "%(point_locator.getX(), v*tomGal))
```
The `Locator` does not always use the specified point but picks the location
of the `FunctionSpace` nearest to the requested one. In our case the point is actually moved by about $5$ m:
```python
print("target point was %s."%(p,))
```
target point was [500.0, 633.3333333333333].
The `point_locator` can be reused to pick values from other `Data` objects as long as they can be interpolated
to `ReducedFunction`:
```python
v2=point_locator.getValue(g)
print("value of g @ %s is %s."%(point_locator.getX(), v2))
```
value of g @ [ 495. 635.] is [ 1.67666900e-07 -1.32564753e-05].
### Along a line of points (transect)
One can also use the `Locator` to pick data for a set of points for instance along a horizontal transect.
First we define the $x_0$-coordinates of the points we would like to use:
```python
x0_transect=np.linspace(0., Lx, NEx)
```
We then add the $x_1$ coordinate as $c_1+300$ to define the locations in the transect in the 2D domain:
```python
h=300
x_transect=[ (x0, c[1]+h) for x0 in x0_transect]
print(c[1]+h-Lz/2)
```
133.33333333333326
Now we can create new `Locator` named `transect_locator` using the points `x_transect`:
```python
transect_locator=Locator(where=ReducedFunction(domain), x=x_transect )
```
Then the vertical gravity across the transect can be picked from `gz`. We also get the true $x_0$ coordinates
of the points in the transect:
```python
gz_transect=transect_locator.getValue(gz*tomGal)
x0_transect=[ x[0] for x in transect_locator.getX()]
```
And finally we can plot the vertical gravity over the transect:
```python
plt.figure(figsize=(5,5))
plt.clf()
plt.plot(x0_transect, gz_transect)
plt.xlabel('offset [m]')
plt.ylabel('$g_z$ [mGal]')
plt.title("gravity anomaly over transect @ height %g"%(c[1]+h))
plt.show()
```
| b77641889f8c656151a7a10731025533ae405aaf | 46,254 | ipynb | Jupyter Notebook | B_GeophyicalModeling/EscriptBasics.ipynb | uqzzhao/Programming-Geophysics-in-Python | e6e8299116b4698892921b78927b71fc47ee018a | [
"Apache-2.0"
]
| 20 | 2019-11-06T09:08:54.000Z | 2021-12-03T08:37:47.000Z | B_GeophyicalModeling/EscriptBasics.ipynb | uqzzhao/Programming-Geophysics-in-Python | e6e8299116b4698892921b78927b71fc47ee018a | [
"Apache-2.0"
]
| null | null | null | B_GeophyicalModeling/EscriptBasics.ipynb | uqzzhao/Programming-Geophysics-in-Python | e6e8299116b4698892921b78927b71fc47ee018a | [
"Apache-2.0"
]
| 3 | 2020-11-23T14:16:06.000Z | 2022-03-31T14:45:46.000Z | 33.493121 | 2,161 | 0.589095 | true | 6,534 | Qwen/Qwen-72B | 1. YES
2. YES | 0.851953 | 0.810479 | 0.69049 | __label__eng_Latn | 0.991697 | 0.44257 |
```python
from ngames.evaluation.extensivegames import ExtensiveFormGame, plot_game,\
subgame_perfect_equilibrium, DFS_equilibria_paths
```
# Market game
From S. Fatima, S. Kraus, M. Wooldridge, Principles of Automated Negotiation,
Cambridge University Press, 2014, see Figure 3.3.
```python
m = ExtensiveFormGame(title='Market game')
m.add_players('firm 1', 'firm 2')
m.add_node(1, 'chance', is_root=True)
m.add_node(2, 'firm 1')
m.add_node(3, 'firm 1')
for i in range(4, 8):
m.add_node(i, 'firm 2')
for i in range(8, 16):
m.add_node(i)
m.add_edge(1, 2, label='small')
m.add_edge(1, 3, label='large')
m.add_edge(2, 4, label='L')
m.add_edge(2, 5, label='H')
m.add_edge(3, 6, label='L')
m.add_edge(3, 7, label='H')
m.add_edge(4, 8, label='L')
m.add_edge(4, 9, label='H')
m.add_edge(5, 10, label='L')
m.add_edge(5, 11, label='H')
m.add_edge(6, 12, label='L')
m.add_edge(6, 13, label='H')
m.add_edge(7, 14, label='L')
m.add_edge(7, 15, label='H')
m.set_information_partition('firm 2', {4, 6}, {5, 7})
m.set_information_partition('firm 1', {2, 3})
m.set_uniform_probability_distribution(1)
m.set_utility(8, {'firm 1': 16, 'firm 2': 8})
m.set_utility(9, {'firm 1': 8, 'firm 2': 16})
m.set_utility(10, {'firm 1': 20, 'firm 2': 4})
m.set_utility(11, {'firm 1': 16, 'firm 2': 8})
m.set_utility(12, {'firm 1': 30, 'firm 2': 10})
m.set_utility(13, {'firm 1': 28, 'firm 2': 12})
m.set_utility(14, {'firm 1': 16, 'firm 2': 24})
m.set_utility(15, {'firm 1': 24, 'firm 2': 16})
position_colors = {'firm 1': 'cyan', 'firm 2': 'red'}
my_fig_kwargs = dict(figsize=(12, 12), frameon=False)
my_node_kwargs = dict(font_size=24, node_size=2000, edgecolors='k',
linewidths=3)
my_edge_kwargs = dict(arrowsize=25, width=5)
my_edge_labels_kwargs = dict(font_size=24)
my_patch_kwargs = dict(linewidth=3)
my_legend_kwargs = dict(fontsize=24, loc='upper right', edgecolor='white')
my_utility_label_kwargs = dict(horizontalalignment='center', fontsize=20)
my_info_sets_kwargs = dict(linestyle='--', linewidth=3)
fig = plot_game(m,
position_colors,
fig_kwargs=my_fig_kwargs,
node_kwargs=my_node_kwargs,
edge_kwargs=my_edge_kwargs,
edge_labels_kwargs=my_edge_labels_kwargs,
patch_kwargs=my_patch_kwargs,
legend_kwargs=my_legend_kwargs,
utility_label_kwargs=my_utility_label_kwargs,
utility_label_shift=0.075,
info_sets_kwargs=my_info_sets_kwargs)
```
# Prisoner's Dilemma
Classical one-shot Prisoner's Dilemma, but in extensive form.
```python
pd = ExtensiveFormGame(name='Prisoners Dilemma')
pd.add_players('player 1', 'player 2')
pd.add_node(1, 'player 1', is_root=True)
pd.add_node(2, 'player 2')
pd.add_node(3, 'player 2')
for i in range(4, 8):
pd.add_node(i)
pd.add_edge(1, 2, 'cooperate')
pd.add_edge(1, 3, 'defect')
pd.add_edge(2, 4, 'cooperate')
pd.add_edge(2, 5, 'defect')
pd.add_edge(3, 6, 'cooperate')
pd.add_edge(3, 7, 'defect')
pd.set_information_partition('player 2', {2, 3})
pd.set_utility(4, {'player 1': 6, 'player 2': 6})
pd.set_utility(5, {'player 1': 0, 'player 2': 9})
pd.set_utility(6, {'player 1': 9, 'player 2': 0})
pd.set_utility(7, {'player 1': 3, 'player 2': 3})
position_colors = {'player 1': 'aquamarine', 'player 2': 'lightcoral'}
my_fig_kwargs = dict(figsize=(12, 12), tight_layout=True)
my_node_kwargs = dict(font_size=36, node_size=4500, edgecolors='k',
linewidths=3)
my_edge_kwargs = dict(arrowsize=50, width=5)
my_edge_labels_kwargs = dict(font_size=32)
my_patch_kwargs = dict(linewidth=3)
my_legend_kwargs = dict(fontsize=32, loc='upper left', edgecolor='white')
my_utility_label_kwargs = dict(horizontalalignment='center', fontsize=28,
weight='bold')
my_info_sets_kwargs = dict(linestyle='--', linewidth=3)
fig = plot_game(pd,
position_colors,
fig_kwargs=my_fig_kwargs,
node_kwargs=my_node_kwargs,
edge_kwargs=my_edge_kwargs,
edge_labels_kwargs=my_edge_labels_kwargs,
patch_kwargs=my_patch_kwargs,
legend_kwargs=my_legend_kwargs,
utility_label_kwargs=my_utility_label_kwargs,
decimals=0,
utility_label_shift=0.06,
info_sets_kwargs=my_info_sets_kwargs)
```
# Example of information rules
```python
example = ExtensiveFormGame()
example.add_players('position 1', 'position 2')
example.add_node(1, 'position 1', is_root=True)
example.add_node(2, 'position 2')
example.add_node(3, 'position 2')
example.add_edge(1, 2, label='do A')
example.add_edge(1, 3, label='do B')
example.set_information_partition('position 2', {2, 3})
position_colors = {'position 1': 'green', 'position 2': 'cyan'}
my_fig_kwargs['figsize'] = (6, 6)
my_legend_kwargs['loc'] = 'upper left'
my_legend_kwargs['fontsize'] = 20
my_patch_kwargs['linewidth'] = 2
fig = plot_game(example,
position_colors,
utility_label_shift=0.,
fig_kwargs=my_fig_kwargs,
node_kwargs=my_node_kwargs,
edge_kwargs=my_edge_kwargs,
edge_labels_kwargs=my_edge_labels_kwargs,
patch_kwargs=my_patch_kwargs,
legend_kwargs=my_legend_kwargs,
utility_label_kwargs=my_utility_label_kwargs,
info_sets_kwargs=my_info_sets_kwargs)
```
```python
example.set_information_partition('position 2', {2}, {3})
example.is_perfect_information = True
fig = plot_game(example,
position_colors,
utility_label_shift=0.,
fig_kwargs=my_fig_kwargs,
node_kwargs=my_node_kwargs,
edge_kwargs=my_edge_kwargs,
edge_labels_kwargs=my_edge_labels_kwargs,
patch_kwargs=my_patch_kwargs,
legend_kwargs=my_legend_kwargs,
utility_label_kwargs=my_utility_label_kwargs,
info_sets_kwargs=my_info_sets_kwargs)
```
# Ostrom's fishing game
```python
fishing_game = ExtensiveFormGame(title='Fishing game C1')
# add the two positions for the two fishers
fishing_game.add_players('fisher 1', 'fisher 2')
# add the nodes to the graph
fishing_game.add_node(1, player_turn='fisher 1', is_root=True)
fishing_game.add_node(2, player_turn='fisher 2')
fishing_game.add_node(3, player_turn='fisher 2')
fishing_game.add_node(4, player_turn='fisher 1')
fishing_game.add_node(5)
fishing_game.add_node(6)
fishing_game.add_node(7, player_turn='fisher 1')
fishing_game.add_node(8, player_turn='fisher 2')
fishing_game.add_node(9, player_turn='fisher 2')
fishing_game.add_node(10, player_turn='fisher 2')
fishing_game.add_node(11, player_turn='fisher 2')
fishing_game.add_node(12, player_turn='chance')
fishing_game.add_node(13)
fishing_game.add_node(14)
fishing_game.add_node(15, player_turn='chance')
fishing_game.add_node(16, player_turn='chance')
fishing_game.add_node(17)
fishing_game.add_node(18)
fishing_game.add_node(19, player_turn='chance')
for i in range(20, 28):
fishing_game.add_node(i)
# add the edges to the graph
fishing_game.add_edge(1, 2, label='go to spot 1')
fishing_game.add_edge(1, 3, label='go to spot 2')
fishing_game.add_edge(2, 4, label='go to spot 1')
fishing_game.add_edge(2, 5, label='go to spot 2')
fishing_game.add_edge(3, 6, label='go to spot 1')
fishing_game.add_edge(3, 7, label='go to spot 2')
fishing_game.add_edge(4, 8, label='stay')
fishing_game.add_edge(4, 9, label='leave')
fishing_game.add_edge(7, 10, label='stay')
fishing_game.add_edge(7, 11, label='leave')
fishing_game.add_edge(8, 12, label='stay')
fishing_game.add_edge(8, 13, label='leave')
fishing_game.add_edge(9, 14, label='stay')
fishing_game.add_edge(9, 15, label='leave')
fishing_game.add_edge(10, 16, label='stay')
fishing_game.add_edge(10, 17, label='leave')
fishing_game.add_edge(11, 18, label='stay')
fishing_game.add_edge(11, 19, label='leave')
fishing_game.add_edge(12, 20, label='1 wins')
fishing_game.add_edge(12, 21, label='2 wins')
fishing_game.add_edge(15, 22, label='1 wins')
fishing_game.add_edge(15, 23, label='2 wins')
fishing_game.add_edge(16, 24, label='1 wins')
fishing_game.add_edge(16, 25, label='2 wins')
fishing_game.add_edge(19, 26, label='1 wins')
fishing_game.add_edge(19, 27, label='2 wins')
# add imperfect information, equivalent to having players take the actions
# simultaneously
fishing_game.set_information_partition('fisher 2', {2, 3}, {8, 9}, {10, 11})
```
## Rule configuration C1 (default)
See page 85 in E. Ostrom, R. Gardner, J. Walker, Rules, Games, and
Common-Pool Resources, The University of Michigan Press, Ann Arbor, 1994.
https://doi.org/10.3998/mpub.9739.
First, build the default situation as an extensive-form game. Two fishers
competing for two fishing spots, the first spot being better than the other.
The utilities and probability at chance nodes of the game are parametrized
with the following variables:
* $v_i$: economical value of the $i$-th spot. It is assumed that the first
spot is the better one, so $v_1>v_2$.
* $P$: the probability that fisher 1 wins the fight if one happens. It is
assumed that fisher 1 is the stronger one, so $P>0.5$. Then, the probability
that fisher 2 wins a fight is $(1-P)<0.5$.
* $c$: cost of travel between the two spots.
* $d$: damage incurred to the loser of the fight.
$w(j, i)$ denotes the *expected* value for fisher $j$ of having a fight at
spot $i$:
\begin{align}
w(1,i) =& Pv_i+(1-P)(-d)\\
w(2,i) =& (1-P)v_i+P(-d)
\end{align}
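For example (a quick check with the made-up values assigned in the code below: $v_1=5$, $P=0.6$, $d=2$):
\begin{align}
w(1,1) =& 0.6 \cdot 5 + 0.4 \cdot (-2) = 2.2\\
w(2,1) =& 0.4 \cdot 5 + 0.6 \cdot (-2) = 0.8
\end{align}
so the stronger fisher 1 values a fight over the better spot considerably more than the weaker fisher 2 does.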
Start assigning utilities with made-up values.
```python
# parameters
v1, v2 = 5, 3
P = 0.6
c = 0.5
d = 2
def set_parameters(v1: float, v2: float, P: float, c: float, d: float,
game: ExtensiveFormGame):
w11 = P*v1+(1-P)*(-d)
w12 = P*v2+(1-P)*(-d)
w21 = (1-P)*v1+P*(-d)
w22 = (1-P)*v2+P*(-d)
# set utility parameters and probabilities over outgoing edges at
# chance nodes
game.set_utility(5, {'fisher 1': v1, 'fisher 2': v2})
game.set_utility(6, {'fisher 1': v2, 'fisher 2': v1})
game.set_utility(13, {'fisher 1': v1, 'fisher 2': v2-c})
game.set_utility(14, {'fisher 1': v2-c, 'fisher 2': v1})
game.set_utility(17, {'fisher 1': v2, 'fisher 2': v1-c})
game.set_utility(18, {'fisher 1': v2-c, 'fisher 2': v1})
game.set_probability_distribution(12, {(12, 20): P, (12, 21): (1-P)})
game.set_probability_distribution(15, {(15, 22): P, (15, 23): (1-P)})
game.set_probability_distribution(16, {(16, 24): P, (16, 25): (1-P)})
game.set_probability_distribution(19, {(19, 26): P, (19, 27): (1-P)})
game.set_utility(20, {'fisher 1': v1, 'fisher 2': -d})
game.set_utility(21, {'fisher 1': -d, 'fisher 2': v1})
game.set_utility(22, {'fisher 1': v2-c, 'fisher 2': -c-d})
game.set_utility(23, {'fisher 1': -c-d, 'fisher 2': v2-c})
game.set_utility(24, {'fisher 1': v2, 'fisher 2': -d})
game.set_utility(25, {'fisher 1': -d, 'fisher 2': v2})
game.set_utility(26, {'fisher 1': v1-c, 'fisher 2': -c-d})
game.set_utility(27, {'fisher 1': -c-d, 'fisher 2': v1-c})
return (w11, w12), (w21, w22)
_ = set_parameters(v1, v2, P, c, d, fishing_game)
# default keywords for rendering the figure
my_fig_kwargs = dict(figsize=(45, 24), frameon=False)
my_node_kwargs = dict(font_size=30, node_size=2250, edgecolors='k',
linewidths=2)
my_edge_kwargs = dict(arrowsize=25, width=3)
my_edge_labels_kwargs = dict(font_size=20)
my_patch_kwargs = dict(linewidth=2)
my_legend_kwargs = dict(fontsize=24, loc='upper right', edgecolor='white')
my_utility_label_kwargs = dict(horizontalalignment='center', fontsize=24)
my_info_sets_kwargs = dict(linestyle='--', linewidth=3)
position_colors = {'fisher 1': 'aquamarine', 'fisher 2': 'greenyellow'}
fig = plot_game(fishing_game,
position_colors,
fig_kwargs=my_fig_kwargs,
node_kwargs=my_node_kwargs,
edge_kwargs=my_edge_kwargs,
edge_labels_kwargs=my_edge_labels_kwargs,
patch_kwargs=my_patch_kwargs,
legend_kwargs=my_legend_kwargs,
utility_label_kwargs=my_utility_label_kwargs,
utility_label_shift=0.07,
info_sets_kwargs=my_info_sets_kwargs)
```
Compute the subgame perfect equilibria strategy:
```python
spe = subgame_perfect_equilibrium(fishing_game)
```
Compute the paths of play that arise from following the subgame perfect
equilibria strategy, alongside with the probability of being played. It is
necessary to include probabilities since there are chance nodes:
```python
path_store = []
DFS_equilibria_paths(fishing_game, fishing_game.game_tree.root, spe, [], 1,
path_store)
print("Path -- Probability")
print("-------------------")
for (path, prob) in path_store:
print("{} -- {:.2f}".format(path, prob))
```
Path -- Probability
-------------------
[1, 'go to spot 1', 2, 'go to spot 2', 5] -- 1.00
In order to introduce the later changes to the game in the other rule
configurations, establish that the utility at the terminal nodes can be
computed as:
*Utility* = (*payoff of the outcome*) + (*cost of the path of actions to
get there*)
__Costs assigned to actions__:
| Action | Cost |
|--------------|----------|
| Go to spot 1 | 0 |
| Go to spot 2 | 0 |
| Stay | 0 |
| Leave | -c (c>0) |
__Payoffs assigned to outcomes__:
| Outcome | Payoff |
|---------|--------|
| Fish at spot 1 | $v_1$ |
| Fish at spot 2 | $v_2$ |
| Loser of fight* | -d (d>0) |
*: in more general terms, a fight can be encompassed under the term
"competition". This will make the transition from C1 to C2 easier.
To reproduce the analysis of the game by Ostrom in an automated way, I use
backward induction. This is not technically correct because the game has
imperfect information, but for the moment the focus is not so much on the
correctness of the solution concept.
The first three cases fulfill $w(1, 1)>v_2-c$ and $w(2, 1)>v_2-c$:
* Case 1: $w(1,1)>v_2;\;w(2,1)>v_2$
* Case 2: $w(1,1)>v_2;\;w(2,1)<v_2$
* Case 3: $w(1,1)<v_2;\;w(2,1)<v_2$
The other two cases are:
* Case 4: $w(1,1)>v_2-c;\;w(2,1)<v_2-c$
* Case 5: $w(1,1)<v_2-c;\;w(2,1)<v_2-c$
### Case C1-1
$w(1,1)>v_2;\;w(2,1)>v_2$
As predicted by Ostrom, in the equilibrium both fishers go to spot 1,
stay there and fight.
```python
v1, v2 = 10, 2
P = 0.55
c = 1
d = 2
(w11, w12), (w21, w22) = set_parameters(v1, v2, P, c, d, fishing_game)
print("w(1,1) = {:.1f}".format(w11))
print("w(2,1) = {:.1f}".format(w21))
print("v2 = {:.1f}".format(v2))
spe = subgame_perfect_equilibrium(fishing_game)
path_store = []
DFS_equilibria_paths(fishing_game, fishing_game.game_tree.root, spe, [], 1,
path_store)
print("\nPath -- Probability")
print("-------------------")
for (path, prob) in path_store:
print("{} -- {:.2f}".format(path, prob))
```
w(1,1) = 4.6
w(2,1) = 3.4
v2 = 2.0
Path -- Probability
-------------------
[1, 'go to spot 1', 2, 'go to spot 1', 4, 'stay', 8, 'stay', 12, '1 wins', 20] -- 0.55
[1, 'go to spot 1', 2, 'go to spot 1', 4, 'stay', 8, 'stay', 12, '2 wins', 21] -- 0.45
### Case C1-2
$w(1,1)>v_2;\;w(2,1)<v_2$
As predicted by Ostrom, in the equilibrium the stronger fisher 1 ($P>0.5$)
goes to spot 1, the weaker fisher 2 goes to spot 2.
```python
v1, v2 = 10, 4
P = 0.6
c = 1
d = 2
(w11, w12), (w21, w22) = set_parameters(v1, v2, P, c, d, fishing_game)
print("w(1,1) = {:.1f}".format(w11))
print("w(2,1) = {:.1f}".format(w21))
print("v2 = {:.1f}".format(v2))
spe = subgame_perfect_equilibrium(fishing_game)
path_store = []
DFS_equilibria_paths(fishing_game, fishing_game.game_tree.root, spe, [], 1,
path_store)
print("\nPath -- Probability")
print("-------------------")
for (path, prob) in path_store:
print("{} -- {:.2f}".format(path, prob))
```
### Case C1-3
$w(1,1)<v_2;\;w(2,1)<v_2$
As predicted by Ostrom, again in the equilibrium the stronger fisher 1
($P>0.5$) goes to spot 1, the weaker fisher 2 goes to spot 2.
Ostrom points out that there is another pure strategy equilibrium in which
fisher 1 goes to the worse spot 2, and fisher 2 goes to the better spot 1.
However, that outcome is part of a Nash equilibrium that is not sequentially
rational (?).
```python
v1, v2 = 6, 4
P = 0.6
c = 1
d = 2
(w11, w12), (w21, w22) = set_parameters(v1, v2, P, c, d, fishing_game)
print("w(1,1) = {:.1f}".format(w11))
print("w(2,1) = {:.1f}".format(w21))
print("v2 = {:.1f}".format(v2))
spe = subgame_perfect_equilibrium(fishing_game)
path_store = []
DFS_equilibria_paths(fishing_game, fishing_game.game_tree.root, spe, [], 1,
path_store)
print("\nPath -- Probability")
print("-------------------")
for (path, prob) in path_store:
print("{} -- {:.2f}".format(path, prob))
```
### Case C1-4
$w(1,1)>v_2-c;\;w(2,1)<v_2-c$
As predicted by Ostrom, again in the equilibrium the stronger fisher 1 goes
to spot 1, the weaker fisher 2 goes to spot 2.
```python
v1, v2 = 10, 5
P = 0.6
c = 1
d = 2
(w11, w12), (w21, w22) = set_parameters(v1, v2, P, c, d, fishing_game)
print("w(1,1) = {:.1f}".format(w11))
print("w(2,1) = {:.1f}".format(w21))
print("v2-c = {:.1f}".format(v2-c))
spe = subgame_perfect_equilibrium(fishing_game)
path_store = []
DFS_equilibria_paths(fishing_game, fishing_game.game_tree.root, spe, [], 1,
path_store)
print("\nPath -- Probability")
print("-------------------")
for (path, prob) in path_store:
print("{} -- {:.2f}".format(path, prob))
```
### Case C1-5
$w(1,1)<v_2-c;\;w(2,1)<v_2-c$
As predicted by Ostrom, again in the equilibrium the stronger fisher 1 goes
to spot 1, the weaker fisher 2 goes to spot 2.
```python
v1, v2 = 7, 5
P = 0.6
c = 1
d = 2
(w11, w12), (w21, w22) = set_parameters(v1, v2, P, c, d, fishing_game)
print("w(1,1) = {:.1f}".format(w11))
print("w(2,1) = {:.1f}".format(w21))
print("v2-c = {:.1f}".format(v2-c))
spe = subgame_perfect_equilibrium(fishing_game)
path_store = []
DFS_equilibria_paths(fishing_game, fishing_game.game_tree.root, spe, [], 1,
path_store)
print("\nPath -- Probability")
print("-------------------")
for (path, prob) in path_store:
print("{} -- {:.2f}".format(path, prob))
```
## Rule configuration C2
The first fisher to arrive at a spot has the right to fish at that spot for the remainder
of the day.
```python
fishing_game_C2 = ExtensiveFormGame(title='Fishing game C2')
# add the two positions for the two fishers
fishing_game_C2.add_players('fisher 1', 'fisher 2')
# add the nodes to the graph
fishing_game_C2.add_node(1, player_turn='fisher 1', is_root=True)
fishing_game_C2.add_node(2, player_turn='fisher 2')
fishing_game_C2.add_node(3, player_turn='fisher 2')
fishing_game_C2.add_node(4, player_turn='chance')
fishing_game_C2.add_node(5)
fishing_game_C2.add_node(6)
fishing_game_C2.add_node(7, player_turn='chance')
fishing_game_C2.add_node(8, player_turn='fisher 2')
fishing_game_C2.add_node(9, player_turn='fisher 1')
fishing_game_C2.add_node(10, player_turn='fisher 2')
fishing_game_C2.add_node(11, player_turn='fisher 1')
for i in range(12, 15+1):
fishing_game_C2.add_node(i)
# add the edges to the graph
fishing_game_C2.add_edge(1, 2, label='go to spot 1')
fishing_game_C2.add_edge(1, 3, label='go to spot 2')
fishing_game_C2.add_edge(2, 4, label='go to spot 1')
fishing_game_C2.add_edge(2, 5, label='go to spot 2')
fishing_game_C2.add_edge(3, 6, label='go to spot 1')
fishing_game_C2.add_edge(3, 7, label='go to spot 2')
fishing_game_C2.add_edge(4, 8, label='1 first')
fishing_game_C2.add_edge(4, 9, label='2 first')
fishing_game_C2.add_edge(7, 10, label='1 first')
fishing_game_C2.add_edge(7, 11, label='2 first')
fishing_game_C2.add_edge(8, 12, label='leave')
fishing_game_C2.add_edge(9, 13, label='leave')
fishing_game_C2.add_edge(10, 14, label='leave')
fishing_game_C2.add_edge(11, 15, label='leave')
# add imperfect information, equivalent to having players take the actions
# simultaneously
fishing_game_C2.set_information_partition('fisher 2', {2, 3}, {8}, {10})
```
The utilities and probability at chance nodes of the game are parametrized
with the following variables:
* $v_i$: economical value of the $i$-th spot. It is assumed that the first
spot is the better one, so $v_1>v_2$.
* $P$: the probability that fisher 1 gets to a spot first. It is assumed
that fisher 1 has better technology than fisher 2, and hence $P>0.5$.
* $c$: cost of travel between the two spots.
$w(j, i)$ denotes the *expected* value for fisher $j$ of spot $i$ if the
other fisher has gone there too:
\begin{align}
w(1,i) =& Pv_i+(1-P)(v_{-i}-c)\\
w(2,i) =& (1-P)v_i+P(v_{-i}-c)
\end{align}
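For example, with the made-up values assigned in the code below ($v_1=5$, $v_2=3$, $P=0.6$, $c=0.5$), losing the race to a spot is much less costly than losing a fight was in C1:
\begin{align}
w(1,1) =& 0.6 \cdot 5 + 0.4 \cdot (3-0.5) = 4.0\\
w(2,1) =& 0.4 \cdot 5 + 0.6 \cdot (3-0.5) = 3.5
\end{align}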
```python
# parameters
v1, v2 = 5, 3
P = 0.6
c = 0.5
def set_parameters_C2(v1: float, v2: float, P: float, c: float,
game: ExtensiveFormGame):
w11 = P*v1+(1-P)*(v2-c)
w12 = P*v2+(1-P)*(v1-c)
w21 = (1-P)*v1+P*(v2-c)
w22 = (1-P)*v2+P*(v1-c)
# set utility parameters and probabilities over outgoing edges at
# chance nodes
game.set_utility(5, {'fisher 1': v1, 'fisher 2': v2})
game.set_utility(6, {'fisher 1': v2, 'fisher 2': v1})
game.set_utility(12, {'fisher 1': v1, 'fisher 2': v2-c})
game.set_utility(13, {'fisher 1': v2-c, 'fisher 2': v1})
game.set_utility(14, {'fisher 1': v2, 'fisher 2': v1-c})
game.set_utility(15, {'fisher 1': v1-c, 'fisher 2': v2})
game.set_probability_distribution(4, {(4, 8): P, (4, 9): (1-P)})
game.set_probability_distribution(7, {(7, 10): P, (7, 11): (1-P)})
return (w11, w12), (w21, w22)
_ = set_parameters_C2(v1, v2, P, c, fishing_game_C2)
# default keywords for rendering the figure
my_fig_kwargs = dict(figsize=(30, 20), frameon=False)
my_node_kwargs = dict(font_size=30, node_size=2250, edgecolors='k',
linewidths=2)
my_edge_kwargs = dict(arrowsize=25, width=3)
my_edge_labels_kwargs = dict(font_size=20)
my_patch_kwargs = dict(linewidth=2)
my_legend_kwargs = dict(fontsize=24, loc='upper right', edgecolor='white')
my_utility_label_kwargs = dict(horizontalalignment='center', fontsize=20)
my_info_sets_kwargs = dict(linestyle='--', linewidth=3)
position_colors = {'fisher 1': 'aquamarine', 'fisher 2': 'greenyellow'}
fig = plot_game(fishing_game_C2,
position_colors,
fig_kwargs=my_fig_kwargs,
node_kwargs=my_node_kwargs,
edge_kwargs=my_edge_kwargs,
edge_labels_kwargs=my_edge_labels_kwargs,
patch_kwargs=my_patch_kwargs,
legend_kwargs=my_legend_kwargs,
utility_label_kwargs=my_utility_label_kwargs,
utility_label_shift=0.06,
info_sets_kwargs=my_info_sets_kwargs)
```
**Changes to go from C1 to C2:**
* If fisher1.location == fisher2.location $\longrightarrow$ engage in a
"competition". In the game tree of C1: "shortcut edge" from node 4 to
node 12.
* The loser of the fight goes from a final to an intermediate outcome (no longer a
terminal node).
* Change in payoff of outcome: $d>0\longrightarrow d=0$. Because fishers are
engaged in a different, non-violent type of competition, it is not costly to
lose the race (this change might be redundant because being the loser is
now an intermediate instead of a final outcome).
* Additional edges: if fisher $i$ loses the competition $\longrightarrow$
fisher $i$ has to leave.
### Case C2-1
$w(1,1)>v_2;\;w(2,1)>v_2$
As predicted by Ostrom, in the equilibrium both fishers go to spot 1, and
let chance decide who was first.
```python
v1, v2 = 10, 2
P = 0.55
c = 1
(w11, w12), (w21, w22) = set_parameters_C2(v1, v2, P, c, fishing_game_C2)
print("w(1,1) = {:.1f}".format(w11))
print("w(2,1) = {:.1f}".format(w21))
print("v2 = {:.1f}".format(v2))
spe = subgame_perfect_equilibrium(fishing_game_C2)
path_store = []
DFS_equilibria_paths(fishing_game_C2, fishing_game_C2.game_tree.root, spe, [],
1, path_store)
print("\nPath -- Probability")
print("-------------------")
for (path, prob) in path_store:
print("{} -- {:.2f}".format(path, prob))
```
### Case C2-2
$w(1,1)>v_2;\;w(2,1)<v_2$
As predicted by Ostrom, in the equilibrium fisher 1 (the faster) goes to
spot 1, while fisher 2 (the slower) goes to spot 2.
```python
v1, v2 = 10, 5
P = 0.7
c = 3
(w11, w12), (w21, w22) = set_parameters_C2(v1, v2, P, c, fishing_game_C2)
print("w(1,1) = {:.1f}".format(w11))
print("w(2,1) = {:.1f}".format(w21))
print("v2 = {:.1f}".format(v2))
spe = subgame_perfect_equilibrium(fishing_game_C2)
path_store = []
DFS_equilibria_paths(fishing_game_C2, fishing_game_C2.game_tree.root, spe, [],
1, path_store)
print("\nPath -- Probability")
print("-------------------")
for (path, prob) in path_store:
print("{} -- {:.2f}".format(path, prob))
```
### Case C2-3
$w(1,1)<v_2;\;w(2,1)<v_2$
As predicted by Ostrom, in the equilibrium fisher 1 (the faster) goes to
spot 1, while fisher 2 (the slower) goes to spot 2.
As in case C1-3, there is another Nash equilibrium in which the more
resourceful fisher 1 goes to the worse spot 2. However, that is not
predicted by backward induction, possibly because that Nash equilibrium
is not sequentially rational.
```python
v1, v2 = 10, 8
P = 0.55
c = 3
(w11, w12), (w21, w22) = set_parameters_C2(v1, v2, P, c, fishing_game_C2)
print("w(1,1) = {:.1f}".format(w11))
print("w(2,1) = {:.1f}".format(w21))
print("v2 = {:.1f}".format(v2))
spe = subgame_perfect_equilibrium(fishing_game_C2)
path_store = []
DFS_equilibria_paths(fishing_game_C2, fishing_game_C2.game_tree.root, spe, [],
1, path_store)
print("\nPath -- Probability")
print("-------------------")
for (path, prob) in path_store:
print("{} -- {:.2f}".format(path, prob))
```
## Rule configuration C3
Fisher 1 announces first, and he has the right to go fish wherever he wants.
If fisher 2 goes to the spot taken by fisher 1, he has to leave for the
other spot.
```python
fishing_game_C3 = ExtensiveFormGame(title='Fishing game C3')
# add the two positions for the two fishers
fishing_game_C3.add_players('fisher 1', 'fisher 2')
# add the nodes to the graph
fishing_game_C3.add_node(1, player_turn='fisher 1', is_root=True)
fishing_game_C3.add_node(2, player_turn='fisher 2')
fishing_game_C3.add_node(3, player_turn='fisher 2')
fishing_game_C3.add_node(4, player_turn='fisher 2')
fishing_game_C3.add_node(5)
fishing_game_C3.add_node(6)
fishing_game_C3.add_node(7, player_turn='fisher 2')
fishing_game_C3.add_node(8)
fishing_game_C3.add_node(9)
# add the edges to the graph
fishing_game_C3.add_edge(1, 2, label='go to spot 1')
fishing_game_C3.add_edge(1, 3, label='go to spot 2')
fishing_game_C3.add_edge(2, 4, label='go to spot 1')
fishing_game_C3.add_edge(2, 5, label='go to spot 2')
fishing_game_C3.add_edge(3, 6, label='go to spot 1')
fishing_game_C3.add_edge(3, 7, label='go to spot 2')
fishing_game_C3.add_edge(4, 8, label='leave')
fishing_game_C3.add_edge(7, 9, label='leave')
```
The utilities are parametrized with the following variables:
* $v_i$: economical value of the $i$-th spot. It is assumed that the first
spot is the better one, so $v_1>v_2$.
* $c$: cost of travel between the two spots.
```python
# parameters
v1, v2 = 5, 3
P = 0.6
c = 0.5
fishing_game_C3.set_utility(5, {'fisher 1': v1, 'fisher 2': v2})
fishing_game_C3.set_utility(6, {'fisher 1': v2, 'fisher 2': v1})
fishing_game_C3.set_utility(8, {'fisher 1': v1, 'fisher 2': v2-c})
fishing_game_C3.set_utility(9, {'fisher 1': v2, 'fisher 2': v1-c})
# default keywords for rendering the figure
my_fig_kwargs = dict(figsize=(30, 20))
my_node_kwargs = dict(font_size=30, node_size=2250, edgecolors='k',
linewidths=2)
my_edge_kwargs = dict(arrowsize=25, width=3)
my_edge_labels_kwargs = dict(font_size=20)
my_patch_kwargs = dict(linewidth=2)
my_legend_kwargs = dict(fontsize=24, loc='upper right', edgecolor='white')
my_utility_label_kwargs = dict(horizontalalignment='center', fontsize=20)
my_info_sets_kwargs = dict(linestyle='--', linewidth=3)
position_colors = {'fisher 1': 'aquamarine', 'fisher 2': 'greenyellow'}
fig = plot_game(fishing_game_C3,
position_colors,
fig_kwargs=my_fig_kwargs,
node_kwargs=my_node_kwargs,
edge_kwargs=my_edge_kwargs,
edge_labels_kwargs=my_edge_labels_kwargs,
patch_kwargs=my_patch_kwargs,
legend_kwargs=my_legend_kwargs,
utility_label_kwargs=my_utility_label_kwargs,
utility_label_shift=0.05,
info_sets_kwargs=my_info_sets_kwargs)
```
**Changes to go from C1 to C3:**
* Fisher 1 announces: the information set {2,3} of fisher 2 gets split into
two information sets {2}, {3}
* If fisher$_i$.action == 'go to' fisher$_j$.location $\implies$
fisher$_i$.action $\leftarrow$ 'leave'. In terms of edges in the game tree
from C1: direct edge from 4$\longrightarrow$13 and 7$\longrightarrow$17
```python
spe = subgame_perfect_equilibrium(fishing_game_C3)
path_store = []
DFS_equilibria_paths(fishing_game_C3, fishing_game_C3.game_tree.root, spe, [],
1, path_store)
print("\nPath -- Probability")
print("-------------------")
for (path, prob) in path_store:
print("{} -- {:.2f}".format(path, prob))
```
Path -- Probability
-------------------
[1, 'go to spot 1', 2, 'go to spot 2', 5] -- 1.00
As predicted by Ostrom, in equilibrium the fisher who gets to announce first
(fisher 1) goes to the best spot 1, and the other fisher goes to the other
spot. The second fisher will never go to the same spot that the first fisher
went to, because $v_2>v_2-c$.
## Rule configuration C4
This configuration consists of a prearranged rotation of the game in rule
configuration C3. It would consist of alternating between the game in C3 as
presented before, and the same game in C3 but with fisher 1 and fisher 2
swapped.
# Axelrod's norms game
```python
norms_game = ExtensiveFormGame(title='Axelrod norms game')
norms_game.add_players('i', 'j')
norms_game.add_node(1, player_turn='i', is_root=True)
norms_game.add_node(2, player_turn='chance')
norms_game.add_node(3)
norms_game.add_node(4, player_turn='j')
norms_game.add_node(5)
norms_game.add_node(6)
norms_game.add_node(7)
norms_game.add_edge(1, 2, 'defect')
norms_game.add_edge(1, 3, '~defect')
norms_game.add_edge(2, 4, 'j sees i')
norms_game.add_edge(2, 5, '~j sees i')
norms_game.add_edge(4, 6, 'punish')
norms_game.add_edge(4, 7, '~punish')
T = 3
H = -1
S = 0.6
P = -9
E = -2
norms_game.set_utility(3, {'i': 0, 'j': 0})
norms_game.set_utility(5, {'i': T, 'j': H})
norms_game.set_utility(6, {'i': P, 'j': E})
norms_game.set_utility(7, {'i': T, 'j': H})
norms_game.set_probability_distribution(2, {(2, 4): S, (2, 5): (1-S)})
# default keywords for rendering the figure
my_fig_kwargs = dict(figsize=(15, 15), frameon=False)
my_node_kwargs = dict(font_size=30, node_size=2250, edgecolors='k',
linewidths=2)
my_edge_kwargs = dict(arrowsize=25, width=3)
my_edge_labels_kwargs = dict(font_size=20)
my_patch_kwargs = dict(linewidth=2)
my_legend_kwargs = dict(fontsize=24, loc='upper right', edgecolor='white')
my_utility_label_kwargs = dict(horizontalalignment='center', fontsize=20)
my_info_sets_kwargs = dict(linestyle='--', linewidth=3)
position_colors = {'i': 'aquamarine', 'j': 'greenyellow'}
fig = plot_game(norms_game,
position_colors,
fig_kwargs=my_fig_kwargs,
node_kwargs=my_node_kwargs,
edge_kwargs=my_edge_kwargs,
edge_labels_kwargs=my_edge_labels_kwargs,
patch_kwargs=my_patch_kwargs,
legend_kwargs=my_legend_kwargs,
utility_label_kwargs=my_utility_label_kwargs,
utility_label_shift=0.06,
info_sets_kwargs=my_info_sets_kwargs)
```
**CASE 1:** $E<H$
Theoretical prediction: $i$ will defect but $j$ will not punish if she
detects $i$.
```python
print("E = {:.1f}".format(E))
print("H = {:.1f}".format(H))
spe = subgame_perfect_equilibrium(norms_game)
path_store = []
DFS_equilibria_paths(norms_game, norms_game.game_tree.root, spe, [], 1,
path_store)
print("\nPath -- Probability")
print("-------------------")
for (path, prob) in path_store:
print("{} -- {:.2f}".format(path, prob))
```
E = -2.0
H = -1.0
Path -- Probability
-------------------
[1, 'defect', 2, 'j sees i', 4, '~punish', 7] -- 0.60
[1, 'defect', 2, '~j sees i', 5] -- 0.40
**CASE 2:** $E>H$
Theoretical prediction: $i$ will not defect
```python
T = 3
H = -2
S = 0.6
P = -9
E = -1
norms_game.set_utility(3, {'i': 0, 'j': 0})
norms_game.set_utility(5, {'i': T, 'j': H})
norms_game.set_utility(6, {'i': P, 'j': E})
norms_game.set_utility(7, {'i': T, 'j': H})
norms_game.set_probability_distribution(2, {(2, 4): S, (2, 5): (1-S)})
print("E = {:.1f}".format(E))
print("H = {:.1f}".format(H))
spe = subgame_perfect_equilibrium(norms_game)
path_store = []
DFS_equilibria_paths(norms_game, norms_game.game_tree.root, spe, [], 1,
path_store)
print("\nPath -- Probability")
print("-------------------")
for (path, prob) in path_store:
print("{} -- {:.2f}".format(path, prob))
```
E = -1.0
H = -2.0
Path -- Probability
-------------------
[1, '~defect', 3] -- 1.00
## Metanorms game
```python
metanorms_game = ExtensiveFormGame(title='Axelrod metanorms game')
metanorms_game.add_players('i', 'j', 'k')
metanorms_game.add_node(1, player_turn='i', is_root=True)
metanorms_game.add_node(2, player_turn='chance')
metanorms_game.add_node(3)
metanorms_game.add_node(4, player_turn='j')
metanorms_game.add_node(5)
metanorms_game.add_node(6)
metanorms_game.add_node(7, player_turn='chance')
metanorms_game.add_node(8, player_turn='k')
metanorms_game.add_node(9)
metanorms_game.add_node(10)
metanorms_game.add_node(11)
metanorms_game.add_edge(1, 2, 'defect')
metanorms_game.add_edge(1, 3, '~defect')
metanorms_game.add_edge(2, 4, 'j sees i')
metanorms_game.add_edge(2, 5, '~j sees i')
metanorms_game.add_edge(4, 6, 'punish i')
metanorms_game.add_edge(4, 7, '~punish i')
metanorms_game.add_edge(7, 8, 'k sees j')
metanorms_game.add_edge(7, 9, '~k sees j')
metanorms_game.add_edge(8, 10, 'punish j')
metanorms_game.add_edge(8, 11, '~punish j')
```
```python
T = 3
H = -1
S = 0.6
P = -9
E = -2
P_prime = P
E_prime = E
metanorms_game.set_utility(3, {'i': 0, 'j': 0, 'k': 0})
metanorms_game.set_utility(5, {'i': T, 'j': H, 'k': H})
metanorms_game.set_utility(6, {'i': P, 'j': E, 'k': 0})
metanorms_game.set_utility(9, {'i': T, 'j': H, 'k': H})
metanorms_game.set_utility(10, {'i': T, 'j': P_prime, 'k': E_prime})
metanorms_game.set_utility(11, {'i': T, 'j': H, 'k': H})
metanorms_game.set_probability_distribution(2, {(2, 4): S, (2, 5): (1-S)})
metanorms_game.set_probability_distribution(7, {(7, 8): S, (7, 9): (1-S)})
# default keywords for rendering the figure
my_fig_kwargs = dict(figsize=(25, 25), frameon=False)
my_node_kwargs = dict(font_size=30, node_size=2250, edgecolors='k',
linewidths=2)
my_edge_kwargs = dict(arrowsize=25, width=3)
my_edge_labels_kwargs = dict(font_size=16)
my_patch_kwargs = dict(linewidth=2)
my_legend_kwargs = dict(fontsize=24, loc='upper right', edgecolor='white')
my_utility_label_kwargs = dict(horizontalalignment='center', fontsize=20)
my_info_sets_kwargs = dict(linestyle='--', linewidth=3)
position_colors = {'i': 'aquamarine', 'j': 'greenyellow', 'k': 'violet'}
fig = plot_game(metanorms_game,
position_colors,
fig_kwargs=my_fig_kwargs,
node_kwargs=my_node_kwargs,
edge_kwargs=my_edge_kwargs,
edge_labels_kwargs=my_edge_labels_kwargs,
patch_kwargs=my_patch_kwargs,
legend_kwargs=my_legend_kwargs,
utility_label_kwargs=my_utility_label_kwargs,
utility_label_shift=0.09,
info_sets_kwargs=my_info_sets_kwargs)
```
```python
spe = subgame_perfect_equilibrium(metanorms_game)
path_store = []
DFS_equilibria_paths(metanorms_game, metanorms_game.game_tree.root, spe, [],
1, path_store)
print("\nPath -- Probability")
print("-------------------")
for (path, prob) in path_store:
print("{} -- {:.2f}".format(path, prob))
```
Path -- Probability
-------------------
[1, 'defect', 2, 'j sees i', 4, '~punish i', 7, 'k sees j', 8, '~punish j', 11] -- 0.36
[1, 'defect', 2, 'j sees i', 4, '~punish i', 7, '~k sees j', 9] -- 0.24
[1, 'defect', 2, '~j sees i', 5] -- 0.40
| 7b79c09b321f801a5faf0a268e92e643cf06872c | 1,024,884 | ipynb | Jupyter Notebook | examples/examples.ipynb | nmontesg/norms-games | ee4d7ad4f3cc774020cd5617e6957e804995ef70 | [
"MIT"
]
| 1 | 2021-07-22T14:28:31.000Z | 2021-07-22T14:28:31.000Z | examples/examples.ipynb | nmontesg/norms-games | ee4d7ad4f3cc774020cd5617e6957e804995ef70 | [
"MIT"
]
| null | null | null | examples/examples.ipynb | nmontesg/norms-games | ee4d7ad4f3cc774020cd5617e6957e804995ef70 | [
"MIT"
]
| null | null | null | 511.93007 | 290,553 | 0.939709 | true | 11,734 | Qwen/Qwen-72B | 1. YES
2. YES | 0.785309 | 0.774583 | 0.608287 | __label__eng_Latn | 0.696444 | 0.251585 |
```python
%pylab inline
```
Populating the interactive namespace from numpy and matplotlib
```python
from sympy import symbols, sympify, latex, integrate, solve, solveset, Matrix, expand, factor, primitive, simplify, factor_list
from sympy.parsing.sympy_parser import parse_expr
```
```python
M = Matrix(np.reshape(np.random.random(64), (8,8)) + np.eye(8))
```
```python
M
```
Matrix([
[ 1.72938010955506, 0.108998619445437, 0.617419413857536, 0.672090300210331, 0.606753741022949, 0.941950391186882, 0.329353672325283, 0.330135340967146],
[ 0.544003091846511, 1.4571834758357, 0.166588963611674, 0.77871894307536, 0.92759642632636, 0.723380109363728, 0.966571852092427, 0.358684859729231],
[0.0263348345077091, 0.205412256369131, 1.63480353971179, 0.727417671646605, 0.750735973013235, 0.718917836174743, 0.271872823879635, 0.104501570503404],
[ 0.479093567931866, 0.612751655750204, 0.220756182952718, 1.2271972517674, 0.571749999543387, 0.745256036655373, 0.436574672095169, 0.787042679860924],
[ 0.267498882107831, 0.234860551854682, 0.266755446926459, 0.822371203165733, 1.0167665761371, 0.311663043557685, 0.498502741247374, 0.483679012101594],
[ 0.131011410281342, 0.934240039468298, 0.360473891048842, 0.949236090723573, 0.666398870144586, 1.3850623124344, 0.761510580786222, 0.918311992415843],
[ 0.779835646960045, 0.206545906782152, 0.31767946760615, 0.993901276871165, 0.621777728829399, 0.120687581485802, 1.97616730627295, 0.73210086488975],
[ 0.34195261826588, 0.0933633873347242, 0.500540923752915, 0.0894789341134482, 0.582565335979311, 0.032149637056864, 0.636587579063847, 1.60563751156564]])
```python
plt.figure(figsize=(8,8))
M = np.array(M).astype('float')  # convert the sympy Matrix to a float numpy array for plotting
im = plt.imshow(M, cmap=cm.hot)  # heatmap of the matrix entries
colorbar(im, shrink=0.8, aspect=8)
```
```python
```
| f687a87231881be3e96d6972123f952bfb16cfb2 | 13,930 | ipynb | Jupyter Notebook | Calcupy/matrix heatmap.ipynb | darkeclipz/jupyter-notebooks | 5de784244ad9db12cfacbbec3053b11f10456d7e | [
"Unlicense"
]
| 1 | 2018-08-28T12:16:12.000Z | 2018-08-28T12:16:12.000Z | Calcupy/matrix heatmap.ipynb | darkeclipz/jupyter-notebooks | 5de784244ad9db12cfacbbec3053b11f10456d7e | [
"Unlicense"
]
| null | null | null | Calcupy/matrix heatmap.ipynb | darkeclipz/jupyter-notebooks | 5de784244ad9db12cfacbbec3053b11f10456d7e | [
"Unlicense"
]
| null | null | null | 105.530303 | 10,240 | 0.872721 | true | 779 | Qwen/Qwen-72B | 1. YES
2. YES | 0.843895 | 0.7773 | 0.65596 | __label__yue_Hant | 0.139734 | 0.362345 |
# Confidence interval approximations for the AUROC
The area under the receiver operating curve (AUROC) is one of the most commonly used performance metrics for binary classification. Visually, the AUROC is the integral between the sensitivity and false positive rate curves across all thresholds for a binary classifier. The AUROC can also be shown to be equivalent to an instance of the [Mann-Whitney-U test](https://en.wikipedia.org/wiki/Mann%E2%80%93Whitney_U_test) (MNU), a non-parametric rank-based statistic. This post addresses two challenges when doing statistical testing for the AUROC: i) how to speed up the calculation of the AUROC, and ii) which inference procedure to use to obtain the best possible coverage. The AUROC's relationship to the MNU will be shown to be important for both speed ups in calculation and resampling approaches for the bootstrap.
## (1) Methods for calculating the AUROC
In the binary classification paradigm a model produces a score associated with the probability that an observation belongs to class 1 (as opposed to class 0). The AUROC of any model is a probabilistic term: $P(s^1 > s^0)$, where $s^k$ is the distribution of scores from the model for class $k$. In practice the AUROC is never known because the distribution of data is unknown! However, an unbiased estimate of the AUROC (a.k.a the empirical AUROC) can be calculated through one of several approaches.
The first method is to draw the [ROC curve](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) by measuring the sensitivity/specificity across all thresholds, and then using the [trapezoidal rule](https://en.wikipedia.org/wiki/Trapezoidal_rule) for calculating the integral. This approach is computationally inefficient and should only be done for visualization purposes. A second method to obtain the empirical AUROC is to simply calculate the percentage of times the positive class score exceeds the negative class score:
$$
\begin{align}
AUC &= \frac{1}{n_1n_0} \sum_{i: y_i=1} \sum_{j: y_j=0} I(s_i > s_j) + 0.5\cdot I(s_i = s_j) \label{eq:auc_pair}
\end{align}
$$
Where $y_i$ is the binary label for the $i^{th}$ observation and $n_k$ is the number of instances for class $k$. If we assume that the positive class is some fraction of the observations in the population: $P(y=1) = c$, then on average, calculating the AUROC via \eqref{eq:auc_pair} requires $c(1-c)n^2$ operations which means $O(AUC)=n^2$. For larger sample sizes this quadratic complexity will lead to long run times. One method to bound the computational complexity of \eqref{eq:auc_pair} is to randomly sample, with replacement, $m$ samples from each class of the data to get a stochastic approximation of the AUC.
$$
\begin{align}
\tilde{AUC} &= \frac{1}{m} \sum_{i} P(\tilde{s_i}^1 > \tilde{s_i}^0) \label{eq:auc_rand}
\end{align}
$$
Where $\tilde{s_i}^k$ is a random instance from the scores of class $k$. The stochastic AUROC approach has the nice computational advantage that it is $O(m)$. As with other stochastic methods, \eqref{eq:auc_rand} requires knowledge of the sampling variation of the statistic and seeding, which tends to discourage its use in practice. This post will encourage the use of the rank order of the data to calculate the empirical AUROC.
$$
\begin{align}
rAUC &= \frac{1}{n_1n_0} \sum_{i: y_i=1} r_i - \frac{n_1(n_1 +1)}{2} \label{eq:auc_rank}
\end{align}
$$
Where $r_i$ is the sample rank of the data. Since ranking a vector is $O(n\log n)$, the computational complexity of \eqref{eq:auc_rank} is linearithmic, which will mean significant speed ups over \eqref{eq:auc_pair}.
## (2) Run-time comparisons
The code block below shows the run-times for the different approaches to calculate the AUROC from section (1) across different sample sizes ($n$) with different positive class proportions ($n_1/n$). The stochastic approach uses $m = 5 n$. It is easy to generate data from two distributions so that the population AUROC can be known in advance. For example, if $s^1$ and $s^0$ come from the normal distribution:
$$
\begin{align*}
s_i^0 \sim N(0,1)&, \hspace{2mm} s_i^1 \sim N(\mu,1), \hspace{2mm} \mu \geq 0, \\
P(s_i^1 > s_i^0) &= \Phi\big(\mu / \sqrt{2}\big).
\end{align*}
$$
Alternatively one could use two exponential distributions:
$$
\begin{align*}
s_i^0 \sim Exp(1)&, \hspace{2mm} s_i^1 \sim Exp(\lambda^{-1}), \hspace{2mm} \lambda \geq 1, \\
P(s_i^1 > s_i^0) &= \frac{\lambda}{1+\lambda}.
\end{align*}
$$
It is easy to see that scale parameter of the normal or exponential distribution can determined *a priori* to match some pre-specific AUROC target.
$$
\begin{align*}
\mu^* &= \sqrt{2} \cdot \Phi^{-1}(AUC) \\
\lambda^* &= \frac{AUC}{1-AUC}
\end{align*}
$$
The simulations in this post will use the normal distribution for simplicity, although using the exponential distribution would not change the results of the analysis. The reason is that the variance of the AUROC will be identical regardless of the distribution that generated it, as long as those two distributions have the same AUROC, of course.
```python
"""
DEFINE HELPER FUNCTIONS NEEDED THROUGHOUT POST
"""
import os
import numpy as np
import pandas as pd
import plotnine
from plotnine import *
from scipy import stats
from scipy.interpolate import UnivariateSpline
from timeit import timeit
from sklearn.metrics import roc_curve, auc
def rvec(x):
return np.atleast_2d(x)
def cvec(x):
return rvec(x).T
def auc_pair(y, s):
s1, s0 = s[y == 1], s[y == 0]
n1, n0 = len(s1), len(s0)
count = 0
for i in range(n1):
count += np.sum(s1[i] > s0)
count += 0.5*np.sum(s1[i] == s0)
return count/(n1*n0)
def auc_rand(y, s, m):
s1 = np.random.choice(s[y == 1], m, replace=True)
s0 = np.random.choice(s[y == 0], m, replace=True)
return np.mean(s1 > s0)
def auc_rank(y, s):
n1 = sum(y)
n0 = len(y) - n1
den = n0 * n1
num = sum(stats.rankdata(s)[y == 1]) - n1*(n1+1)/2
return num / den
def dgp_auc(n, p, param, dist='normal'):
n1 = np.random.binomial(n,p)
n0 = n - n1
if dist == 'normal':
s0 = np.random.randn(n0)
s1 = np.random.randn(n1) + param
if dist == 'exp':
s0 = np.random.exponential(1,n0)
s1 = np.random.exponential(param,n1)
s = np.concatenate((s0, s1))
y = np.concatenate((np.repeat(0, n0), np.repeat(1, n1)))
return y, s
```
```python
target_auc = 0.75
mu_75 = np.sqrt(2) * stats.norm.ppf(target_auc)
lam_75 = target_auc / (1 - target_auc)
n, p = 500, 0.5
np.random.seed(2)
y_exp, s_exp = dgp_auc(n, p, lam_75, 'exp')
y_norm, s_norm = dgp_auc(n, p, mu_75, 'normal')
fpr_exp, tpr_exp, _ = roc_curve(y_exp, s_exp)
fpr_norm, tpr_norm, _ = roc_curve(y_norm, s_norm)
df = pd.concat([pd.DataFrame({'fpr':fpr_exp,'tpr':tpr_exp,'tt':'Exponential'}),
pd.DataFrame({'fpr':fpr_norm,'tpr':tpr_norm, 'tt':'Normal'})])
tmp_txt = df.groupby('tt')[['fpr','tpr']].mean().reset_index().assign(fpr=[0.15,0.15],tpr=[0.85,0.95])
tmp_txt = tmp_txt.assign(lbl=['AUC: %0.3f' % auc_rank(y_exp, s_exp),
'AUC: %0.3f' % auc_rank(y_norm, s_norm)])
plotnine.options.figure_size = (4, 3)
gg_roc = (ggplot(df,aes(x='fpr',y='tpr',color='tt')) + theme_bw() +
geom_step() + labs(x='FPR',y='TPR') +
          scale_color_discrete(name='Distribution') +
geom_abline(slope=1,intercept=0,linetype='--') +
geom_text(aes(label='lbl'),size=10,data=tmp_txt))
gg_roc # ggtitle('ROC curve by distribution')
```
```python
# Get run-times for different sizes of n
p_seq = [0.1, 0.3, 0.5]
n_seq = np.arange(25, 500, 25)
nrun = 1000
c = 5
if 'df_rt.csv' in os.listdir():
df_rt = pd.read_csv('df_rt.csv')
else:
np.random.seed(nrun)
holder = []
for p in p_seq:
print(p)
for n in n_seq:
cont = True
m = c * n
while cont:
y, s = dgp_auc(n, p, 0, dist='normal')
cont = sum(y) == 0
ti_rand = timeit('auc_rand(y, s, m)',number=nrun,globals=globals())
ti_rank = timeit('auc_rank(y, s)',number=nrun,globals=globals())
ti_pair = timeit('auc_pair(y, s)',number=nrun,globals=globals())
tmp = pd.DataFrame({'rand':ti_rand, 'rank':ti_rank, 'pair':ti_pair, 'p':p, 'n':n},index=[0])
holder.append(tmp)
df_rt = pd.concat(holder).melt(['p','n'],None,'method')
df_rt.to_csv('df_rt.csv',index=False)
plotnine.options.figure_size = (7, 3.0)
gg_ti = (ggplot(df_rt,aes(x='n',y='value',color='method')) + theme_bw() +
facet_wrap('~p',labeller=label_both) + geom_line() +
scale_color_discrete(name='Method',labels=['Pairwise','Stochastic','Rank']) +
labs(y='Seconds (1000 runs)', x='n'))
gg_ti # ggtitle('AUROC run-time') +
```
Figure 1 provides an example of two ROC curves coming from a Normal and Exponential distribution. Though the empirical AUROCs of the two curves are virtually identical, their respective sensitivity/specificity trade-offs are different. The Exponential distribution tends to have a more favourable sensitivity for high thresholds because of the right skew of the data. This figure is a reminder of some of the inherent limitations with using the AUROC as an evaluation measure. To repeat, though, the distribution of the AUROC statistic between these, or other, distributions would be the same.
The significant runtime performance gains from using the ranking approach in \eqref{eq:auc_rank} are shown in Figure 2. The pairwise method from \eqref{eq:auc_pair} is many orders of magnitude slower once the sample size is more than a few dozen observations. The stochastic method's run time is shown to be slightly better than the ranking method. This is to be expected given that \eqref{eq:auc_rand} is linear in $n$. However, using the stochastic approach requires picking a number of draws that leads to sufficiently tight bounds around the point estimate. The simulations below show the variation around the estimate by the number of draws.
```python
# Get the quality of the stochastic approximation
nsim = 100
n_seq = [100, 500, 1000]
c_seq = np.arange(1,11,1).astype(int)
if 'df_se.csv' in os.listdir():
df_se = pd.read_csv('df_se.csv')
else:
np.random.seed(nsim)
holder = []
for n in n_seq:
holder_n = []
for ii in range(nsim):
y, s = dgp_auc(n, p, 0, dist='normal')
gt_auc = auc_pair(y, s)
sim_mat = np.array([[auc_rand(y, s, n*c) for c in c_seq] for x in range(nsim)])
dat_err = np.std(gt_auc - sim_mat,axis=0)
holder_n.append(dat_err)
tmp = pd.DataFrame(np.array(holder_n)).melt(None,None,'c','se').assign(n=n)
holder.append(tmp)
df_se = pd.concat(holder).reset_index(None, True)
df_se.c = df_se.c.map(dict(zip(list(range(len(c_seq))),c_seq)))
df_se.to_csv('df_se.csv',index=False)
df_se = df_se.assign(sn=lambda x: pd.Categorical(x.n.astype(str),[str(z) for z in n_seq]))
plotnine.options.figure_size = (4, 3)
gg_se = (ggplot(df_se, aes(x='c',y='se',color='sn')) +
theme_bw() + labs(y='Standard error',x='Number of draws * n') +
geom_jitter(height=0,width=0.1,size=0.5,alpha=0.5) +
scale_color_discrete(name='n') +
scale_x_continuous(breaks=list(c_seq)))
gg_se # ggtitle('Variation around point estimate from randomization method')
```
Figure 3 shows that the number of draws needed to bring the standard error down to roughly ±1% is 4000. In other words, if the actual empirical AUROC was 71%, we would expect 95% of the realizations to be in the 69-73% range. To get to ±0.5% requires 10K draws. This shows that unless the user is happy to tolerate an error range of more than a percentage point, hundreds of thousands of draws will likely be needed.
## (3) Inference approaches
After reviewing the different approaches for calculating the point estimate of the empirical AUROC, attention can now be turned to doing inference on this term. Knowing that a classifier has an AUROC of 78% on a test set provides little information if there is no quantification of the uncertainty around this range. In this section, we'll discuss three different approaches for generating confidence intervals ([CIs](https://en.wikipedia.org/wiki/Confidence_interval)) which are the most common method of uncertainty quantification in frequentist statistics. A two-sided CI at the $1-\alpha$% level is a random variable that has the following property: $P(AUC \in [l, u]) \geq 1-\alpha$. In other words, the probability that the true AUROC is contained within these lower and upper bounds, $l$ and $u$ (which are random variables), is at least $1-\alpha$%, meaning the true statistic of interest (the AUROC) fails to be *covered* by this interval at most $\alpha$% of the time. An exact CI will cover the true statistic of interest exactly $1-\alpha$% of the time, giving the test maximum power.
The approaches below are by no means exhaustive. Readers are encouraged to review other [methods](https://arxiv.org/pdf/1804.05882.pdf) for other ideas.
### Approach #1: Asymptotic U
As was previously mentioned, the AUROC is equivalent to an MNU test. The asymptotic properties of this statistic have been known for [more than 70 years](https://projecteuclid.org/euclid.aoms/1177730491). Under the null hypothesis assumption that $P(s_i^1 > s_i^0) = 0.5$, the asymptotic properties of the U statistic for ranks can be shown to be:
$$
\begin{align*}
z &= \frac{U - \mu_U}{\sigma_U} \sim N(0,1) \\
\mu_U &= \frac{n_0n_1}{2} \\
\sigma^2_U &= \frac{n_0n_1(n_0+n_1+1)}{12} \\
U &= n_1n_0 \cdot \max \{ AUC, (1-AUC) \} \\
\end{align*}
$$
Note that additional corrections need to be applied in the case of data with ties, but I will not cover this issue here. There are two clear weaknesses to this approach. First, it appeals to the asymptotic normality of the $U$ statistic, which may be a poor approximation when $n$ is small. Second, this formula only makes sense for testing a null hypothesis of $AUC_0=0.5$. Notice that the constant in the denominator of the variance, 12, is the same as the constant in the variance of a [uniform distribution](https://en.wikipedia.org/wiki/Continuous_uniform_distribution). This is not a coincidence as the distribution of rank order statistics is uniform when the data come from the same distribution. To estimate this constant for $AUC\neq 0.5$, Monte Carlo simulations will be needed. Specifically we want to find the right constant $c(AUC)$ for the variance of the AUROC:
$$
\begin{align*}
\sigma^2_U(AUC) &= \frac{n_0n_1(n_0+n_1+1)}{c(AUC)}
\end{align*}
$$
Even though it is somewhat computationally intensive to calculate these normalizing constants, their estimates hold regardless of the sample sizes, as in $c(AUC;n_0,n_1)=c(AUC;n_0',n_1')$ for all $n_k, n_k' \in \mathbb{R}^+$. The code below estimates $c()$ and uses a spline to interpolate for values of the AUROC between the realized draws.
```python
# PRECOMPUTE THE VARIANCE CONSTANT...
if 'dat_var.csv' in os.listdir():
dat_var = pd.read_csv('dat_var.csv')
else:
np.random.seed(1)
nsim = 10000
n1, n0 = 500, 500
den = n1 * n0
auc_seq = np.arange(0.5, 1, 0.01)
holder = np.zeros(len(auc_seq))
for i, auc in enumerate(auc_seq):
print(i)
mu = np.sqrt(2) * stats.norm.ppf(auc)
Eta = np.r_[np.random.randn(n1, nsim)+mu, np.random.randn(n0,nsim)]
Y = np.r_[np.zeros([n1,nsim],dtype=int)+1, np.zeros([n0,nsim],dtype=int)]
R1 = stats.rankdata(Eta,axis=0)[:n1]
Amat = (R1.sum(0) - n1*(n1+1)/2) / den
holder[i] = (n0+n1+1) / Amat.var() / den
dat_var = pd.DataFrame({'auc':auc_seq, 'c':holder})
dat_var = pd.concat([dat_var.iloc[1:].assign(auc=lambda x: 1-x.auc), dat_var]).sort_values('auc').reset_index(None, True)
dat_var.to_csv('dat_var.csv', index=False)
# Calculate the spline
spl = UnivariateSpline(x=dat_var.auc, y=dat_var.c)
dat_spline = pd.DataFrame({'auc':dat_var.auc, 'spline':spl(dat_var.auc)})
plotnine.options.figure_size=(4,3)
gg_c = (ggplot(dat_var,aes(x='auc',y='np.log(c)')) + theme_bw() +
geom_point()+labs(y='log c(AUC)',x='AUROC') +
geom_line(aes(x='auc',y='np.log(spline)'), data=dat_spline,color='red') +
ggtitle('Red line is spline (k=3)'))
gg_c
```
Figure 4 shows that the constant term grows quite rapidly. The stochastic estimate of the constant at AUROC=0.5, roughly 11.9, is close to the true population value of 12.
### Approach #2: Newcombe's Wald Method
A second option is a (relatively) new approach from [Newcombe (2006)](https://onlinelibrary.wiley.com/doi/10.1002/sim.2324). Unlike the asymptotic approach above, Newcombe's method automatically accounts for how the variance changes with the value of the AUROC.
$$
\begin{align*}
\sigma^2_{AUC} &= \frac{AUC(1-AUC)}{(n_1-1)(n_0-1)} \cdot \Bigg[ 2n - 1 - \frac{3n-3}{(2-AUC)(1+AUC)} \Bigg]
\end{align*}
$$
Assuming $n_1 = c\cdot n$ then $O(\sigma^2_{AUC})=\frac{AUC(1-AUC)}{n}$, which is very similar to the variance of the binomial proportion (see [here](https://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval)).
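As a minimal sketch (the helper name `newcombe_ci` is illustrative and not part of the original analysis), the variance formula translates directly into a normal-approximation CI:
```python
import numpy as np
from scipy import stats
# Sketch: Newcombe-style Wald CI for the empirical AUROC
def newcombe_ci(auc, n1, n0, alpha=0.05):
    n = n1 + n0
    var = auc*(1-auc)/((n1-1)*(n0-1)) * (2*n - 1 - (3*n-3)/((2-auc)*(1+auc)))
    z = stats.norm.ppf(1 - alpha/2)
    se = np.sqrt(var)
    return auc - z*se, auc + z*se
```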
### Approach #3: Bootstrapping ranks
The final inference approach is that of [bootstrap](https://en.wikipedia.org/wiki/Bootstrapping_(statistics)), which generates new copies of the statistic by resampling the data. Though the ability to get additional randomness by resampling rows of the data seems a little mysterious, if not dubious, it has a solid mathematical foundation. The bootstrap is equivalent to drawing from the empirical CDF (eCDF) of a random variable. Since the eCDF is known to be a [consistent](https://en.wikipedia.org/wiki/Glivenko%E2%80%93Cantelli_theorem) estimate of the true CDF, the error of the bootstrap will naturally decrease as $n$ grows. The bootstrap has the attractive property that it is fully non-parametric and works from a broad class of statistics. Note that there is no one way to do the "bootstrap" for inference, and I compare three common approaches: i) quantile, ii) classic, iii) studentized. For a review of other approaches, see [here](http://users.stat.umn.edu/~helwig/notes/bootci-Notes.pdf).
$$
\begin{align*}
\tilde{AUC}^{(k)} &= \frac{1}{n_1n_0} \sum_{i: y_i=1} \tilde{r}_i^{(k)} - \frac{n_1(n_1 +1)}{2} \\
\sigma^2_{BS} &= \frac{1}{K-1}\sum_{k=1}^K \big(\tilde{AUC}^{(k)} - \bar{\tilde{AUC}}\big)^2
\end{align*}
$$
The $k^{th}$ bootstrap (out of $K$ total bootstraps), is generated by sampling, with replacement, the ranks of the positive score classes, and the bootstrap AUROC is calculated using the same formula from \eqref{eq:auc_rank}. Bootstrapping the ranks has the incredibly attractive property that the relative runtime is going to scale with the total number of bootstraps ($K$). If we had to recalculate the ranks for every bootstrap sample, then this would require an additional sorting call. The formulas for the three bootstrapping approaches are shown below for a $1-\alpha$% symmetric CI.
$$
\begin{align*}
\text{Quantile}& \\
[l, u] &= \big[\tilde{AUC}^{(k)}_{\lfloor\alpha/2\cdot K\rfloor}, \tilde{AUC}^{(k)}_{\lceil(1-\alpha/2)\cdot K\rceil} \big] \\
\\
\text{SE}& \\
[l, u] &= \big[AUC + \sigma_{BS}\cdot z_{\alpha/2}, AUC - \sigma_{BS}\cdot z_{\alpha/2}\big] \\
\\
\text{Studentized}& \\
[l, u] &= \big[AUC + \sigma_{BS}\cdot z_{\alpha/2}^*, AUC - \sigma_{BS}\cdot z_{1-\alpha/2}^*\big] \\
z_\alpha^* &= \Bigg[ \frac{\tilde{AUC}^{(k)} - AUC}{\sigma^{(k)}_{BS}} \Bigg]_{\lfloor\alpha\cdot K\rfloor}
\end{align*}
$$
The quantile approach simply takes the empirical $\alpha/2$ and $1-\alpha/2$ quantiles of the AUROC from its bootstrapped distribution. Though the quantile approach is well suited to skewed bootstrapped distributions (i.e. the CIs need not be symmetric), it is known to be biased for small sample sizes. The classic bootstrap simply uses the bootstrapped AUROCs to estimate the empirical variance and then uses the standard normal approximation to generate CIs. The Studentized approach combines the variance estimate from the SE/classic approach with an allowance for a skewed distribution. For each bootstrap sample, an additional $K$ (or some large number) samples are drawn, so that each bootstrapped sample has an estimate of its variance. These studentized, or normalized, scores are then used in place of the quantile from the normal distribution.
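As a stand-alone sketch of the first two intervals (the function and variable names here are illustrative only; the full simulation below handles all three), the ranks of the positive class can be resampled directly:
```python
import numpy as np
from scipy import stats
# Sketch: quantile and classic (SE) bootstrap CIs from resampled positive-class ranks
# y, s are numpy arrays as returned by dgp_auc above
def bootstrap_auc_ci(y, s, n_bs=1000, alpha=0.05, seed=0):
    rng = np.random.RandomState(seed)
    n1 = int(sum(y))
    n0 = len(y) - n1
    r1 = stats.rankdata(s)[y == 1]              # ranks of the positive class
    auc_hat = (r1.sum() - n1*(n1+1)/2) / (n1*n0)
    idx = rng.randint(0, n1, size=(n_bs, n1))   # resample the positive-class ranks
    auc_bs = (r1[idx].sum(axis=1) - n1*(n1+1)/2) / (n1*n0)
    se = auc_bs.std(ddof=1)
    z = stats.norm.ppf(1 - alpha/2)
    ci_quantile = np.quantile(auc_bs, [alpha/2, 1-alpha/2])
    ci_classic = (auc_hat - z*se, auc_hat + z*se)
    return ci_quantile, ci_classic
```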
## (4) Simulations
Now we are ready to test the bootstrapping methods against their analytic counterparts. The simulations below will use a 10% positive class balance, along with a range of different sample sizes. Symmetric CIs will be calculated for the 80%, 90%, and 95% level. A total of 1500 simulations are run. An 80% symmetric CI that is exact should have a coverage of 80%, meaning that the true AUROC is contained within the CI 80% of the time. A CI that has a coverage below its nominal level will have a type-1 error rate that is greater than expected, whilst a CI that has coverage above its nominal level will have less power (i.e. a higher type-II error). In other words, the closer a CI is to its nominal level, the better.
```python
"""
HELPER FUNCTION TO RETURN +- INTERVALS
A: array of AUCs
se: array of SEs
cv: critical values (can be array: will be treated as 1xk)
"""
def ret_lbub(A, se, cv, method):
ub = cvec(A)+cvec(se)*rvec(cv)
lb = cvec(A)-cvec(se)*rvec(cv)
df_ub = pd.DataFrame(ub,columns=cn_cv).assign(bound='upper')
df_lb = pd.DataFrame(lb,columns=cn_cv).assign(bound='lower')
df = pd.concat([df_ub, df_lb]).assign(tt=method)
return df
nsim = 1500
prop = 0.1
n_bs = 1000
n_student = 250
n_seq = [50, 100, 250, 1000]#[]
auc_seq = [0.5, 0.7, 0.9 ] #"true" AUROC between the distributions
pvals = (1-np.array([0.8, 0.9, 0.95]))/2
crit_vals = np.abs(stats.norm.ppf(pvals))
cn_cv = ['p'+str(i+1) for i in range(len(pvals))]
np.random.seed(1)
if 'res.csv' in os.listdir():
res = pd.read_csv('res.csv')
else:
holder = []
for n in n_seq:
for auc in auc_seq:
print('n: %i, AUROC: %0.2f' % (n, auc))
n1 = int(np.round(n * prop))
n0 = n - n1
den = n1*n0
mu = np.sqrt(2) * stats.norm.ppf(auc)
Eta = np.r_[np.random.randn(n1, nsim)+mu, np.random.randn(n0,nsim)]
Y = np.r_[np.zeros([n1,nsim],dtype=int)+1, np.zeros([n0,nsim],dtype=int)]
# Calculate the AUCs across the columns
R1 = stats.rankdata(Eta,axis=0)[:n1]
Amat = (R1.sum(0) - n1*(n1+1)/2) / den
# --- Approach 1: Asymptotic U --- #
sd_u = np.sqrt((n0+n1+1)/spl(Amat)/den)
df_asym = ret_lbub(Amat, sd_u, crit_vals, 'asymptotic')
# --- Approach 2: Newcombe's wald
sd_newcombe = np.sqrt(Amat*(1-Amat)/((n1-1)*(n0-1))*(2*n-1-((3*n-3)/((2-Amat)*(1+Amat)))))
df_newcombe = ret_lbub(Amat, sd_newcombe, crit_vals, 'newcombe')
# --- Approach 3: Bootstrap the ranks --- #
R1_bs = pd.DataFrame(R1).sample(frac=n_bs,replace=True).values.reshape([n_bs]+list(R1.shape))
auc_bs = (R1_bs.sum(1) - n1*(n1+1)/2) / den
sd_bs = auc_bs.std(0,ddof=1)
# - (i) Standard error method - #
df_bs_se = ret_lbub(Amat, sd_bs, crit_vals, 'bootstrap_se')
# - (ii) Quantile method - #
df_lb_bs = pd.DataFrame(np.quantile(auc_bs,pvals,axis=0).T,columns=cn_cv).assign(bound='lower')
df_ub_bs = pd.DataFrame(np.quantile(auc_bs,1-pvals,axis=0).T,columns=cn_cv).assign(bound='upper')
df_bs_q = pd.concat([df_ub_bs, df_lb_bs]).assign(tt='bootstrap_q')
# - (iii) Studentized - #
se_bs_s = np.zeros(auc_bs.shape)
for j in range(n_bs):
R1_bs_s = pd.DataFrame(R1_bs[j]).sample(frac=n_student,replace=True).values.reshape([n_student]+list(R1.shape))
auc_bs_s = (R1_bs_s.sum(1) - n1*(n1+1)/2) / den
se_bs_s[j] = auc_bs_s.std(0,ddof=1)
# Get the t-score dist
t_bs = (auc_bs - rvec(Amat))/se_bs_s
df_lb_t = pd.DataFrame(cvec(Amat) - cvec(sd_bs)*np.quantile(t_bs,1-pvals,axis=0).T,columns=cn_cv).assign(bound='lower')
df_ub_t = pd.DataFrame(cvec(Amat) - cvec(sd_bs)*np.quantile(t_bs,pvals,axis=0).T,columns=cn_cv).assign(bound='upper')
df_t = pd.concat([df_ub_t, df_lb_t]).assign(tt='bootstrap_s')
# Combine
tmp_sim = pd.concat([df_asym, df_newcombe, df_bs_se, df_bs_q, df_t]).assign(auc=auc, n=n)
holder.append(tmp_sim)
# Merge and save
res = pd.concat(holder)
res = res.rename_axis('idx').reset_index().melt(['idx','bound','tt','auc','n'],cn_cv,'tpr')
res = res.pivot_table('value',['idx','tt','auc','n','tpr'],'bound').reset_index()
res.tpr = res.tpr.map(dict(zip(cn_cv, 1-2*pvals)))
res = res.assign(is_covered=lambda x: (x.lower <= x.auc) & (x.upper >= x.auc))
res.to_csv('res.csv',index=False)
res_cov = res.groupby(['tt','auc','n','tpr']).is_covered.mean().reset_index()
res_cov = res_cov.assign(sn = lambda x: pd.Categorical(x.n, x.n.unique()))
lvls_approach = ['asymptotic','newcombe','bootstrap_q','bootstrap_se','bootstrap_s']
lbls_approach = ['Asymptotic', 'Newcombe', 'BS (Quantile)', 'BS (Classic)', 'BS (Studentized)']
res_cov = res_cov.assign(tt = lambda x: pd.Categorical(x.tt, lvls_approach).map(dict(zip(lvls_approach, lbls_approach))))
res_cov.rename(columns={'tpr':'CoverageTarget', 'auc':'AUROC'}, inplace=True)
tmp = pd.DataFrame({'CoverageTarget':1-2*pvals, 'ybar':1-2*pvals})
plotnine.options.figure_size = (6.5, 5)
gg_cov = (ggplot(res_cov, aes(x='tt', y='is_covered',color='sn')) +
theme_bw() + geom_point() +
facet_grid('AUROC~CoverageTarget',labeller=label_both) +
theme(axis_text_x=element_text(angle=90), axis_title_x=element_blank()) +
labs(y='Coverage') +
geom_hline(aes(yintercept='ybar'),data=tmp) +
scale_color_discrete(name='Sample size'))
gg_cov
```
Figure 5 shows the coverage results for the different approaches across different conditions. Newcombe's method is consistently the worst performer, with the CIs being much too conservative. The estimated standard errors (SEs) are at least 40% larger than the asymptotic ones (code not shown), leading to a CI with significantly reduced power. The asymptotic approach and quantile/classic bootstrap have SEs which are too small when the sample size is limited, leading to under-coverage and an inflated type-I error rate. For sample sizes of at least 1000, the asymptotic intervals are quite accurate. The studentized bootstrap is by far the most accurate approach, especially for small sample sizes, and tends to be conservative (over-coverage). Overall the studentized bootstrap is the clear winner. However, it is also the most computationally costly, which means for large samples the asymptotic estimates may be better.
## (5) Ranking bootstraps?
Readers may be curious whether ranking the bootstraps, rather than bootstrapping the ranks, may lead to better inference. Section (3) has already noted the obvious computational gains from bootstrapping the ranks. Despite my initial impression that ranking the bootstraps would lead to more variation because of the additional variation in the negative class, this turned out not to be the case due to the creation of ties in the scores which reduces the variation in the final AUROC estimate. The simulation block shows that the SE of the bootstrapped ranks is higher than the ranked bootstraps in terms of the AUROC statistic. Since the bootstrap approach did not have a problem of over-coverage, the smaller SEs will lead to higher type-I error rates, especially for small sample sizes. In this case, the statistical advantages of bootstrapping the ranks also coincide with a computational benefit.
```python
if 'df_bs.csv' in os.listdir():
df_bs = pd.read_csv('df_bs.csv')
else:
seed = 1
np.random.seed(seed)
n_bs, nsim = 1000, 1500
n1, n0, mu = 25, 75, 1
s = np.concatenate((np.random.randn(n1, nsim)+mu, np.random.randn(n0,nsim)))
y = np.concatenate((np.repeat(1,n1),np.repeat(0,n0)))
r = stats.rankdata(s,axis=0)[:n1]
s1, s0 = s[:n1], s[n1:]
r_bs = pd.DataFrame(r).sample(frac=n_bs,replace=True,random_state=seed).values.reshape([n_bs]+list(r.shape))
s_bs1 = pd.DataFrame(s1).sample(frac=n_bs,replace=True,random_state=seed).values.reshape([n_bs]+list(s1.shape))
s_bs0 = pd.DataFrame(s0).sample(frac=n_bs,replace=True,random_state=seed).values.reshape([n_bs]+list(s0.shape))
s_bs = np.concatenate((s_bs1, s_bs0),axis=1)
r_s_bs = stats.rankdata(s_bs,axis=1)[:,:n1,:]
auc_bs = (r_bs.sum(1) - n1*(n1+1)/2)/(n1*n0)
auc_s_bs = (r_s_bs.sum(1) - n1*(n1+1)/2)/(n1*n0)
se_bs = auc_bs.std(0)
se_s_bs = auc_s_bs.std(0)
df_bs = pd.DataFrame({'bs_r':se_bs, 'r_bs':se_s_bs})
df_bs.to_csv('df_bs.csv', index=False)
print('Mean bootstrap SE (AUROC) for bootstrapping ranks: %0.3f, and ranking bootstraps: %0.3f' %
(np.mean(df_bs.bs_r),np.mean(df_bs.r_bs)))
```
    Mean bootstrap SE (AUROC) for bootstrapping ranks: 0.064, and ranking bootstraps: 0.054
```python
```
| 81cb896f982194dccf9862437be257b80a403a45 | 266,867 | ipynb | Jupyter Notebook | _rmd/extra_AUC_CI/auc_sim.ipynb | erikdrysdale/erikdrysdale.github.io | ff337117e063be7f909bc2d1f3ff427781d29f31 | [
"MIT"
]
| null | null | null | _rmd/extra_AUC_CI/auc_sim.ipynb | erikdrysdale/erikdrysdale.github.io | ff337117e063be7f909bc2d1f3ff427781d29f31 | [
"MIT"
]
| null | null | null | _rmd/extra_AUC_CI/auc_sim.ipynb | erikdrysdale/erikdrysdale.github.io | ff337117e063be7f909bc2d1f3ff427781d29f31 | [
"MIT"
]
| 2 | 2017-09-13T15:16:36.000Z | 2020-03-03T15:37:01.000Z | 376.930791 | 81,032 | 0.914246 | true | 8,504 | Qwen/Qwen-72B | 1. YES
2. YES | 0.888759 | 0.855851 | 0.760645 | __label__eng_Latn | 0.949725 | 0.605566 |
# Announcements
- __Please familiarize yourself with the term projects, and sign up for your (preliminary) choice__ using [this form](https://forms.gle/ByLLpsthrpjCcxG89). _You may revise your choice, but I'd recommend settling on a choice well before Thanksgiving._
- Recommended reading on ODEs: [Lecture notes by Prof. Hjorth-Jensen (University of Oslo)](https://www.asc.ohio-state.edu/physics/ntg/6810/readings/hjorth-jensen_notes2013_08.pdf)
- Problem Set 5 will be posted on D2L on Oct 12, due Oct 20.
- __Outlook__: algorithms for solving high-dimensional linear and non-linear equations; then Boundary Value Problems and Partial Differential Equations.
- Conference for Undergraduate Women in Physics: online event in 2021, [applications accepted until 10/25](https://www.aps.org/programs/women/cuwip/)
This notebook presents as selection of topics from the book "Numerical Linear Algebra" by Trefethen and Bau (SIAM, 1997), and uses notebooks by Kyle Mandli.
# Numerical Linear Algebra
Numerical methods for linear algebra problems lies at the heart of many numerical approaches and is something we will spend some time on. Roughly we can break down problems that we would like to solve into two general problems, solving a system of equations
$$A \vec{x} = \vec{b}$$
and solving the eigenvalue problem
$$A \vec{v} = \lambda \vec{v}.$$
We examine each of these problems separately and will evaluate some of the fundamental properties and methods for solving these problems. We will be careful in deciding how to evaluate the results of our calculations and try to gain some understanding of when and how they fail.
## General Problem Specification
The number and power of the different tools made available from the study of linear algebra makes it an invaluable field of study. Before we dive in to numerical approximations we first consider some of the pivotal problems that numerical methods for linear algebra are used to address.
For this discussion we will be using the common notation $m \times n$ to denote the dimensions of a matrix $A$. The $m$ refers to the number of rows and $n$ the number of columns. If a matrix is square, i.e. $m = n$, then we will use the notation that $A$ is $m \times m$.
### Systems of Equations
The first type of problem is to find the solution to a linear system of equations. If we have $m$ equations for $m$ unknowns it can be written in matrix/vector form,
$$A \vec{x} = \vec{b}.$$
For this example $A$ is an $m \times m$ matrix, denoted as being in $\mathbb{R}^{m\times m}$, and $\vec{x}$ and $\vec{b}$ are column vectors with $m$ entries, denoted as $\mathbb{R}^m$.
#### Example: Vandermonde Matrix
We have data $(x_i, y_i), ~~ i = 1, 2, \ldots, m$ that we want to fit a polynomial of order $m-1$. Solving the linear system $A p = y$ does this for us where
$$A = \begin{bmatrix}
1 & x_1 & x_1^2 & \cdots & x_1^{m-1} \\
1 & x_2 & x_2^2 & \cdots & x_2^{m-1} \\
\vdots & \vdots & \vdots & & \vdots \\
1 & x_m & x_m^2 & \cdots & x_m^{m-1}
\end{bmatrix} \quad \quad y = \begin{bmatrix}
y_1 \\ y_2 \\ \vdots \\ y_m
\end{bmatrix}$$
and $p$ are the coefficients of the interpolating polynomial $\mathcal{P}_N(x) = p_0 + p_1 x + p_2 x^2 + \cdots + p_m x^{m-1}$. The solution to this system satisfies $\mathcal{P}_N(x_i)=y_i$ for $i=1, 2, \ldots, m$.
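As a small numerical sketch of this idea (the data here are purely illustrative), we can build the Vandermonde system and check that the interpolant reproduces the data:
```python
import numpy
# Sketch: polynomial interpolation via the Vandermonde system A p = y
m = 5
x = numpy.linspace(-1.0, 1.0, m)
y = numpy.cos(2.0 * numpy.pi * x)
# Columns are 1, x, x^2, ..., x^(m-1)
A = numpy.vander(x, N=m, increasing=True)
p = numpy.linalg.solve(A, y)
# The interpolating polynomial passes through every data point (up to round-off)
print(numpy.allclose(numpy.polyval(p[::-1], x), y))
```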
#### Example: Linear least squares 1
In a similar case as above, say we want to fit a particular function (could be a polynomial) to a given number of data points except in this case we have more data points than free parameters. In the case of polynomials this could be the same as saying we have $m$ data points but only want to fit a $n - 1$ order polynomial through the data where $n - 1 \leq m$. One of the common approaches to this problem is to minimize the "least-squares" error between the data and the resulting function:
$$
E = \left( \sum^m_{i=1} |y_i - f(x_i)|^2 \right )^{1/2}.
$$
But how do we do this if our matrix $A$ is now $m \times n$ and looks like
$$
A = \begin{bmatrix}
1 & x_1 & x_1^2 & \cdots & x_1^{n-1} \\
1 & x_2 & x_2^2 & \cdots & x_2^{n-1} \\
\vdots & \vdots & \vdots & & \vdots \\
1 & x_m & x_m^2 & \cdots & x_m^{n-1}
\end{bmatrix}?
$$
It turns out that if we solve the system
$$A^T A x = A^T b$$
we can guarantee that the error is minimized in the least-squares sense[<sup>1</sup>](#footnoteRegression).
#### Practical Example: Linear least squares implementation
Fitting a line through data that has random noise added to it.
```python
%matplotlib inline
%precision 3
import numpy
import matplotlib.pyplot as plt
```
```python
# Linear Least Squares Problem
# First define the independent and dependent variables.
N = 20
x = numpy.linspace(-1.0, 1.0, N)
y = x + numpy.random.random((N))
# Define the Vandermonde matrix based on our x-values
A = numpy.ones((x.shape[0], 2))
A[:, 1] = x
# Determine the coefficients of the polynomial that will
# result in the smallest sum of the squares of the residual.
p = numpy.linalg.solve(numpy.dot(A.transpose(), A), numpy.dot(A.transpose(), y))
print("Error in slope = %s, y-intercept = %s" % (numpy.abs(p[1] - 1.0), numpy.abs(p[0] - 0.5)))
# Plot it out, cuz pictures are fun!
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, y, 'ko')
axes.plot(x, p[0] + p[1] * x, 'r')
axes.set_title("Least Squares Fit to Data")
axes.set_xlabel("$x$")
axes.set_ylabel("$f(x)$ and $y_i$")
plt.show()
```
### Eigenproblems
Eigenproblems come up in a variety of contexts and often are integral to many problems of scientific and engineering interest. It is such a powerful idea that it is not uncommon for us to take a problem and convert it into an eigenproblem. We will cover detailed algorithms for eigenproblems in the next lectures, but for now let's remind ourselves of the problem and analytic solution:
If $A \in \mathbb{C}^{m\times m}$ (a square matrix with complex values), a non-zero vector $\vec{v}\in\mathbb{C}^m$ is an **eigenvector** of $A$ with a corresponding **eigenvalue** $\lambda \in \mathbb{C}$ if
$$A \vec{v} = \lambda \vec{v}.$$
One way to interpret the eigenproblem is that we are attempting to ascertain the "action" of the matrix $A$ on some subspace of $\mathbb{C}^m$ where this action acts like scalar multiplication. This subspace is called an **eigenspace**.
#### Example
Compute the eigenspace of the matrix
$$
A = \begin{bmatrix}
1 & 2 \\
2 & 1
\end{bmatrix}
$$
Recall that we can find the eigenvalues of a matrix by computing $\det(A - \lambda I) = 0$.
In this case we have
$$\begin{aligned}
A - \lambda I &= \begin{bmatrix}
1 & 2 \\
2 & 1
\end{bmatrix} - \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \lambda\\
&= \begin{bmatrix}
1 - \lambda & 2 \\
2 & 1 - \lambda
\end{bmatrix}.
\end{aligned}$$
The determinant of the matrix is
$$\begin{aligned}
\begin{vmatrix}
1 - \lambda & 2 \\
2 & 1 - \lambda
\end{vmatrix} &= (1 - \lambda) (1 - \lambda) - 2 \cdot 2 \\
&= 1 - 2 \lambda + \lambda^2 - 4 \\
&= \lambda^2 - 2 \lambda - 3.
\end{aligned}$$
This result is sometimes referred to as the characteristic equation of the matrix, $A$.
Setting the determinant equal to zero we can find the eigenvalues as
$$\begin{aligned}
& \\
\lambda &= \frac{2 \pm \sqrt{4 - 4 \cdot 1 \cdot (-3)}}{2} \\
&= 1 \pm 2 \\
&= -1 \mathrm{~and~} 3
\end{aligned}$$
The eigenvalues are used to determine the eigenvectors. The eigenvectors are found by going back to the equation $(A - \lambda I) \vec{v}_i = 0$ and solving for each vector. A trick that works some of the time is to normalize each vector such that its first entry is 1 ($v_1 = 1$):
$$
\begin{bmatrix}
1 - \lambda & 2 \\
2 & 1 - \lambda
\end{bmatrix} \begin{bmatrix} 1 \\ v_2 \end{bmatrix} = 0
$$
$$\begin{aligned}
1 - \lambda + 2 v_2 &= 0 \\
v_2 &= \frac{\lambda - 1}{2}
\end{aligned}$$
We can check this by
$$\begin{aligned}
2 + (1 - \lambda) \frac{\lambda - 1}{2} & = 0\\
(\lambda - 1)^2 - 4 &=0
\end{aligned}$$
which by design is satisfied by our eigenvalues. Another sometimes easier approach is to plug-in the eigenvalues to find each corresponding eigenvector. The eigenvectors are therefore
$$\vec{v} = \begin{bmatrix}1 \\ -1 \end{bmatrix}, \begin{bmatrix}1 \\ 1 \end{bmatrix}.$$
Note that these are linearly independent.
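As a quick sketch, we can also check the hand computation numerically:
```python
import numpy
A = numpy.array([[1.0, 2.0], [2.0, 1.0]])
lam, V = numpy.linalg.eig(A)
print(lam)   # eigenvalues 3 and -1 (possibly in a different order)
# Each column of V is a normalized eigenvector; verify A v = lambda v
for i in range(2):
    print(numpy.allclose(numpy.dot(A, V[:, i]), lam[i] * V[:, i]))
```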
## Fundamentals
### Matrix-Vector Multiplication
One of the most basic operations we can perform with matrices is to multiply them be a vector. This matrix-vector product $A \vec{x} = \vec{b}$ is defined as
$$
b_i = \sum^n_{j=1} a_{ij} x_j \quad \text{where}\quad i = 1, \ldots, m
$$
Writing the matrix-vector product this way we see that one interpretation of this product is that each column of $A$ is weighted by the value $x_j$, or in other words $\vec{b}$ is a linear combination of the columns of $A$ where each column's weighting is $x_j$.
$$
\begin{align}
\vec{b} &= A \vec{x}, \\
\vec{b} &=
\begin{bmatrix} & & & \\ & & & \\ \vec{a}_1 & \vec{a}_2 & \cdots & \vec{a}_n \\ & & & \\ & & & \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}, \\
\vec{b} &= x_1 \vec{a}_1 + x_2 \vec{a}_2 + \cdots + x_n \vec{a}_n.
\end{align}
$$
This view will be useful later when we are trying to interpret various types of matrices.
One important property of the matrix-vector product is that is a **linear** operation, also known as a **linear operator**. This means that the for any $\vec{x}, \vec{y} \in \mathbb{C}^n$ and any $c \in \mathbb{C}$ we know that
1. $A (\vec{x} + \vec{y}) = A\vec{x} + A\vec{y}$
1. $A\cdot (c\vec{x}) = c A \vec{x}$
#### Example: Vandermonde Matrix
In the case where we have $m$ data points and want $m - 1$ order polynomial interpolant the matrix $A$ is a square, $m \times m$, matrix as before. Using the above interpretation the polynomial coefficients $p$ are the weights for each of the monomials that give exactly the $y$ values of the data.
#### Example: Numerical matrix-vector multiply
Write a matrix-vector multiply function and check it with the appropriate `numpy` routine. Also verify the linearity of the matrix-vector multiply.
```python
#A x = b
#(m x n) (n x 1) = (m x 1)
def matrix_vector_product(A, x):
m, n = A.shape
b = numpy.zeros(m)
for i in range(m):
for j in range(n):
b[i] += A[i, j] * x[j]
return b
m = 4
n = 3
A = numpy.random.uniform(size=(m,n))
x = numpy.random.uniform(size=(n))
y = numpy.random.uniform(size=(n))
c = numpy.random.uniform()
b = matrix_vector_product(A, x)
print(numpy.allclose(b, numpy.dot(A, x)))
print(numpy.allclose(matrix_vector_product(A, (x + y)), matrix_vector_product(A, x) + matrix_vector_product(A, y)))
print(numpy.allclose(matrix_vector_product(A, c * x), c*matrix_vector_product(A, x)))
```
True
True
True
### Matrix-Matrix Multiplication
The matrix product with another matrix $A C = B$ is defined as
$$
b_{ij} = \sum^m_{k=1} a_{ik} c_{kj}.
$$
Again, a useful interpretation of this operation is that each column of the product $B$ is a linear combination of the columns of $A$.
_What are the dimensions of $A$ and $C$ so that the multiplication works?_
#### Example: Outer Product
The product of two vectors $\vec{u} \in \mathbb{C}^m$ and $\vec{v} \in \mathbb{C}^n$ is a $m \times n$ matrix where the columns are the vector $u$ multiplied by the corresponding value of $v$:
$$
\begin{align}
\vec{u} \vec{v}^T &=
\begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{bmatrix}
\begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix}, \\
& = \begin{bmatrix} v_1u_1 & \cdots & v_n u_1 \\ \vdots & & \vdots \\ v_1 u_m & \cdots & v_n u_m \end{bmatrix}.
\end{align}
$$
It is useful to think of these as operations on the column vectors, and an equivalent way to express this relationship is
$$
\begin{align}
\vec{u} \vec{v}^T &=
\begin{bmatrix} \\ \vec{u} \\ \\ \end{bmatrix}
\begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix}, \\
&=
\begin{bmatrix} & & & \\ & & & \\ \vec{u}v_1 & \vec{u} v_2 & \cdots & \vec{u} v_n \\ & & & \\ & & & \end{bmatrix}, \\
& = \begin{bmatrix} v_1u_1 & \cdots & v_n u_1 \\ \vdots & & \vdots \\ v_1 u_m & \cdots & v_n u_m \end{bmatrix}.
\end{align}
$$
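A short sketch with illustrative vectors shows that the outer product is a rank-one matrix whose columns are multiples of $\vec{u}$:
```python
import numpy
u = numpy.array([1.0, 2.0, 3.0])
v = numpy.array([4.0, 5.0])
B = numpy.outer(u, v)
print(B)
# Every column of B is a scalar multiple of u, so the rank is one
print(numpy.linalg.matrix_rank(B))
```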
#### Example: Upper Triangular Multiplication
Consider the multiplication of a matrix $A \in \mathbb{C}^{m\times n}$ and the **upper-triangular** matrix $R$ defined as the $n \times n$ matrix with entries $r_{ij} = 1$ for $i \leq j$ and $r_{ij} = 0$ for $i > j$. The product can be written as
$$
\begin{bmatrix} \\ \\ \vec{b}_1 & \cdots & \vec{b}_n \\ \\ \\ \end{bmatrix} = \begin{bmatrix} \\ \\ \vec{a}_1 & \cdots & \vec{a}_n \\ \\ \\ \end{bmatrix} \begin{bmatrix} 1 & \cdots & 1 \\ & \ddots & \vdots \\ & & 1 \end{bmatrix}.
$$
The columns of $B$ are then
$$
\vec{b}_j = A \vec{r}_j = \sum^j_{k=1} \vec{a}_k
$$
so that $\vec{b}_j$ is the sum of the first $j$ columns of $A$.
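A quick sketch (with a random illustrative matrix) confirms that multiplying by this upper-triangular matrix of ones produces cumulative column sums:
```python
import numpy
m, n = 4, 3
A = numpy.random.uniform(size=(m, n))
R = numpy.triu(numpy.ones((n, n)))   # upper-triangular matrix of ones
B = numpy.dot(A, R)
# Column j of B is the sum of the first j columns of A
print(numpy.allclose(B, numpy.cumsum(A, axis=1)))
```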
#### Example: Write Matrix-Matrix Multiplication
Write a function that computes matrix-matrix multiplication and demonstrate the following properties:
1. $A (B + C) = AB + AC$ (for square matrices)
1. $A (cB) = c AB$ where $c \in \mathbb{C}$
1. $AB \neq BA$ in general
```python
def matrix_matrix_product(A, B):
C = numpy.zeros((A.shape[0], B.shape[1]))
for i in range(A.shape[0]):
for j in range(B.shape[1]):
for k in range(A.shape[1]):
C[i, j] += A[i, k] * B[k, j]
return C
m = 4
n = 4
p = 4
A = numpy.random.uniform(size=(m, n))
B = numpy.random.uniform(size=(n, p))
C = numpy.random.uniform(size=(m, p))
c = numpy.random.uniform()
print(numpy.allclose(matrix_matrix_product(A, B), numpy.dot(A, B)))
print(numpy.allclose(matrix_matrix_product(A, (B + C)), matrix_matrix_product(A, B) + matrix_matrix_product(A, C)))
print(numpy.allclose(matrix_matrix_product(A, c * B), c*matrix_matrix_product(A, B)))
print(numpy.allclose(matrix_matrix_product(A, B), matrix_matrix_product(B, A)))
```
True
True
True
False
### Matrices in NumPy
NumPy and SciPy contain routines that are optimized to perform matrix-vector and matrix-matrix multiplication. Given two `ndarray`s you can take their product by using the `dot` function.
```python
n = 10
m = 5
# Matrix vector with identity
A = numpy.identity(n)
x = numpy.random.random(n)
print(numpy.allclose(x, numpy.dot(A, x)))
# Matrix vector product
A = numpy.random.random((m, n))
print(numpy.dot(A, x))
# Matrix matrix product
B = numpy.random.random((n, m))
print(numpy.dot(A, B))
```
True
[1.743 2.649 2.492 1.879 1.991]
[[2.64 3.47 4.001 2.95 3.14 ]
[3.002 2.808 3.185 2.708 2.143]
[2.952 3.329 2.967 2.33 2.118]
[2.743 2.507 2.986 2.415 2.36 ]
[2.296 2.706 3.171 1.765 2.256]]
### Range and Null-Space
#### Range
- The **range** of a matrix $A \in \mathbb R^{m \times n}$ (similar to any function), denoted as $\text{range}(A)$, is the set of vectors that can be expressed as $A x$ for $x \in \mathbb R^n$.
- We can also then say that that $\text{range}(A)$ is the space **spanned** by the columns of $A$. In other words the columns of $A$ provide a basis for $\text{range}(A)$, also called the **column space** of the matrix $A$.
#### Null-Space
- Similarly the **null-space** of a matrix $A$, denoted $\text{null}(A)$ is the set of vectors $x$ that satisfy $A x = 0$.
- A related concept is the **rank** of the matrix $A$, denoted $\text{rank}(A)$, which is the dimension of the column space. A matrix $A$ is said to have **full-rank** if $\text{rank}(A) = \min(m, n)$. This property also implies that the matrix mapping is **one-to-one**. A short numerical example is given below.
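As a sketch with an illustrative rank-deficient matrix:
```python
import numpy
A = numpy.array([[1.0, 2.0, 3.0],
                 [2.0, 4.0, 6.0],   # twice the first row
                 [0.0, 1.0, 1.0]])
# The third column equals the sum of the first two, so A is not full rank
print(numpy.linalg.matrix_rank(A))  # 2
# The vector (1, 1, -1) is therefore in null(A)
x = numpy.array([1.0, 1.0, -1.0])
print(numpy.dot(A, x))              # [0. 0. 0.]
```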
### Inverse
A **non-singular** or **invertible** matrix is characterized as a matrix with full-rank. This is related to why we know that the matrix is one-to-one, we can use it to transform a vector $x$ and using the inverse, denoted $A^{-1}$, we can map it back to the original matrix. The familiar definition of this is
\begin{align*}
A \vec{x} &= \vec{b}, \\
A^{-1} A \vec{x} & = A^{-1} \vec{b}, \\
x &=A^{-1} \vec{b}.
\end{align*}
Since $A$ has full rank, its columns form a basis for $\mathbb{R}^m$ and the vector $\vec{b}$ must be in the column space of $A$.
There are a number of important properties of a non-singular matrix A. Here we list them as the following equivalent statements
1. $A$ has an inverse $A^{-1}$
1. $\text{rank}(A) = m$
1. $\text{range}(A) = \mathbb{C}^m$
1. $\text{null}(A) = {0}$
1. 0 is not an eigenvalue of $A$
1. $\text{det}(A) \neq 0$
#### Example: Properties of invertible matrices
Show that given an invertible matrix that the rest of the properties hold. Make sure to search the `numpy` packages for relevant functions.
```python
m = 3
for n in range(100):
A = numpy.random.uniform(size=(m, m))
if numpy.linalg.det(A) != 0:
break
print(numpy.dot(numpy.linalg.inv(A), A))
print(numpy.linalg.matrix_rank(A))
print("range")
print(numpy.linalg.solve(A, numpy.zeros(m)))
print(numpy.linalg.eigvals(A))
```
[[ 1.000e+00 -7.574e-16 -3.664e-17]
[ 6.819e-17 1.000e+00 -1.161e-16]
[ 1.660e-17 1.371e-16 1.000e+00]]
3
    null-space (solve A x = 0):
[ 0. -0. 0.]
[ 1.044 -0.132 -0.542]
### Orthogonal Vectors and Matrices
Orthogonality is a very important concept in linear algebra that forms the basis of many of the modern methods used in numerical computations.
Two vectors are said to be orthogonal if their **inner-product** or **dot-product** defined as
$$
< \vec{x}, \vec{y} > \equiv (\vec{x}, \vec{y}) \equiv \vec{x}^T\vec{y} \equiv \vec{x} \cdot \vec{y} = \sum^m_{i=1} x_i y_i
$$
Here we have shown the various notations you may run into (the inner-product is in-fact a general term for a similar operation for mathematical objects such as functions).
If $\langle \vec{x},\vec{y} \rangle = 0$ then we say $\vec{x}$ and $\vec{y}$ are orthogonal. The reason we use this terminology is that the inner-product of two vectors can also be written in terms of the angle between them where
$$
\cos \theta = \frac{\langle \vec{x}, \vec{y} \rangle}{||\vec{x}||_2~||\vec{y}||_2}
$$
and $||\vec{x}||_2$ is the Euclidean ($\ell^2$) norm of the vector $\vec{x}$.
We can write this in terms of the inner-product as well as
$$
||\vec{x}||_2^2 = \langle \vec{x}, \vec{x} \rangle = \vec{x}^T\vec{x} = \sum^m_{i=1} |x_i|^2.
$$
The generalization of the inner-product to complex spaces is defined as
$$
\langle x, y \rangle = \sum^m_{i=1} x_i^* y_i
$$
where $x_i^*$ is the complex-conjugate of the value $x_i$.
#### Orthonormality
Taking this idea one step further we can say a set of vectors $\vec{x} \in X$ are orthogonal to $\vec{y} \in Y$ if $\forall \vec{x},\vec{y}$ $< \vec{x}, \vec{y} > = 0$. If $\forall \vec{x},\vec{y}$ $||\vec{x}|| = 1$ and $||\vec{y}|| = 1$ then they are also called orthonormal. Note that we dropped the 2 as a subscript to the notation for the norm of a vector. Later we will explore other ways to define a norm of a vector other than the Euclidean norm defined above.
Another concept that is related to orthogonality is linear-independence. A set of vectors $\vec{x} \in X$ are **linearly independent** if $\forall \vec{x} \in X$ that each $\vec{x}$ cannot be written as a linear combination of the other vectors in the set $X$.
An equivalent statement is that there does not exist a set of scalars $c_i$ such that
$$
\vec{x}_k = \sum^n_{i=1, i \neq k} c_i \vec{x}_i.
$$
Note that if the vectors in $X$ are mutually orthogonal (and non-zero) then they are automatically linearly independent, although linear independence alone does not imply orthogonality.
This can be related directly through the idea of projection. If we have a set of vectors $\vec{x} \in X$ we can project another vector $\vec{v}$ onto the vectors in $X$ by using the inner-product. This is especially powerful if we have a set of orthonormal vectors $X$ that **span** a space (i.e. provide an orthonormal **basis** for the space), since any vector in the space spanned by $X$ can then be expanded in terms of the basis vectors as
$$
\vec{v} = \sum^n_{i=1} \, \langle \vec{v}, \vec{x}_i \rangle \, \vec{x}_i.
$$
Note if $\vec{v} \in X$ that
$$
\langle \vec{v}, \vec{x}_i \rangle = 0 \quad \forall \vec{x}_i \in X \setminus \vec{v}.
$$
Looping back to matrices, the column space of a matrix is spanned by its linearly independent columns. Any vector $v$ in the column space can therefore be expressed via the equation above. A special class of matrices are called **unitary** matrices when complex-valued and **orthogonal** when purely real-valued if the columns of the matrix are orthonormal to each other. Importantly this implies that for a unitary matrix $Q$ we know the following
1. $Q^* = Q^{-1}$
1. $Q^*Q = I$
where $Q^*$ is called the **adjoint** of $Q$. The adjoint is defined as the transpose of the original matrix with the entries being the complex conjugate of each entry as the notation implies.
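A short sketch (using an orthogonal matrix obtained from a QR factorization of a random matrix) verifies these properties:
```python
import numpy
Q, _ = numpy.linalg.qr(numpy.random.uniform(size=(4, 4)))
# Columns are orthonormal, so Q^T Q = I and Q^T acts as the inverse
print(numpy.allclose(numpy.dot(Q.transpose(), Q), numpy.identity(4)))
# Multiplication by Q preserves the Euclidean norm of a vector
x = numpy.random.uniform(size=4)
print(numpy.allclose(numpy.linalg.norm(numpy.dot(Q, x), ord=2),
                     numpy.linalg.norm(x, ord=2)))
```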
### Vector Norms
Norms (and also measures) provide a means for measure the "size" or distance in a space. In general a norm is a function, denoted by $||\cdot||$, that maps $\mathbb{C}^m \rightarrow \mathbb{R}$. In other words we stick in a multi-valued object and get a single, real-valued number out the other end. All norms satisfy the properties:
1. $||\vec{x}|| \geq 0$, and $||\vec{x}|| = 0$ only if $\vec{x} = \vec{0}$
1. $||\vec{x} + \vec{y}|| \leq ||\vec{x}|| + ||\vec{y}||$ (triangle inequality)
1. $||c \vec{x}|| = |c| ~ ||\vec{x}||$ where $c \in \mathbb{C}$
There are a number of relevant norms that we can define beyond the Euclidean norm, also know as the 2-norm or $\ell_2$ norm:
1. $\ell_1$ norm:
$$
||\vec{x}||_1 = \sum^m_{i=1} |x_i|,
$$
1. $\ell_2$ norm:
$$
||\vec{x}||_2 = \left( \sum^m_{i=1} |x_i|^2 \right)^{1/2},
$$
1. $\ell_p$ norm:
$$
||\vec{x}||_p = \left( \sum^m_{i=1} |x_i|^p \right)^{1/p}, \quad \quad 1 \leq p < \infty,
$$
1. $\ell_\infty$ norm:
$$
||\vec{x}||_\infty = \max_{1\leq i \leq m} |x_i|,
$$
1. weighted $\ell_p$ norm:
$$
||\vec{x}||_{W_p} = \left( \sum^m_{i=1} |w_i x_i|^p \right)^{1/p}, \quad \quad 1 \leq p < \infty,
$$
These are also related to other norms denoted by capital letters ($L_2$ for instance). In this case we use the lower-case notation to denote finite or discrete versions of the infinite dimensional counterparts.
#### Example: Comparisons Between Norms
Compute the norms given some vector $\vec{x}$ and compare their values. Verify the properties of the norm for one of the norms.
```python
m = 10
p = 4
x = numpy.random.uniform(size=m)
ell_1 = 0.0
for i in range(m):
ell_1 += numpy.abs(x[i])
ell_2 = 0.0
for i in range(m):
ell_2 += numpy.abs(x[i])**2
ell_2 = numpy.sqrt(ell_2)
ell_p = 0.0
for i in range(m):
ell_p += numpy.abs(x[i])**p
ell_p = ell_p**(1.0 / p)
ell_infty = numpy.max(numpy.abs(x))
print("L_1 = %s, L_2 = %s, L_%s = %s, L_infty = %s" % (ell_1, ell_2, p, ell_p, ell_infty))
y = numpy.random.uniform(size=m)
print()
print("Properties of norms:")
print(numpy.max(numpy.abs(x + y)), numpy.max(numpy.abs(x)) + numpy.max(numpy.abs(y)))
print(numpy.max(numpy.abs(0.1 * x)), 0.1 * numpy.max(numpy.abs(x)))
```
L_1 = 5.8008234607735485, L_2 = 2.0779946096879787, L_4 = 1.2006352895565875, L_infty = 0.9011530476321019
Properties of norms:
1.7217409744381857 1.8523009490139553
0.0901153047632102 0.0901153047632102
### Matrix Norms
The most direct way to consider a matrix norm is those induced by a vector-norm. Given a vector norm, we can define a matrix norm as the smallest number $C$ that satisfies the inequality
$$
||A \vec{x}||_{m} \leq C ||\vec{x}||_{n}.
$$
or as the supremum of the ratios so that
$$
C = \sup_{\vec{x}\in\mathbb{C}^n ~ \vec{x}\neq\vec{0}} \frac{||A \vec{x}||_{m}}{||\vec{x}||_n}.
$$
Noting that $||A \vec{x}||$ lives in the column space and $||\vec{x}||$ on the domain we can think of the matrix norm as the "size" of the matrix that maps the domain to the range. Also noting that if $||\vec{x}||_n = 1$ we also satisfy the condition we can write the induced matrix norm as
$$
||A||_{(m,n)} = \sup_{\vec{x} \in \mathbb{C}^n ~ ||\vec{x}||_{n} = 1} ||A \vec{x}||_{m}.
$$
#### Example: Induced Matrix Norms
Consider the matrix
$$
A = \begin{bmatrix} 1 & 2 \\ 0 & 2 \end{bmatrix}.
$$
Compute the induced-matrix norm of $A$ for the vector norms $\ell_2$ and $\ell_\infty$.
$\ell^2$: For both of the requested norms the unit-length vectors $[1, 0]$ and $[0, 1]$ can be used to give an idea of what the norm might be and provide a lower bound.
$$
||A||_2 = \sup_{x \in \mathbb{R}^n} \left( ||A \cdot [1, 0]^T||_2, ||A \cdot [0, 1]^T||_2 \right )
$$
computing each of the norms we have
$$\begin{aligned}
\begin{bmatrix} 1 & 2 \\ 0 & 2 \end{bmatrix} \cdot \begin{bmatrix} 1 \\ 0 \end{bmatrix} &= \begin{bmatrix} 1 \\ 0 \end{bmatrix} \\
\begin{bmatrix} 1 & 2 \\ 0 & 2 \end{bmatrix} \cdot \begin{bmatrix} 0 \\ 1 \end{bmatrix} &= \begin{bmatrix} 2 \\ 2 \end{bmatrix}
\end{aligned}$$
which translates into the norms $||A \cdot [1, 0]^T||_2 = 1$ and $||A \cdot [0, 1]^T||_2 = 2 \sqrt{2}$. This implies that the $\ell_2$-induced matrix norm of $A$ is at least $2 \sqrt{2} \approx 2.828$.
The exact value of $||A||_2$ can be computed using the spectral radius defined as
$$
\rho(A) = \max_{i} |\lambda_i|,
$$
where $\lambda_i$ are the eigenvalues of $A$. With this we can compute the $\ell_2$ norm of $A$ as
$$
||A||_2 = \sqrt{\rho(A^\ast A)}
$$
Computing the norm again here we find
$$
A^\ast A = \begin{bmatrix} 1 & 0 \\ 2 & 2 \end{bmatrix} \begin{bmatrix} 1 & 2 \\ 0 & 2 \end{bmatrix} = \begin{bmatrix} 1 & 2 \\ 2 & 8 \end{bmatrix}
$$
which has eigenvalues
$$
\lambda = \frac{1}{2}\left(9 \pm \sqrt{65}\right )
$$
so $||A||_2 \approx 2.9208096$.
$\ell^\infty$: We can again bound $||A||_\infty$ from below by looking at the unit vectors, which gives a lower bound of 2. To compute it exactly, it turns out that $||A||_{\infty} = \max_{1 \leq i \leq m} ||a^\ast_i||_1$ where $a^\ast_i$ is the $i$th row of $A$, i.e. the maximum of the absolute row sums of $A$. Therefore $||A||_\infty = 3$.
```python
A = numpy.array([[1, 2], [0, 2]])
print(numpy.linalg.norm(A, ord=2))
print(numpy.linalg.norm(A, ord=numpy.infty))
```
2.9208096264818897
3.0
#### Example: General Norms of a Matrix
Compute a bound on the induced norm of the $m \times n$ dimensional matrix $A$ using $\ell_1$ and $\ell_2$
One of the most useful ways to think about matrix norms is as a transformation of a unit-ball to an ellipse. Depending on the norm in question, the norm will be some combination of the resulting ellipse. For the above cases we have some nice relations based on these ideas.
1. $||A \vec{x}||_1 = || \sum^n_{j=1} x_j \vec{a}_j ||_1 \leq \sum^n_{j=1} |x_j| ||\vec{a}_j||_1 \leq \max_{1\leq j\leq n} ||\vec{a}_j||_1$
1. $||A \vec{x}||_\infty = || \sum^n_{j=1} x_j \vec{a_j} ||_\infty \leq \sum^n_{j=1} |x_j| ||\vec{a}_j||_\infty \leq \max_{1 \leq i \leq m} ||a^*_i||_1$
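Both bounds are in fact attained, and we can check them numerically against the built-in norms (a quick sketch with a random illustrative matrix):
```python
import numpy
A = numpy.random.uniform(size=(4, 3))
# The induced 1-norm is the maximum absolute column sum
print(numpy.allclose(numpy.linalg.norm(A, ord=1),
                     numpy.max(numpy.sum(numpy.abs(A), axis=0))))
# The induced infinity-norm is the maximum absolute row sum
print(numpy.allclose(numpy.linalg.norm(A, ord=numpy.inf),
                     numpy.max(numpy.sum(numpy.abs(A), axis=1))))
```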
```python
# Note: that this code is a bit fragile to angles that go beyond pi
# due to the use of arccos.
import matplotlib.patches as patches
A = numpy.array([[1, 2], [0, 2]])
def draw_unit_vectors(axes, A, head_width=0.1):
head_length = 1.5 * head_width
image_e = numpy.empty(A.shape)
angle = numpy.empty(A.shape[0])
image_e[:, 0] = numpy.dot(A, numpy.array((1.0, 0.0)))
image_e[:, 1] = numpy.dot(A, numpy.array((0.0, 1.0)))
for i in range(A.shape[0]):
angle[i] = numpy.arccos(image_e[0, i] / numpy.linalg.norm(image_e[:, i], ord=2))
axes.arrow(0.0, 0.0, image_e[0, i] - head_length * numpy.cos(angle[i]),
image_e[1, i] - head_length * numpy.sin(angle[i]),
head_width=head_width, color='b', alpha=0.5)
head_width = 0.2
head_length = 1.5 * head_width
# ============
# 1-norm
# Unit-ball
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
fig.suptitle("1-Norm")
axes = fig.add_subplot(1, 2, 1, aspect='equal')
axes.plot((1.0, 0.0, -1.0, 0.0, 1.0), (0.0, 1.0, 0.0, -1.0, 0.0), 'r')
draw_unit_vectors(axes, numpy.eye(2))
axes.set_title("Unit Ball")
axes.set_xlim((-1.1, 1.1))
axes.set_ylim((-1.1, 1.1))
axes.grid(True)
# Image
axes = fig.add_subplot(1, 2, 2, aspect='equal')
axes.plot((1.0, 2.0, -1.0, -2.0, 1.0), (0.0, 2.0, 0.0, -2.0, 0.0), 'r')
draw_unit_vectors(axes, A, head_width=0.2)
axes.set_title("Images Under A")
axes.grid(True)
plt.show()
```
```python
# ============
# 2-norm
# Unit-ball
fig = plt.figure()
fig.suptitle("2-Norm")
fig.set_figwidth(fig.get_figwidth() * 2)
axes = fig.add_subplot(1, 2, 1, aspect='equal')
axes.add_artist(plt.Circle((0.0, 0.0), 1.0, edgecolor='r', facecolor='none'))
draw_unit_vectors(axes, numpy.eye(2))
axes.set_title("Unit Ball")
axes.set_xlim((-1.1, 1.1))
axes.set_ylim((-1.1, 1.1))
axes.grid(True)
# Image
# Compute some geometry
u, s, v = numpy.linalg.svd(A)
theta = numpy.empty(A.shape[0])
ellipse_axes = numpy.empty(A.shape)
theta[0] = numpy.arccos(u[0][0]) / numpy.linalg.norm(u[0], ord=2)
theta[1] = theta[0] - numpy.pi / 2.0
for i in range(theta.shape[0]):
ellipse_axes[0, i] = s[i] * numpy.cos(theta[i])
ellipse_axes[1, i] = s[i] * numpy.sin(theta[i])
axes = fig.add_subplot(1, 2, 2, aspect='equal')
axes.add_artist(patches.Ellipse((0.0, 0.0), 2 * s[0], 2 * s[1], theta[0] * 180.0 / numpy.pi,
edgecolor='r', facecolor='none'))
for i in range(A.shape[0]):
axes.arrow(0.0, 0.0, ellipse_axes[0, i] - head_length * numpy.cos(theta[i]),
ellipse_axes[1, i] - head_length * numpy.sin(theta[i]),
head_width=head_width, color='k')
draw_unit_vectors(axes, A, head_width=0.2)
axes.set_title("Images Under A")
axes.set_xlim((-s[0] + 0.1, s[0] + 0.1))
axes.set_ylim((-s[0] + 0.1, s[0] + 0.1))
axes.grid(True)
plt.show()
```
```python
# ============
# infty-norm
# Unit-ball
fig = plt.figure()
fig.suptitle("$\infty$-Norm")
fig.set_figwidth(fig.get_figwidth() * 2)
axes = fig.add_subplot(1, 2, 1, aspect='equal')
axes.plot((1.0, -1.0, -1.0, 1.0, 1.0), (1.0, 1.0, -1.0, -1.0, 1.0), 'r')
draw_unit_vectors(axes, numpy.eye(2))
axes.set_title("Unit Ball")
axes.set_xlim((-1.1, 1.1))
axes.set_ylim((-1.1, 1.1))
axes.grid(True)
# Image
# Geometry - Corners are A * ((1, 1), (1, -1), (-1, 1), (-1, -1))
# Symmetry implies we only need two. Here we just plot two
u = numpy.empty(A.shape)
u[:, 0] = numpy.dot(A, numpy.array((1.0, 1.0)))
u[:, 1] = numpy.dot(A, numpy.array((-1.0, 1.0)))
theta[0] = numpy.arccos(u[0, 0] / numpy.linalg.norm(u[:, 0], ord=2))
theta[1] = numpy.arccos(u[0, 1] / numpy.linalg.norm(u[:, 1], ord=2))
axes = fig.add_subplot(1, 2, 2, aspect='equal')
axes.plot((3, 1, -3, -1, 3), (2, 2, -2, -2, 2), 'r')
for i in range(A.shape[0]):
axes.arrow(0.0, 0.0, u[0, i] - head_length * numpy.cos(theta[i]),
u[1, i] - head_length * numpy.sin(theta[i]),
head_width=head_width, color='k')
draw_unit_vectors(axes, A, head_width=0.2)
axes.set_title("Images Under A")
axes.set_xlim((-4.1, 4.1))
axes.set_ylim((-3.1, 3.1))
axes.grid(True)
plt.show()
```
#### General Matrix Norms (induced and non-induced)
In general, matrix norms have the following properties, whether or not they are induced by a vector norm:
1. $||A|| \geq 0$ and $||A|| = 0$ only if $A = 0$
1. $||A + B|| \leq ||A|| + ||B||$ (Triangle Inequality)
1. $||c A|| = |c| ||A||$
The most widely used matrix norm not induced by a vector norm is the **Frobenius norm** defined by
$$
||A||_F = \left( \sum^m_{i=1} \sum^n_{j=1} |A_{ij}|^2 \right)^{1/2}.
$$
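As a small sketch of how this definition maps to code (the explicit entry-wise sum on the left versus NumPy's built-in on the right; the matrix is an arbitrary example):
```python
import numpy

B = numpy.array([[2.0, 0.0, 3.0],
                 [5.0, 7.0, 1.0]])
frobenius_explicit = numpy.sqrt(numpy.sum(numpy.abs(B)**2))
frobenius_builtin = numpy.linalg.norm(B, ord='fro')
print(frobenius_explicit, frobenius_builtin)  # the two values agree
```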
#### Invariance under unitary multiplication
One important property of the matrix 2-norm (and the Frobenius norm) is that multiplication by a unitary matrix does not change the norm (much like multiplication by 1 for scalars). In general, for any $A \in \mathbb{C}^{m\times n}$ and unitary matrix $Q \in \mathbb{C}^{m \times m}$ we have
\begin{align*}
||Q A||_2 &= ||A||_2 \\ ||Q A||_F &= ||A||_F.
\end{align*}
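A small numerical sketch of this invariance: build an orthogonal (real unitary) $Q$ from a QR factorization of a random matrix and compare norms before and after multiplication (the matrices here are arbitrary examples):
```python
import numpy

numpy.random.seed(0)
B = numpy.random.rand(3, 3)
# The QR factorization of a random matrix gives an orthogonal Q.
Q, _ = numpy.linalg.qr(numpy.random.rand(3, 3))
print(numpy.linalg.norm(numpy.dot(Q, B), ord=2), numpy.linalg.norm(B, ord=2))
print(numpy.linalg.norm(numpy.dot(Q, B), ord='fro'), numpy.linalg.norm(B, ord='fro'))
```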
## Singular Value Decomposition
Definition: Let $A \in \mathbb R^{m \times n}$, then $A$ can be factored as
$$
A = U\Sigma V^{T}
$$
where,
* $U \in \mathbb R^{m \times m}$ and is the orthogonal matrix whose columns are the eigenvectors of $AA^{T}$
* $V \in \mathbb R^{n \times n}$ and is the orthogonal matrix whose columns are the eigenvectors of $A^{T}A$
* $\Sigma \in \mathbb R^{m \times n}$ and is a diagonal matrix with elements $\sigma_{1}, \sigma_{2}, \sigma_{3}, \ldots, \sigma_{r}$, where $r = \text{rank}(A)$, equal to the square roots of the eigenvalues of $A^{T}A$. They are called the singular values of $A$; they are nonnegative and arranged in descending order ($\sigma_{1} \geq \sigma_{2} \geq \sigma_{3} \geq \ldots \geq \sigma_{r} \geq 0$).
The SVD has a number of applications mostly related to reducing the dimensionality of a matrix.
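The connection between the singular values and the eigenvalues of $A^{T}A$ in this definition can be checked numerically (a sketch; the matrix is an arbitrary example):
```python
import numpy

B = numpy.array([[2.0, 0.0, 3.0],
                 [5.0, 7.0, 1.0],
                 [0.0, 6.0, 2.0]])
sigma = numpy.linalg.svd(B, compute_uv=False)            # descending order
eigenvalues = numpy.linalg.eigvalsh(numpy.dot(B.T, B))   # ascending order
print(sigma)
print(numpy.sqrt(eigenvalues[::-1]))  # matches sigma up to rounding
```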
### Full SVD example
Consider the matrix
$$
A = \begin{bmatrix}
2 & 0 & 3 \\
5 & 7 & 1 \\
0 & 6 & 2
\end{bmatrix}.
$$
The example below demonstrates the use of the `numpy.linalg.svd` function and shows the numerical result.
```python
A = numpy.array([
[2.0, 0.0, 3.0],
[5.0, 7.0, 1.0],
[0.0, 6.0, 2.0]
])
U, sigma, V_T = numpy.linalg.svd(A, full_matrices=True)
print(numpy.dot(U, numpy.dot(numpy.diag(sigma), V_T)))
```
[[ 2.000e+00 -1.150e-15 3.000e+00]
[ 5.000e+00 7.000e+00 1.000e+00]
[-1.705e-15 6.000e+00 2.000e+00]]
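As a further sketch of what `numpy.linalg.svd` returns, the factors can be checked for orthogonality ($U^{T}U = I$ and $V^{T}V = I$ up to rounding):
```python
import numpy

B = numpy.array([
    [2.0, 0.0, 3.0],
    [5.0, 7.0, 1.0],
    [0.0, 6.0, 2.0]
])
U, sigma, V_T = numpy.linalg.svd(B, full_matrices=True)
print(numpy.allclose(numpy.dot(U.T, U), numpy.eye(3)))      # True
print(numpy.allclose(numpy.dot(V_T, V_T.T), numpy.eye(3)))  # True
```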
### Eigenvalue Decomposition vs. SVD Decomposition
Let the matrix $X$ contain the eigenvectors of $A$, and assume they are linearly independent; then we can write a decomposition of the matrix $A$ as
$$
A = X \Lambda X^{-1}.
$$
How does this differ from the SVD?
- The basis of the SVD representation differs from that of the eigenvalue decomposition.
- The basis vectors of the eigenvalue decomposition are not in general orthogonal, whereas those of the SVD are.
- The SVD effectively contains two basis sets (left and right singular vectors).
- All matrices have an SVD, whereas not all matrices have an eigenvalue decomposition (see the sketch below).
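To illustrate the last point, here is a sketch with a defective matrix: it has the eigenvalue $0$ repeated but only one linearly independent eigenvector, so it has no eigenvalue decomposition, yet its SVD exists and is easily computed:
```python
import numpy

# A defective (non-diagonalizable) matrix.
B = numpy.array([[0.0, 1.0],
                 [0.0, 0.0]])
U, sigma, V_T = numpy.linalg.svd(B)
print(sigma)  # [1. 0.]
```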
### Existence and Uniqueness
Every matrix $A \in \mathbb{C}^{m \times n}$ has a singular value decomposition. Furthermore, the singular values $\{\sigma_{j}\}$ are uniquely determined, and if $A$ is square and the $\sigma_{j}$ are distinct, the left and right singular vectors $\{u_{j}\}$ and $\{v_{j}\}$ are uniquely determined up to complex signs (i.e., complex scalar factors of absolute value 1).
### Matrix Properties via the SVD
- The $\text{rank}(A) = r$ where $r$ is the number of non-zero singular values.
- The $\text{range}(A) = \text{span}(u_1, \ldots, u_r)$ and $\text{null}(A) = \text{span}(v_{r+1}, \ldots, v_n)$.
- The $|| A ||_2 = \sigma_1$ and $||A||_F = \sqrt{\sigma_{1}^{2}+\sigma_{2}^{2}+\ldots+\sigma_{r}^{2}}$.
- The nonzero singular values of $A$ are the square roots of the nonzero eigenvalues of $A^{T}A$ or $AA^{T}$.
- If $A = A^{T}$, then the singular values of $A$ are the absolute values of the eigenvalues of $A$.
- For square $A \in \mathbb{C}^{m \times m}$, $|\det(A)| = \prod_{i=1}^{m} \sigma_{i}$ (several of these properties are checked numerically in the sketch below).
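Several of these properties can be verified numerically in a few lines (a sketch; the matrix is an arbitrary, nonsingular example):
```python
import numpy

B = numpy.array([[2.0, 0.0, 3.0],
                 [5.0, 7.0, 1.0],
                 [0.0, 6.0, 2.0]])
sigma = numpy.linalg.svd(B, compute_uv=False)
print(numpy.linalg.matrix_rank(B), numpy.count_nonzero(sigma > 1e-12))   # rank = number of nonzero sigma
print(numpy.linalg.norm(B, ord=2), sigma[0])                             # 2-norm = sigma_1
print(numpy.linalg.norm(B, ord='fro'), numpy.sqrt(numpy.sum(sigma**2)))  # Frobenius norm
print(abs(numpy.linalg.det(B)), numpy.prod(sigma))                       # |det| = product of the sigma
```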
### Low-Rank Approximations
- $A$ is the sum of the $r$ rank-one matrices:
$$
A = U \Sigma V^T = \sum_{j=1}^{r} \sigma_{j}u_{j}v_{j}^{T}
$$
- For any $k$ with $0 \leq k \leq r$, define the rank-$k$ truncation
$$
A_k = \sum_{j=1}^{k} \sigma_{j}u_{j}v_{j}^{T}.
$$
If $k = \min(m,n)$, define $\sigma_{k+1} = 0$. Then
$$
||A - A_{k}||_{2} = \inf_{\substack{B \in \mathbb{C}^{m \times n} \\ \text{rank}(B)\leq k}} || A-B||_{2} = \sigma_{k+1}
$$
- For any $k$ with $0 \leq k \leq r$, the matrix $A_{k}$ also satisfies
$$
||A - A_{k}||_{F} = \inf_{\substack{B \in \mathbb{C}^{m \times n} \\ \text{rank}(B)\leq k}} ||A-B||_{F} = \sqrt{\sigma_{k+1}^{2} + \ldots + \sigma_{r}^{2}}
$$
(The 2-norm identity is checked numerically in the sketch below.)
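The 2-norm identity above is easy to check numerically (a sketch; `B` is an arbitrary example and $k = 1$):
```python
import numpy

B = numpy.array([[2.0, 0.0, 3.0],
                 [5.0, 7.0, 1.0],
                 [0.0, 6.0, 2.0]])
U, sigma, V_T = numpy.linalg.svd(B)
k = 1
# Rank-k truncation built from the first k singular triplets.
B_k = sum(sigma[j] * numpy.outer(U[:, j], V_T[j, :]) for j in range(k))
print(numpy.linalg.norm(B - B_k, ord=2), sigma[k])  # both equal sigma_{k+1}
```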
#### Example: Putting the above equations into code
How does this work in practice?
```python
data = numpy.zeros((15,40))
#H
data[2:10,2:4] = 1
data[5:7,4:6] = 1
data[2:10,6:8] = 1
#E
data[3:11,10:12] = 1
data[3:5,12:16] = 1
data[6:8, 12:16] = 1
data[9:11, 12:16] = 1
#L
data[4:12,18:20] = 1
data[10:12,20:24] = 1
#L
data[5:13,26:28] = 1
data[11:13,28:32] = 1
# O
data[6:14,34:36] = 1
data[6:8, 36:38] = 1
data[12:14, 36:38] = 1
data[6:14,38:40] = 1
plt.imshow(data)
plt.show()
```
```python
u, diag, vt = numpy.linalg.svd(data, full_matrices=True)
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 3)
fig.set_figheight(fig.get_figheight() * 4)
for i in range(1, 16):
diag_matrix = numpy.concatenate((numpy.zeros((len(diag[:i]) -1),), diag[i-1: i], numpy.zeros((40-i),)))
reconstruct = numpy.dot(numpy.dot(u, numpy.diag(diag_matrix)[:15,]), vt)
axes = fig.add_subplot(5, 3, i)
mappable = axes.imshow(reconstruct, vmin=0.0, vmax=1.0)
axes.set_title('Component = %s' % i)
plt.show()
```
```python
u, diag, vt = numpy.linalg.svd(data, full_matrices=True)
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 3)
fig.set_figheight(fig.get_figheight() * 4)
for i in range(1, 16):
diag_matrix = numpy.concatenate((diag[:i], numpy.zeros((40-i),)))
reconstruct = numpy.dot(numpy.dot(u, numpy.diag(diag_matrix)[:15,]), vt)
axes = fig.add_subplot(5, 3, i)
mappable = axes.imshow(reconstruct, vmin=0.0, vmax=1.0)
axes.set_title('Component = %s' % i)
plt.show()
```
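It can also help to look at how quickly the singular values of this image decay: by the error formulas above, the dropped $\sigma_j$ are exactly what each truncated reconstruction is missing. A small sketch reusing the `data` array defined above:
```python
# Reuses the `data` array from the cell above.
u, diag, vt = numpy.linalg.svd(data, full_matrices=True)
plt.plot(range(1, len(diag) + 1), diag, 'o-')
plt.xlabel("singular value index $j$")
plt.ylabel(r"$\sigma_j$")
plt.title("Singular value decay of the image")
plt.show()
```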
<sup>1</sup><span id="footnoteRegression"> http://www.utstat.toronto.edu/~brunner/books/LinearModelsInStatistics.pdf</span>
```python
```
| c5c30683690b52430e96b1748434ecf58a5ed9b6 | 221,394 | ipynb | Jupyter Notebook | Lectures/Lecture 19/Lecture19_IntroLA.ipynb | astroarshn2000/PHYS305S20 | 18f4ebf0a51ba62fba34672cf76bd119d1db6f1e | [
"MIT"
]
| 3 | 2020-09-10T06:45:46.000Z | 2020-10-20T13:50:11.000Z | Lectures/Lecture 19/Lecture19_IntroLA.ipynb | astroarshn2000/PHYS305S20 | 18f4ebf0a51ba62fba34672cf76bd119d1db6f1e | [
"MIT"
]
| null | null | null | Lectures/Lecture 19/Lecture19_IntroLA.ipynb | astroarshn2000/PHYS305S20 | 18f4ebf0a51ba62fba34672cf76bd119d1db6f1e | [
"MIT"
]
| null | null | null | 174.188828 | 42,264 | 0.865904 | true | 12,766 | Qwen/Qwen-72B | 1. YES
2. YES | 0.73412 | 0.79053 | 0.580344 | __label__eng_Latn | 0.968386 | 0.186663 |
# Class V - Conic modelling in JuMP
This notebook describes conic modelling in JuMP through a number of examples.
```julia
import Pkg
Pkg.activate(@__DIR__)
Pkg.instantiate()
```
    Updating registry at `C:\Users\Oscar\.julia\registries\General`
    Updating git-repo `https://github.com/JuliaRegistries/General.git`
    Updating registry at `C:\Users\Oscar\.julia\registries\JuliaPOMDP`
    Updating git-repo `https://github.com/JuliaPOMDP/Registry`
## Example 1: minimum bounding ellipse
Given a set of ellipses centered on the origin,
$E(A) = \{ u \;|\; u^\top A^{-1} u \leq 1 \},$
find a "minimal" ellipse that contains the provided ellipses.
We can formulate this as an SDP:
$\begin{align}
\text{minimize} \quad& \operatorname{trace}(WX)\\
\text{subject to} \quad& X \succeq A_i, \quad i = 1,\ldots,m \\
& X \succeq 0
\end{align}$
where $W$ is a positive-definite matrix of weights to choose between different solutions.
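Why does the semidefinite constraint $X \succeq A_i$ capture containment of the ellipses? A short sketch of the argument, assuming each $A_i$ (and $X$) is positive definite:
$$
E(A_i) \subseteq E(X)
\;\Longleftrightarrow\;
u^\top X^{-1} u \leq u^\top A_i^{-1} u \;\;\forall u
\;\Longleftrightarrow\;
X^{-1} \preceq A_i^{-1}
\;\Longleftrightarrow\;
X \succeq A_i,
$$
where the first equivalence uses the homogeneity of the quadratic forms and the last uses the fact that matrix inversion reverses the semidefinite order on positive definite matrices.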
```julia
using JuMP, SCS, Plots, LinearAlgebra, Interact
function draw_ellipse(A::Matrix, args...; kwargs...)
x_values = Float64[]
y_values = Float64[]
for angle in 0:0.001π:2π
u = [cos(angle), sin(angle)]
z = A * u
push!(x_values, z[1])
push!(y_values, z[2])
end
plot!(x_values, y_values, args...; kwargs...)
end
function solve_minimum_ellipse_problem(W, A_matrices)
model = Model(solver = SCSSolver(eps = 1e-6, verbose = false))
@variable(model, X[1:2, 1:2], SDP)
@objective(model, Min, tr(W * X))
for A in A_matrices
@SDconstraint(model, X >= A)
end
status = solve(model)
return status, JuMP.getvalue(X)
end
```
solve_minimum_ellipse_problem (generic function with 1 method)
Investigate the model. Here are some things to try:
- What happens if you comment out the first A matrix?
- What happens if you comment out the second A matrix?
- What happens if you comment out the third A matrix?
You can comment lines in Julia using the `#` symbol. As a shortcut, use `[CTRL] + [/]`.
```julia
@manipulate for weight in 1:20
A_matrices = [
[2.0 0.0; 0.0 1.0],
[1.0 0.0; 0.0 3.0],
[2.3896 1.5433; 1.5433 1.35584]
]
W = [1.0 0.0; 0.0 weight]
status, X_value = solve_minimum_ellipse_problem(W, A_matrices)
if status == :Optimal
plot(legend = false)
draw_ellipse.(A_matrices, color = "gray")
draw_ellipse(X_value, color="purple", linewidth=2)
else
println("Could not solve. Status = $(status)")
end
end
```
*(Interactive `@manipulate` output: a slider for `weight` and a plot drawing the three input ellipses in gray and the computed minimal bounding ellipse in purple.)*
1272.19,99.0238 1271.27,98.7587 1270.35,98.5003 1269.44,98.2486 1268.52,98.0036 1267.6,97.7652 1266.68,97.5335 1265.76,97.3086 1264.84,97.0903 1263.92,96.8787 \\n 1263,96.6738 1262.08,96.4756 1261.16,96.2841 1260.24,96.0993 1259.32,95.9212 1258.4,95.7498 1257.47,95.5851 1256.55,95.4272 1255.63,95.2759 1254.71,95.1314 \\n 1253.79,94.9935 1252.86,94.8624 1251.94,94.738 1251.02,94.6203 1250.09,94.5094 1249.17,94.4051 1248.25,94.3076 1247.32,94.2168 1246.4,94.1327 1245.48,94.0553 \\n 1244.55,93.9847 1243.63,93.9208 1242.7,93.8636 1241.78,93.8132 1240.86,93.7694 1239.93,93.7324 1239.01,93.7022 1238.08,93.6786 1237.16,93.6618 1236.23,93.6517 \\n 1235.31,93.6483 1234.39,93.6517 1233.46,93.6618 1232.54,93.6786 1231.61,93.7022 1230.69,93.7324 1229.76,93.7694 1228.84,93.8132 1227.92,93.8636 1226.99,93.9208 \\n 1226.07,93.9847 1225.14,94.0553 1224.22,94.1327 1223.3,94.2168 1222.37,94.3076 1221.45,94.4051 1220.53,94.5094 1219.6,94.6203 1218.68,94.738 1217.76,94.8624 \\n 1216.83,94.9935 1215.91,95.1314 1214.99,95.2759 1214.07,95.4272 1213.15,95.5851 1212.22,95.7498 1211.3,95.9212 1210.38,96.0993 1209.46,96.2841 1208.54,96.4756 \\n 1207.62,96.6738 1206.7,96.8787 1205.78,97.0903 1204.86,97.3086 1203.94,97.5335 1203.02,97.7652 1202.1,98.0036 1201.18,98.2486 1200.27,98.5003 1199.35,98.7587 \\n 1198.43,99.0238 1197.51,99.2956 1196.6,99.574 1195.68,99.8591 1194.77,100.151 1193.85,100.449 1192.94,100.754 1192.02,101.066 1191.11,101.385 1190.19,101.71 \\n 1189.28,102.041 1188.37,102.38 1187.45,102.725 1186.54,103.076 1185.63,103.435 1184.72,103.799 1183.81,104.171 1182.9,104.549 1181.99,104.934 1181.08,105.325 \\n 1180.17,105.723 1179.27,106.128 1178.36,106.539 1177.45,106.957 1176.55,107.381 1175.64,107.812 1174.74,108.25 1173.83,108.694 1172.93,109.145 1172.02,109.602 \\n 1171.12,110.066 1170.22,110.537 1169.32,111.014 1168.42,111.497 1167.52,111.987 1166.62,112.484 1165.72,112.987 1164.82,113.497 1163.93,114.013 1163.03,114.536 \\n 1162.13,115.066 1161.24,115.601 1160.34,116.144 1159.45,116.693 1158.56,117.248 1157.67,117.81 1156.78,118.378 1155.88,118.953 1155,119.534 1154.11,120.122 \\n 1153.22,120.717 1152.33,121.317 1151.44,121.924 1150.56,122.538 1149.67,123.158 1148.79,123.785 1147.91,124.418 1147.02,125.057 1146.14,125.703 1145.26,126.355 \\n 1144.38,127.014 1143.5,127.679 1142.63,128.35 1141.75,129.028 1140.87,129.712 1140,130.403 1139.12,131.099 1138.25,131.803 1137.38,132.512 1136.51,133.228 \\n 1135.64,133.951 1134.77,134.679 1133.9,135.414 1133.03,136.155 1132.17,136.903 1131.3,137.657 1130.44,138.417 1129.57,139.183 1128.71,139.956 1127.85,140.735 \\n 1126.99,141.52 1126.13,142.312 1125.27,143.11 1124.42,143.914 1123.56,144.724 1122.71,145.54 1121.85,146.363 1121,147.192 1120.15,148.027 1119.3,148.868 \\n 1118.45,149.716 1117.6,150.57 1116.76,151.429 1115.91,152.295 1115.07,153.167 1114.22,154.046 1113.38,154.93 1112.54,155.821 1111.7,156.717 1110.86,157.62 \\n 1110.03,158.529 1109.19,159.444 1108.36,160.365 1107.52,161.292 1106.69,162.225 1105.86,163.164 1105.03,164.109 1104.2,165.061 1103.37,166.018 1102.55,166.981 \\n 1101.73,167.95 1100.9,168.926 1100.08,169.907 1099.26,170.894 1098.44,171.887 1097.62,172.887 1096.81,173.892 1095.99,174.903 1095.18,175.92 1094.37,176.943 \\n 1093.56,177.971 1092.75,179.006 1091.94,180.047 1091.13,181.093 1090.33,182.145 1089.52,183.204 1088.72,184.268 1087.92,185.338 1087.12,186.413 1086.32,187.495 \\n 1085.53,188.582 1084.73,189.675 1083.94,190.774 1083.15,191.879 1082.36,192.989 1081.57,194.105 1080.78,195.227 1079.99,196.355 1079.21,197.488 
1078.43,198.627 \\n 1077.64,199.772 1076.87,200.922 1076.09,202.078 1075.31,203.24 1074.54,204.407 1073.76,205.58 1072.99,206.759 1072.22,207.943 1071.45,209.133 1070.68,210.328 \\n 1069.92,211.529 1069.16,212.736 1068.39,213.948 1067.63,215.166 1066.87,216.389 1066.12,217.618 1065.36,218.852 1064.61,220.092 1063.86,221.337 1063.11,222.587 \\n 1062.36,223.844 1061.61,225.105 1060.86,226.372 1060.12,227.645 1059.38,228.922 1058.64,230.206 1057.9,231.494 1057.16,232.788 1056.43,234.087 1055.7,235.392 \\n 1054.96,236.702 1054.23,238.017 1053.51,239.338 1052.78,240.664 1052.06,241.995 1051.33,243.331 1050.61,244.673 1049.9,246.02 1049.18,247.372 1048.46,248.729 \\n 1047.75,250.092 1047.04,251.46 1046.33,252.833 1045.62,254.211 1044.92,255.594 1044.21,256.982 1043.51,258.376 1042.81,259.774 1042.11,261.178 1041.42,262.587 \\n 1040.72,264 1040.03,265.419 1039.34,266.843 1038.65,268.272 1037.96,269.706 1037.28,271.145 1036.6,272.589 1035.92,274.037 1035.24,275.491 1034.56,276.95 \\n 1033.88,278.413 1033.21,279.882 1032.54,281.355 1031.87,282.834 1031.21,284.317 1030.54,285.805 1029.88,287.298 1029.22,288.795 1028.56,290.298 1027.9,291.805 \\n 1027.25,293.317 1026.59,294.834 1025.94,296.355 1025.29,297.881 1024.65,299.412 1024,300.948 1023.36,302.488 1022.72,304.033 1022.08,305.583 1021.45,307.137 \\n 1020.81,308.696 1020.18,310.26 1019.55,311.828 1018.92,313.4 1018.3,314.978 1017.68,316.559 1017.06,318.146 1016.44,319.736 1015.82,321.332 1015.2,322.932 \\n 1014.59,324.536 1013.98,326.144 1013.37,327.758 1012.77,329.375 1012.16,330.997 1011.56,332.623 1010.96,334.254 1010.37,335.889 1009.77,337.529 1009.18,339.172 \\n 1008.59,340.82 1008,342.473 1007.42,344.129 1006.83,345.79 1006.25,347.455 1005.67,349.125 1005.09,350.798 1004.52,352.476 1003.95,354.158 1003.38,355.844 \\n 1002.81,357.534 1002.24,359.228 1001.68,360.927 1001.12,362.629 1000.56,364.336 1000.01,366.046 999.452,367.761 998.9,369.48 998.351,371.202 997.804,372.929 \\n 997.26,374.66 996.717,376.394 996.178,378.133 995.64,379.875 995.105,381.622 994.572,383.372 994.042,385.126 993.514,386.884 992.989,388.646 992.465,390.412 \\n 991.945,392.181 991.426,393.954 990.91,395.731 990.397,397.512 989.885,399.296 989.377,401.084 988.87,402.876 988.367,404.672 987.865,406.471 987.366,408.274 \\n 986.87,410.08 986.375,411.89 985.884,413.704 985.395,415.521 984.908,417.342 984.424,419.166 983.942,420.994 983.463,422.825 982.986,424.66 982.512,426.499 \\n 982.04,428.34 981.57,430.185 981.104,432.034 980.639,433.886 980.178,435.741 979.718,437.6 979.262,439.462 978.807,441.327 978.356,443.196 977.907,445.067 \\n 977.46,446.943 977.016,448.821 976.574,450.703 976.135,452.587 975.699,454.475 975.265,456.366 974.834,458.261 974.405,460.158 973.979,462.059 973.555,463.962 \\n 973.135,465.869 972.716,467.779 972.3,469.692 971.887,471.607 971.477,473.526 971.069,475.448 970.663,477.373 970.26,479.3 969.86,481.231 969.463,483.165 \\n 969.068,485.101 968.676,487.04 968.286,488.982 967.899,490.927 967.515,492.875 967.133,494.826 966.754,496.779 966.377,498.735 966.004,500.694 965.632,502.655 \\n 965.264,504.619 964.898,506.586 964.535,508.556 964.175,510.528 963.817,512.503 963.462,514.48 963.109,516.46 962.76,518.442 962.413,520.427 962.068,522.415 \\n 961.727,524.405 961.388,526.397 961.051,528.392 960.718,530.39 960.387,532.389 960.059,534.392 959.734,536.396 959.411,538.403 959.091,540.413 958.774,542.424 \\n 958.459,544.438 958.147,546.454 957.838,548.473 957.532,550.493 957.229,552.516 956.928,554.541 956.63,556.569 956.334,558.598 956.042,560.63 
955.752,562.663 \\n 955.465,564.699 955.181,566.737 954.899,568.777 954.62,570.819 954.345,572.863 954.071,574.909 953.801,576.957 953.533,579.007 953.268,581.059 953.006,583.112 \\n 952.747,585.168 952.491,587.226 952.237,589.285 951.986,591.346 951.738,593.409 951.492,595.474 951.25,597.541 951.01,599.609 950.773,601.679 950.539,603.751 \\n 950.308,605.825 950.079,607.9 949.854,609.977 949.631,612.055 949.411,614.135 949.194,616.217 948.979,618.3 948.768,620.385 948.559,622.472 948.353,624.559 \\n 948.15,626.649 947.95,628.74 947.752,630.832 947.558,632.925 947.366,635.021 947.177,637.117 946.991,639.215 946.808,641.314 946.628,643.415 946.45,645.516 \\n 946.275,647.619 946.104,649.724 945.935,651.829 945.769,653.936 945.605,656.044 945.445,658.153 945.288,660.264 945.133,662.375 944.981,664.488 944.832,666.602 \\n 944.686,668.716 944.543,670.832 944.403,672.949 944.265,675.067 944.131,677.186 943.999,679.305 943.87,681.426 943.744,683.548 943.621,685.671 943.501,687.794 \\n 943.384,689.918 943.269,692.044 943.158,694.17 943.049,696.296 942.943,698.424 942.841,700.552 942.741,702.681 942.643,704.811 942.549,706.942 942.458,709.073 \\n 942.369,711.205 942.284,713.337 942.201,715.47 942.121,717.604 942.045,719.738 941.971,721.873 941.9,724.008 941.831,726.144 941.766,728.28 941.704,730.417 \\n 941.644,732.554 941.588,734.692 941.534,736.83 941.483,738.969 941.435,741.107 941.39,743.246 941.348,745.386 941.309,747.526 941.273,749.666 941.239,751.806 \\n 941.209,753.946 941.181,756.087 941.156,758.228 941.135,760.369 941.116,762.51 941.1,764.652 941.087,766.793 941.077,768.935 941.069,771.076 941.065,773.218 \\n 941.064,775.359 941.065,777.501 941.069,779.643 941.077,781.784 941.087,783.926 941.1,786.067 941.116,788.209 941.135,790.35 941.156,792.491 941.181,794.632 \\n 941.209,796.772 941.239,798.913 941.273,801.053 941.309,803.193 941.348,805.333 941.39,807.472 941.435,809.612 941.483,811.75 941.534,813.889 941.588,816.027 \\n 941.644,818.164 941.704,820.302 941.766,822.438 941.831,824.575 941.9,826.711 941.971,828.846 942.045,830.981 942.121,833.115 942.201,835.249 942.284,837.382 \\n 942.369,839.514 942.458,841.646 942.549,843.777 942.643,845.908 942.741,848.037 942.841,850.167 942.943,852.295 943.049,854.422 943.158,856.549 943.269,858.675 \\n 943.384,860.8 943.501,862.925 943.621,865.048 943.744,867.171 943.87,869.293 943.999,871.413 944.131,873.533 944.265,875.652 944.403,877.77 944.543,879.887 \\n 944.686,882.003 944.832,884.117 944.981,886.231 945.133,888.344 945.288,890.455 945.445,892.565 945.605,894.675 945.769,896.783 945.935,898.889 946.104,900.995 \\n 946.275,903.099 946.45,905.202 946.628,907.304 946.808,909.405 946.991,911.504 947.177,913.602 947.366,915.698 947.558,917.793 947.752,919.887 947.95,921.979 \\n 948.15,924.07 948.353,926.159 948.559,928.247 948.768,930.334 948.979,932.418 949.194,934.502 949.411,936.583 949.631,938.664 949.854,940.742 950.079,942.819 \\n 950.308,944.894 950.539,946.968 950.773,949.039 951.01,951.11 951.25,953.178 951.492,955.245 951.738,957.309 951.986,959.373 952.237,961.434 952.491,963.493 \\n 952.747,965.551 953.006,967.606 953.268,969.66 953.533,971.712 953.801,973.762 954.071,975.81 954.345,977.856 954.62,979.9 954.899,981.942 955.181,983.982 \\n 955.465,986.02 955.752,988.056 956.042,990.089 956.334,992.121 956.63,994.15 956.928,996.178 957.229,998.203 957.532,1000.23 957.838,1002.25 958.147,1004.26 \\n 958.459,1006.28 958.774,1008.29 959.091,1010.31 959.411,1012.32 959.734,1014.32 960.059,1016.33 960.387,1018.33 960.718,1020.33 
961.051,1022.33 961.388,1024.32 \\n 961.727,1026.31 962.068,1028.3 962.413,1030.29 962.76,1032.28 963.109,1034.26 963.462,1036.24 963.817,1038.22 964.175,1040.19 964.535,1042.16 964.898,1044.13 \\n 965.264,1046.1 965.632,1048.06 966.004,1050.03 966.377,1051.98 966.754,1053.94 967.133,1055.89 967.515,1057.84 967.899,1059.79 968.286,1061.74 968.676,1063.68 \\n 969.068,1065.62 969.463,1067.55 969.86,1069.49 970.26,1071.42 970.663,1073.35 971.069,1075.27 971.477,1077.19 971.887,1079.11 972.3,1081.03 972.716,1082.94 \\n 973.135,1084.85 973.555,1086.76 973.979,1088.66 974.405,1090.56 974.834,1092.46 975.265,1094.35 975.699,1096.24 976.135,1098.13 976.574,1100.02 977.016,1101.9 \\n 977.46,1103.78 977.907,1105.65 978.356,1107.52 978.807,1109.39 979.262,1111.26 979.718,1113.12 980.178,1114.98 980.639,1116.83 981.104,1118.68 981.57,1120.53 \\n 982.04,1122.38 982.512,1124.22 982.986,1126.06 983.463,1127.89 983.942,1129.72 984.424,1131.55 984.908,1133.38 985.395,1135.2 985.884,1137.01 986.375,1138.83 \\n 986.87,1140.64 987.366,1142.44 987.865,1144.25 988.367,1146.05 988.87,1147.84 989.377,1149.63 989.885,1151.42 990.397,1153.21 990.91,1154.99 991.426,1156.76 \\n 991.945,1158.54 992.465,1160.31 992.989,1162.07 993.514,1163.83 994.042,1165.59 994.572,1167.35 995.105,1169.1 995.64,1170.84 996.178,1172.59 996.717,1174.32 \\n 997.26,1176.06 997.804,1177.79 998.351,1179.52 998.9,1181.24 999.452,1182.96 1000.01,1184.67 1000.56,1186.38 1001.12,1188.09 1001.68,1189.79 1002.24,1191.49 \\n 1002.81,1193.18 1003.38,1194.88 1003.95,1196.56 1004.52,1198.24 1005.09,1199.92 1005.67,1201.59 1006.25,1203.26 1006.83,1204.93 1007.42,1206.59 1008,1208.25 \\n 1008.59,1209.9 1009.18,1211.55 1009.77,1213.19 1010.37,1214.83 1010.96,1216.46 1011.56,1218.1 1012.16,1219.72 1012.77,1221.34 1013.37,1222.96 1013.98,1224.57 \\n 1014.59,1226.18 1015.2,1227.79 1015.82,1229.39 1016.44,1230.98 1017.06,1232.57 1017.68,1234.16 1018.3,1235.74 1018.92,1237.32 1019.55,1238.89 1020.18,1240.46 \\n 1020.81,1242.02 1021.45,1243.58 1022.08,1245.14 1022.72,1246.69 1023.36,1248.23 1024,1249.77 1024.65,1251.31 1025.29,1252.84 1025.94,1254.36 1026.59,1255.89 \\n 1027.25,1257.4 1027.9,1258.91 1028.56,1260.42 1029.22,1261.92 1029.88,1263.42 1030.54,1264.91 1031.21,1266.4 1031.87,1267.89 1032.54,1269.36 1033.21,1270.84 \\n 1033.88,1272.31 1034.56,1273.77 1035.24,1275.23 1035.92,1276.68 1036.6,1278.13 1037.28,1279.57 1037.96,1281.01 1038.65,1282.45 1039.34,1283.88 1040.03,1285.3 \\n 1040.72,1286.72 1041.42,1288.13 1042.11,1289.54 1042.81,1290.94 1043.51,1292.34 1044.21,1293.74 1044.92,1295.12 1045.62,1296.51 1046.33,1297.89 1047.04,1299.26 \\n 1047.75,1300.63 1048.46,1301.99 1049.18,1303.35 1049.9,1304.7 1050.61,1306.05 1051.33,1307.39 1052.06,1308.72 1052.78,1310.06 1053.51,1311.38 1054.23,1312.7 \\n 1054.96,1314.02 1055.7,1315.33 1056.43,1316.63 1057.16,1317.93 1057.9,1319.22 1058.64,1320.51 1059.38,1321.8 1060.12,1323.07 1060.86,1324.35 1061.61,1325.61 \\n 1062.36,1326.88 1063.11,1328.13 1063.86,1329.38 1064.61,1330.63 1065.36,1331.87 1066.12,1333.1 1066.87,1334.33 1067.63,1335.55 1068.39,1336.77 1069.16,1337.98 \\n 1069.92,1339.19 1070.68,1340.39 1071.45,1341.59 1072.22,1342.78 1072.99,1343.96 1073.76,1345.14 1074.54,1346.31 1075.31,1347.48 1076.09,1348.64 1076.87,1349.8 \\n 1077.64,1350.95 1078.43,1352.09 1079.21,1353.23 1079.99,1354.36 1080.78,1355.49 1081.57,1356.61 1082.36,1357.73 1083.15,1358.84 1083.94,1359.94 1084.73,1361.04 \\n 1085.53,1362.14 1086.32,1363.22 1087.12,1364.31 1087.92,1365.38 1088.72,1366.45 1089.52,1367.52 1090.33,1368.57 
1091.13,1369.63 1091.94,1370.67 1092.75,1371.71 \\n 1093.56,1372.75 1094.37,1373.78 1095.18,1374.8 1095.99,1375.82 1096.81,1376.83 1097.62,1377.83 1098.44,1378.83 1099.26,1379.82 1100.08,1380.81 1100.9,1381.79 \\n 1101.73,1382.77 1102.55,1383.74 1103.37,1384.7 1104.2,1385.66 1105.03,1386.61 1105.86,1387.55 1106.69,1388.49 1107.52,1389.43 1108.36,1390.35 1109.19,1391.28 \\n 1110.03,1392.19 1110.86,1393.1 1111.7,1394 1112.54,1394.9 1113.38,1395.79 1114.22,1396.67 1115.07,1397.55 1115.91,1398.42 1116.76,1399.29 1117.6,1400.15 \\n 1118.45,1401 1119.3,1401.85 1120.15,1402.69 1121,1403.53 1121.85,1404.36 1122.71,1405.18 1123.56,1405.99 1124.42,1406.81 1125.27,1407.61 1126.13,1408.41 \\n 1126.99,1409.2 1127.85,1409.98 1128.71,1410.76 1129.57,1411.54 1130.44,1412.3 1131.3,1413.06 1132.17,1413.82 1133.03,1414.56 1133.9,1415.3 1134.77,1416.04 \\n 1135.64,1416.77 1136.51,1417.49 1137.38,1418.21 1138.25,1418.92 1139.12,1419.62 1140,1420.32 1140.87,1421.01 1141.75,1421.69 1142.63,1422.37 1143.5,1423.04 \\n 1144.38,1423.71 1145.26,1424.36 1146.14,1425.02 1147.02,1425.66 1147.91,1426.3 1148.79,1426.93 1149.67,1427.56 1150.56,1428.18 1151.44,1428.79 1152.33,1429.4 \\n 1153.22,1430 1154.11,1430.6 1155,1431.18 1155.88,1431.77 1156.78,1432.34 1157.67,1432.91 1158.56,1433.47 1159.45,1434.03 1160.34,1434.58 1161.24,1435.12 \\n 1162.13,1435.65 1163.03,1436.18 1163.93,1436.71 1164.82,1437.22 1165.72,1437.73 1166.62,1438.23 1167.52,1438.73 1168.42,1439.22 1169.32,1439.71 1170.22,1440.18 \\n 1171.12,1440.65 1172.02,1441.12 1172.93,1441.57 1173.83,1442.02 1174.74,1442.47 1175.64,1442.91 1176.55,1443.34 1177.45,1443.76 1178.36,1444.18 1179.27,1444.59 \\n 1180.17,1445 1181.08,1445.39 1181.99,1445.78 1182.9,1446.17 1183.81,1446.55 1184.72,1446.92 1185.63,1447.28 1186.54,1447.64 1187.45,1447.99 1188.37,1448.34 \\n 1189.28,1448.68 1190.19,1449.01 1191.11,1449.33 1192.02,1449.65 1192.94,1449.96 1193.85,1450.27 1194.77,1450.57 1195.68,1450.86 1196.6,1451.14 1197.51,1451.42 \\n 1198.43,1451.69 1199.35,1451.96 1200.27,1452.22 1201.18,1452.47 1202.1,1452.72 1203.02,1452.95 1203.94,1453.19 1204.86,1453.41 1205.78,1453.63 1206.7,1453.84 \\n 1207.62,1454.05 1208.54,1454.24 1209.46,1454.43 1210.38,1454.62 1211.3,1454.8 1212.22,1454.97 1213.15,1455.13 1214.07,1455.29 1214.99,1455.44 1215.91,1455.59 \\n 1216.83,1455.73 1217.76,1455.86 1218.68,1455.98 1219.6,1456.1 1220.53,1456.21 1221.45,1456.31 1222.37,1456.41 1223.3,1456.5 1224.22,1456.59 1225.14,1456.66 \\n 1226.07,1456.73 1226.99,1456.8 1227.92,1456.86 1228.84,1456.91 1229.76,1456.95 1230.69,1456.99 1231.61,1457.02 1232.54,1457.04 1233.46,1457.06 1234.39,1457.07 \\n 1235.31,1457.07 1236.23,1457.07 1237.16,1457.06 1238.08,1457.04 1239.01,1457.02 1239.93,1456.99 1240.86,1456.95 1241.78,1456.91 1242.7,1456.86 1243.63,1456.8 \\n 1244.55,1456.73 1245.48,1456.66 1246.4,1456.59 1247.32,1456.5 1248.25,1456.41 1249.17,1456.31 1250.09,1456.21 1251.02,1456.1 1251.94,1455.98 1252.86,1455.86 \\n 1253.79,1455.73 1254.71,1455.59 1255.63,1455.44 1256.55,1455.29 1257.47,1455.13 1258.4,1454.97 1259.32,1454.8 1260.24,1454.62 1261.16,1454.43 1262.08,1454.24 \\n 1263,1454.05 1263.92,1453.84 1264.84,1453.63 1265.76,1453.41 1266.68,1453.19 1267.6,1452.95 1268.52,1452.72 1269.44,1452.47 1270.35,1452.22 1271.27,1451.96 \\n 1272.19,1451.69 1273.11,1451.42 1274.02,1451.14 1274.94,1450.86 1275.85,1450.57 1276.77,1450.27 1277.68,1449.96 1278.6,1449.65 1279.51,1449.33 1280.43,1449.01 \\n 1281.34,1448.68 1282.25,1448.34 1283.17,1447.99 1284.08,1447.64 1284.99,1447.28 1285.9,1446.92 1286.81,1446.55 
1287.72,1446.17 1288.63,1445.78 1289.54,1445.39 \\n 1290.45,1445 1291.35,1444.59 1292.26,1444.18 1293.17,1443.76 1294.07,1443.34 1294.98,1442.91 1295.88,1442.47 1296.79,1442.02 1297.69,1441.57 1298.6,1441.12 \\n 1299.5,1440.65 1300.4,1440.18 1301.3,1439.71 1302.2,1439.22 1303.1,1438.73 1304,1438.23 1304.9,1437.73 1305.8,1437.22 1306.69,1436.71 1307.59,1436.18 \\n 1308.49,1435.65 1309.38,1435.12 1310.28,1434.58 1311.17,1434.03 1312.06,1433.47 1312.95,1432.91 1313.85,1432.34 1314.74,1431.77 1315.63,1431.18 1316.51,1430.6 \\n 1317.4,1430 1318.29,1429.4 1319.18,1428.79 1320.06,1428.18 1320.95,1427.56 1321.83,1426.93 1322.71,1426.3 1323.6,1425.66 1324.48,1425.02 1325.36,1424.36 \\n 1326.24,1423.71 1327.12,1423.04 1327.99,1422.37 1328.87,1421.69 1329.75,1421.01 1330.62,1420.32 1331.5,1419.62 1332.37,1418.92 1333.24,1418.21 1334.11,1417.49 \\n 1334.98,1416.77 1335.85,1416.04 1336.72,1415.3 1337.59,1414.56 1338.45,1413.82 1339.32,1413.06 1340.18,1412.3 1341.05,1411.54 1341.91,1410.76 1342.77,1409.98 \\n 1343.63,1409.2 1344.49,1408.41 1345.35,1407.61 1346.2,1406.81 1347.06,1405.99 1347.91,1405.18 1348.77,1404.36 1349.62,1403.53 1350.47,1402.69 1351.32,1401.85 \\n 1352.17,1401 1353.02,1400.15 1353.86,1399.29 1354.71,1398.42 1355.55,1397.55 1356.4,1396.67 1357.24,1395.79 1358.08,1394.9 1358.92,1394 1359.76,1393.1 \\n 1360.59,1392.19 1361.43,1391.28 1362.26,1390.35 1363.1,1389.43 1363.93,1388.49 1364.76,1387.55 1365.59,1386.61 1366.42,1385.66 1367.25,1384.7 1368.07,1383.74 \\n 1368.9,1382.77 1369.72,1381.79 1370.54,1380.81 1371.36,1379.82 1372.18,1378.83 1373,1377.83 1373.81,1376.83 1374.63,1375.82 1375.44,1374.8 1376.25,1373.78 \\n 1377.06,1372.75 1377.87,1371.71 1378.68,1370.67 1379.49,1369.63 1380.29,1368.57 1381.1,1367.52 1381.9,1366.45 1382.7,1365.38 1383.5,1364.31 1384.3,1363.22 \\n 1385.09,1362.14 1385.89,1361.04 1386.68,1359.94 1387.47,1358.84 1388.26,1357.73 1389.05,1356.61 1389.84,1355.49 1390.63,1354.36 1391.41,1353.23 1392.19,1352.09 \\n 1392.98,1350.95 1393.76,1349.8 1394.53,1348.64 1395.31,1347.48 1396.08,1346.31 1396.86,1345.14 1397.63,1343.96 1398.4,1342.78 1399.17,1341.59 1399.94,1340.39 \\n 1400.7,1339.19 1401.47,1337.98 1402.23,1336.77 1402.99,1335.55 1403.75,1334.33 1404.5,1333.1 1405.26,1331.87 1406.01,1330.63 1406.76,1329.38 1407.52,1328.13 \\n 1408.26,1326.88 1409.01,1325.61 1409.76,1324.35 1410.5,1323.07 1411.24,1321.8 1411.98,1320.51 1412.72,1319.22 1413.46,1317.93 1414.19,1316.63 1414.92,1315.33 \\n 1415.66,1314.02 1416.39,1312.7 1417.11,1311.38 1417.84,1310.06 1418.56,1308.72 1419.29,1307.39 1420.01,1306.05 1420.73,1304.7 1421.44,1303.35 1422.16,1301.99 \\n 1422.87,1300.63 1423.58,1299.26 1424.29,1297.89 1425,1296.51 1425.7,1295.12 1426.41,1293.74 1427.11,1292.34 1427.81,1290.94 1428.51,1289.54 1429.2,1288.13 \\n 1429.9,1286.72 1430.59,1285.3 1431.28,1283.88 1431.97,1282.45 1432.66,1281.01 1433.34,1279.57 1434.02,1278.13 1434.71,1276.68 1435.38,1275.23 1436.06,1273.77 \\n 1436.74,1272.31 1437.41,1270.84 1438.08,1269.36 1438.75,1267.89 1439.42,1266.4 1440.08,1264.91 1440.74,1263.42 1441.4,1261.92 1442.06,1260.42 1442.72,1258.91 \\n 1443.37,1257.4 1444.03,1255.89 1444.68,1254.36 1445.33,1252.84 1445.97,1251.31 1446.62,1249.77 1447.26,1248.23 1447.9,1246.69 1448.54,1245.14 1449.17,1243.58 \\n 1449.81,1242.02 1450.44,1240.46 1451.07,1238.89 1451.7,1237.32 1452.32,1235.74 1452.94,1234.16 1453.57,1232.57 1454.18,1230.98 1454.8,1229.39 1455.42,1227.79 \\n 1456.03,1226.18 1456.64,1224.57 1457.25,1222.96 1457.85,1221.34 1458.46,1219.72 1459.06,1218.1 1459.66,1216.46 1460.25,1214.83 
1460.85,1213.19 1461.44,1211.55 \\n 1462.03,1209.9 1462.62,1208.25 1463.21,1206.59 1463.79,1204.93 1464.37,1203.26 1464.95,1201.59 1465.53,1199.92 1466.1,1198.24 1466.67,1196.56 1467.24,1194.88 \\n 1467.81,1193.18 1468.38,1191.49 1468.94,1189.79 1469.5,1188.09 1470.06,1186.38 1470.61,1184.67 1471.17,1182.96 1471.72,1181.24 1472.27,1179.52 1472.82,1177.79 \\n 1473.36,1176.06 1473.9,1174.32 1474.44,1172.59 1474.98,1170.84 1475.52,1169.1 1476.05,1167.35 1476.58,1165.59 1477.11,1163.83 1477.63,1162.07 1478.16,1160.31 \\n 1478.68,1158.54 1479.19,1156.76 1479.71,1154.99 1480.22,1153.21 1480.74,1151.42 1481.24,1149.63 1481.75,1147.84 1482.25,1146.05 1482.76,1144.25 1483.25,1142.44 \\n 1483.75,1140.64 1484.24,1138.83 1484.74,1137.01 1485.23,1135.2 1485.71,1133.38 1486.2,1131.55 1486.68,1129.72 1487.16,1127.89 1487.63,1126.06 1488.11,1124.22 \\n 1488.58,1122.38 1489.05,1120.53 1489.52,1118.68 1489.98,1116.83 1490.44,1114.98 1490.9,1113.12 1491.36,1111.26 1491.81,1109.39 1492.26,1107.52 1492.71,1105.65 \\n 1493.16,1103.78 1493.6,1101.9 1494.05,1100.02 1494.49,1098.13 1494.92,1096.24 1495.36,1094.35 1495.79,1092.46 1496.22,1090.56 1496.64,1088.66 1497.06,1086.76 \\n 1497.49,1084.85 1497.9,1082.94 1498.32,1081.03 1498.73,1079.11 1499.14,1077.19 1499.55,1075.27 1499.96,1073.35 1500.36,1071.42 1500.76,1069.49 1501.16,1067.55 \\n 1501.55,1065.62 1501.94,1063.68 1502.33,1061.74 1502.72,1059.79 1503.11,1057.84 1503.49,1055.89 1503.87,1053.94 1504.24,1051.98 1504.62,1050.03 1504.99,1048.06 \\n 1505.36,1046.1 1505.72,1044.13 1506.09,1042.16 1506.45,1040.19 1506.8,1038.22 1507.16,1036.24 1507.51,1034.26 1507.86,1032.28 1508.21,1030.29 1508.55,1028.3 \\n 1508.89,1026.31 1509.23,1024.32 1509.57,1022.33 1509.9,1020.33 1510.23,1018.33 1510.56,1016.33 1510.89,1014.32 1511.21,1012.32 1511.53,1010.31 1511.85,1008.29 \\n 1512.16,1006.28 1512.47,1004.26 1512.78,1002.25 1513.09,1000.23 1513.39,998.203 1513.69,996.178 1513.99,994.15 1514.29,992.121 1514.58,990.089 1514.87,988.056 \\n 1515.16,986.02 1515.44,983.982 1515.72,981.942 1516,979.9 1516.28,977.856 1516.55,975.81 1516.82,973.762 1517.09,971.712 1517.35,969.66 1517.61,967.606 \\n 1517.87,965.551 1518.13,963.493 1518.38,961.434 1518.63,959.373 1518.88,957.309 1519.13,955.245 1519.37,953.178 1519.61,951.11 1519.85,949.039 1520.08,946.968 \\n 1520.31,944.894 1520.54,942.819 1520.77,940.742 1520.99,938.664 1521.21,936.583 1521.43,934.502 1521.64,932.418 1521.85,930.334 1522.06,928.247 1522.27,926.159 \\n 1522.47,924.07 1522.67,921.979 1522.87,919.887 1523.06,917.793 1523.25,915.698 1523.44,913.602 1523.63,911.504 1523.81,909.405 1523.99,907.304 1524.17,905.202 \\n 1524.34,903.099 1524.52,900.995 1524.69,898.889 1524.85,896.783 1525.02,894.675 1525.18,892.565 1525.33,890.455 1525.49,888.344 1525.64,886.231 1525.79,884.117 \\n 1525.93,882.003 1526.08,879.887 1526.22,877.77 1526.36,875.652 1526.49,873.533 1526.62,871.413 1526.75,869.293 1526.88,867.171 1527,865.048 1527.12,862.925 \\n 1527.24,860.8 1527.35,858.675 1527.46,856.549 1527.57,854.422 1527.68,852.295 1527.78,850.167 1527.88,848.037 1527.98,845.908 1528.07,843.777 1528.16,841.646 \\n 1528.25,839.514 1528.34,837.382 1528.42,835.249 1528.5,833.115 1528.58,830.981 1528.65,828.846 1528.72,826.711 1528.79,824.575 1528.85,822.438 1528.92,820.302 \\n 1528.98,818.164 1529.03,816.027 1529.09,813.889 1529.14,811.75 1529.19,809.612 1529.23,807.472 1529.27,805.333 1529.31,803.193 1529.35,801.053 1529.38,798.913 \\n 1529.41,796.772 1529.44,794.632 1529.46,792.491 1529.49,790.35 1529.5,788.209 1529.52,786.067 
1529.53,783.926 1529.54,781.784 1529.55,779.643 1529.56,777.501 \\n 1529.56,775.359 \\n \\&quot;/&gt;\\n&lt;polyline clip-path=\\&quot;url(#clip8203)\\&quot; style=\\&quot;stroke:#808080; stroke-width:4; stroke-opacity:1; fill:none\\&quot; points=\\&quot;\\n 1938.44,424.665 1939.87,423.698 1941.28,422.736 1942.69,421.776 1944.09,420.821 1945.49,419.868 1946.88,418.92 1948.26,417.974 1949.63,417.033 1951,416.095 \\n 1952.36,415.16 1953.71,414.229 1955.06,413.301 1956.4,412.378 1957.73,411.457 1959.05,410.54 1960.37,409.627 1961.68,408.718 1962.98,407.812 1964.28,406.91 \\n 1965.57,406.011 1966.85,405.116 1968.12,404.225 1969.39,403.337 1970.65,402.453 1971.9,401.573 1973.15,400.696 1974.39,399.823 1975.62,398.954 1976.84,398.088 \\n 1978.06,397.226 1979.27,396.368 1980.47,395.514 1981.66,394.663 1982.85,393.817 1984.03,392.974 1985.2,392.134 1986.36,391.299 1987.52,390.467 1988.67,389.639 \\n 1989.81,388.815 1990.95,387.995 1992.07,387.178 1993.19,386.366 1994.31,385.557 1995.41,384.752 1996.51,383.951 1997.6,383.154 1998.68,382.36 1999.76,381.571 \\n 2000.82,380.785 2001.88,380.003 2002.94,379.226 2003.98,378.452 2005.02,377.682 2006.05,376.916 2007.07,376.154 2008.08,375.395 2009.09,374.641 2010.09,373.891 \\n 2011.08,373.145 2012.06,372.402 2013.04,371.664 2014.01,370.93 2014.97,370.199 2015.92,369.473 2016.87,368.75 2017.8,368.032 2018.73,367.318 2019.66,366.607 \\n 2020.57,365.901 2021.48,365.199 2022.37,364.501 2023.27,363.806 2024.15,363.116 2025.02,362.43 2025.89,361.748 2026.75,361.071 2027.6,360.397 2028.45,359.727 \\n 2029.28,359.062 2030.11,358.4 2030.93,357.743 2031.75,357.09 2032.55,356.44 2033.35,355.796 2034.14,355.155 2034.92,354.518 2035.69,353.886 2036.46,353.257 \\n 2037.22,352.633 2037.97,352.013 2038.71,351.397 2039.44,350.785 2040.17,350.178 2040.89,349.575 2041.6,348.976 2042.3,348.381 2042.99,347.79 2043.68,347.204 \\n 2044.36,346.622 2045.03,346.044 2045.69,345.47 2046.34,344.9 2046.99,344.335 2047.63,343.774 2048.26,343.217 2048.88,342.665 2049.49,342.117 2050.1,341.573 \\n 2050.7,341.033 2051.29,340.498 2051.87,339.967 2052.44,339.44 2053.01,338.918 2053.57,338.399 2054.12,337.886 2054.66,337.376 2055.19,336.871 2055.72,336.37 \\n 2056.24,335.873 2056.74,335.381 2057.25,334.893 2057.74,334.41 2058.22,333.931 2058.7,333.456 2059.17,332.985 2059.63,332.519 2060.08,332.058 2060.53,331.6 \\n 2060.96,331.147 2061.39,330.699 2061.81,330.255 2062.22,329.815 2062.62,329.38 2063.02,328.949 2063.41,328.522 2063.79,328.1 2064.16,327.682 2064.52,327.269 \\n 2064.87,326.86 2065.22,326.455 2065.56,326.055 2065.89,325.66 2066.21,325.268 2066.52,324.882 2066.83,324.499 2067.13,324.122 2067.41,323.748 2067.69,323.379 \\n 2067.97,323.015 2068.23,322.655 2068.49,322.299 2068.74,321.948 2068.97,321.602 2069.21,321.26 2069.43,320.922 2069.64,320.589 2069.85,320.26 2070.05,319.936 \\n 2070.24,319.616 2070.42,319.301 2070.59,318.99 2070.76,318.684 2070.92,318.383 2071.07,318.085 2071.21,317.793 2071.34,317.505 2071.46,317.221 2071.58,316.942 \\n 2071.69,316.667 2071.79,316.397 2071.88,316.132 2071.96,315.871 2072.03,315.614 2072.1,315.363 2072.16,315.115 2072.21,314.872 2072.25,314.634 2072.28,314.4 \\n 2072.31,314.171 2072.33,313.947 2072.33,313.727 2072.33,313.511 2072.33,313.3 2072.31,313.094 2072.29,312.892 2072.25,312.695 2072.21,312.502 2072.16,312.314 \\n 2072.11,312.13 2072.04,311.951 2071.97,311.777 2071.88,311.607 2071.79,311.442 2071.69,311.281 2071.59,311.125 2071.47,310.973 2071.35,310.826 2071.22,310.684 \\n 2071.08,310.546 2070.93,310.413 2070.77,310.285 2070.61,310.161 
2070.43,310.041 2070.25,309.926 2070.06,309.816 2069.86,309.711 2069.66,309.609 2069.44,309.513 \\n 2069.22,309.421 2068.99,309.334 2068.75,309.251 2068.5,309.173 2068.25,309.1 2067.99,309.031 2067.71,308.967 2067.43,308.907 2067.14,308.852 2066.85,308.802 \\n 2066.54,308.756 2066.23,308.715 2065.91,308.678 2065.58,308.646 2065.24,308.619 2064.9,308.596 2064.54,308.578 2064.18,308.564 2063.81,308.555 2063.43,308.551 \\n 2063.05,308.551 2062.65,308.556 2062.25,308.565 2061.84,308.579 2061.42,308.598 2060.99,308.621 2060.56,308.649 2060.11,308.682 2059.66,308.719 2059.2,308.761 \\n 2058.73,308.807 2058.26,308.858 2057.77,308.913 2057.28,308.973 2056.78,309.038 2056.27,309.107 2055.75,309.181 2055.23,309.26 2054.69,309.343 2054.15,309.431 \\n 2053.6,309.523 2053.05,309.62 2052.48,309.722 2051.91,309.828 2051.33,309.938 2050.74,310.054 2050.14,310.174 2049.53,310.298 2048.92,310.427 2048.3,310.561 \\n 2047.67,310.699 2047.03,310.842 2046.39,310.989 2045.73,311.141 2045.07,311.298 2044.4,311.459 2043.72,311.625 2043.04,311.795 2042.34,311.97 2041.64,312.15 \\n 2040.93,312.334 2040.22,312.522 2039.49,312.716 2038.76,312.913 2038.02,313.116 2037.27,313.322 2036.51,313.534 2035.74,313.75 2034.97,313.97 2034.19,314.196 \\n 2033.4,314.425 2032.61,314.659 2031.8,314.898 2030.99,315.141 2030.17,315.389 2029.34,315.642 2028.5,315.899 2027.66,316.16 2026.81,316.426 2025.95,316.697 \\n 2025.08,316.972 2024.21,317.251 2023.33,317.535 2022.43,317.824 2021.54,318.117 2020.63,318.415 2019.72,318.717 2018.8,319.023 2017.87,319.335 2016.93,319.65 \\n 2015.98,319.97 2015.03,320.295 2014.07,320.624 2013.1,320.958 2012.13,321.296 2011.15,321.638 2010.15,321.986 2009.16,322.337 2008.15,322.693 2007.14,323.054 \\n 2006.12,323.419 2005.09,323.788 2004.05,324.162 2003.01,324.54 2001.95,324.923 2000.89,325.31 1999.83,325.702 1998.75,326.098 1997.67,326.498 1996.58,326.903 \\n 1995.49,327.313 1994.38,327.727 1993.27,328.145 1992.15,328.567 1991.02,328.994 1989.89,329.426 1988.75,329.862 1987.6,330.302 1986.44,330.747 1985.28,331.196 \\n 1984.11,331.649 1982.93,332.107 1981.74,332.569 1980.55,333.036 1979.35,333.507 1978.14,333.982 1976.92,334.461 1975.7,334.945 1974.47,335.434 1973.23,335.926 \\n 1971.99,336.423 1970.73,336.925 1969.48,337.43 1968.21,337.94 1966.94,338.455 1965.65,338.973 1964.37,339.496 1963.07,340.023 1961.77,340.555 1960.46,341.091 \\n 1959.14,341.631 1957.82,342.175 1956.49,342.724 1955.15,343.277 1953.8,343.834 1952.45,344.395 1951.09,344.961 1949.72,345.531 1948.35,346.105 1946.97,346.684 \\n 1945.58,347.266 1944.19,347.853 1942.78,348.444 1941.38,349.04 1939.96,349.639 1938.54,350.243 1937.11,350.851 1935.67,351.463 1934.23,352.079 1932.78,352.7 \\n 1931.32,353.324 1929.86,353.953 1928.38,354.586 1926.91,355.223 1925.42,355.864 1923.93,356.51 1922.43,357.159 1920.93,357.813 1919.42,358.471 1917.9,359.133 \\n 1916.37,359.799 1914.84,360.469 1913.3,361.143 1911.76,361.821 1910.2,362.504 1908.64,363.19 1907.08,363.881 1905.51,364.575 1903.93,365.274 1902.34,365.976 \\n 1900.75,366.683 1899.15,367.394 1897.55,368.109 1895.94,368.827 1894.32,369.55 1892.69,370.277 1891.06,371.008 1889.43,371.743 1887.78,372.482 1886.13,373.224 \\n 1884.48,373.971 1882.81,374.722 1881.14,375.476 1879.47,376.235 1877.78,376.997 1876.1,377.764 1874.4,378.534 1872.7,379.309 1870.99,380.087 1869.28,380.869 \\n 1867.56,381.655 1865.83,382.445 1864.1,383.239 1862.36,384.036 1860.61,384.838 1858.86,385.643 1857.11,386.452 1855.34,387.265 1853.57,388.082 1851.8,388.903 \\n 1850.02,389.728 1848.23,390.556 1846.43,391.388 
1844.63,392.224 1842.83,393.064 1841.02,393.907 1839.2,394.754 1837.38,395.605 1835.55,396.46 1833.71,397.318 \\n 1831.87,398.181 1830.02,399.047 1828.17,399.916 1826.31,400.79 1824.44,401.667 1822.57,402.547 1820.7,403.432 1818.81,404.32 1816.92,405.212 1815.03,406.107 \\n 1813.13,407.006 1811.23,407.909 1809.31,408.815 1807.4,409.725 1805.48,410.638 1803.55,411.555 1801.61,412.476 1799.68,413.4 1797.73,414.328 1795.78,415.26 \\n 1793.82,416.195 1791.86,417.133 1789.9,418.075 1787.92,419.021 1785.95,419.97 1783.96,420.923 1781.97,421.879 1779.98,422.838 1777.98,423.802 1775.98,424.768 \\n 1773.97,425.738 1771.95,426.712 1769.93,427.688 1767.9,428.669 1765.87,429.653 1763.84,430.64 1761.79,431.63 1759.75,432.624 1757.7,433.622 1755.64,434.622 \\n 1753.58,435.626 1751.51,436.634 1749.44,437.645 1747.36,438.659 1745.28,439.676 1743.19,440.697 1741.1,441.721 1739,442.748 1736.9,443.779 1734.79,444.813 \\n 1732.68,445.85 1730.56,446.89 1728.44,447.934 1726.31,448.981 1724.18,450.031 1722.04,451.084 1719.9,452.141 1717.75,453.201 1715.6,454.264 1713.44,455.33 \\n 1711.28,456.399 1709.12,457.471 1706.95,458.547 1704.77,459.625 1702.59,460.707 1700.41,461.792 1698.22,462.88 1696.03,463.971 1693.83,465.065 1691.63,466.162 \\n 1689.42,467.262 1687.21,468.366 1684.99,469.472 1682.77,470.581 1680.55,471.694 1678.32,472.809 1676.09,473.927 1673.85,475.048 1671.61,476.173 1669.36,477.3 \\n 1667.11,478.43 1664.86,479.563 1662.6,480.699 1660.33,481.838 1658.07,482.98 1655.79,484.124 1653.52,485.272 1651.24,486.422 1648.96,487.576 1646.67,488.732 \\n 1644.38,489.891 1642.08,491.052 1639.78,492.217 1637.47,493.384 1635.17,494.554 1632.85,495.727 1630.54,496.903 1628.22,498.081 1625.89,499.263 1623.57,500.446 \\n 1621.24,501.633 1618.9,502.822 1616.56,504.014 1614.22,505.209 1611.87,506.406 1609.52,507.606 1607.17,508.809 1604.81,510.014 1602.45,511.222 1600.08,512.432 \\n 1597.71,513.646 1595.34,514.861 1592.97,516.079 1590.59,517.3 1588.2,518.524 1585.82,519.749 1583.43,520.978 1581.04,522.209 1578.64,523.442 1576.24,524.678 \\n 1573.84,525.916 1571.43,527.157 1569.02,528.4 1566.61,529.646 1564.19,530.894 1561.77,532.145 1559.35,533.398 1556.92,534.653 1554.49,535.911 1552.06,537.171 \\n 1549.62,538.433 1547.19,539.698 1544.74,540.965 1542.3,542.235 1539.85,543.506 1537.4,544.78 1534.95,546.056 1532.49,547.335 1530.03,548.616 1527.57,549.899 \\n 1525.1,551.184 1522.63,552.472 1520.16,553.761 1517.69,555.053 1515.21,556.347 1512.73,557.643 1510.25,558.942 1507.76,560.242 1505.28,561.545 1502.79,562.849 \\n 1500.29,564.156 1497.8,565.465 1495.3,566.776 1492.8,568.089 1490.29,569.404 1487.79,570.721 1485.28,572.04 1482.77,573.361 1480.26,574.684 1477.74,576.009 \\n 1475.22,577.337 1472.7,578.666 1470.18,579.996 1467.65,581.329 1465.13,582.664 1462.6,584.001 1460.07,585.339 1457.53,586.68 1454.99,588.022 1452.46,589.366 \\n 1449.92,590.712 1447.37,592.06 1444.83,593.41 1442.28,594.761 1439.73,596.115 1437.18,597.469 1434.63,598.826 1432.07,600.185 1429.52,601.545 1426.96,602.907 \\n 1424.4,604.27 1421.83,605.636 1419.27,607.003 1416.7,608.371 1414.14,609.742 1411.57,611.114 1408.99,612.487 1406.42,613.862 1403.85,615.239 1401.27,616.618 \\n 1398.69,617.997 1396.11,619.379 1393.53,620.762 1390.95,622.146 1388.36,623.532 1385.78,624.92 1383.19,626.309 1380.6,627.699 1378.01,629.091 1375.42,630.485 \\n 1372.83,631.88 1370.23,633.276 1367.63,634.674 1365.04,636.073 1362.44,637.473 1359.84,638.875 1357.24,640.278 1354.64,641.682 1352.03,643.088 1349.43,644.495 \\n 1346.82,645.903 1344.22,647.313 1341.61,648.724 
1339,650.136 1336.39,651.55 1333.78,652.964 1331.17,654.38 1328.55,655.797 1325.94,657.215 1323.33,658.635 \\n 1320.71,660.055 1318.09,661.477 1315.48,662.9 1312.86,664.323 1310.24,665.748 1307.62,667.174 1305,668.602 1302.38,670.03 1299.76,671.459 1297.14,672.889 \\n 1294.51,674.32 1291.89,675.753 1289.27,677.186 1286.64,678.62 1284.02,680.055 1281.39,681.491 1278.77,682.928 1276.14,684.366 1273.51,685.805 1270.89,687.245 \\n 1268.26,688.686 1265.63,690.127 1263,691.569 1260.38,693.012 1257.75,694.456 1255.12,695.901 1252.49,697.347 1249.86,698.793 1247.23,700.24 1244.6,701.688 \\n 1241.97,703.136 1239.34,704.585 1236.71,706.035 1234.08,707.486 1231.45,708.937 1228.82,710.389 1226.19,711.842 1223.57,713.295 1220.94,714.749 1218.31,716.203 \\n 1215.68,717.658 1213.05,719.114 1210.42,720.57 1207.79,722.026 1205.16,723.484 1202.54,724.941 1199.91,726.399 1197.28,727.858 1194.66,729.317 1192.03,730.777 \\n 1189.4,732.237 1186.78,733.697 1184.15,735.158 1181.53,736.62 1178.9,738.081 1176.28,739.543 1173.66,741.006 1171.04,742.468 1168.41,743.931 1165.79,745.395 \\n 1163.17,746.858 1160.55,748.322 1157.94,749.786 1155.32,751.251 1152.7,752.716 1150.08,754.181 1147.47,755.646 1144.85,757.111 1142.24,758.576 1139.63,760.042 \\n 1137.02,761.508 1134.4,762.974 1131.79,764.44 1129.19,765.906 1126.58,767.372 1123.97,768.839 1121.37,770.305 1118.76,771.772 1116.16,773.238 1113.56,774.705 \\n 1110.95,776.171 1108.35,777.638 1105.76,779.104 1103.16,780.571 1100.56,782.037 1097.97,783.503 1095.37,784.97 1092.78,786.436 1090.19,787.902 1087.6,789.368 \\n 1085.02,790.834 1082.43,792.299 1079.85,793.765 1077.26,795.23 1074.68,796.695 1072.1,798.16 1069.52,799.625 1066.95,801.089 1064.37,802.553 1061.8,804.017 \\n 1059.23,805.481 1056.66,806.944 1054.09,808.407 1051.52,809.87 1048.96,811.332 1046.39,812.794 1043.83,814.256 1041.27,815.717 1038.72,817.178 1036.16,818.638 \\n 1033.61,820.098 1031.06,821.558 1028.51,823.017 1025.96,824.475 1023.42,825.933 1020.87,827.391 1018.33,828.848 1015.79,830.305 1013.26,831.761 1010.72,833.216 \\n 1008.19,834.671 1005.66,836.126 1003.13,837.579 1000.61,839.033 998.087,840.485 995.566,841.937 993.048,843.388 990.532,844.839 988.019,846.288 985.508,847.738 \\n 982.999,849.186 980.493,850.634 977.989,852.081 975.488,853.527 972.99,854.972 970.494,856.417 968.001,857.861 965.51,859.304 963.023,860.746 960.537,862.187 \\n 958.055,863.628 955.575,865.068 953.098,866.506 950.624,867.944 948.152,869.381 945.684,870.817 943.218,872.252 940.755,873.686 938.295,875.119 935.838,876.551 \\n 933.384,877.983 930.933,879.413 928.485,880.842 926.04,882.27 923.598,883.697 921.159,885.123 918.723,886.548 916.291,887.972 913.861,889.394 911.435,890.816 \\n 909.012,892.236 906.592,893.655 904.175,895.073 901.762,896.49 899.352,897.906 896.945,899.32 894.541,900.734 892.141,902.146 889.744,903.557 887.351,904.966 \\n 884.961,906.374 882.575,907.781 880.192,909.187 877.812,910.591 875.436,911.994 873.064,913.396 870.695,914.796 868.33,916.195 865.969,917.592 863.611,918.988 \\n 861.256,920.383 858.906,921.776 856.559,923.168 854.216,924.558 851.876,925.947 849.541,927.335 847.209,928.721 844.881,930.105 842.557,931.488 840.237,932.869 \\n 837.921,934.249 835.608,935.627 833.3,937.004 830.995,938.379 828.695,939.752 826.398,941.124 824.106,942.494 821.818,943.862 819.533,945.229 817.253,946.594 \\n 814.977,947.958 812.705,949.319 810.438,950.679 808.174,952.038 805.915,953.394 803.66,954.749 801.409,956.102 799.162,957.453 796.92,958.803 794.682,960.15 \\n 792.449,961.496 790.219,962.84 
787.995,964.182 785.774,965.523 783.558,966.861 781.347,968.198 779.14,969.532 776.937,970.865 774.739,972.196 772.546,973.524 \\n 770.357,974.851 768.173,976.176 765.993,977.499 763.818,978.82 761.648,980.139 759.482,981.456 757.321,982.77 755.165,984.083 753.013,985.394 750.866,986.703 \\n 748.724,988.009 746.587,989.314 744.455,990.616 742.327,991.916 740.204,993.214 738.087,994.51 735.974,995.804 733.866,997.096 731.763,998.385 729.665,999.672 \\n 727.572,1000.96 725.484,1002.24 723.401,1003.52 721.323,1004.8 719.25,1006.07 717.182,1007.35 715.119,1008.62 713.062,1009.89 711.009,1011.16 708.962,1012.42 \\n 706.92,1013.68 704.883,1014.94 702.852,1016.2 700.825,1017.46 698.804,1018.71 696.789,1019.96 694.778,1021.21 692.773,1022.45 690.773,1023.69 688.779,1024.94 \\n 686.79,1026.17 684.807,1027.41 682.828,1028.64 680.856,1029.87 678.889,1031.1 676.927,1032.33 674.971,1033.55 673.02,1034.77 671.075,1035.99 669.135,1037.2 \\n 667.201,1038.42 665.273,1039.63 663.35,1040.83 661.433,1042.04 659.522,1043.24 657.616,1044.44 655.716,1045.64 653.822,1046.83 651.933,1048.02 650.05,1049.21 \\n 648.173,1050.4 646.302,1051.58 644.437,1052.76 642.577,1053.94 640.723,1055.12 638.875,1056.29 637.033,1057.46 635.197,1058.63 633.367,1059.79 631.543,1060.95 \\n 629.725,1062.11 627.912,1063.27 626.106,1064.42 624.306,1065.57 622.512,1066.72 620.723,1067.86 618.941,1069 617.165,1070.14 615.395,1071.28 613.631,1072.41 \\n 611.874,1073.54 610.122,1074.67 608.377,1075.79 606.638,1076.91 604.905,1078.03 603.178,1079.14 601.457,1080.26 599.743,1081.37 598.035,1082.47 596.333,1083.57 \\n 594.638,1084.67 592.949,1085.77 591.266,1086.86 589.59,1087.96 587.92,1089.04 586.256,1090.13 584.599,1091.21 582.948,1092.29 581.304,1093.36 579.666,1094.43 \\n 578.035,1095.5 576.41,1096.57 574.791,1097.63 573.179,1098.69 571.574,1099.75 569.975,1100.8 568.383,1101.85 566.797,1102.9 565.218,1103.94 563.646,1104.98 \\n 562.08,1106.02 560.521,1107.05 558.968,1108.08 557.422,1109.11 555.883,1110.13 554.351,1111.15 552.825,1112.17 551.306,1113.18 549.794,1114.19 548.288,1115.2 \\n 546.789,1116.2 545.297,1117.2 543.812,1118.2 542.334,1119.19 540.863,1120.18 539.398,1121.17 537.94,1122.16 536.489,1123.14 535.046,1124.11 533.608,1125.08 \\n 532.178,1126.05 530.755,1127.02 529.339,1127.98 527.93,1128.94 526.528,1129.9 525.132,1130.85 523.744,1131.8 522.363,1132.74 520.989,1133.69 519.621,1134.62 \\n 518.261,1135.56 516.908,1136.49 515.562,1137.42 514.224,1138.34 512.892,1139.26 511.567,1140.18 510.25,1141.09 508.94,1142 507.637,1142.91 506.341,1143.81 \\n 505.052,1144.71 503.77,1145.6 502.496,1146.49 501.229,1147.38 499.969,1148.27 498.717,1149.15 497.471,1150.02 496.233,1150.9 495.003,1151.77 493.779,1152.63 \\n 492.563,1153.49 491.355,1154.35 490.153,1155.2 488.959,1156.06 487.772,1156.9 486.593,1157.75 485.421,1158.58 484.257,1159.42 483.1,1160.25 481.95,1161.08 \\n 480.808,1161.9 479.673,1162.72 478.545,1163.54 477.426,1164.35 476.313,1165.16 475.208,1165.97 474.111,1166.77 473.021,1167.57 471.939,1168.36 470.864,1169.15 \\n 469.797,1169.93 468.737,1170.72 467.685,1171.49 466.64,1172.27 465.603,1173.04 464.574,1173.8 463.552,1174.57 462.538,1175.32 461.531,1176.08 460.532,1176.83 \\n 459.541,1177.57 458.557,1178.32 457.581,1179.05 456.613,1179.79 455.652,1180.52 454.7,1181.25 453.754,1181.97 452.817,1182.69 451.887,1183.4 450.965,1184.11 \\n 450.051,1184.82 449.144,1185.52 448.246,1186.22 447.355,1186.91 446.471,1187.6 445.596,1188.29 444.728,1188.97 443.868,1189.65 443.016,1190.32 442.172,1190.99 \\n 441.336,1191.66 440.507,1192.32 
439.687,1192.98 438.874,1193.63 438.069,1194.28 437.272,1194.92 436.482,1195.56 435.701,1196.2 434.927,1196.83 434.162,1197.46 \\n 433.404,1198.09 432.655,1198.71 431.913,1199.32 431.179,1199.93 430.453,1200.54 429.735,1201.14 429.025,1201.74 428.323,1202.34 427.628,1202.93 426.942,1203.52 \\n 426.264,1204.1 425.594,1204.68 424.932,1205.25 424.277,1205.82 423.631,1206.38 422.993,1206.94 422.363,1207.5 421.741,1208.05 421.127,1208.6 420.52,1209.15 \\n 419.922,1209.69 419.332,1210.22 418.75,1210.75 418.177,1211.28 417.611,1211.8 417.053,1212.32 416.503,1212.83 415.962,1213.34 415.428,1213.85 414.903,1214.35 \\n 414.385,1214.85 413.876,1215.34 413.375,1215.83 412.882,1216.31 412.397,1216.79 411.92,1217.26 411.452,1217.73 410.991,1218.2 410.539,1218.66 410.095,1219.12 \\n 409.659,1219.57 409.231,1220.02 408.811,1220.46 408.399,1220.9 407.996,1221.34 407.6,1221.77 407.213,1222.2 406.834,1222.62 406.464,1223.04 406.101,1223.45 \\n 405.747,1223.86 405.4,1224.26 405.062,1224.66 404.732,1225.06 404.411,1225.45 404.097,1225.84 403.792,1226.22 403.495,1226.6 403.206,1226.97 402.926,1227.34 \\n 402.653,1227.7 402.389,1228.06 402.133,1228.42 401.885,1228.77 401.646,1229.12 401.414,1229.46 401.191,1229.8 400.977,1230.13 400.77,1230.46 400.572,1230.78 \\n 400.382,1231.1 400.2,1231.42 400.026,1231.73 399.861,1232.03 399.703,1232.34 399.555,1232.63 399.414,1232.93 399.282,1233.21 399.157,1233.5 399.041,1233.78 \\n 398.934,1234.05 398.834,1234.32 398.743,1234.59 398.66,1234.85 398.586,1235.1 398.519,1235.36 398.461,1235.6 398.412,1235.85 398.37,1236.08 398.337,1236.32 \\n 398.312,1236.55 398.295,1236.77 398.286,1236.99 398.286,1237.21 398.294,1237.42 398.31,1237.63 398.335,1237.83 398.368,1238.02 398.409,1238.22 398.458,1238.41 \\n 398.515,1238.59 398.581,1238.77 398.655,1238.94 398.738,1239.11 398.828,1239.28 398.927,1239.44 399.034,1239.59 399.149,1239.75 399.273,1239.89 399.405,1240.03 \\n 399.545,1240.17 399.693,1240.31 399.85,1240.43 400.015,1240.56 400.188,1240.68 400.369,1240.79 400.559,1240.9 400.756,1241.01 400.963,1241.11 401.177,1241.21 \\n 401.399,1241.3 401.63,1241.38 401.869,1241.47 402.116,1241.55 402.372,1241.62 402.635,1241.69 402.907,1241.75 403.187,1241.81 403.475,1241.87 403.772,1241.92 \\n 404.077,1241.96 404.39,1242 404.711,1242.04 405.04,1242.07 405.377,1242.1 405.723,1242.12 406.077,1242.14 406.439,1242.15 406.809,1242.16 407.188,1242.17 \\n 407.574,1242.17 407.969,1242.16 408.372,1242.15 408.783,1242.14 409.202,1242.12 409.63,1242.1 410.065,1242.07 410.509,1242.04 410.961,1242 411.421,1241.96 \\n 411.889,1241.91 412.365,1241.86 412.849,1241.81 413.342,1241.75 413.843,1241.68 414.351,1241.61 414.868,1241.54 415.393,1241.46 415.926,1241.38 416.467,1241.29 \\n 417.016,1241.2 417.573,1241.1 418.139,1241 418.712,1240.89 419.293,1240.78 419.883,1240.67 420.48,1240.55 421.086,1240.42 421.699,1240.29 422.321,1240.16 \\n 422.951,1240.02 423.588,1239.88 424.234,1239.73 424.888,1239.58 425.549,1239.42 426.219,1239.26 426.897,1239.09 427.582,1238.92 428.276,1238.75 428.978,1238.57 \\n 429.687,1238.39 430.405,1238.2 431.13,1238 431.863,1237.81 432.605,1237.6 433.354,1237.4 434.111,1237.18 434.876,1236.97 435.649,1236.75 436.43,1236.52 \\n 437.219,1236.29 438.015,1236.06 438.82,1235.82 439.632,1235.58 440.452,1235.33 441.28,1235.08 442.116,1234.82 442.96,1234.56 443.811,1234.29 444.671,1234.02 \\n 445.538,1233.75 446.413,1233.47 447.295,1233.18 448.186,1232.89 449.084,1232.6 449.99,1232.3 450.904,1232 451.825,1231.7 452.755,1231.38 453.692,1231.07 \\n 454.636,1230.75 455.589,1230.42 
[Interactive plot output omitted: WebIO/Interact-embedded SVG (600×400) containing two plotted curves (one purple, one gray) over axes spanning roughly −3 to 3 on both x and y; raw polyline coordinate data and widget mount scripts not reproduced.]
657.241,817.939 657.59,818.64 657.945,819.341 658.306,820.041 658.672,820.741 659.044,821.44 659.422,822.139 659.805,822.837 660.194,823.535 660.589,824.233 \\n 660.99,824.93 661.396,825.626 661.808,826.322 662.225,827.017 662.648,827.712 663.077,828.407 663.511,829.101 663.951,829.794 664.397,830.487 664.849,831.179 \\n 665.305,831.871 665.768,832.562 666.236,833.253 666.71,833.943 667.19,834.632 667.675,835.321 668.165,836.009 668.662,836.697 669.163,837.384 669.671,838.071 \\n 670.184,838.757 670.702,839.442 671.227,840.126 671.756,840.81 672.292,841.494 672.832,842.176 673.379,842.858 673.931,843.54 674.488,844.22 675.051,844.9 \\n 675.62,845.58 676.194,846.258 676.773,846.936 677.359,847.613 677.949,848.29 678.545,848.965 679.147,849.64 679.754,850.315 680.367,850.988 680.985,851.661 \\n 681.608,852.333 682.237,853.005 682.872,853.675 683.512,854.345 684.157,855.014 684.808,855.682 685.464,856.349 686.126,857.016 686.793,857.682 687.465,858.347 \\n 688.143,859.011 688.826,859.674 689.515,860.337 690.209,860.998 690.908,861.659 691.613,862.319 692.323,862.978 693.039,863.637 693.76,864.294 694.486,864.951 \\n 695.218,865.606 695.955,866.261 696.697,866.915 697.444,867.568 698.197,868.22 698.955,868.871 699.719,869.521 700.488,870.17 701.262,870.818 702.041,871.466 \\n 702.826,872.112 703.615,872.758 704.41,873.402 705.211,874.046 706.016,874.688 706.827,875.33 707.643,875.97 708.464,876.61 709.291,877.249 710.122,877.886 \\n 710.959,878.523 711.801,879.158 712.648,879.793 713.5,880.426 714.358,881.059 715.22,881.69 716.088,882.321 716.961,882.95 717.838,883.578 718.721,884.206 \\n 719.61,884.832 720.503,885.457 721.401,886.081 722.304,886.704 723.213,887.325 724.126,887.946 725.045,888.566 725.968,889.184 726.897,889.801 727.831,890.417 \\n 728.769,891.032 729.713,891.646 730.662,892.259 731.615,892.871 732.574,893.481 733.537,894.09 734.506,894.699 735.479,895.305 736.457,895.911 737.441,896.516 \\n 738.429,897.119 739.422,897.721 740.42,898.322 741.423,898.922 742.431,899.52 743.443,900.118 744.461,900.714 745.483,901.309 746.51,901.902 747.542,902.494 \\n 748.579,903.086 749.62,903.675 750.667,904.264 751.718,904.851 752.774,905.437 753.835,906.022 754.9,906.605 755.97,907.187 757.045,907.768 758.125,908.348 \\n 759.209,908.926 760.298,909.503 761.392,910.078 762.49,910.653 763.593,911.226 764.701,911.797 765.814,912.367 766.931,912.936 768.052,913.504 769.178,914.07 \\n 770.309,914.635 771.445,915.198 772.585,915.76 773.729,916.321 774.878,916.88 776.032,917.438 777.19,917.994 778.353,918.549 779.52,919.103 780.692,919.655 \\n 781.868,920.206 783.049,920.755 784.234,921.303 785.424,921.849 786.618,922.394 787.816,922.938 789.019,923.48 790.227,924.021 791.438,924.56 792.654,925.098 \\n 793.875,925.634 795.1,926.169 796.329,926.702 797.562,927.234 798.8,927.764 800.042,928.293 801.289,928.82 802.539,929.346 803.794,929.87 805.054,930.393 \\n 806.317,930.914 807.585,931.433 808.857,931.952 810.133,932.468 811.413,932.983 812.698,933.497 813.986,934.008 815.279,934.519 816.576,935.028 817.877,935.535 \\n 819.183,936.04 820.492,936.544 821.805,937.047 823.123,937.547 824.445,938.047 825.77,938.544 827.1,939.04 828.434,939.535 829.771,940.027 831.113,940.519 \\n 832.459,941.008 833.809,941.496 835.162,941.982 836.52,942.467 837.881,942.95 839.247,943.431 840.616,943.911 841.99,944.389 843.367,944.865 844.748,945.339 \\n 846.133,945.812 847.521,946.284 848.914,946.753 850.31,947.221 851.71,947.687 853.114,948.152 854.522,948.615 855.934,949.076 857.349,949.535 858.768,949.993 \\n 
860.19,950.449 861.617,950.903 863.047,951.355 864.481,951.806 865.918,952.255 867.359,952.702 868.804,953.148 870.252,953.591 871.704,954.033 873.159,954.473 \\n 874.619,954.912 876.081,955.349 877.547,955.783 879.017,956.217 880.49,956.648 881.967,957.077 883.447,957.505 884.931,957.931 886.418,958.355 887.909,958.778 \\n 889.403,959.198 890.9,959.617 892.401,960.034 893.905,960.449 895.413,960.862 896.923,961.273 898.438,961.683 899.955,962.091 901.476,962.497 903,962.901 \\n 904.528,963.303 906.059,963.703 907.593,964.102 909.13,964.498 910.67,964.893 912.214,965.286 913.761,965.677 915.311,966.066 916.864,966.453 918.42,966.839 \\n 919.98,967.222 921.542,967.604 923.108,967.983 924.677,968.361 926.248,968.737 927.823,969.111 929.401,969.483 930.982,969.853 932.566,970.221 934.153,970.588 \\n 935.743,970.952 937.336,971.314 938.931,971.675 940.53,972.033 942.132,972.39 943.736,972.745 945.343,973.097 946.954,973.448 948.567,973.797 950.183,974.144 \\n 951.801,974.489 953.423,974.832 955.047,975.173 956.674,975.512 958.304,975.849 959.937,976.184 961.572,976.517 963.21,976.848 964.851,977.177 966.494,977.504 \\n 968.14,977.829 969.788,978.152 971.44,978.473 973.094,978.792 974.75,979.109 976.409,979.425 978.071,979.738 979.735,980.049 981.401,980.358 983.07,980.665 \\n 984.742,980.97 986.416,981.273 988.093,981.573 989.772,981.872 991.453,982.169 993.137,982.464 994.823,982.757 996.512,983.047 998.202,983.336 999.896,983.623 \\n 1001.59,983.907 1003.29,984.19 1004.99,984.47 1006.69,984.749 1008.4,985.025 1010.1,985.299 1011.81,985.571 1013.52,985.841 1015.24,986.109 1016.95,986.375 \\n 1018.67,986.639 1020.39,986.901 1022.11,987.16 1023.84,987.418 1025.56,987.674 1027.29,987.927 1029.02,988.178 1030.76,988.427 1032.49,988.675 1034.23,988.919 \\n 1035.97,989.162 1037.71,989.403 1039.45,989.642 1041.19,989.878 1042.94,990.113 1044.69,990.345 1046.44,990.575 1048.19,990.803 1049.94,991.029 1051.7,991.253 \\n 1053.46,991.475 1055.21,991.694 1056.98,991.912 1058.74,992.127 1060.5,992.34 1062.27,992.551 1064.04,992.76 1065.81,992.967 1067.58,993.171 1069.35,993.373 \\n 1071.13,993.574 1072.9,993.772 1074.68,993.968 1076.46,994.161 1078.24,994.353 1080.02,994.543 1081.81,994.73 1083.59,994.915 1085.38,995.098 1087.17,995.279 \\n 1088.96,995.457 1090.75,995.634 1092.54,995.808 1094.34,995.98 1096.13,996.15 1097.93,996.318 1099.73,996.483 1101.53,996.647 1103.33,996.808 1105.13,996.967 \\n 1106.93,997.124 1108.74,997.278 1110.55,997.431 1112.35,997.581 1114.16,997.729 1115.97,997.875 1117.78,998.019 1119.59,998.16 1121.41,998.3 1123.22,998.437 \\n 1125.04,998.571 1126.85,998.704 1128.67,998.835 1130.49,998.963 1132.31,999.089 1134.13,999.213 1135.95,999.334 1137.78,999.454 1139.6,999.571 1141.42,999.686 \\n 1143.25,999.799 1145.08,999.909 1146.9,1000.02 1148.73,1000.12 1150.56,1000.23 1152.39,1000.33 1154.22,1000.43 1156.05,1000.53 1157.89,1000.62 1159.72,1000.71 \\n 1161.55,1000.8 1163.39,1000.89 1165.22,1000.98 1167.06,1001.06 1168.89,1001.14 1170.73,1001.22 1172.57,1001.3 1174.41,1001.38 1176.25,1001.45 1178.09,1001.52 \\n 1179.93,1001.59 1181.77,1001.65 1183.61,1001.72 1185.45,1001.78 1187.29,1001.84 1189.14,1001.9 1190.98,1001.95 1192.82,1002 1194.67,1002.05 1196.51,1002.1 \\n 1198.36,1002.15 1200.2,1002.19 1202.05,1002.23 1203.9,1002.27 1205.74,1002.31 1207.59,1002.34 1209.44,1002.38 1211.28,1002.41 1213.13,1002.43 1214.98,1002.46 \\n 1216.83,1002.48 1218.67,1002.51 1220.52,1002.52 1222.37,1002.54 1224.22,1002.56 1226.07,1002.57 1227.92,1002.58 1229.76,1002.59 1231.61,1002.59 
1233.46,1002.6 \\n 1235.31,1002.6 1237.16,1002.6 1239.01,1002.59 1240.86,1002.59 1242.71,1002.58 1244.55,1002.57 1246.4,1002.56 1248.25,1002.54 1250.1,1002.52 1251.95,1002.51 \\n 1253.8,1002.48 1255.64,1002.46 1257.49,1002.43 1259.34,1002.41 1261.19,1002.38 1263.03,1002.34 1264.88,1002.31 1266.72,1002.27 1268.57,1002.23 1270.42,1002.19 \\n 1272.26,1002.15 1274.11,1002.1 1275.95,1002.05 1277.8,1002 1279.64,1001.95 1281.48,1001.9 1283.33,1001.84 1285.17,1001.78 1287.01,1001.72 1288.85,1001.65 \\n 1290.69,1001.59 1292.53,1001.52 1294.37,1001.45 1296.21,1001.38 1298.05,1001.3 1299.89,1001.22 1301.73,1001.14 1303.56,1001.06 1305.4,1000.98 1307.23,1000.89 \\n 1309.07,1000.8 1310.9,1000.71 1312.73,1000.62 1314.57,1000.53 1316.4,1000.43 1318.23,1000.33 1320.06,1000.23 1321.89,1000.12 1323.72,1000.02 1325.54,999.909 \\n 1327.37,999.799 1329.2,999.686 1331.02,999.571 1332.84,999.454 1334.67,999.334 1336.49,999.213 1338.31,999.089 1340.13,998.963 1341.95,998.835 1343.77,998.704 \\n 1345.58,998.571 1347.4,998.437 1349.21,998.3 1351.03,998.16 1352.84,998.019 1354.65,997.875 1356.46,997.729 1358.27,997.581 1360.07,997.431 1361.88,997.278 \\n 1363.69,997.124 1365.49,996.967 1367.29,996.808 1369.09,996.647 1370.89,996.483 1372.69,996.318 1374.49,996.15 1376.28,995.98 1378.08,995.808 1379.87,995.634 \\n 1381.66,995.457 1383.45,995.279 1385.24,995.098 1387.03,994.915 1388.81,994.73 1390.6,994.543 1392.38,994.353 1394.16,994.161 1395.94,993.968 1397.72,993.772 \\n 1399.49,993.574 1401.27,993.373 1403.04,993.171 1404.81,992.967 1406.58,992.76 1408.35,992.551 1410.12,992.34 1411.88,992.127 1413.64,991.912 1415.41,991.694 \\n 1417.16,991.475 1418.92,991.253 1420.68,991.029 1422.43,990.803 1424.18,990.575 1425.93,990.345 1427.68,990.113 1429.43,989.878 1431.17,989.642 1432.91,989.403 \\n 1434.66,989.162 1436.39,988.919 1438.13,988.675 1439.86,988.427 1441.6,988.178 1443.33,987.927 1445.06,987.674 1446.78,987.418 1448.51,987.16 1450.23,986.901 \\n 1451.95,986.639 1453.67,986.375 1455.38,986.109 1457.1,985.841 1458.81,985.571 1460.52,985.299 1462.22,985.025 1463.93,984.749 1465.63,984.47 1467.33,984.19 \\n 1469.03,983.907 1470.72,983.623 1472.42,983.336 1474.11,983.047 1475.8,982.757 1477.48,982.464 1479.17,982.169 1480.85,981.872 1482.53,981.573 1484.2,981.273 \\n 1485.88,980.97 1487.55,980.665 1489.22,980.358 1490.89,980.049 1492.55,979.738 1494.21,979.425 1495.87,979.109 1497.53,978.792 1499.18,978.473 1500.83,978.152 \\n 1502.48,977.829 1504.13,977.504 1505.77,977.177 1507.41,976.848 1509.05,976.517 1510.68,976.184 1512.32,975.849 1513.95,975.512 1515.57,975.173 1517.2,974.832 \\n 1518.82,974.489 1520.44,974.144 1522.05,973.797 1523.67,973.448 1525.28,973.097 1526.88,972.745 1528.49,972.39 1530.09,972.033 1531.69,971.675 1533.28,971.314 \\n 1534.88,970.952 1536.47,970.588 1538.05,970.221 1539.64,969.853 1541.22,969.483 1542.8,969.111 1544.37,968.737 1545.94,968.361 1547.51,967.983 1549.08,967.604 \\n 1550.64,967.222 1552.2,966.839 1553.76,966.453 1555.31,966.066 1556.86,965.677 1558.41,965.286 1559.95,964.893 1561.49,964.498 1563.03,964.102 1564.56,963.703 \\n 1566.09,963.303 1567.62,962.901 1569.14,962.497 1570.67,962.091 1572.18,961.683 1573.7,961.273 1575.21,960.862 1576.72,960.449 1578.22,960.034 1579.72,959.617 \\n 1581.22,959.198 1582.71,958.778 1584.2,958.355 1585.69,957.931 1587.17,957.505 1588.65,957.077 1590.13,956.648 1591.6,956.217 1593.07,955.783 1594.54,955.349 \\n 1596,954.912 1597.46,954.473 1598.92,954.033 1600.37,953.591 1601.82,953.148 1603.26,952.702 1604.7,952.255 1606.14,951.806 
1607.57,951.355 1609,950.903 \\n 1610.43,950.449 1611.85,949.993 1613.27,949.535 1614.69,949.076 1616.1,948.615 1617.51,948.152 1618.91,947.687 1620.31,947.221 1621.71,946.753 1623.1,946.284 \\n 1624.49,945.812 1625.87,945.339 1627.25,944.865 1628.63,944.389 1630,943.911 1631.37,943.431 1632.74,942.95 1634.1,942.467 1635.46,941.982 1636.81,941.496 \\n 1638.16,941.008 1639.51,940.519 1640.85,940.027 1642.19,939.535 1643.52,939.04 1644.85,938.544 1646.18,938.047 1647.5,937.547 1648.82,937.047 1650.13,936.544 \\n 1651.44,936.04 1652.74,935.535 1654.04,935.028 1655.34,934.519 1656.63,934.008 1657.92,933.497 1659.21,932.983 1660.49,932.468 1661.76,931.952 1663.04,931.433 \\n 1664.3,930.914 1665.57,930.393 1666.83,929.87 1668.08,929.346 1669.33,928.82 1670.58,928.293 1671.82,927.764 1673.06,927.234 1674.29,926.702 1675.52,926.169 \\n 1676.75,925.634 1677.97,925.098 1679.18,924.56 1680.39,924.021 1681.6,923.48 1682.8,922.938 1684,922.394 1685.2,921.849 1686.39,921.303 1687.57,920.755 \\n 1688.75,920.206 1689.93,919.655 1691.1,919.103 1692.27,918.549 1693.43,917.994 1694.59,917.438 1695.74,916.88 1696.89,916.321 1698.04,915.76 1699.18,915.198 \\n 1700.31,914.635 1701.44,914.07 1702.57,913.504 1703.69,912.936 1704.81,912.367 1705.92,911.797 1707.03,911.226 1708.13,910.653 1709.23,910.078 1710.32,909.503 \\n 1711.41,908.926 1712.5,908.348 1713.58,907.768 1714.65,907.187 1715.72,906.605 1716.79,906.022 1717.85,905.437 1718.9,904.851 1719.95,904.264 1721,903.675 \\n 1722.04,903.086 1723.08,902.494 1724.11,901.902 1725.14,901.309 1726.16,900.714 1727.18,900.118 1728.19,899.52 1729.2,898.922 1730.2,898.322 1731.2,897.721 \\n 1732.19,897.119 1733.18,896.516 1734.16,895.911 1735.14,895.305 1736.11,894.699 1737.08,894.09 1738.05,893.481 1739.01,892.871 1739.96,892.259 1740.91,891.646 \\n 1741.85,891.032 1742.79,890.417 1743.72,889.801 1744.65,889.184 1745.58,888.566 1746.49,887.946 1747.41,887.325 1748.32,886.704 1749.22,886.081 1750.12,885.457 \\n 1751.01,884.832 1751.9,884.206 1752.78,883.578 1753.66,882.95 1754.53,882.321 1755.4,881.69 1756.26,881.059 1757.12,880.426 1757.97,879.793 1758.82,879.158 \\n 1759.66,878.523 1760.5,877.886 1761.33,877.249 1762.16,876.61 1762.98,875.97 1763.79,875.33 1764.6,874.688 1765.41,874.046 1766.21,873.402 1767.01,872.758 \\n 1767.79,872.112 1768.58,871.466 1769.36,870.818 1770.13,870.17 1770.9,869.521 1771.66,868.871 1772.42,868.22 1773.18,867.568 1773.92,866.915 1774.67,866.261 \\n 1775.4,865.606 1776.13,864.951 1776.86,864.294 1777.58,863.637 1778.3,862.978 1779.01,862.319 1779.71,861.659 1780.41,860.998 1781.11,860.337 1781.79,859.674 \\n 1782.48,859.011 1783.16,858.347 1783.83,857.682 1784.49,857.016 1785.16,856.349 1785.81,855.682 1786.46,855.014 1787.11,854.345 1787.75,853.675 1788.38,853.005 \\n 1789.01,852.333 1789.64,851.661 1790.25,850.988 1790.87,850.315 1791.47,849.64 1792.08,848.965 1792.67,848.29 1793.26,847.613 1793.85,846.936 1794.43,846.258 \\n 1795,845.58 1795.57,844.9 1796.13,844.22 1796.69,843.54 1797.24,842.858 1797.79,842.176 1798.33,841.494 1798.86,840.81 1799.39,840.126 1799.92,839.442 \\n 1800.44,838.757 1800.95,838.071 1801.46,837.384 1801.96,836.697 1802.46,836.009 1802.95,835.321 1803.43,834.632 1803.91,833.943 1804.38,833.253 1804.85,832.562 \\n 1805.31,831.871 1805.77,831.179 1806.22,830.487 1806.67,829.794 1807.11,829.101 1807.54,828.407 1807.97,827.712 1808.4,827.017 1808.81,826.322 1809.22,825.626 \\n 1809.63,824.93 1810.03,824.233 1810.43,823.535 1810.82,822.837 1811.2,822.139 1811.58,821.44 1811.95,820.741 1812.31,820.041 
1812.68,819.341 1813.03,818.64 \\n 1813.38,817.939 1813.72,817.238 1814.06,816.536 1814.39,815.834 1814.72,815.131 1815.04,814.428 1815.36,813.725 1815.66,813.021 1815.97,812.317 1816.27,811.612 \\n 1816.56,810.907 1816.84,810.202 1817.13,809.496 1817.4,808.79 1817.67,808.084 1817.93,807.377 1818.19,806.67 1818.44,805.963 1818.69,805.256 1818.93,804.548 \\n 1819.16,803.84 1819.39,803.131 1819.62,802.423 1819.83,801.714 1820.04,801.005 1820.25,800.295 1820.45,799.585 1820.64,798.875 1820.83,798.165 1821.01,797.455 \\n 1821.19,796.744 1821.36,796.033 1821.53,795.322 1821.69,794.611 1821.84,793.9 1821.99,793.188 1822.13,792.476 1822.27,791.765 1822.4,791.052 1822.52,790.34 \\n 1822.64,789.628 1822.76,788.915 1822.86,788.203 1822.96,787.49 1823.06,786.777 1823.15,786.064 1823.23,785.351 1823.31,784.637 1823.39,783.924 1823.45,783.211 \\n 1823.51,782.497 1823.57,781.784 1823.62,781.07 1823.66,780.356 1823.7,779.642 1823.73,778.929 1823.76,778.215 1823.78,777.501 1823.79,776.787 1823.8,776.073 \\n 1823.8,775.359 \\n \\&quot;/&gt;\\n&lt;polyline clip-path=\\&quot;url(#clip8203)\\&quot; style=\\&quot;stroke:#808080; stroke-width:4; stroke-opacity:1; fill:none\\&quot; points=\\&quot;\\n 1529.56,775.359 1529.56,773.218 1529.55,771.076 1529.54,768.935 1529.53,766.793 1529.52,764.652 1529.5,762.51 1529.49,760.369 1529.46,758.228 1529.44,756.087 \\n 1529.41,753.946 1529.38,751.806 1529.35,749.666 1529.31,747.526 1529.27,745.386 1529.23,743.246 1529.19,741.107 1529.14,738.969 1529.09,736.83 1529.03,734.692 \\n 1528.98,732.554 1528.92,730.417 1528.85,728.28 1528.79,726.144 1528.72,724.008 1528.65,721.873 1528.58,719.738 1528.5,717.604 1528.42,715.47 1528.34,713.337 \\n 1528.25,711.205 1528.16,709.073 1528.07,706.942 1527.98,704.811 1527.88,702.681 1527.78,700.552 1527.68,698.424 1527.57,696.296 1527.46,694.17 1527.35,692.044 \\n 1527.24,689.918 1527.12,687.794 1527,685.671 1526.88,683.548 1526.75,681.426 1526.62,679.305 1526.49,677.186 1526.36,675.067 1526.22,672.949 1526.08,670.832 \\n 1525.93,668.716 1525.79,666.602 1525.64,664.488 1525.49,662.375 1525.33,660.264 1525.18,658.153 1525.02,656.044 1524.85,653.936 1524.69,651.829 1524.52,649.724 \\n 1524.34,647.619 1524.17,645.516 1523.99,643.415 1523.81,641.314 1523.63,639.215 1523.44,637.117 1523.25,635.021 1523.06,632.925 1522.87,630.832 1522.67,628.74 \\n 1522.47,626.649 1522.27,624.559 1522.06,622.472 1521.85,620.385 1521.64,618.3 1521.43,616.217 1521.21,614.135 1520.99,612.055 1520.77,609.977 1520.54,607.9 \\n 1520.31,605.825 1520.08,603.751 1519.85,601.679 1519.61,599.609 1519.37,597.541 1519.13,595.474 1518.88,593.409 1518.63,591.346 1518.38,589.285 1518.13,587.226 \\n 1517.87,585.168 1517.61,583.112 1517.35,581.059 1517.09,579.007 1516.82,576.957 1516.55,574.909 1516.28,572.863 1516,570.819 1515.72,568.777 1515.44,566.737 \\n 1515.16,564.699 1514.87,562.663 1514.58,560.63 1514.29,558.598 1513.99,556.569 1513.69,554.541 1513.39,552.516 1513.09,550.493 1512.78,548.473 1512.47,546.454 \\n 1512.16,544.438 1511.85,542.424 1511.53,540.413 1511.21,538.403 1510.89,536.396 1510.56,534.392 1510.23,532.389 1509.9,530.39 1509.57,528.392 1509.23,526.397 \\n 1508.89,524.405 1508.55,522.415 1508.21,520.427 1507.86,518.442 1507.51,516.46 1507.16,514.48 1506.8,512.503 1506.45,510.528 1506.09,508.556 1505.72,506.586 \\n 1505.36,504.619 1504.99,502.655 1504.62,500.694 1504.24,498.735 1503.87,496.779 1503.49,494.826 1503.11,492.875 1502.72,490.927 1502.33,488.982 1501.94,487.04 \\n 1501.55,485.101 1501.16,483.165 1500.76,481.231 1500.36,479.3 1499.96,477.373 
1499.55,475.448 1499.14,473.526 1498.73,471.607 1498.32,469.692 1497.9,467.779 \\n 1497.49,465.869 1497.06,463.962 1496.64,462.059 1496.22,460.158 1495.79,458.261 1495.36,456.366 1494.92,454.475 1494.49,452.587 1494.05,450.703 1493.6,448.821 \\n 1493.16,446.943 1492.71,445.067 1492.26,443.196 1491.81,441.327 1491.36,439.462 1490.9,437.6 1490.44,435.741 1489.98,433.886 1489.52,432.034 1489.05,430.185 \\n 1488.58,428.34 1488.11,426.499 1487.63,424.66 1487.16,422.825 1486.68,420.994 1486.2,419.166 1485.71,417.342 1485.23,415.521 1484.74,413.704 1484.24,411.89 \\n 1483.75,410.08 1483.25,408.274 1482.76,406.471 1482.25,404.672 1481.75,402.876 1481.24,401.084 1480.74,399.296 1480.22,397.512 1479.71,395.731 1479.19,393.954 \\n 1478.68,392.181 1478.16,390.412 1477.63,388.646 1477.11,386.884 1476.58,385.126 1476.05,383.372 1475.52,381.622 1474.98,379.875 1474.44,378.133 1473.9,376.394 \\n 1473.36,374.66 1472.82,372.929 1472.27,371.202 1471.72,369.48 1471.17,367.761 1470.61,366.046 1470.06,364.336 1469.5,362.629 1468.94,360.927 1468.38,359.228 \\n 1467.81,357.534 1467.24,355.844 1466.67,354.158 1466.1,352.476 1465.53,350.798 1464.95,349.125 1464.37,347.455 1463.79,345.79 1463.21,344.129 1462.62,342.473 \\n 1462.03,340.82 1461.44,339.172 1460.85,337.529 1460.25,335.889 1459.66,334.254 1459.06,332.623 1458.46,330.997 1457.85,329.375 1457.25,327.758 1456.64,326.144 \\n 1456.03,324.536 1455.42,322.932 1454.8,321.332 1454.18,319.736 1453.57,318.146 1452.94,316.559 1452.32,314.978 1451.7,313.4 1451.07,311.828 1450.44,310.26 \\n 1449.81,308.696 1449.17,307.137 1448.54,305.583 1447.9,304.033 1447.26,302.488 1446.62,300.948 1445.97,299.412 1445.33,297.881 1444.68,296.355 1444.03,294.834 \\n 1443.37,293.317 1442.72,291.805 1442.06,290.298 1441.4,288.795 1440.74,287.298 1440.08,285.805 1439.42,284.317 1438.75,282.834 1438.08,281.355 1437.41,279.882 \\n 1436.74,278.413 1436.06,276.95 1435.38,275.491 1434.71,274.037 1434.02,272.589 1433.34,271.145 1432.66,269.706 1431.97,268.272 1431.28,266.843 1430.59,265.419 \\n 1429.9,264 1429.2,262.587 1428.51,261.178 1427.81,259.774 1427.11,258.376 1426.41,256.982 1425.7,255.594 1425,254.211 1424.29,252.833 1423.58,251.46 \\n 1422.87,250.092 1422.16,248.729 1421.44,247.372 1420.73,246.02 1420.01,244.673 1419.29,243.331 1418.56,241.995 1417.84,240.664 1417.11,239.338 1416.39,238.017 \\n 1415.66,236.702 1414.92,235.392 1414.19,234.087 1413.46,232.788 1412.72,231.494 1411.98,230.206 1411.24,228.922 1410.5,227.645 1409.76,226.372 1409.01,225.105 \\n 1408.26,223.844 1407.52,222.587 1406.76,221.337 1406.01,220.092 1405.26,218.852 1404.5,217.618 1403.75,216.389 1402.99,215.166 1402.23,213.948 1401.47,212.736 \\n 1400.7,211.529 1399.94,210.328 1399.17,209.133 1398.4,207.943 1397.63,206.759 1396.86,205.58 1396.08,204.407 1395.31,203.24 1394.53,202.078 1393.76,200.922 \\n 1392.98,199.772 1392.19,198.627 1391.41,197.488 1390.63,196.355 1389.84,195.227 1389.05,194.105 1388.26,192.989 1387.47,191.879 1386.68,190.774 1385.89,189.675 \\n 1385.09,188.582 1384.3,187.495 1383.5,186.413 1382.7,185.338 1381.9,184.268 1381.1,183.204 1380.29,182.145 1379.49,181.093 1378.68,180.047 1377.87,179.006 \\n 1377.06,177.971 1376.25,176.943 1375.44,175.92 1374.63,174.903 1373.81,173.892 1373,172.887 1372.18,171.887 1371.36,170.894 1370.54,169.907 1369.72,168.926 \\n 1368.9,167.95 1368.07,166.981 1367.25,166.018 1366.42,165.061 1365.59,164.109 1364.76,163.164 1363.93,162.225 1363.1,161.292 1362.26,160.365 1361.43,159.444 \\n 1360.59,158.529 1359.76,157.62 1358.92,156.717 1358.08,155.821 1357.24,154.93 
1356.4,154.046 1355.55,153.167 1354.71,152.295 1353.86,151.429 1353.02,150.57 \\n 1352.17,149.716 1351.32,148.868 1350.47,148.027 1349.62,147.192 1348.77,146.363 1347.91,145.54 1347.06,144.724 1346.2,143.914 1345.35,143.11 1344.49,142.312 \\n 1343.63,141.52 1342.77,140.735 1341.91,139.956 1341.05,139.183 1340.18,138.417 1339.32,137.657 1338.45,136.903 1337.59,136.155 1336.72,135.414 1335.85,134.679 \\n 1334.98,133.951 1334.11,133.228 1333.24,132.512 1332.37,131.803 1331.5,131.099 1330.62,130.403 1329.75,129.712 1328.87,129.028 1327.99,128.35 1327.12,127.679 \\n 1326.24,127.014 1325.36,126.355 1324.48,125.703 1323.6,125.057 1322.71,124.418 1321.83,123.785 1320.95,123.158 1320.06,122.538 1319.18,121.924 1318.29,121.317 \\n 1317.4,120.717 1316.51,120.122 1315.63,119.534 1314.74,118.953 1313.85,118.378 1312.95,117.81 1312.06,117.248 1311.17,116.693 1310.28,116.144 1309.38,115.601 \\n 1308.49,115.066 1307.59,114.536 1306.69,114.013 1305.8,113.497 1304.9,112.987 1304,112.484 1303.1,111.987 1302.2,111.497 1301.3,111.014 1300.4,110.537 \\n 1299.5,110.066 1298.6,109.602 1297.69,109.145 1296.79,108.694 1295.88,108.25 1294.98,107.812 1294.07,107.381 1293.17,106.957 1292.26,106.539 1291.35,106.128 \\n 1290.45,105.723 1289.54,105.325 1288.63,104.934 1287.72,104.549 1286.81,104.171 1285.9,103.799 1284.99,103.435 1284.08,103.076 1283.17,102.725 1282.25,102.38 \\n 1281.34,102.041 1280.43,101.71 1279.51,101.385 1278.6,101.066 1277.68,100.754 1276.77,100.449 1275.85,100.151 1274.94,99.8591 1274.02,99.574 1273.11,99.2956 \\n 1272.19,99.0238 1271.27,98.7587 1270.35,98.5003 1269.44,98.2486 1268.52,98.0036 1267.6,97.7652 1266.68,97.5335 1265.76,97.3086 1264.84,97.0903 1263.92,96.8787 \\n 1263,96.6738 1262.08,96.4756 1261.16,96.2841 1260.24,96.0993 1259.32,95.9212 1258.4,95.7498 1257.47,95.5851 1256.55,95.4272 1255.63,95.2759 1254.71,95.1314 \\n 1253.79,94.9935 1252.86,94.8624 1251.94,94.738 1251.02,94.6203 1250.09,94.5094 1249.17,94.4051 1248.25,94.3076 1247.32,94.2168 1246.4,94.1327 1245.48,94.0553 \\n 1244.55,93.9847 1243.63,93.9208 1242.7,93.8636 1241.78,93.8132 1240.86,93.7694 1239.93,93.7324 1239.01,93.7022 1238.08,93.6786 1237.16,93.6618 1236.23,93.6517 \\n 1235.31,93.6483 1234.39,93.6517 1233.46,93.6618 1232.54,93.6786 1231.61,93.7022 1230.69,93.7324 1229.76,93.7694 1228.84,93.8132 1227.92,93.8636 1226.99,93.9208 \\n 1226.07,93.9847 1225.14,94.0553 1224.22,94.1327 1223.3,94.2168 1222.37,94.3076 1221.45,94.4051 1220.53,94.5094 1219.6,94.6203 1218.68,94.738 1217.76,94.8624 \\n 1216.83,94.9935 1215.91,95.1314 1214.99,95.2759 1214.07,95.4272 1213.15,95.5851 1212.22,95.7498 1211.3,95.9212 1210.38,96.0993 1209.46,96.2841 1208.54,96.4756 \\n 1207.62,96.6738 1206.7,96.8787 1205.78,97.0903 1204.86,97.3086 1203.94,97.5335 1203.02,97.7652 1202.1,98.0036 1201.18,98.2486 1200.27,98.5003 1199.35,98.7587 \\n 1198.43,99.0238 1197.51,99.2956 1196.6,99.574 1195.68,99.8591 1194.77,100.151 1193.85,100.449 1192.94,100.754 1192.02,101.066 1191.11,101.385 1190.19,101.71 \\n 1189.28,102.041 1188.37,102.38 1187.45,102.725 1186.54,103.076 1185.63,103.435 1184.72,103.799 1183.81,104.171 1182.9,104.549 1181.99,104.934 1181.08,105.325 \\n 1180.17,105.723 1179.27,106.128 1178.36,106.539 1177.45,106.957 1176.55,107.381 1175.64,107.812 1174.74,108.25 1173.83,108.694 1172.93,109.145 1172.02,109.602 \\n 1171.12,110.066 1170.22,110.537 1169.32,111.014 1168.42,111.497 1167.52,111.987 1166.62,112.484 1165.72,112.987 1164.82,113.497 1163.93,114.013 1163.03,114.536 \\n 1162.13,115.066 1161.24,115.601 1160.34,116.144 1159.45,116.693 
1158.56,117.248 1157.67,117.81 1156.78,118.378 1155.88,118.953 1155,119.534 1154.11,120.122 \\n 1153.22,120.717 1152.33,121.317 1151.44,121.924 1150.56,122.538 1149.67,123.158 1148.79,123.785 1147.91,124.418 1147.02,125.057 1146.14,125.703 1145.26,126.355 \\n 1144.38,127.014 1143.5,127.679 1142.63,128.35 1141.75,129.028 1140.87,129.712 1140,130.403 1139.12,131.099 1138.25,131.803 1137.38,132.512 1136.51,133.228 \\n 1135.64,133.951 1134.77,134.679 1133.9,135.414 1133.03,136.155 1132.17,136.903 1131.3,137.657 1130.44,138.417 1129.57,139.183 1128.71,139.956 1127.85,140.735 \\n 1126.99,141.52 1126.13,142.312 1125.27,143.11 1124.42,143.914 1123.56,144.724 1122.71,145.54 1121.85,146.363 1121,147.192 1120.15,148.027 1119.3,148.868 \\n 1118.45,149.716 1117.6,150.57 1116.76,151.429 1115.91,152.295 1115.07,153.167 1114.22,154.046 1113.38,154.93 1112.54,155.821 1111.7,156.717 1110.86,157.62 \\n 1110.03,158.529 1109.19,159.444 1108.36,160.365 1107.52,161.292 1106.69,162.225 1105.86,163.164 1105.03,164.109 1104.2,165.061 1103.37,166.018 1102.55,166.981 \\n 1101.73,167.95 1100.9,168.926 1100.08,169.907 1099.26,170.894 1098.44,171.887 1097.62,172.887 1096.81,173.892 1095.99,174.903 1095.18,175.92 1094.37,176.943 \\n 1093.56,177.971 1092.75,179.006 1091.94,180.047 1091.13,181.093 1090.33,182.145 1089.52,183.204 1088.72,184.268 1087.92,185.338 1087.12,186.413 1086.32,187.495 \\n 1085.53,188.582 1084.73,189.675 1083.94,190.774 1083.15,191.879 1082.36,192.989 1081.57,194.105 1080.78,195.227 1079.99,196.355 1079.21,197.488 1078.43,198.627 \\n 1077.64,199.772 1076.87,200.922 1076.09,202.078 1075.31,203.24 1074.54,204.407 1073.76,205.58 1072.99,206.759 1072.22,207.943 1071.45,209.133 1070.68,210.328 \\n 1069.92,211.529 1069.16,212.736 1068.39,213.948 1067.63,215.166 1066.87,216.389 1066.12,217.618 1065.36,218.852 1064.61,220.092 1063.86,221.337 1063.11,222.587 \\n 1062.36,223.844 1061.61,225.105 1060.86,226.372 1060.12,227.645 1059.38,228.922 1058.64,230.206 1057.9,231.494 1057.16,232.788 1056.43,234.087 1055.7,235.392 \\n 1054.96,236.702 1054.23,238.017 1053.51,239.338 1052.78,240.664 1052.06,241.995 1051.33,243.331 1050.61,244.673 1049.9,246.02 1049.18,247.372 1048.46,248.729 \\n 1047.75,250.092 1047.04,251.46 1046.33,252.833 1045.62,254.211 1044.92,255.594 1044.21,256.982 1043.51,258.376 1042.81,259.774 1042.11,261.178 1041.42,262.587 \\n 1040.72,264 1040.03,265.419 1039.34,266.843 1038.65,268.272 1037.96,269.706 1037.28,271.145 1036.6,272.589 1035.92,274.037 1035.24,275.491 1034.56,276.95 \\n 1033.88,278.413 1033.21,279.882 1032.54,281.355 1031.87,282.834 1031.21,284.317 1030.54,285.805 1029.88,287.298 1029.22,288.795 1028.56,290.298 1027.9,291.805 \\n 1027.25,293.317 1026.59,294.834 1025.94,296.355 1025.29,297.881 1024.65,299.412 1024,300.948 1023.36,302.488 1022.72,304.033 1022.08,305.583 1021.45,307.137 \\n 1020.81,308.696 1020.18,310.26 1019.55,311.828 1018.92,313.4 1018.3,314.978 1017.68,316.559 1017.06,318.146 1016.44,319.736 1015.82,321.332 1015.2,322.932 \\n 1014.59,324.536 1013.98,326.144 1013.37,327.758 1012.77,329.375 1012.16,330.997 1011.56,332.623 1010.96,334.254 1010.37,335.889 1009.77,337.529 1009.18,339.172 \\n 1008.59,340.82 1008,342.473 1007.42,344.129 1006.83,345.79 1006.25,347.455 1005.67,349.125 1005.09,350.798 1004.52,352.476 1003.95,354.158 1003.38,355.844 \\n 1002.81,357.534 1002.24,359.228 1001.68,360.927 1001.12,362.629 1000.56,364.336 1000.01,366.046 999.452,367.761 998.9,369.48 998.351,371.202 997.804,372.929 \\n 997.26,374.66 996.717,376.394 996.178,378.133 995.64,379.875 
995.105,381.622 994.572,383.372 994.042,385.126 993.514,386.884 992.989,388.646 992.465,390.412 \\n 991.945,392.181 991.426,393.954 990.91,395.731 990.397,397.512 989.885,399.296 989.377,401.084 988.87,402.876 988.367,404.672 987.865,406.471 987.366,408.274 \\n 986.87,410.08 986.375,411.89 985.884,413.704 985.395,415.521 984.908,417.342 984.424,419.166 983.942,420.994 983.463,422.825 982.986,424.66 982.512,426.499 \\n 982.04,428.34 981.57,430.185 981.104,432.034 980.639,433.886 980.178,435.741 979.718,437.6 979.262,439.462 978.807,441.327 978.356,443.196 977.907,445.067 \\n 977.46,446.943 977.016,448.821 976.574,450.703 976.135,452.587 975.699,454.475 975.265,456.366 974.834,458.261 974.405,460.158 973.979,462.059 973.555,463.962 \\n 973.135,465.869 972.716,467.779 972.3,469.692 971.887,471.607 971.477,473.526 971.069,475.448 970.663,477.373 970.26,479.3 969.86,481.231 969.463,483.165 \\n 969.068,485.101 968.676,487.04 968.286,488.982 967.899,490.927 967.515,492.875 967.133,494.826 966.754,496.779 966.377,498.735 966.004,500.694 965.632,502.655 \\n 965.264,504.619 964.898,506.586 964.535,508.556 964.175,510.528 963.817,512.503 963.462,514.48 963.109,516.46 962.76,518.442 962.413,520.427 962.068,522.415 \\n 961.727,524.405 961.388,526.397 961.051,528.392 960.718,530.39 960.387,532.389 960.059,534.392 959.734,536.396 959.411,538.403 959.091,540.413 958.774,542.424 \\n 958.459,544.438 958.147,546.454 957.838,548.473 957.532,550.493 957.229,552.516 956.928,554.541 956.63,556.569 956.334,558.598 956.042,560.63 955.752,562.663 \\n 955.465,564.699 955.181,566.737 954.899,568.777 954.62,570.819 954.345,572.863 954.071,574.909 953.801,576.957 953.533,579.007 953.268,581.059 953.006,583.112 \\n 952.747,585.168 952.491,587.226 952.237,589.285 951.986,591.346 951.738,593.409 951.492,595.474 951.25,597.541 951.01,599.609 950.773,601.679 950.539,603.751 \\n 950.308,605.825 950.079,607.9 949.854,609.977 949.631,612.055 949.411,614.135 949.194,616.217 948.979,618.3 948.768,620.385 948.559,622.472 948.353,624.559 \\n 948.15,626.649 947.95,628.74 947.752,630.832 947.558,632.925 947.366,635.021 947.177,637.117 946.991,639.215 946.808,641.314 946.628,643.415 946.45,645.516 \\n 946.275,647.619 946.104,649.724 945.935,651.829 945.769,653.936 945.605,656.044 945.445,658.153 945.288,660.264 945.133,662.375 944.981,664.488 944.832,666.602 \\n 944.686,668.716 944.543,670.832 944.403,672.949 944.265,675.067 944.131,677.186 943.999,679.305 943.87,681.426 943.744,683.548 943.621,685.671 943.501,687.794 \\n 943.384,689.918 943.269,692.044 943.158,694.17 943.049,696.296 942.943,698.424 942.841,700.552 942.741,702.681 942.643,704.811 942.549,706.942 942.458,709.073 \\n 942.369,711.205 942.284,713.337 942.201,715.47 942.121,717.604 942.045,719.738 941.971,721.873 941.9,724.008 941.831,726.144 941.766,728.28 941.704,730.417 \\n 941.644,732.554 941.588,734.692 941.534,736.83 941.483,738.969 941.435,741.107 941.39,743.246 941.348,745.386 941.309,747.526 941.273,749.666 941.239,751.806 \\n 941.209,753.946 941.181,756.087 941.156,758.228 941.135,760.369 941.116,762.51 941.1,764.652 941.087,766.793 941.077,768.935 941.069,771.076 941.065,773.218 \\n 941.064,775.359 941.065,777.501 941.069,779.643 941.077,781.784 941.087,783.926 941.1,786.067 941.116,788.209 941.135,790.35 941.156,792.491 941.181,794.632 \\n 941.209,796.772 941.239,798.913 941.273,801.053 941.309,803.193 941.348,805.333 941.39,807.472 941.435,809.612 941.483,811.75 941.534,813.889 941.588,816.027 \\n 941.644,818.164 941.704,820.302 941.766,822.438 
941.831,824.575 941.9,826.711 941.971,828.846 942.045,830.981 942.121,833.115 942.201,835.249 942.284,837.382 \\n 942.369,839.514 942.458,841.646 942.549,843.777 942.643,845.908 942.741,848.037 942.841,850.167 942.943,852.295 943.049,854.422 943.158,856.549 943.269,858.675 \\n 943.384,860.8 943.501,862.925 943.621,865.048 943.744,867.171 943.87,869.293 943.999,871.413 944.131,873.533 944.265,875.652 944.403,877.77 944.543,879.887 \\n 944.686,882.003 944.832,884.117 944.981,886.231 945.133,888.344 945.288,890.455 945.445,892.565 945.605,894.675 945.769,896.783 945.935,898.889 946.104,900.995 \\n 946.275,903.099 946.45,905.202 946.628,907.304 946.808,909.405 946.991,911.504 947.177,913.602 947.366,915.698 947.558,917.793 947.752,919.887 947.95,921.979 \\n 948.15,924.07 948.353,926.159 948.559,928.247 948.768,930.334 948.979,932.418 949.194,934.502 949.411,936.583 949.631,938.664 949.854,940.742 950.079,942.819 \\n 950.308,944.894 950.539,946.968 950.773,949.039 951.01,951.11 951.25,953.178 951.492,955.245 951.738,957.309 951.986,959.373 952.237,961.434 952.491,963.493 \\n 952.747,965.551 953.006,967.606 953.268,969.66 953.533,971.712 953.801,973.762 954.071,975.81 954.345,977.856 954.62,979.9 954.899,981.942 955.181,983.982 \\n 955.465,986.02 955.752,988.056 956.042,990.089 956.334,992.121 956.63,994.15 956.928,996.178 957.229,998.203 957.532,1000.23 957.838,1002.25 958.147,1004.26 \\n 958.459,1006.28 958.774,1008.29 959.091,1010.31 959.411,1012.32 959.734,1014.32 960.059,1016.33 960.387,1018.33 960.718,1020.33 961.051,1022.33 961.388,1024.32 \\n 961.727,1026.31 962.068,1028.3 962.413,1030.29 962.76,1032.28 963.109,1034.26 963.462,1036.24 963.817,1038.22 964.175,1040.19 964.535,1042.16 964.898,1044.13 \\n 965.264,1046.1 965.632,1048.06 966.004,1050.03 966.377,1051.98 966.754,1053.94 967.133,1055.89 967.515,1057.84 967.899,1059.79 968.286,1061.74 968.676,1063.68 \\n 969.068,1065.62 969.463,1067.55 969.86,1069.49 970.26,1071.42 970.663,1073.35 971.069,1075.27 971.477,1077.19 971.887,1079.11 972.3,1081.03 972.716,1082.94 \\n 973.135,1084.85 973.555,1086.76 973.979,1088.66 974.405,1090.56 974.834,1092.46 975.265,1094.35 975.699,1096.24 976.135,1098.13 976.574,1100.02 977.016,1101.9 \\n 977.46,1103.78 977.907,1105.65 978.356,1107.52 978.807,1109.39 979.262,1111.26 979.718,1113.12 980.178,1114.98 980.639,1116.83 981.104,1118.68 981.57,1120.53 \\n 982.04,1122.38 982.512,1124.22 982.986,1126.06 983.463,1127.89 983.942,1129.72 984.424,1131.55 984.908,1133.38 985.395,1135.2 985.884,1137.01 986.375,1138.83 \\n 986.87,1140.64 987.366,1142.44 987.865,1144.25 988.367,1146.05 988.87,1147.84 989.377,1149.63 989.885,1151.42 990.397,1153.21 990.91,1154.99 991.426,1156.76 \\n 991.945,1158.54 992.465,1160.31 992.989,1162.07 993.514,1163.83 994.042,1165.59 994.572,1167.35 995.105,1169.1 995.64,1170.84 996.178,1172.59 996.717,1174.32 \\n 997.26,1176.06 997.804,1177.79 998.351,1179.52 998.9,1181.24 999.452,1182.96 1000.01,1184.67 1000.56,1186.38 1001.12,1188.09 1001.68,1189.79 1002.24,1191.49 \\n 1002.81,1193.18 1003.38,1194.88 1003.95,1196.56 1004.52,1198.24 1005.09,1199.92 1005.67,1201.59 1006.25,1203.26 1006.83,1204.93 1007.42,1206.59 1008,1208.25 \\n 1008.59,1209.9 1009.18,1211.55 1009.77,1213.19 1010.37,1214.83 1010.96,1216.46 1011.56,1218.1 1012.16,1219.72 1012.77,1221.34 1013.37,1222.96 1013.98,1224.57 \\n 1014.59,1226.18 1015.2,1227.79 1015.82,1229.39 1016.44,1230.98 1017.06,1232.57 1017.68,1234.16 1018.3,1235.74 1018.92,1237.32 1019.55,1238.89 1020.18,1240.46 \\n 1020.81,1242.02 1021.45,1243.58 
1022.08,1245.14 1022.72,1246.69 1023.36,1248.23 1024,1249.77 1024.65,1251.31 1025.29,1252.84 1025.94,1254.36 1026.59,1255.89 \\n 1027.25,1257.4 1027.9,1258.91 1028.56,1260.42 1029.22,1261.92 1029.88,1263.42 1030.54,1264.91 1031.21,1266.4 1031.87,1267.89 1032.54,1269.36 1033.21,1270.84 \\n 1033.88,1272.31 1034.56,1273.77 1035.24,1275.23 1035.92,1276.68 1036.6,1278.13 1037.28,1279.57 1037.96,1281.01 1038.65,1282.45 1039.34,1283.88 1040.03,1285.3 \\n 1040.72,1286.72 1041.42,1288.13 1042.11,1289.54 1042.81,1290.94 1043.51,1292.34 1044.21,1293.74 1044.92,1295.12 1045.62,1296.51 1046.33,1297.89 1047.04,1299.26 \\n 1047.75,1300.63 1048.46,1301.99 1049.18,1303.35 1049.9,1304.7 1050.61,1306.05 1051.33,1307.39 1052.06,1308.72 1052.78,1310.06 1053.51,1311.38 1054.23,1312.7 \\n 1054.96,1314.02 1055.7,1315.33 1056.43,1316.63 1057.16,1317.93 1057.9,1319.22 1058.64,1320.51 1059.38,1321.8 1060.12,1323.07 1060.86,1324.35 1061.61,1325.61 \\n 1062.36,1326.88 1063.11,1328.13 1063.86,1329.38 1064.61,1330.63 1065.36,1331.87 1066.12,1333.1 1066.87,1334.33 1067.63,1335.55 1068.39,1336.77 1069.16,1337.98 \\n 1069.92,1339.19 1070.68,1340.39 1071.45,1341.59 1072.22,1342.78 1072.99,1343.96 1073.76,1345.14 1074.54,1346.31 1075.31,1347.48 1076.09,1348.64 1076.87,1349.8 \\n 1077.64,1350.95 1078.43,1352.09 1079.21,1353.23 1079.99,1354.36 1080.78,1355.49 1081.57,1356.61 1082.36,1357.73 1083.15,1358.84 1083.94,1359.94 1084.73,1361.04 \\n 1085.53,1362.14 1086.32,1363.22 1087.12,1364.31 1087.92,1365.38 1088.72,1366.45 1089.52,1367.52 1090.33,1368.57 1091.13,1369.63 1091.94,1370.67 1092.75,1371.71 \\n 1093.56,1372.75 1094.37,1373.78 1095.18,1374.8 1095.99,1375.82 1096.81,1376.83 1097.62,1377.83 1098.44,1378.83 1099.26,1379.82 1100.08,1380.81 1100.9,1381.79 \\n 1101.73,1382.77 1102.55,1383.74 1103.37,1384.7 1104.2,1385.66 1105.03,1386.61 1105.86,1387.55 1106.69,1388.49 1107.52,1389.43 1108.36,1390.35 1109.19,1391.28 \\n 1110.03,1392.19 1110.86,1393.1 1111.7,1394 1112.54,1394.9 1113.38,1395.79 1114.22,1396.67 1115.07,1397.55 1115.91,1398.42 1116.76,1399.29 1117.6,1400.15 \\n 1118.45,1401 1119.3,1401.85 1120.15,1402.69 1121,1403.53 1121.85,1404.36 1122.71,1405.18 1123.56,1405.99 1124.42,1406.81 1125.27,1407.61 1126.13,1408.41 \\n 1126.99,1409.2 1127.85,1409.98 1128.71,1410.76 1129.57,1411.54 1130.44,1412.3 1131.3,1413.06 1132.17,1413.82 1133.03,1414.56 1133.9,1415.3 1134.77,1416.04 \\n 1135.64,1416.77 1136.51,1417.49 1137.38,1418.21 1138.25,1418.92 1139.12,1419.62 1140,1420.32 1140.87,1421.01 1141.75,1421.69 1142.63,1422.37 1143.5,1423.04 \\n 1144.38,1423.71 1145.26,1424.36 1146.14,1425.02 1147.02,1425.66 1147.91,1426.3 1148.79,1426.93 1149.67,1427.56 1150.56,1428.18 1151.44,1428.79 1152.33,1429.4 \\n 1153.22,1430 1154.11,1430.6 1155,1431.18 1155.88,1431.77 1156.78,1432.34 1157.67,1432.91 1158.56,1433.47 1159.45,1434.03 1160.34,1434.58 1161.24,1435.12 \\n 1162.13,1435.65 1163.03,1436.18 1163.93,1436.71 1164.82,1437.22 1165.72,1437.73 1166.62,1438.23 1167.52,1438.73 1168.42,1439.22 1169.32,1439.71 1170.22,1440.18 \\n 1171.12,1440.65 1172.02,1441.12 1172.93,1441.57 1173.83,1442.02 1174.74,1442.47 1175.64,1442.91 1176.55,1443.34 1177.45,1443.76 1178.36,1444.18 1179.27,1444.59 \\n 1180.17,1445 1181.08,1445.39 1181.99,1445.78 1182.9,1446.17 1183.81,1446.55 1184.72,1446.92 1185.63,1447.28 1186.54,1447.64 1187.45,1447.99 1188.37,1448.34 \\n 1189.28,1448.68 1190.19,1449.01 1191.11,1449.33 1192.02,1449.65 1192.94,1449.96 1193.85,1450.27 1194.77,1450.57 1195.68,1450.86 1196.6,1451.14 1197.51,1451.42 \\n 1198.43,1451.69 1199.35,1451.96 
1200.27,1452.22 1201.18,1452.47 1202.1,1452.72 1203.02,1452.95 1203.94,1453.19 1204.86,1453.41 1205.78,1453.63 1206.7,1453.84 \\n 1207.62,1454.05 1208.54,1454.24 1209.46,1454.43 1210.38,1454.62 1211.3,1454.8 1212.22,1454.97 1213.15,1455.13 1214.07,1455.29 1214.99,1455.44 1215.91,1455.59 \\n 1216.83,1455.73 1217.76,1455.86 1218.68,1455.98 1219.6,1456.1 1220.53,1456.21 1221.45,1456.31 1222.37,1456.41 1223.3,1456.5 1224.22,1456.59 1225.14,1456.66 \\n 1226.07,1456.73 1226.99,1456.8 1227.92,1456.86 1228.84,1456.91 1229.76,1456.95 1230.69,1456.99 1231.61,1457.02 1232.54,1457.04 1233.46,1457.06 1234.39,1457.07 \\n 1235.31,1457.07 1236.23,1457.07 1237.16,1457.06 1238.08,1457.04 1239.01,1457.02 1239.93,1456.99 1240.86,1456.95 1241.78,1456.91 1242.7,1456.86 1243.63,1456.8 \\n 1244.55,1456.73 1245.48,1456.66 1246.4,1456.59 1247.32,1456.5 1248.25,1456.41 1249.17,1456.31 1250.09,1456.21 1251.02,1456.1 1251.94,1455.98 1252.86,1455.86 \\n 1253.79,1455.73 1254.71,1455.59 1255.63,1455.44 1256.55,1455.29 1257.47,1455.13 1258.4,1454.97 1259.32,1454.8 1260.24,1454.62 1261.16,1454.43 1262.08,1454.24 \\n 1263,1454.05 1263.92,1453.84 1264.84,1453.63 1265.76,1453.41 1266.68,1453.19 1267.6,1452.95 1268.52,1452.72 1269.44,1452.47 1270.35,1452.22 1271.27,1451.96 \\n 1272.19,1451.69 1273.11,1451.42 1274.02,1451.14 1274.94,1450.86 1275.85,1450.57 1276.77,1450.27 1277.68,1449.96 1278.6,1449.65 1279.51,1449.33 1280.43,1449.01 \\n 1281.34,1448.68 1282.25,1448.34 1283.17,1447.99 1284.08,1447.64 1284.99,1447.28 1285.9,1446.92 1286.81,1446.55 1287.72,1446.17 1288.63,1445.78 1289.54,1445.39 \\n 1290.45,1445 1291.35,1444.59 1292.26,1444.18 1293.17,1443.76 1294.07,1443.34 1294.98,1442.91 1295.88,1442.47 1296.79,1442.02 1297.69,1441.57 1298.6,1441.12 \\n 1299.5,1440.65 1300.4,1440.18 1301.3,1439.71 1302.2,1439.22 1303.1,1438.73 1304,1438.23 1304.9,1437.73 1305.8,1437.22 1306.69,1436.71 1307.59,1436.18 \\n 1308.49,1435.65 1309.38,1435.12 1310.28,1434.58 1311.17,1434.03 1312.06,1433.47 1312.95,1432.91 1313.85,1432.34 1314.74,1431.77 1315.63,1431.18 1316.51,1430.6 \\n 1317.4,1430 1318.29,1429.4 1319.18,1428.79 1320.06,1428.18 1320.95,1427.56 1321.83,1426.93 1322.71,1426.3 1323.6,1425.66 1324.48,1425.02 1325.36,1424.36 \\n 1326.24,1423.71 1327.12,1423.04 1327.99,1422.37 1328.87,1421.69 1329.75,1421.01 1330.62,1420.32 1331.5,1419.62 1332.37,1418.92 1333.24,1418.21 1334.11,1417.49 \\n 1334.98,1416.77 1335.85,1416.04 1336.72,1415.3 1337.59,1414.56 1338.45,1413.82 1339.32,1413.06 1340.18,1412.3 1341.05,1411.54 1341.91,1410.76 1342.77,1409.98 \\n 1343.63,1409.2 1344.49,1408.41 1345.35,1407.61 1346.2,1406.81 1347.06,1405.99 1347.91,1405.18 1348.77,1404.36 1349.62,1403.53 1350.47,1402.69 1351.32,1401.85 \\n 1352.17,1401 1353.02,1400.15 1353.86,1399.29 1354.71,1398.42 1355.55,1397.55 1356.4,1396.67 1357.24,1395.79 1358.08,1394.9 1358.92,1394 1359.76,1393.1 \\n 1360.59,1392.19 1361.43,1391.28 1362.26,1390.35 1363.1,1389.43 1363.93,1388.49 1364.76,1387.55 1365.59,1386.61 1366.42,1385.66 1367.25,1384.7 1368.07,1383.74 \\n 1368.9,1382.77 1369.72,1381.79 1370.54,1380.81 1371.36,1379.82 1372.18,1378.83 1373,1377.83 1373.81,1376.83 1374.63,1375.82 1375.44,1374.8 1376.25,1373.78 \\n 1377.06,1372.75 1377.87,1371.71 1378.68,1370.67 1379.49,1369.63 1380.29,1368.57 1381.1,1367.52 1381.9,1366.45 1382.7,1365.38 1383.5,1364.31 1384.3,1363.22 \\n 1385.09,1362.14 1385.89,1361.04 1386.68,1359.94 1387.47,1358.84 1388.26,1357.73 1389.05,1356.61 1389.84,1355.49 1390.63,1354.36 1391.41,1353.23 1392.19,1352.09 \\n 1392.98,1350.95 1393.76,1349.8 1394.53,1348.64 
1395.31,1347.48 1396.08,1346.31 1396.86,1345.14 1397.63,1343.96 1398.4,1342.78 1399.17,1341.59 1399.94,1340.39 \\n 1400.7,1339.19 1401.47,1337.98 1402.23,1336.77 1402.99,1335.55 1403.75,1334.33 1404.5,1333.1 1405.26,1331.87 1406.01,1330.63 1406.76,1329.38 1407.52,1328.13 \\n 1408.26,1326.88 1409.01,1325.61 1409.76,1324.35 1410.5,1323.07 1411.24,1321.8 1411.98,1320.51 1412.72,1319.22 1413.46,1317.93 1414.19,1316.63 1414.92,1315.33 \\n 1415.66,1314.02 1416.39,1312.7 1417.11,1311.38 1417.84,1310.06 1418.56,1308.72 1419.29,1307.39 1420.01,1306.05 1420.73,1304.7 1421.44,1303.35 1422.16,1301.99 \\n 1422.87,1300.63 1423.58,1299.26 1424.29,1297.89 1425,1296.51 1425.7,1295.12 1426.41,1293.74 1427.11,1292.34 1427.81,1290.94 1428.51,1289.54 1429.2,1288.13 \\n 1429.9,1286.72 1430.59,1285.3 1431.28,1283.88 1431.97,1282.45 1432.66,1281.01 1433.34,1279.57 1434.02,1278.13 1434.71,1276.68 1435.38,1275.23 1436.06,1273.77 \\n 1436.74,1272.31 1437.41,1270.84 1438.08,1269.36 1438.75,1267.89 1439.42,1266.4 1440.08,1264.91 1440.74,1263.42 1441.4,1261.92 1442.06,1260.42 1442.72,1258.91 \\n 1443.37,1257.4 1444.03,1255.89 1444.68,1254.36 1445.33,1252.84 1445.97,1251.31 1446.62,1249.77 1447.26,1248.23 1447.9,1246.69 1448.54,1245.14 1449.17,1243.58 \\n 1449.81,1242.02 1450.44,1240.46 1451.07,1238.89 1451.7,1237.32 1452.32,1235.74 1452.94,1234.16 1453.57,1232.57 1454.18,1230.98 1454.8,1229.39 1455.42,1227.79 \\n 1456.03,1226.18 1456.64,1224.57 1457.25,1222.96 1457.85,1221.34 1458.46,1219.72 1459.06,1218.1 1459.66,1216.46 1460.25,1214.83 1460.85,1213.19 1461.44,1211.55 \\n 1462.03,1209.9 1462.62,1208.25 1463.21,1206.59 1463.79,1204.93 1464.37,1203.26 1464.95,1201.59 1465.53,1199.92 1466.1,1198.24 1466.67,1196.56 1467.24,1194.88 \\n 1467.81,1193.18 1468.38,1191.49 1468.94,1189.79 1469.5,1188.09 1470.06,1186.38 1470.61,1184.67 1471.17,1182.96 1471.72,1181.24 1472.27,1179.52 1472.82,1177.79 \\n 1473.36,1176.06 1473.9,1174.32 1474.44,1172.59 1474.98,1170.84 1475.52,1169.1 1476.05,1167.35 1476.58,1165.59 1477.11,1163.83 1477.63,1162.07 1478.16,1160.31 \\n 1478.68,1158.54 1479.19,1156.76 1479.71,1154.99 1480.22,1153.21 1480.74,1151.42 1481.24,1149.63 1481.75,1147.84 1482.25,1146.05 1482.76,1144.25 1483.25,1142.44 \\n 1483.75,1140.64 1484.24,1138.83 1484.74,1137.01 1485.23,1135.2 1485.71,1133.38 1486.2,1131.55 1486.68,1129.72 1487.16,1127.89 1487.63,1126.06 1488.11,1124.22 \\n 1488.58,1122.38 1489.05,1120.53 1489.52,1118.68 1489.98,1116.83 1490.44,1114.98 1490.9,1113.12 1491.36,1111.26 1491.81,1109.39 1492.26,1107.52 1492.71,1105.65 \\n 1493.16,1103.78 1493.6,1101.9 1494.05,1100.02 1494.49,1098.13 1494.92,1096.24 1495.36,1094.35 1495.79,1092.46 1496.22,1090.56 1496.64,1088.66 1497.06,1086.76 \\n 1497.49,1084.85 1497.9,1082.94 1498.32,1081.03 1498.73,1079.11 1499.14,1077.19 1499.55,1075.27 1499.96,1073.35 1500.36,1071.42 1500.76,1069.49 1501.16,1067.55 \\n 1501.55,1065.62 1501.94,1063.68 1502.33,1061.74 1502.72,1059.79 1503.11,1057.84 1503.49,1055.89 1503.87,1053.94 1504.24,1051.98 1504.62,1050.03 1504.99,1048.06 \\n 1505.36,1046.1 1505.72,1044.13 1506.09,1042.16 1506.45,1040.19 1506.8,1038.22 1507.16,1036.24 1507.51,1034.26 1507.86,1032.28 1508.21,1030.29 1508.55,1028.3 \\n 1508.89,1026.31 1509.23,1024.32 1509.57,1022.33 1509.9,1020.33 1510.23,1018.33 1510.56,1016.33 1510.89,1014.32 1511.21,1012.32 1511.53,1010.31 1511.85,1008.29 \\n 1512.16,1006.28 1512.47,1004.26 1512.78,1002.25 1513.09,1000.23 1513.39,998.203 1513.69,996.178 1513.99,994.15 1514.29,992.121 1514.58,990.089 1514.87,988.056 \\n 1515.16,986.02 1515.44,983.982 
WARNING: both CSSUtil and Base export "empty"; uses of it in module Interact must be qualified
WARNING: both Interact and Plots export "hline"; uses of it in module Main must be qualified
WARNING: both Interact and Plots export "vline"; uses of it in module Main must be qualified
WARNING: both Interact and Plots export "wrap"; uses of it in module Main must be qualified
# Example 2: clustering points
See Section 2.3 of:
http://www.optimization-online.org/DB_FILE/2005/04/1114.pdf
Given 2-D data pairs $d_i$, $i=1,\ldots,N$, these points can be partitioned into $k$ clusters by solving the following SDP.
$\begin{align}
\text{minimize} \quad & \operatorname{trace}(W(\mathbf{I} - X)) \\
\text{subject to} \quad & \textstyle\sum_j X_{ij} = 1, \quad i = 1, \ldots, N \\
& \operatorname{trace}(X) = k \\
& X \succeq 0,
\end{align}$
where $W_{ij} = e^{-\|d_i - d_j\| / \sigma}$.
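Roughly speaking, when the data really do split into $k$ well-separated groups, the optimal $X$ is (up to solver tolerance) a normalized cluster-assignment matrix,
$$
X_{ij} \approx \begin{cases} 1 / |C_l|, & \text{if } d_i \text{ and } d_j \text{ belong to the same cluster } C_l, \\ 0, & \text{otherwise,} \end{cases}
$$
so two points lie in the same cluster exactly when $X_{ij} \approx X_{ii}$. This is the test used to read the clusters off the solution in the code below.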
```julia
""""
calculate_weight_matrix(data::Matrix{Float64})
Calculates the distance between a list of 2-D data points given as
rows in `data` matrix.
"""
function calculate_weight_matrix(data::Matrix{Float64}, σ = 1.0)
num_points = size(data, 1)
W = zeros(num_points, num_points)
for i in 1:num_points
for j in (i+1):num_points
dist = exp(-norm(data[i, :] - data[j, :]) / σ)
W[i, j] = dist
W[j, i] = dist
end
end
return W
end
# Build an x-by-y matrix with ones on the diagonal (a small identity-matrix helper).
function eye(x, y)
z = zeros(x, y)
for i in 1:min(x, y)
z[i, i] = 1.0
end
return z
end
function solve_cluster_problem(data::Matrix{Float64}, num_clusters::Int)
W = calculate_weight_matrix(data)
num_points = size(data, 1)
model = Model(solver = SCSSolver(verbose = false))
@variable(model, X[1:num_points, 1:num_points], SDP)
@objective(model, Min, tr(W * (eye(num_points, num_points) - X)))
@constraints(model, begin
X .>= 0
[i in 1:num_points], sum(X[i, :]) == 1
tr(X) == num_clusters
end)
status = solve(model)
X_value = getvalue(X)
cluster = zeros(Int, num_points)
cluster_counter = 0
for i in 1:num_points
if cluster[i] == 0
cluster_counter += 1
cluster[i] = cluster_counter
for j in (i+1):num_points
if norm(X_value[i, j] - X_value[i, i]) <= 1e-6
cluster[j] = cluster[i]
end
end
end
end
return cluster
end
```
solve_cluster_problem (generic function with 1 method)
Investigate the model. What goes wrong, and when?
```julia
data = vcat(
rand(Float64, (10, 2)) .+ [2 3],
rand(Float64, (10, 2)) .+ [4 6],
rand(Float64, (10, 2)) .+ [3.5 3]
)
@manipulate for num_clusters = 1:4
which_clusters = solve_cluster_problem(data, num_clusters)
plot(
xlabel = "x", ylabel = "y",
xlims=(0, 8), ylims = (0, 8),
legend = :bottomright
)
for k in 1:maximum(which_clusters)
points = which_clusters .== k
scatter!(data[points, 1], data[points, 2],
label = "Cluster $(k)", markersize=10)
end
plot!()
end
```
[Interactive output: a `num_clusters` slider (1–4) and a scatter plot of the data points coloured by cluster assignment; with `num_clusters = 2` the points are split into "Cluster 1" and "Cluster 2".]
| 90a953881a3fcf3406f30be6d5d491c51e6c0d21 | 403,725 | ipynb | Jupyter Notebook | Class V - Conic modelling in JuMP.ipynb | edgBR/tutorial-grid-science-2019 | c743684ad8e5693948629a680546243ed95a7e93 | [
"MIT"
]
| null | null | null | Class V - Conic modelling in JuMP.ipynb | edgBR/tutorial-grid-science-2019 | c743684ad8e5693948629a680546243ed95a7e93 | [
"MIT"
]
| null | null | null | Class V - Conic modelling in JuMP.ipynb | edgBR/tutorial-grid-science-2019 | c743684ad8e5693948629a680546243ed95a7e93 | [
"MIT"
]
| 1 | 2020-09-03T18:53:00.000Z | 2020-09-03T18:53:00.000Z | 921.746575 | 300,275 | 0.684206 | true | 196,286 | Qwen/Qwen-72B | 1. YES
2. YES | 0.833325 | 0.863392 | 0.719485 | __label__yue_Hant | 0.126026 | 0.509937 |
# Lecture 24 - Sequential Monte Carlo in `PyMC3`
```python
import numpy as np
import pymc3 as pm
import theano as T
from theano import shared, function, tensor as tt
from sample_smc import sample_smc
try:
import sympy
except:
_=!pip install sympy
import sympy
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
import logging
%matplotlib inline
```
## Objectives
+ Compute the model evidence using `PyMC3`.
+ Do model selection with `PyMC3`.
## Sanity check - Does the calculation of the evidence with PySMC work?
Let
$$
p(\theta) = \mathcal{N}(\theta|0, 1),
$$
and
$$
p(y|\theta) = \mathcal{N}(y|\theta,1).
$$
The posterior of $\theta$ given $y$ is:
$$
p(\theta|y) = \frac{p(y|\theta)p(\theta)}{Z},
$$
where
$$
Z = \int_{-\infty}^{\infty} p(y|\theta)p(\theta)d\theta.
$$
Let's first calculate $Z$ analytically.
```python
import sympy.stats
sympy.init_printing()
y, t = sympy.symbols('y \\theta')
q = 1. / sympy.sqrt(2. * sympy.pi) * sympy.exp(-0.5 * (y - t) ** 2) * \
1. / sympy.sqrt(2. * sympy.pi) * sympy.exp(-0.5 * t ** 2)
sympy.simplify(sympy.integrate(q, (t, -sympy.oo, sympy.oo)))
```
So, if the observed $y$ was zero, then the Z should be:
$$
Z = \frac{1}{2\sqrt{\pi}}.
$$
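This is the familiar conjugate normal-normal marginal likelihood: integrating out $\theta$ gives
$$
Z = \int_{-\infty}^{\infty} \mathcal{N}(y|\theta,1)\,\mathcal{N}(\theta|0,1)\,d\theta = \mathcal{N}(y|0,2) = \frac{1}{2\sqrt{\pi}}e^{-\frac{y^2}{4}},
$$
which reduces to $\frac{1}{2\sqrt{\pi}}$ at $y=0$.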
```python
Z = 1 / 2. / np.sqrt(np.pi)
print('log Z = {0:.3f}'.format(np.log(Z)))
```
log Z = -1.266
All right. Now, let's program this in PyMC3 and compare the results.
We start with the model:
```python
model = pm.Model()
yobs = 0.
with model:
# prior over theta
theta = pm.Normal('theta', mu=0., sigma=1.,testval=0.)
# log likelihood
llk = pm.Potential('llk', pm.Normal.dist(theta, 1.).logp(yobs))
trace, smcres = sample_smc(1000)
```
Sample initial stage: ...
Stage: 0 Beta: 1.000 Steps: 25 Acce: 1.000
```python
# get the model evidence
log_evidence_smc = np.log(smcres.model.marginal_likelihood)
print('True log evidence: %.4f \nSMC log evidence: %.4f'%(np.log(Z), log_evidence_smc))
```
True log evidence: -1.2655
SMC log evidence: -1.2880
Which is close to the truth.
### Questions
+ Repeat the calculations above for a varying number of SMC particles. Start from 10 and go up to 10,000.
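A minimal sketch of such an experiment, re-using the `model` context defined above (the exact estimates will vary from run to run):

```python
for n in [10, 100, 1000, 10000]:
    with model:
        _, res_n = sample_smc(n)
    print('particles = %5d, log Z estimate = %.3f'
          % (n, np.log(res_n.model.marginal_likelihood)))
```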
## Polynomial Regression
```python
def compute_design_matrix(X, phi):
"""
Arguments:
X - The observed inputs (1D array)
phi - The basis functions.
"""
num_observations = X.shape[0]
num_basis = phi.num_basis
Phi = np.ndarray((num_observations, num_basis))
for i in range(num_observations):
Phi[i, :] = phi(X[i, :])
return Phi
class PolynomialBasis(object):
"""
A set of linear basis functions.
Arguments:
degree - The degree of the polynomial.
"""
def __init__(self, degree):
self.degree = degree
self.num_basis = degree + 1
def __call__(self, x):
return np.array([x[0] ** i for i in range(self.degree + 1)])
class FourierBasis(object):
"""
A set of linear basis functions.
Arguments:
num_terms - The number of Fourier terms.
L - The period of the function.
"""
def __init__(self, num_terms, L):
self.num_terms = num_terms
self.L = L
self.num_basis = 2 * num_terms
def __call__(self, x):
res = np.ndarray((self.num_basis,))
        for i in range(self.num_terms):
res[2 * i] = np.cos(2 * i * np.pi / self.L * x[0])
res[2 * i + 1] = np.sin(2 * (i+1) * np.pi / self.L * x[0])
return res
class RadialBasisFunctions(object):
"""
A set of linear basis functions.
Arguments:
X - The centers of the radial basis functions.
ell - The assumed lengthscale.
"""
def __init__(self, X, ell):
self.X = X
self.ell = ell
self.num_basis = X.shape[0]
def __call__(self, x):
return np.exp(-.5 * (x - self.X) ** 2 / self.ell ** 2).flatten()
```
Let's generate some fake data.
```python
np.random.seed(12345)
def getdata(N, sigma2):
X = 2 * np.random.rand(N) - 1.
y = 0.5 * X ** 3 - 0.3 * X ** 2 + np.sqrt(sigma2) * np.random.rand(N)
return X, y
num_samples = 50
sigma2 = 1e-3
X, y = getdata(num_samples, sigma2)
plt.figure(figsize=(8, 6))
plt.plot(X, y, 'o', markeredgewidth=2, label='Data')
plt.xlabel('$x$', fontsize=20)
plt.ylabel('$y$', fontsize=20)
plt.legend(loc='best', fontsize=20)
plt.tight_layout()
```
We are going to implement a standard Bayesian linear regression and train it with `PyMC3`.
We will compute the evidence in order to select the best class of basis functions.
The model is as follows:
The output $y$ conditioned on the input $x$, the weights of the basis functions $w$ and
the noise variance $\sigma^2$ has likelihood:
$$
p(y|x,w,\sigma, \mathcal{M}) = \mathcal{N}(y|w^T\phi_{\mathcal{M}}(x), \sigma^2),
$$
where $\phi_{\mathcal{M},1}(\cdot), \dots, \phi_{\mathcal{M},m_{\mathcal{M}}}(\cdot)$ are the
$m_{\mathcal{M}}$ basis functions of the model $\mathcal{M}$.
We put a normal prior on the weights:
$$
p(w|\alpha) = \mathcal{N}(w|0, \alpha I_{m_{\mathcal{M}}}),
$$
and an inverse Gamma prior for $\sigma$ and $\alpha$:
$$
p(\sigma^2) = \mathrm{IG}(\sigma^2|1, 1),
$$
and
$$
p(\alpha) = \mathrm{IG}(\alpha|1,1).
$$
Assume that the data we have observed are:
$$
x_{1:n} = \{x_1,\dots,x_n\},\;\mathrm{and}\;y_{1:n} = \{y_1,\dots,y_n\}.
$$
Consider the design matrix $\Phi_{\mathcal{M}}\in\mathbb{R}^{n\times m}$:
$$
\Phi_{\mathcal{M},ij} = \phi_{\mathcal{M},j}(x_i).
$$
The likelihood of the data is:
$$
p(y_{1:n} | x_{1:n}, w, \sigma, \mathcal{M}) = \mathcal{N}(y_{1:n}|\Phi_{\mathcal{M}}w, \sigma^2I_n).
$$
Let's turn this into `PyMC3` code.
```python
def make_model(Phi, y):
"""
INPUTS:
Phi -> Design matrix.
y -> Target vector.
RETURNS:
model -> `pymc3.model` context.
"""
num_data, num_features = Phi.shape
# define the model
with pm.Model() as model:
# prior on the weights
alpha = pm.InverseGamma('alpha', alpha=1., beta=1.)
w = pm.Normal('w', mu=0., tau=alpha, shape=num_features)
# prior on the likelihood noise variance
sigma2 = pm.InverseGamma('sigma2', alpha=5., beta=0.1)
# the data likelihood mean
ymean = pm.Deterministic('ymean', tt.dot(Phi, w))
# likelihood
y = pm.Normal('y', ymean, sigma2, shape=num_data, observed=y)
#llk = pm.Potential('llk', pm.Normal.dist(ymean, tt.sqrt(sigma2)).logp_sum(y))
return model
```
Now, let's create a function that trains the model with PyMC3's SMC sampler for a polynomial basis of a given degree.
```python
def fit_poly(phi, X, y, num_particles=100):
"""
RETURNS:
1. An instance of pymc3.Model for the SMC model.
2. The SMC trace.
3. An instance of pymc3.smc.SMC containing sampling information.
"""
Phi = compute_design_matrix(X[:, None], phi)
smcmodel = make_model(Phi, y)
trace, res = sample_smc(draws=num_particles,
model=smcmodel,
progressbar=True,
threshold=0.8)
return smcmodel, trace, res
phi = PolynomialBasis(3)
model, trace, res = fit_poly(phi, X, y)
```
Sample initial stage: ...
Stage:   0 Beta: 0.000 Steps:  25 Acce: 1.000
...
Stage:  25 Beta: 1.000 Steps:  15 Acce: 0.261
### Postprocessing
Once you have the `trace` object for the SMC simulation you can apply all the standard postprocessing tools from `PyMC3` as usual.
Here's the posterior distribution over the weights precision and the likelihood noise, $\alpha$ and $\sigma^2$ respectively:
```python
_=pm.plot_posterior(trace, var_names=['alpha', 'sigma2'])
```
Here's the posterior predictive mean of the output $y$, i.e., $\mathbb{E}[y|x, w, \sigma]$:
```python
ppsamples = pm.sample_posterior_predictive(model=model,
trace=trace, var_names=['ymean'])['ymean']
```
100%|██████████| 100/100 [00:00<00:00, 3874.47it/s]
```python
idx = np.argsort(X)
plt.figure(figsize=(10, 8))
plt.plot(X[idx], ppsamples.mean(0)[idx], linewidth=2.5, label='Posterior Predictive Mean' )
plt.plot(X, y, 'x', markeredgewidth=2.5, markersize=10, label='Observed data')
plt.legend(loc='best', fontsize=20)
plt.tight_layout()
```
SMC does a particle approximation of the posterior distribution. The particles themselves can be obtained from the `trace` object and the particle weights can be obtained from the `res` object.
Recall that the approximate posterior distribution is of the form $p(\theta|\mathcal{D}) = \sum_{j=1}^{N} w_j \delta(\theta - \theta_j)$.
```python
particles_w = trace.w
particles_alpha = trace.alpha
particle_weights = res.weights # <- these are the ws from the above equation
```
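For instance, posterior expectations can be estimated as weighted averages of the particles (a sketch, assuming the weights are normalized so that they sum to one):

```python
# E[alpha | data] and E[w | data] under the particle approximation above
post_mean_alpha = np.sum(particle_weights * particles_alpha)
post_mean_w = np.sum(particle_weights[:, None] * particles_w, axis=0)
print(post_mean_alpha, post_mean_w)
```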
## Model comparison
Since SMC can approximate the model evidence it provides a principled way of comparing models. Let's compare 5 different polynomial regression models where we change the degree of the polynomial from 1 to 5.
```python
# Evaluate the evidence for the various degrees
log_Zs = []
D = [1, 2, 3, 4, 5]
for d in D:
phi = PolynomialBasis(d)
_, _, res = fit_poly(phi, X, y, num_particles=500)
log_Z = np.log(res.model.marginal_likelihood)
log_Zs.append(log_Z)
```
Sample initial stage: ...
Stage:   0 Beta: 0.000 Steps:  25 Acce: 1.000
...
Stage:  20 Beta: 1.000 Steps:  13 Acce: 0.230
Sample initial stage: ...
Stage:   0 Beta: 0.000 Steps:  25 Acce: 1.000
...
Stage:  21 Beta: 1.000 Steps:  15 Acce: 0.269
Sample initial stage: ...
Stage:   0 Beta: 0.000 Steps:  25 Acce: 1.000
...
Stage:  27 Beta: 1.000 Steps:  15 Acce: 0.245
Sample initial stage: ...
Stage:   0 Beta: 0.000 Steps:  25 Acce: 1.000
...
Stage:  27 Beta: 1.000 Steps:  15 Acce: 0.255
Sample initial stage: ...
Stage:   0 Beta: 0.000 Steps:  25 Acce: 1.000
...
Stage:  29 Beta: 1.000 Steps:  15 Acce: 0.251
```python
for d, log_Z in zip(D, log_Zs):
print('degree %d gives %.4f'%(d, log_Z))
```
degree 1 gives 25.0212
degree 2 gives 41.0753
degree 3 gives 142.4574
degree 4 gives 139.3597
degree 5 gives 137.5729
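Under a uniform prior over the five candidate models, these log evidences translate directly into posterior model probabilities:

```python
logZ = np.array(log_Zs)
# subtract the maximum before exponentiating for numerical stability
probs = np.exp(logZ - logZ.max())
probs /= probs.sum()
for d, p in zip(D, probs):
    print('degree %d: posterior probability %.4f' % (d, p))
```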
```python
plt.figure(figsize=(10, 5))
_=plt.bar(D, log_Zs, width=0.3)
_=plt.xticks(D)
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
plt.xlabel('Polynomial degree', fontsize=20)
plt.ylabel('Model Evidence', fontsize=20)
plt.tight_layout()
```
## Questions
+ The model with degree 3 polynomials has the greatest evidence. However, degrees 4 and 5 also seem very plausible. Is this a problem for the theory of Bayesian model selection? What complicates things here is that model 3 is included in model 4, which is included in model 5. This requires us to design special priors for the models being right. They have to be consistent in some sense. For example, if model 3 is right then model 4 must be right, etc.
+ Revisit the motorcycle dataset problem. Evaluate the model evidence for a 1) Polynomial basis; 2) a Fourier basis; and 3) a Radial basis function basis.
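A sketch of how that comparison could be set up, with `X_m` and `y_m` standing in for the motorcycle inputs and outputs (placeholder names — load the data however you have it stored):

```python
bases = {'polynomial': PolynomialBasis(5),
         'fourier': FourierBasis(5, L=2.),
         'rbf': RadialBasisFunctions(X_m[:, None], ell=0.1)}
for name, phi in bases.items():
    _, _, res_b = fit_poly(phi, X_m, y_m, num_particles=500)
    print(name, np.log(res_b.model.marginal_likelihood))
```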
## Revisiting Challenger Disaster Problem (Model Selection)
```python
challenger_data = np.genfromtxt("challenger_data.csv", skip_header=1,
usecols=[1, 2], missing_values="NA",
delimiter=",")
# drop the NA values
challenger_data = challenger_data[~np.isnan(challenger_data[:, 1])]
# plot it, as a function of temperature (the first column)
print("Temp (F), O-Ring failure?")
print(challenger_data)
```
Temp (F), O-Ring failure?
[[66. 0.]
[70. 1.]
[69. 0.]
[68. 0.]
[67. 0.]
[72. 0.]
[73. 0.]
[70. 0.]
[57. 1.]
[63. 1.]
[70. 1.]
[78. 0.]
[67. 0.]
[53. 1.]
[67. 0.]
[75. 0.]
[70. 0.]
[81. 0.]
[76. 0.]
[79. 0.]
[75. 1.]
[76. 0.]
[58. 1.]]
```python
# plot it, as a function of temperature (the first column)
plt.figure(figsize=(12, 5))
plt.plot(challenger_data[:, 0], challenger_data[:, 1], 'ro',
markersize=15)
plt.ylabel("Damage Incident?",fontsize=20)
plt.xlabel("Outside temperature (Fahrenheit)",fontsize=20)
plt.title("Defects of the Space Shuttle O-Rings vs temperature",
fontsize=20)
plt.yticks([0, 1], fontsize=15)
plt.xticks(fontsize=15)
plt.tight_layout()
```
```python
# gather the data and apply preprocessing if any
temp = challenger_data[:, 0]
temp_scaled = (temp - np.mean(temp))/np.std(temp)
data = challenger_data[:, 1]
# instantiate the pymc3 model
challenger_model = pm.Model()
# define the graph
with challenger_model:
# define the prior
alpha = pm.Normal('alpha', mu=0., sigma=10.)
beta = pm.Normal('beta', mu=0., sigma=10.)
# get the probabilities of failure at each observed temp
p = pm.Deterministic('p', 1./(1. + tt.exp(alpha + beta*temp_scaled)))
# define the likelihood
x = pm.Bernoulli('x', p=p, observed=data)
print("Challenger space shuttle disaster model:")
challenger_model
```
Challenger space shuttle disaster model:
$$
\begin{array}{rcl}
\text{alpha} &\sim & \text{Normal}(\mathit{mu}=0.0,~\mathit{sigma}=10.0)\\\text{beta} &\sim & \text{Normal}(\mathit{mu}=0.0,~\mathit{sigma}=10.0)\\\text{p} &\sim & \text{Deterministic}(\text{Constant},~\text{Constant},~\text{alpha},~\text{beta},~\text{Constant})\\\text{x} &\sim & \text{Bernoulli}(\mathit{p}=\text{p})
\end{array}
$$
```python
num_particles = 500
trace, smc = sample_smc(model=challenger_model,
draws=num_particles,
threshold=0.8,
progressbar=True)
```
Sample initial stage: ...
Stage:   0 Beta: 0.000 Steps:  25 Acce: 1.000
Stage:   1 Beta: 0.011 Steps:  25 Acce: 0.549
Stage:   2 Beta: 0.028 Steps:   5 Acce: 0.406
Stage:   3 Beta: 0.056 Steps:   8 Acce: 0.346
Stage:   4 Beta: 0.106 Steps:  10 Acce: 0.301
Stage:   5 Beta: 0.182 Steps:  12 Acce: 0.275
Stage:   6 Beta: 0.313 Steps:  14 Acce: 0.266
Stage:   7 Beta: 0.529 Steps:  14 Acce: 0.250
Stage:   8 Beta: 0.897 Steps:  15 Acce: 0.270
Stage:   9 Beta: 1.000 Steps:  14 Acce: 0.261
```python
ppsamples = pm.sample_posterior_predictive(model=challenger_model,
trace=trace,
var_names=['p'])['p']
```
100%|██████████| 500/500 [00:00<00:00, 3612.98it/s]
```python
ppmean = ppsamples.mean(0)
pp_lower, pp_upper = np.percentile(ppsamples, axis=0, q=[2.5, 97.5])
plt.figure(figsize=(15, 8))
plt.plot(temp, data, 'ro', markersize=12, label='Observed data')
idx=np.argsort(temp)
plt.plot(temp[idx], ppmean[idx], linestyle='--', linewidth=2.5,
label='Post. pred. mean prob.')
plt.fill_between(temp[idx], pp_lower[idx], pp_upper[idx],
color='purple', alpha=0.25, label='95% Confidence')
plt.ylabel("Probability estimate",fontsize=20)
plt.xlabel("Outside temperature (Fahrenheit)",fontsize=20)
plt.title("Defects of the Space Shuttle O-Rings vs temperature",
fontsize=20)
plt.yticks(np.arange(0., 1.01, 0.2), fontsize=20)
plt.xticks(fontsize=20)
plt.legend(loc='best', fontsize=20)
plt.tight_layout()
```
```python
logZ_temp = np.log(smc.model.marginal_likelihood)
```
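To carry the model selection through, one can compare this evidence to that of a model in which the failure probability does not depend on temperature. A minimal sketch (the flat model and its variable names are ours, not part of the original notebook):

```python
with pm.Model() as challenger_model_flat:
    alpha = pm.Normal('alpha', mu=0., sigma=10.)
    # a single, temperature-independent probability of failure
    p = pm.Deterministic('p', 1./(1. + tt.exp(alpha)))
    x = pm.Bernoulli('x', p=p, observed=data)

trace_flat, smc_flat = sample_smc(model=challenger_model_flat,
                                  draws=num_particles,
                                  threshold=0.8,
                                  progressbar=False)
logZ_flat = np.log(smc_flat.model.marginal_likelihood)
# log Bayes factor in favor of the temperature-dependent model
print(logZ_temp - logZ_flat)
```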
```python
```
| a6d773aefd28776fa3a851cb7f12a7b25ace083f | 303,181 | ipynb | Jupyter Notebook | lectures/lecture_24.ipynb | PredictiveScienceLab/uq-course | ddbe0865c9f91c4bd9b12e9b85d4293168306438 | [
"MIT"
]
| 218 | 2016-01-04T15:31:44.000Z | 2022-03-23T20:09:27.000Z | lectures/lecture_24.ipynb | ragusa/uq-course | ddbe0865c9f91c4bd9b12e9b85d4293168306438 | [
"MIT"
]
| 2 | 2019-02-22T08:13:54.000Z | 2020-02-08T19:25:16.000Z | lectures/lecture_24.ipynb | ragusa/uq-course | ddbe0865c9f91c4bd9b12e9b85d4293168306438 | [
"MIT"
]
| 112 | 2016-01-05T18:50:34.000Z | 2022-03-15T04:33:28.000Z | 171.579513 | 91,968 | 0.845957 | true | 24,043 | Qwen/Qwen-72B | 1. YES
2. YES | 0.872347 | 0.853913 | 0.744909 | __label__yue_Hant | 0.346496 | 0.569004 |
```python
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
sns.set_context('notebook', font_scale=1.5)
```
The first exercise is about using Newton's method to find the cube roots of unity - find $z$ such that $z^3 = 1$. From the fundamental theorem of algebra, we know there must be exactly 3 complex roots since this is a degree 3 polynomial.
We start with Euler's equation
$$
e^{ix} = \cos x + i \sin x
$$
Raising $e^{ix}$ to the $n$th power where $n$ is an integer, we get from Euler's formula with $nx$ substituting for $x$
$$
(e^{ix})^n = e^{i(nx)} = \cos nx + i \sin nx
$$
Whenever $nx$ is an integer multiple of $2\pi$, we have
$$
\cos nx + i \sin nx = 1
$$
So
$$
e^{2\pi i \frac{k}{n}}
$$
is an $n$th root of 1 for any integer $k$; taking $k = 0, 1, \ldots, n-1$ gives the $n$ distinct roots.
So the cube roots of unity are $1, e^{2\pi i/3}, e^{4\pi i/3}$.
While we can do this analytically, the idea is to use Newton's method to find these roots, and in the process, discover some rather perplexing behavior of Newton's method.
```python
from sympy import Symbol, exp, I, pi, N, expand
from sympy import init_printing
init_printing()
```
```python
expand(exp(2*pi*I/3), complex=True)
```
```python
expand(exp(4*pi*I/3), complex=True)
```
```python
plt.figure(figsize=(4,4))
roots = np.array([[1,0], [-0.5, np.sqrt(3)/2], [-0.5, -np.sqrt(3)/2]])
plt.scatter(roots[:,0], roots[:,1], s=50, c='red')
xp = np.linspace(0, 2*np.pi, 100)
plt.plot(np.cos(xp), np.sin(xp), c='blue');
```
**1**. Newton's method for functions of complex variables - stability and basins of attraction. (30 points)
1. Write a function with the following function signature `newton(z, f, fprime, max_iter=100, tol=1e-6)` where
- `z` is a starting value (a complex number e.g. ` 3 + 4j`)
- `f` is a function of `z`
- `fprime` is the derivative of `f`
The function will run until either max_iter is reached or the absolute value of the Newton step is less than tol. In either case, the function should return the number of iterations taken and the final value of `z` as a tuple (`i`, `z`).
2. Define the function `f` and `fprime` that will result in Newton's method finding the cube roots of 1. Find 3 starting points that will give different roots, and print both the start and end points.
Write the following two plotting functions to see some (pretty) aspects of Newton's algorithm in the complex plane.
3. The first function `plot_newton_iters(f, fprime, n=200, extent=[-1,1,-1,1], cmap='hsv')` calculates and stores the number of iterations taken for convergence (or max_iter) for each point in a 2D array. The 2D array limits are given by `extent` - for example, when `extent = [-1,1,-1,1]` the corners of the plot are `-1-i, 1-i, 1+i, -1+i`. There are `n` grid points in both the real and imaginary axes. The argument `cmap` specifies the color map to use - the suggested defaults are fine. Finally plot the image using `plt.imshow` - make sure the axis ticks are correctly scaled. Make a plot for the cube roots of 1.
4. The second function `plot_newton_basins(f, fprime, n=200, extent=[-1,1,-1,1], cmap='jet')` has the same arguments, but this time the grid stores the identity of the root that the starting point converged to. Make a plot for the cube roots of 1 - since there are 3 roots, there should be only 3 colors in the plot.
```python
def newton(z, f, fprime, max_iter=100, tol=1e-6):
"""The Newton-Raphson method."""
for i in range(max_iter):
step = f(z)/fprime(z)
if abs(step) < tol:
return i, z
z -= step
return i, z
```
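For part 2, the cube roots of 1 correspond to $f(z) = z^3 - 1$ and $f'(z) = 3z^2$. Starting close to each root converges to that root (a sketch — any starting points inside the three basins would do):

```python
f = lambda z: z**3 - 1
fprime = lambda z: 3*z**2
for z0 in [1.5 + 0.0j, -0.5 + 1.0j, -0.5 - 1.0j]:
    i, z = newton(z0, f, fprime)
    print('start %s -> root %s after %d iterations' % (z0, np.round(z, 6), i))
```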
```python
def plot_newton_iters(p, pprime, n=200, extent=[-1,1,-1,1], cmap='hsv'):
"""Shows how long it takes to converge to a root using the Newton-Rahphson method."""
m = np.zeros((n,n))
xmin, xmax, ymin, ymax = extent
for r, x in enumerate(np.linspace(xmin, xmax, n)):
for s, y in enumerate(np.linspace(ymin, ymax, n)):
z = x + y*1j
m[s, r] = newton(z, p, pprime)[0]
plt.imshow(m, cmap=cmap, extent=extent)
```
```python
def plot_newton_basins(p, pprime, n=200, extent=[-1,1,-1,1], cmap='jet'):
"""Shows basin of attraction for convergence to each root using the Newton-Raphson method."""
root_count = 0
roots = {}
m = np.zeros((n,n))
xmin, xmax, ymin, ymax = extent
for r, x in enumerate(np.linspace(xmin, xmax, n)):
for s, y in enumerate(np.linspace(ymin, ymax, n)):
z = x + y*1j
root = np.round(newton(z, p, pprime)[1], 1)
if not root in roots:
roots[root] = root_count
root_count += 1
m[s, r] = roots[root]
plt.imshow(m, cmap=cmap, extent=extent)
```
```python
plt.grid('off')
plot_newton_iters(lambda x: x**3 - 1, lambda x: 3*x**2)
```
```python
plt.grid('off')
m = plot_newton_basins(lambda x: x**3 - 1, lambda x: 3*x**2)
```
**2**. Ill-conditioned linear problems. (20 points)
You are given a $n \times p$ design matrix $X$ and a $p$-vector of observations $y$ and asked to find the coefficients $\beta$ that solve the linear equations $X \beta = y$.
```python
X = np.load('x.npy')
y = np.load('y.npy')
```
The solution $\beta$ can also be loaded as
```python
beta = np.load('b.npy')
```
- Write a formula that could solve the system of linear equations in terms of $X$ and $y$ Write a function `f1` that takes arguments $X$ and $y$ and returns $\beta$ using this formula.
- How could you code this formula using `np.linalg.solve` that does not require inverting a matrix? Write a function `f2` that takes arguments $X$ and $y$ and returns $\beta$ using this.
- Note that carefully designed algorithms *can* solve this ill-conditioned problem, which is why you should always use library functions for linear algebra rather than write your own.
```python
np.linalg.lstsq(X, y)[0]
```
- What happens if you try to solve for $\beta$ using `f1` or `f2`? Remove the column of $X$ that is making the matrix singular and find the $p-1$ vector $b$ using `f2`.
- Note that the solution differs from that given by `np.linalg.lstsq`. This arises because the relevant condition number for `f2` is actually for the matrix $X^TX$ while the condition number of `lstsq` is for the matrix $X$. Why is the condition so high even after removing the column that makes the matrix singular?
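For the first part, the formula is the normal-equations solution,
$$
\beta = (X^TX)^{-1}X^Ty,
$$
which is exactly what `f1` below implements (and `f2` solves without forming the inverse).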
```python
X = np.load('x.npy')
y = np.load('y.npy')
beta = np.load('b.npy')
def f1(X, y):
"""Direct translation of normal equations to code."""
return np.dot(np.linalg.inv(np.dot(X.T, X)), np.dot(X.T, y))
def f2(X, y):
"""Solving normal equations wihtout matrix inversion."""
return np.linalg.solve(np.dot(X.T, X), np.dot(X.T, y))
%precision 3
print("X = ")
print(X)
# counting from 0 (so column 5 is the last column)
# we can see that column 5 is a multiple of column 3
# so one approach is to simply remove this (dependent) column
print("True solution\t\t", beta)
print("Library function\t", np.linalg.lstsq(X, y)[0])
print("Using f1\t\t", f1(X[:, :5], y))
print("Using f2\t\t", f2(X[:, :5], y))
```
X =
[[ 5.000e+00 4.816e+14 9.000e+00 5.000e+00 0.000e+00 5.000e+01]
[ 1.000e+00 4.214e+14 6.000e+00 9.000e+00 2.000e+00 9.000e+01]
[ 5.000e+00 1.204e+14 4.000e+00 2.000e+00 4.000e+00 2.000e+01]
[ 7.000e+00 5.418e+14 1.000e+00 7.000e+00 0.000e+00 7.000e+01]
[ 9.000e+00 5.418e+14 7.000e+00 6.000e+00 9.000e+00 6.000e+01]
[ 0.000e+00 6.020e+13 8.000e+00 8.000e+00 3.000e+00 8.000e+01]
[ 8.000e+00 4.214e+14 3.000e+00 6.000e+00 5.000e+00 6.000e+01]
[ 9.000e+00 1.806e+14 4.000e+00 8.000e+00 1.000e+00 8.000e+01]
[ 0.000e+00 1.806e+14 9.000e+00 2.000e+00 0.000e+00 2.000e+01]
[ 9.000e+00 1.204e+14 7.000e+00 7.000e+00 9.000e+00 7.000e+01]]
True solution [ 0.469 0.096 0.903 0.119 0.525 0.084]
Library function [ 0.469 0.096 0.903 0.009 0.526 0.095]
Using f1 [ 0.465 0.096 0.908 0.951 0.523]
Using f2 [ 0.471 0.096 0.903 0.953 0.526]
#### Condition numbers are ratio of largest to smallest singular values
```python
np.linalg.svd(X)[1]
```
array([ 1.128e+15, 1.124e+02, 1.364e+01, 1.151e+01, 6.344e+00,
9.940e-16])
```python
np.linalg.svd(X[:, :-1])[1]
```
array([ 1.128e+15, 1.863e+01, 1.224e+01, 8.025e+00, 6.086e+00])
```python
np.linalg.cond(X[:, :-1])
```
One way to think about the condition number is in terms of the ratio of the largest singular value to the smallest one - so a measure of the disproportionate stretching effect of the linear transform in one direction versus another. When this is very big, it means that errors in one or more direction will be amplified greatly. This often occurs because one or more columns is "almost" dependent - i.e. it can be approximated by a linear combination of the other columns.
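A quick numerical illustration of the last point: the normal equations used by `f2` work with $X^TX$, whose condition number is the square of that of $X$, so even the reduced matrix is badly conditioned (sketch, using the matrix with the dependent column removed):

```python
Xr = X[:, :-1]
print(np.linalg.cond(Xr))                # condition number of X (~1e14)
print(np.linalg.cond(np.dot(Xr.T, Xr)))  # roughly the square of the above
```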
**3**. Consider the following function on $\mathbb{R}^2$:
$$f(x_1,x_2) = -x_1x_2e^{-\frac{(x_1^2+x_2^2)}{2}}$$
1. Write down its gradient.
2. write down the Hessian matrix.
3. Find the critical points of $f$.
4. Characterize the critical points as max/min or neither. Find the minimum under the constraint
$$g(x) = x_1^2+x_2^2 \leq 10$$
and
$$h(x) = 2x_1 + 3x_2 = 5$$ using `scipy.optimize.minimize`.
5. Plot the function contours using `matplotlib`. (20 points)
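For reference, differentiating by hand gives the gradient
$$
\nabla f(x_1,x_2) = e^{-\frac{x_1^2+x_2^2}{2}}
\begin{pmatrix} x_2(x_1^2-1) \\ x_1(x_2^2-1) \end{pmatrix},
$$
so the critical points are the solutions of $x_2(x_1^2-1)=0$ and $x_1(x_2^2-1)=0$; the sympy code below reproduces this and the Hessian.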
```python
import sympy as sym
from sympy import Matrix
from numpy import linalg as la
x1, x2 = sym.symbols('x1 x2')
def f(x1, x2):
return sym.Matrix([-x1*x2*sym.exp(-(x1**2 + x2**2)/2)])
def h(x1,x2):
return x1**2+x2**2
def g(x1,x2):
return 2*x1+3*x2
def characterize_cp(H):
l,v = la.eig(H)
if(np.all(np.greater(l,np.zeros(2)))):
return("minimum")
elif(np.all(np.less(l,np.zeros(2)))):
return("maximum")
else:
return("saddle")
```
```python
sym.init_printing()
```
```python
fun = f(x1,x2)
X = sym.Matrix([x1,x2])
gradf = fun.jacobian(X)
sym.simplify(gradf)
```
```python
hessianf = gradf.jacobian(X)
sym.simplify(hessianf)
```
```python
fcritical = sym.solve(gradf,X)
for i in range(4):
H = np.array(hessianf.subs([(x1,fcritical[i][0]),(x2,fcritical[i][1])])).astype(float)
print(fcritical[i], characterize_cp(H))
```
(-1, -1) minimum
(-1, 1) maximum
(0, 0) saddle
(1, -1) maximum
```python
import scipy.optimize as opt
```
```python
def f(x):
return -x[0] * x[1] * np.exp(-(x[0]**2+x[1]**2)/2)
cons = ({'type': 'eq',
'fun' : lambda x: np.array([2.0*x[0] + 3.0*x[1] - 5.0]),
'jac' : lambda x: np.array([2.0,3.0])},
{'type': 'ineq',
'fun' : lambda x: np.array([-x[0]**2.0 - x[1]**2.0 + 10.0])})
x0 = [1.5,1.5]
cx = opt.minimize(f, x0, constraints=cons)
```
```python
x = np.linspace(-5, 5, 200)
y = np.linspace(-5, 5, 200)
X, Y = np.meshgrid(x, y)
Z = f(np.vstack([X.ravel(), Y.ravel()])).reshape((200,200))
plt.contour(X, Y, Z)
plt.plot(x, (5-2*x)/3, 'k:', linewidth=1)
plt.plot(x, (10.0-x**2)**0.5, 'k:', linewidth=1)
plt.plot(x, -(10.0-x**2)**0.5, 'k:', linewidth=1)
plt.fill_between(x,(10-x**2)**0.5,-(10-x**2)**0.5,alpha=0.15)
plt.text(cx['x'][0], cx['x'][1], 'x', va='center', ha='center', size=20, color='red')
plt.axis([-5,5,-5,5])
plt.title('Contour plot of f(x) subject to constraints g(x) and h(x)')
plt.xlabel('x1')
plt.ylabel('x2')
pass
```
**4**. One of the goals of the course it that you will be able to implement novel algorithms from the literature. (30 points)
- Implement the mean-shift algorithm in 1D as described [here](http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/TUZEL1/MeanShift.pdf).
- Use the following function signature
```python
def mean_shift(xs, x, kernel, max_iters=100, tol=1e-6):
```
- xs is the data set, x is the starting location, and kernel is a kernel function
- tol is the difference in $||x||$ across iterations
- Use the following kernels with bandwidth $h$ (a default value of 1.0 will work fine)
- Flat - return 1 if $||x|| < h$ and 0 otherwise
- Gaussian
$$\frac{1}{\sqrt{2 \pi h}}e^{\frac{-||x||^2}{h^2}}$$
- Note that $||x||$ is the norm of the data point being evaluated minus the current value of $x$
- Use both kernels to find all 3 modes of the data set in `x1d.npy`
- Modify the algorithm and/or kernels so that it now works in an arbitrary number of dimensions.
- Use both kernels to find all 3 modes of the data set in `x2d.npy`
- Plot the path of successive intermediate solutions of the mean-shift algorithm starting from `x0 = (-4, 5)` until it converges onto a mode in the 2D data for each kernel. Superimpose the path on top of a contour plot of the data density.
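For reference, the update the functions below implement is the standard mean-shift step: with the current location $x$ and kernel $K$,
$$
m(x) = \frac{\sum_i K(x_i - x)\,x_i}{\sum_i K(x_i - x)} - x, \qquad x \leftarrow x + m(x),
$$
iterated until $\lVert m(x)\rVert$ drops below the tolerance (or `max_iters` is reached).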
```python
def gaussian_kernel(xs, x, h=1.0):
"""Gaussian kernel for a shifting window centerd at x."""
X = xs - x
try:
d = xs.shape[1]
except:
d = 1
k = np.array([(2*np.pi*h**d)**-0.5*np.exp(-(np.dot(_.T, _)/h)**2) for _ in X])
if d != 1:
k = k[:, np.newaxis]
return k
def flat_kernel(xs, x, h=1.0):
"""Flat kenrel for a shifting window centerd at x."""
X = xs - x
try:
d = xs.shape[1]
except:
d = 1
k = np.array([1 if np.dot(_.T, _) < h else 0 for _ in X])
if d != 1:
k = k[:, np.newaxis]
return k
def mean_shift(xs, x, kernel, max_iters=100, tol=1e-6, trace=False):
    """Finds the local mode using the mean shift algorithm."""
    x = np.asarray(x, dtype=float)  # accept scalar or list starting locations
    record = []
    for i in range(max_iters):
        if trace:
            record.append(x)
        m = (kernel(xs, x)*xs).sum(axis=0)/kernel(xs, x).sum(axis=0) - x
        if np.sum(m**2) < tol:
            break
        x = x + m  # rebind (rather than mutate) so traced positions stay distinct
    return i, x, np.array(record)
```
```python
x1 = np.load('x1d.npy')
# choose kernel to evaluate
kernel = flat_kernel
# kernel = gaussian_kernel
i1, m1, path = mean_shift(x1, 1, kernel)
print(i1, m1)
i2, m2, path = mean_shift(x1, -7, kernel)
print(i2, m2)
i3, m3, path = mean_shift(x1, 7 ,kernel)
print(i3, m3)
xp = np.linspace(0, 1.0, 100)
plt.hist(x1, 50, histtype='step', density=True);
plt.axvline(m1, c='blue')
plt.axvline(m2, c='blue')
plt.axvline(m3, c='blue');
```
```python
x2 = np.load('x2d.npy')
# choose kernel to evaluate
# kernel = flat_kernel (also OK if they use the Epanachnikov kernel since the flat is a shadow of that)
kernel = gaussian_kernel
i1, m1, path1 = mean_shift(x2, [0,0], kernel, trace=True)
print(i1, m1)
i2, m2, path2 = mean_shift(x2, [-4,5], kernel, trace=True)
print(i2, m2)
i3, m3, path3 = mean_shift(x2, [10,10] ,kernel, trace=True)
print(i3, m3)
```
59 [ 2.318 2.826]
12 [-3.07 3.057]
42 [ 6.023 8.951]
```python
import scipy.stats as stats
# size of marker at starting position
base = 40
# set up for estimating density using gaussian_kde
xmin, xmax = -6, 12
ymin,ymax = -5, 15
X, Y = np.mgrid[xmin:xmax:100j, ymin:ymax:100j]
positions = np.vstack([X.ravel(), Y.ravel()])
kde = stats.gaussian_kde(x2.T)
Z = np.reshape(kde(positions).T, X.shape)
plt.contour(X, Y, Z)
# plot data in background
plt.scatter(x2[:, 0], x2[:, 1], c='grey', alpha=0.2, edgecolors='none')
# path from [0,0]
plt.scatter(path1[:, 0], path1[:, 1], s=np.arange(base, base+len(path1)),
c='red', edgecolors='red', marker='x', linewidth=1.5)
# path from [-4,5]
plt.scatter(path2[:, 0], path2[:, 1], s=np.arange(base, base+len(path2)),
c='blue', edgecolors='blue', marker='x', linewidth=1.5)
# path from [10,10]
plt.scatter(path3[:, 0], path3[:, 1], s=np.arange(base, base+len(path3)),
c='green', edgecolors='green',marker='x', linewidth=1.5)
plt.axis([xmin, xmax, ymin, ymax]);
```
| ca41e004b39b3084f7d97f22a31b4db361a25d82 | 306,334 | ipynb | Jupyter Notebook | homework/08_Optimization_Solutions.ipynb | cliburn/sta-663-2017 | 89e059dfff25a4aa427cdec5ded755ab456fbc16 | [
"MIT"
]
| 52 | 2017-01-11T03:16:00.000Z | 2021-01-15T05:28:48.000Z | homework/08_Optimization_Solutions.ipynb | slimdt/Duke_Stat633_2017 | 89e059dfff25a4aa427cdec5ded755ab456fbc16 | [
"MIT"
]
| 1 | 2017-04-16T17:10:49.000Z | 2017-04-16T19:13:03.000Z | homework/08_Optimization_Solutions.ipynb | slimdt/Duke_Stat633_2017 | 89e059dfff25a4aa427cdec5ded755ab456fbc16 | [
"MIT"
]
| 47 | 2017-01-13T04:50:54.000Z | 2021-06-23T11:48:33.000Z | 330.101293 | 106,190 | 0.909409 | true | 5,315 | Qwen/Qwen-72B | 1. YES
2. YES | 0.76908 | 0.800692 | 0.615796 | __label__eng_Latn | 0.934802 | 0.269032 |
# Radioactive decay
There are many models of radioactive decay; however, one of the simplest assumes that the amount of radioactive material decays at a rate proportional to the amount present at time $t$. This can be written compactly as the following model:
$$\frac{\Delta N(t)}{\Delta t}\rightarrow\frac{dN}{dt}=-\lambda N$$
This equation results from an average over a collection of atoms (an ensemble); it holds in the limit where the number of particles $N\to\infty$ and the observation interval $\Delta t \to 0$. The model equation can be integrated, and the result is known as the exponential decay law for nuclei.
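Explicitly, integrating from time $0$ to $t$ gives
$$N(t) = N(0)\,e^{-\lambda t},$$
which is the exponential law referred to above.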
The following program simulates the decay of a nucleus.
Taken from the book A Survey of Computational Physics: Introductory Computational Science by Landau, Paez, and Bordeianu (Python Multimodal eTextBook Beta4.0).
```python
import numpy as np
import matplotlib.pylab as plt
import random
```
```python
Lambda = 0.001                      # decay constant
maximum = 200. ; timemax = 500      # params
number = nloop = maximum            # initial value
for time in range(0, timemax + 1):                  # time loop
    for atom in range(1, int(number) + 1):          # decay loop
        decay = random.random()
        if (decay < Lambda):
            nloop += -1
    number = nloop
```
The program above simulates the number of atoms in a nucleus as a function of time, by drawing uniformly distributed random numbers.
# Exercise 1
Modify the previous code so that it stores the time values and the number of atoms in the nucleus, and prepare a plot showing how the number of atoms behaves as a function of time.
```python
Lambda = 0.001                      # decay constant
maximum = 200. ; timemax = 500      # params
number = nloop = maximum            # initial value
Atomos = []
tiempo = []
for time in range(0, timemax + 1):                  # time loop
    for atom in range(1, int(number) + 1):          # decay loop
decay = random.random( )
if(decay < Lambda):
nloop += -1
number = nloop
Atomos.append(number)
tiempo.append(time)
plt.plot(tiempo,Atomos)
plt.show()
```
# Exercise 2
Modify the code from Exercise 1 so that, instead of following a single nucleus, it simulates an ensemble of 1000 possible realizations of the number of atoms in the nucleus.
Average the values obtained over all trajectories and prepare a plot showing the behavior of the average.
```python
num_experimentos=100
Avg=np.zeros(timemax+1)
for experimento in range(num_experimentos):
    Lambda = 0.001                      # decay constant
    maximum = 200. ; timemax = 500      # params
    number = nloop = maximum            # initial value
    for time in range(0, timemax + 1):                  # time loop
        for atom in range(1, int(number) + 1):          # decay loop
decay = random.random( )
if(decay < Lambda):
nloop += -1
number = nloop
Avg[time] += number/num_experimentos
plt.plot(range(0 , timemax + 1 ),Avg)
plt.show()
```
# Exercise 3
Take a trajectory of your choice, take the logarithm of the number of atoms in the nucleus, and plot the results; try to reproduce Figure 5.5 of the textbook.
```python
plt.plot(tiempo,np.log(Atomos))
plt.show()
```
# Exercise 4
Generate a plot showing that, for different values of $N(0)$ at the initial time, the behavior of $\ln(N(t))$ vs. $t$ is invariant under the choice of $N(0)$.
```python
for i in (100,200,300,500,1000):
    Lambda = 0.001                  # decay constant
    maximum = float(i)
    timemax = 500                   # params
    number = nloop = maximum        # initial value
Atomos=[]
tiempo=[]
    for time in range(0, timemax + 1):                  # time loop
        for atom in range(1, int(number) + 1):          # decay loop
decay = random.random( )
if(decay < Lambda):
nloop += -1
number = nloop
Atomos.append(number)
tiempo.append(time)
plt.plot(tiempo,np.log(Atomos))
plt.show()
```
# Exercise 5
The autocorrelation function is defined as the cross-correlation of a signal with itself.
This function is very useful for finding repetitive patterns in a signal, such as the periodicity of a signal masked by noise, or for identifying the fundamental frequency of a signal that does not contain that component but does exhibit many of its harmonics.
It is given by:
\begin{equation}
\varphi(i)=\frac{1}{N}\sum_{j=1}^{N} \left(x_j - \langle X\rangle\right)\left(x_{i+j}-\langle X\rangle\right) \tag{D2}
\end{equation}
It has the property that if the following hold:
\begin{equation}
\varphi(i=0)=\frac{1}{N}\sum_{j=1}^{N} \left(x_j - \langle X\rangle\right)^2=\langle \left(x_j - \langle X\rangle\right)^2\rangle=\sigma^2\tag{D3}
\end{equation}
\begin{equation}
\varphi(i\ne 0)= \langle x_j - \langle X\rangle\rangle\langle x_{i+j} - \langle X\rangle\rangle=0\hspace{5mm}
(\rightarrow{\rm White\ noise})
\tag{D4}
\end{equation}
then the stochastic process is said to be white noise, i.e. it has no correlation other than at the instant at which it was generated.
Using the `numpy` function "correlate", compute the correlation function for a sequence of points generated from a uniform distribution.
```python
x=np.random.uniform(0,3,1000)
corr=np.correlate(x-np.mean(x),x-np.mean(x),mode="full")
corr=corr/len(x)
```
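As a quick cross-check (a small sketch, not part of the exercise statement), evaluating the sum in (D2) directly at lag $i=0$ should give the same value as the peak of the `np.correlate` result:
```python
# direct evaluation of (D2) at zero lag: the sample variance
xc = x - np.mean(x)
phi0 = np.sum(xc * xc) / len(x)
print(phi0, np.max(corr))   # the two values should agree
```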
```python
plt.plot(range(0,len(corr)//2 +1),corr[len(corr)//2:])
```
As can be seen, the peak value is
```python
np.max(corr)
```
0.7710989675733794
which coincides with the variance of the random numbers
```python
np.var(x)
```
0.7710989675733794
and its mean is:
```python
np.mean(corr)
```
-2.7769460345801814e-19
From this we can see that what we have here is white noise generated from a uniform distribution.
```python
```
```python
```
```python
```
```python
```
| a93c56792e3d12ff4fe46d0d2c99949fb9b89441 | 83,027 | ipynb | Jupyter Notebook | Seccion_1/Ejercicio_3 (1).ipynb | ComputoCienciasUniandes/FISI2029-201910 | 88909a78e562f8d5c61f3fd9178ed5f59f973945 | [
"MIT"
]
| null | null | null | Seccion_1/Ejercicio_3 (1).ipynb | ComputoCienciasUniandes/FISI2029-201910 | 88909a78e562f8d5c61f3fd9178ed5f59f973945 | [
"MIT"
]
| null | null | null | Seccion_1/Ejercicio_3 (1).ipynb | ComputoCienciasUniandes/FISI2029-201910 | 88909a78e562f8d5c61f3fd9178ed5f59f973945 | [
"MIT"
]
| 5 | 2019-04-03T19:28:00.000Z | 2019-06-28T15:18:56.000Z | 189.127563 | 18,472 | 0.903706 | true | 1,928 | Qwen/Qwen-72B | 1. YES
2. YES | 0.861538 | 0.822189 | 0.708347 | __label__spa_Latn | 0.915461 | 0.48406 |
# Covariate Shift
A fundamental assumption in almost all [supervised learning](https://en.wikipedia.org/wiki/Supervised_learning) methods is that training and test samples are drawn from the same [probability distribution](https://en.wikipedia.org/wiki/Probability_distribution). However, in practice, this assumption is rarely satisfied and standard machine learning models may not work as well as anticipated. [Covariate shift](https://www.quora.com/What-is-Covariate-shift) refers to the situation where the probability distribution of covariates changes between training and test data. *Shikhar Gupta*, Master's student in Data Science, gives the following great visual representation of this in [How Dissimilar are my Train and Test Data](https://towardsdatascience.com/how-dis-similar-are-my-train-and-test-data-56af3923de9b) in the **Towards Data Science** blog.
It is clear that if covariate shift is not accounted for correctly, it can lead to poor generalization.
So why are we even talking about covariate shift? Well, one of the causes of covariate shift is [sample selection bias](https://en.wikipedia.org/wiki/Selection_bias). [Dataset Shift in Machine Learning](http://www.acad.bg/ebook/ml/The.MIT.Press.Dataset.Shift.in.Machine.Learning.Feb.2009.eBook-DDU.pdf) states that "Sample selection bias occurs when the training data points {$x_i$} (the sample) do not accurately represent the distribution of the test scenario (the population) due to a selection process for each item that is (usually implicitly) dependent on the target variable $y_i$." If you recall, during [feature construction](3.0-build-features.ipynb), we made a fleeting mention of the selection bias that was introduced by the way we sampled the training and test / validation sets. The training set was selected to consist entirely of deceased physicists and the test / validation set was selected to consist entirely of living physicists. The feature building process has already hinted that the physicists in these datasets have different characteristics. Remember the different feature columns created by one-hot encoding?
The data was purposely sampled in this way due to the realization that the Nobel Prize in Physics cannot be awarded posthumously. The selection bias is an inherent part of the problem, as one of the goals of this project is to try to predict the next Physics Nobel Laureates, who obviously must be alive. As a result, the selection bias is something that we have to live with. The aim here is to see if we can formally detect whether a covariate shift occurs between the training and test data. It is important to note that we will be using the validation set as a *proxy* for the test data, since the true performance of the model will be evaluated on the latter, which is meant to be unseen data. Naturally, this doesn't take into account that there may also be a covariate shift between the validation and the test set. However, hopefully the random sampling process that we employed to divide the living physicists into the validation and test sets will have mitigated this.
```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import make_scorer
from sklearn.metrics import matthews_corrcoef
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.utils import indices_to_mask
from src.features.features_utils import convert_categoricals_to_numerical
from src.models.metrics_utils import print_matthews_corrcoef
from src.stats.stats_utils import bootstrap_prediction
from src.stats.stats_utils import percentile_conf_int
from src.visualization.visualization_utils import plot_bootstrap_statistics
from src.visualization.visualization_utils import plot_logistic_regression_odds_ratio
%matplotlib inline
```
## Classifier Two-Sample Hypothesis Testing
A formal way of detecting whether there is a covariate shift between two samples (sets of identically and independently distributed examples) is to perform a [hypothesis test](https://en.wikipedia.org/wiki/Statistical_hypothesis_testing) known as a **two-sample test**. The goal of two-sample tests is to assess whether two samples, say $S_{train} \sim P$ and $S_{validation} \sim Q$, are drawn from the same probability distribution. Two-sample tests evaluate the difference between two distributions using the value of a [test statistic](https://en.wikipedia.org/wiki/Test_statistic) to either accept or reject the [null hypothesis](https://en.wikipedia.org/wiki/Null_hypothesis) $ H_0: P = Q$.
A rather elegant and informative way to perform a two-sample test is to train a binary classifier to distinguish between instances in $S_{train}$ and $S_{validation}$. The intuition is that if the null hypothesis is true, then the performance of such a binary classifier, as measured by an appropriate test statistic, will be approximately the same as random guessing. If the performance of such a classifier is better than chance-level, the null hypothesis is rejected in favor of the [alternative hypothesis](https://en.wikipedia.org/wiki/Alternative_hypothesis) $ H_1: P \neq Q$.
To test whether the null hypothesis $ H_0: P = Q$ is true, we will loosely follow the steps of *Lopez-Paz* and *Oquab* in [Revisiting Classifier Two-Sample Tests](https://arxiv.org/pdf/1610.06545). However, there are some notable differences to our approach here, which we will point out. Also, we will try not to over-burden the reader with heavy notation. We will be taking the following steps for both the original features and the features created from the topic modeling:
1. **Construct the dataset that is the union of the training and validation sets**, {$X = S_{train} \cup S_{validation}$, $y = y_{train} \cup y_{validation}$} where $X$ is the feature matrix and $y$ the target vector. Assign $y_{train} = 0$ and $y_{validation} = 1$ for all instances in the training and validation sets, respectively.
2. **Shuffle and split** $X$ **at random into disjoint training and test subsets** $X_{train}$ and $X_{test}$.
3. **Train a binary classifier** on $X_{train}$.
4. **Evaluate the performance of the binary classifier** by computing the Matthews Correlation Coefficient (MCC) as the test statistic on $X_{test}$:
\begin{equation}
\hat{t} \equiv MCC = \frac{TP \times TN - FP \times FN}{{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}}}
\end{equation}
where TP is the number of true positives, TN the number of true negatives, FP the number of false positives and FN the number of false negatives. This differs from the classification accuracy test statistic that is used in the paper. Accuracy is not an appropriate metric for the same reasons mentioned during our creation of the [baseline model](5.0-baseline-model.ipynb). There are many more instances in the training set than the validation set. MCC is a metric that will account for this imbalance of classes.
5. **Accept or reject the null hypothesis** by computing a 95% [bootstrap](https://en.wikipedia.org/wiki/Bootstrapping_(statistics)) [confidence interval](https://en.wikipedia.org/wiki/Confidence_interval) for $\hat{t} \equiv MCC$. If the confidence interval contains the value of $\hat{t} \equiv MCC$ corresponding to chance-level performance (i.e. $MCC = 0$) then accept the null hypothesis, otherwise reject it. Again this differs from the [p-value](https://en.wikipedia.org/wiki/P-value) approach taken in the paper; non-parametrics are needed here as we do not know the distribution of $\hat{t} \equiv MCC$. Anyway, confidence intervals are more informative than p-values as they elucidate the magnitude and precision of the estimated effect. Furthermore, [p-values and confidence intervals always agree about statistical significance](http://blog.minitab.com/blog/adventures-in-statistics-2/understanding-hypothesis-tests-confidence-intervals-and-confidence-levels), so the substitution of a confidence interval for a p-value is warranted.
First, let's construct the dataset that is the union of the training and validation data, making sure to convert the categorical fields to a numerical form that is suitable for building machine learning models.
```python
train_features = pd.read_csv('../data/processed/train-features.csv')
train_features = convert_categoricals_to_numerical(train_features)
train_features.head()
```
```python
train_features_topics = pd.read_csv('../data/processed/train-features-topics.csv')
train_features_topics = convert_categoricals_to_numerical(train_features_topics)
train_features_topics.head()
```
```python
validation_features = pd.read_csv('../data/processed/validation-features.csv')
validation_features = convert_categoricals_to_numerical(validation_features)
validation_features.head()
```
```python
validation_features_topics = pd.read_csv('../data/processed/validation-features-topics.csv')
validation_features_topics = convert_categoricals_to_numerical(validation_features_topics)
validation_features_topics.head()
```
```python
X = train_features.append(validation_features)
assert(len(X) == len(train_features) + len(validation_features))
X.head()
```
```python
X_topics = train_features_topics.append(validation_features_topics)
assert(len(X_topics) == len(train_features_topics) + len(validation_features_topics))
X_topics.head()
```
```python
y = pd.Series(np.concatenate((np.zeros(len(train_features), dtype='int64'),
np.ones(len(validation_features), dtype='int64'))),
index=train_features.index.append(validation_features.index))
assert(y.value_counts().equals(pd.Series([len(train_features), len(validation_features)], index=[0, 1])))
y.head()
```
Second, let's shuffle and split the dataset (in a stratified manner to maintain the class proportions) into disjoint training and test sets. We will use 80% of the data for training and 20% for testing.
```python
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, shuffle=True, stratify=y, random_state=0)
display(X_train.head())
display(X_test.head())
display(y_train.head())
y_test.head()
```
```python
X_train_topics = X_topics.loc[X_train.index, :]
X_test_topics = X_topics.loc[X_test.index, :]
display(X_train_topics.head())
X_test_topics.head()
```
Third, let's train a binary classifier on $X_{train}$. The choice of classifier is arbitrary, so we will choose logistic regression, making sure to select the regularization parameter, $C$, and the regularization penalty, $L1$ or $L2$, via stratified 5-fold cross-validation. We will be using the MCC for scoring.
```python
def fit_logit_classifier(X, y):
params = {'C': np.logspace(0, 1, 11), 'penalty': ['l1', 'l2']}
mcc_scorer = make_scorer(matthews_corrcoef)
clf = GridSearchCV(LogisticRegression(solver='liblinear'), param_grid=params, scoring=mcc_scorer,
cv=5, iid=False, return_train_score=True)
clf.fit(X, y)
return clf
```
```python
logit = fit_logit_classifier(X_train, y_train)
logit.best_params_, logit.best_estimator_
```
```python
logit_topics = fit_logit_classifier(X_train_topics, y_train)
logit_topics.best_params_, logit_topics.best_estimator_
```
Fourth, evaluate the performance of the classifier on $X_{test}$ using the MCC.
```python
print_matthews_corrcoef(
matthews_corrcoef(y_train, logit.best_estimator_.predict(X_train)), 'Features', data_label='train')
test_mcc = matthews_corrcoef(y_test, logit.best_estimator_.predict(X_test))
print_matthews_corrcoef(test_mcc, 'Features', data_label='test')
print_matthews_corrcoef(
matthews_corrcoef(y_train, logit_topics.best_estimator_.predict(X_train_topics)), 'Topics',
data_label='train')
test_topics_mcc = matthews_corrcoef(y_test, logit_topics.best_estimator_.predict(X_test_topics))
print_matthews_corrcoef(test_topics_mcc, 'Topics', data_label='test')
```
Fifth, accept or reject the null hypothesis $ H_0: P = Q$ by computing a 95% bootstrap confidence interval.
```python
n_estimators = 1000
max_samples = 0.8
n_jobs = -1
alpha = 0.05
mccs = bootstrap_prediction(
X, y, base_estimator=logit.best_estimator_, score_func=matthews_corrcoef, n_estimators=n_estimators,
max_samples=max_samples, n_jobs=n_jobs, random_state=2)
conf_int = percentile_conf_int(mccs, alpha=alpha)
```
```python
stat_label = 'Matthews Correlation Coefficient (MCC)'
ax = plot_bootstrap_statistics(
mccs, test_mcc, conf_int, alpha, 'test MCC:', stat_label,
title='Features Bootstrap Matthews Correlation Coefficient (MCC) \nfor 1000 samples')
ax.set_xlim(0, 0.5)
ax.set_ylim(0, 300);
```
```python
mccs = bootstrap_prediction(
X_topics, y, base_estimator=logit_topics.best_estimator_, score_func=matthews_corrcoef,
n_estimators=n_estimators, max_samples=max_samples, n_jobs=n_jobs, random_state=3)
conf_int = percentile_conf_int(mccs, alpha=alpha)
```
```python
ax = plot_bootstrap_statistics(
mccs, test_topics_mcc, conf_int, alpha, 'test MCC:', stat_label,
title='Topics Bootstrap Matthews Correlation Coefficient (MCC) \nfor 1000 samples')
ax.set_xlim(0, 0.4)
ax.set_ylim(0, 300);
```
The figures illustrate that the distributions of the MCCs for the bootstrap samples are Gaussian-like for both the original features and the topics features. The MCC for the test set of the full dataset, along with the upper and lower values of the 95% confidence intervals, are shown. It is clear that for both sets of features, the confidence interval does not contain the value of chance-level performance ($MCC = 0$). Hence in both cases, there is sufficient evidence to reject the null hypothesis in favor of the alternative hypothesis, $ H_1: P \neq Q$. The conclusion is that the training and validation sets are drawn from different distributions. In other words, there is a covariate shift in both feature sets. We can see that the severity of the covariate shift has been reduced significantly by constructing features with topic modeling.
## Shifting Predictors
We can determine the predictors that exhibit a covariate shift by looking at the coefficients of the logistic regression models. Each coefficient represents the impact that the *presence* vs. *absence* of a predictor has on the [log odds ratio](https://en.wikipedia.org/wiki/Odds_ratio#Role_in_logistic_regression) of a physicist being from the validation set (as opposed to being from the training set). The change in [odds ratio](https://en.wikipedia.org/wiki/Odds_ratio) for each predictor can simply be computed by exponentiating its associated coefficient.
Formally, a change in odds ratio of 1 for a particular predictor indicates that it is not shifting, whereas a value greater than 1 indicates a shift. As it is likely that a lot of predictors will have odds ratios of slightly over 1, we will loosely define a shifting predictor as one that has a change in odds ratio greater than 1.2. This will give us an idea of the predictors that contribute the most to the covariate shift in the data. These are plotted in the charts below for the two sets of predictors.
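For reference, the change in odds ratio per predictor can be computed directly from the fitted coefficients; the small sketch below just applies the computation described above (it is not the plotting helper itself, which additionally sorts and formats the chart):
```python
# odds ratios from the logistic regression coefficients (one per feature)
odds_ratios = pd.Series(np.exp(logit.best_estimator_.coef_.ravel()), index=X.columns)
# "shifting" predictors under the loose > 1.2 definition above
shifting_predictors = odds_ratios[odds_ratios > 1.2].sort_values(ascending=False)
shifting_predictors.head()
```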
```python
ax = plot_logistic_regression_odds_ratio(
logit.best_estimator_.coef_, top_n=1.2, columns=X.columns, title='Features covariate shifting',
plotting_context='talk')
ax.figure.set_size_inches(20, 30)
```
```python
ax = plot_logistic_regression_odds_ratio(
logit_topics.best_estimator_.coef_, top_n=1.2, columns=X_topics.columns,
title='Topics covariate shifting')
```
The charts certainly make intuitive sense as they illustrate the following changes over time:
1. Increase in globalization of physics.
2. Concentration of physics in the hubs of North America and Europe.
3. Coming to prominence of major American institutions as places of study and work.
4. Broadening of the research fields of theoretical physics and astronomy.
Now that we know which features are shifting, what should be done about it? A simple solution could be to drop these features. However, this would result in some loss of information and we do not yet know if these features are important in predicting our target. Furthermore, this would raise the question as to what should be the minimum change in odds ratio for identifying drifting features. The value of 1.2 chosen above was rather *ad hoc* as it was intended only to illustrate the features that contribute the most to the covariate shift in the data. It is clear that a more principled approach is needed to deal with the covariate shift in the data.
| 5588c6ec970199986cf7d9fe5a16ba8640e5a613 | 21,023 | ipynb | Jupyter Notebook | nobel_physics_prizes/notebooks/5.1-covariate-shift.ipynb | covuworie/nobel-physics-prizes | f89a32cd6eb9bbc9119a231bffee89b177ae847a | [
"MIT"
]
| 3 | 2019-08-21T05:35:42.000Z | 2020-10-08T21:28:51.000Z | nobel_physics_prizes/notebooks/5.1-covariate-shift.ipynb | covuworie/nobel-physics-prizes | f89a32cd6eb9bbc9119a231bffee89b177ae847a | [
"MIT"
]
| 139 | 2018-09-01T23:15:59.000Z | 2021-02-02T22:01:39.000Z | nobel_physics_prizes/notebooks/5.1-covariate-shift.ipynb | covuworie/nobel-physics-prizes | f89a32cd6eb9bbc9119a231bffee89b177ae847a | [
"MIT"
]
| null | null | null | 54.18299 | 1,148 | 0.695904 | true | 3,722 | Qwen/Qwen-72B | 1. YES
2. YES | 0.891811 | 0.851953 | 0.759781 | __label__eng_Latn | 0.987019 | 0.603558 |
# Estimation of Temperature and Pressure of a Constant Volume Propane-Oxygen Mixture
A recent project required a first-order approximation to determine if an explosive gas mixture would result in a tank rupture. The following analysis, done in Python, follows Cooper's analysis [1]. It provides a reasonable approximation; however, it is sensitive to the chemical reaction hierarchy assumed.
This post is based on an initial analysis of an article in [Inspire 12](https://en.wikipedia.org/wiki/Inspire_(magazine)). I used the [pint](https://pint.readthedocs.io/en/latest/) library to provide [dimensional analysis](https://en.wikipedia.org/wiki/Dimensional_analysis) where I could. A second post on this topic will be written where I used the [Cantera](https://cantera.org/) and [SDToolbox](http://shepherd.caltech.edu/EDL/PublicResources/sdt/) libraries to perform a more in depth analysis.
To start this analysis let's load the necessary Python libraries and some formating,
```python
# Setup for calcuations
import pint
u = pint.UnitRegistry()
u.default_format = '~P'
from prettytable import PrettyTable
from IPython.display import display, Math
from sympy import *
init_printing(use_unicode=True)
from sympy import *
init_printing(use_unicode=True)
from numpy import linspace
from sympy import lambdify
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('pdf', 'png')
plt.rcParams['savefig.dpi'] = 75
plt.rcParams['figure.autolayout'] = False
plt.rcParams['figure.figsize'] = 1.61803398875*8, 8
plt.rcParams['axes.labelsize'] = 20
plt.rcParams['axes.titlesize'] = 22
plt.rcParams['font.size'] = 18
plt.rcParams['lines.linewidth'] = 2.0
plt.rcParams['lines.markersize'] = 8
plt.rcParams['legend.fontsize'] = 16
plt.rcParams['text.usetex'] = True
plt.rcParams['font.family'] = 'serif'
plt.rcParams['font.weight'] = 'regular'
plt.rcParams['mathtext.fontset'] = 'dejavuserif'
```
## Volume of Standard 20 lb Propane Tank
To start this analysis we need to calculate the volume of a 20 lb propane tank. Using the weight of water held by a 20 lb propane tank, $wt_{H_{2}O}= 21.6\:kg$, and the density of water, $\rho_{H_{2}O}= 1000\:\frac{kg}{m^3}$, we can calculate the volume of the tank,
```python
wt_water = 21.6*u.kilogram
rho_water = 1000*u.kilogram/u.meter**3
v_tank = wt_water/rho_water
display(Math((r"V_{{tank}} = %s" % latex(v_tank))))
```
$\displaystyle V_{{tank}} = 0.0216 m³$
## Explosive Gas Mixture Recipe
The sections of the article relevant to the preparation of the gas mixture are:
- Discharge gas from the propane tank until only $P_{C_3H_8} = 4\:bar$ is left in it.
- Insert oxygen so that $P_{O_2}= 9\:bar$ is added.
Therefore, the partial pressure of propane is,
```python
p_c3h8 = 4*u.bar
p_c3h8.ito(u.pascal)
r = 8.314*(u.meter**3*u.pascal)/(u.mole*u.kelvin)
t_c3h8 = 293.2*u.kelvin
p_c3h8.ito(u.kilopascal)
display(Math((r"P_{{C_3H_8}} = %s" % latex(p_c3h8))))
p_c3h8.ito(u.pascal)
```
$\displaystyle P_{{C_3H_8}} = 400.0 kPa$
Using the ideal gas law, the weight and moles of propane are,
```python
n_c3h8 = (p_c3h8*v_tank)/(r*t_c3h8)
mw_c3h8 = 44.0956*u.gram/u.mole
wt_c3h8 = n_c3h8*mw_c3h8
wt_c3h8.ito(u.pound)
display(Math((r"W_{{C_3H_8}} = {:3.3}\:lb".format(latex(wt_c3h8)))))
display(Math((r"n_{{C_3H_8}} = {:3.3}\:mol".format(latex(n_c3h8)))))
```
$\displaystyle W_{C_3H_8} = 0.3\:lb$
$\displaystyle n_{C_3H_8} = 3.5\:mol$
The partial pressure of the oxygen would be,
```python
p_o2 = 9*u.bar
p_o2.ito(u.pascal)
r = 8.314*(u.meter**3*u.pascal)/(u.mole*u.kelvin)
t_o2 = 293.2*u.kelvin
p_o2.ito(u.kilopascal)
display(Math((r"P_{{O_2}} = %s" % latex(p_o2))))
p_o2.ito(u.pascal)
```
$\displaystyle P_{{O_2}} = 900.0 kPa$
Again using the ideal gas law the weight and moles of oxygen are,
```python
n_o2 = (p_o2*v_tank)/(r*t_o2)
mw_o2 = 31.9988*u.gram/u.mole
wt_o2 = n_o2*mw_o2
wt_o2.ito(u.pound)
display(Math((r"W_{{O_2}} = {:3.3}".format(latex(wt_o2)))))
display(Math((r"n_{{O_2}} = {:3.3}".format(latex(n_o2)))))
```
$\displaystyle W_{O_2} = 0.5$
$\displaystyle n_{O_2} = 7.9$
The total gas mixture weight is,
```python
wt_tot = wt_c3h8 + wt_o2
display(Math((r"W_{{tot}} = %s" % latex(wt_tot))))
wt_tot.ito(u.grams)
```
$\displaystyle W_{{tot}} = 0.9071511924432267 lb$
## Chemical Reaction
The CHNO reaction hierarchy is:
1. All carbon is burned to CO.
2. All the hydrogen is burned to H2O.
3. Any oxygen left after CO and H2O formation burns CO to CO2.
4. All the nitrogen forms N2.
5. Any oxygen remaining forms O2.
6. Any hydrogen remaining forms H2.
7. Any carbon remaining forms C.
For our reaction substituting the moles of propane and oxygen we have,
$$3.544C_3H_8 + 7.975O_2 → 10.623CO + 5.327H_2O + 8.849H_2$$
Normalizing on the number of moles of propane we have,
$$C_3H_8 + 2.25O_2 → 1.5H_2O + 3CO + 2.5H_2$$
This leaves 2.5 moles of hydrogen unreacted for every mole of propane burned, i.e. a fuel-rich reaction. Once the tank has ruptured, this free hydrogen and the carbon monoxide will react with the oxygen in the air.
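As a quick sanity check of the normalized coefficients (a small side calculation, not part of Cooper's procedure), the product hierarchy can be rebalanced directly for one mole of propane:
```python
# atoms available per mole of C3H8 reacting with 2.25 mol O2
n_C, n_H, n_O = 3, 8, 2 * 2.25
n_CO = n_C                       # all carbon burns to CO
n_O_left = n_O - n_CO            # oxygen remaining after CO formation
n_H2O = n_O_left                 # each H2O consumes one O atom (hydrogen is in excess)
n_H2 = (n_H - 2 * n_H2O) / 2     # leftover hydrogen pairs up as H2
print(f"C3H8 + 2.25 O2 -> {n_H2O} H2O + {n_CO} CO + {n_H2} H2")
```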
## Heat Produced
Because we are interested in the pressure and temperature inside the tank, we will ignore that it will eventually rupture and that the excess hydrogen will be consumed in the afterburn. In this case the heat evolved is equal to the heat of explosion of the propane.
$$\Delta H^{\circ}_{exp} = \sum \Delta H^{\circ}_{f}(\text{products}) - \sum \Delta H^{\circ}_{f}(\text{reactants})$$
From the National Institute of Standards and Technology (NIST) we have the enthalpies of formation for the products and reactants,
```python
# Enthalpy of Formation from NIST
d_H_c3h8 = -2354*u.joule/u.gram
d_H_h2o = -13420*u.joule/u.gram
d_H_co = -3945*u.joule/u.gram
# Molecular Weights from NIST
mw_h2o = 18.0153*u.gram/u.mole
mw_co = 28.0101*u.gram/u.mole
# Moles of Reactants
n_h2o = 5.327*u.moles
n_co = 10.623*u.moles
n_h2 = 8.849*u.moles
pt = PrettyTable()
print_mole_c3h8 = f"{n_c3h8:0.2f}"
print_mole_h2o = f"{n_h2o:0.2f}"
print_mole_co = f"{n_co:0.2f}"
print_H_c3h8 = f"{d_H_c3h8*mw_c3h8:0.2f}"
print_H_h2o = f"{d_H_h2o*mw_h2o:0.2f}"
print_H_co = f"{d_H_co*mw_co:0.2f}"
pt.field_names = ["Compound","Hf - J/g", "MW - g/mol", "Hf - J/mol", "n"]
pt.add_row(["Propane", d_H_c3h8, mw_c3h8, print_H_c3h8, print_mole_c3h8])
pt.add_row(["Water", d_H_h2o, mw_h2o, print_H_h2o, print_mole_h2o])
pt.add_row(["Carbon Monoxide", d_H_co, mw_co, print_H_co, print_mole_co])
print(pt)
```
+-----------------+--------------+---------------+-------------------------+------------+
| Compound | Hf - J/g | MW - g/mol | Hf - J/mol | n |
+-----------------+--------------+---------------+-------------------------+------------+
| Propane | -2354.0 J/g | 44.0956 g/mol | -103801.04 joule / mole | 3.54 mole |
| Water | -13420.0 J/g | 18.0153 g/mol | -241765.33 joule / mole | 5.33 mole |
| Carbon Monoxide | -3945.0 J/g | 28.0101 g/mol | -110499.84 joule / mole | 10.62 mole |
+-----------------+--------------+---------------+-------------------------+------------+
Calculating the heat of explosion we have,
```python
d_H_exp = (n_h2o*d_H_h2o*mw_h2o + n_co*d_H_co*mw_co)-n_c3h8*d_H_c3h8*mw_c3h8
display(Math(r'\Delta H_{{exp}} = {:.02f~P}'.format(d_H_exp)))
```
$\displaystyle \Delta H_{exp} = -2093813.84 J$
## TNT Equivalency
We can equate the heat of explosion of the propane-oxygen mixture to the heat of detonation of TNT to determine a TNT equivalency. We can get the experimentally derived heat of detonation of TNT from Cooper (page 132), which is,
```python
d_H_tnt = -247500*u.calorie/u.mole
d_H_tnt.ito(u.joules/u.mole)
display(Math(r'\Delta H_{{TNT}} = {:.02f~P}'.format(d_H_tnt)))
```
$\displaystyle \Delta H_{TNT} = -1035540.00 J/mol$
The molecular weight of TNT is,
```python
mw_tnt = 227.1*u.gram/u.mole
display(Math(r'MW_{{TNT}} = {:.02f~P}'.format(mw_tnt)))
```
$\displaystyle MW_{TNT} = 227.10 g/mol$
Dividing the heat of detonation of TNT by the molecular weight we have,
```python
d_h_tnt = d_H_tnt/mw_tnt
display(Math(r'\Delta h_{{TNT}} = {:.02f~P}'.format(d_h_tnt)))
```
$\displaystyle \Delta h_{TNT} = -4559.84 J/g$
If we divide the heat of explosion of propane calculated above by the weight of the propane-oxygen mixture we have,
```python
d_h_exp = d_H_exp/wt_tot
display(Math(r'\Delta h_{{exp}} = {:.02f~P}'.format(d_h_exp)))
```
$\displaystyle \Delta h_{exp} = -5088.53 J/g$
So we can estimate the TNT equivalency by dividing the specific heat of explosion of propane by the specific heat of detonation of TNT or,
```python
tnt_eqv = d_h_exp/d_h_tnt
display(Math(r'E_{{\Delta h}} = {:.02f~P}'.format(tnt_eqv)))
```
$\displaystyle E_{\Delta h} = 1.12$
So theoretically, the propane-oxygen mixture is 1.12 times more powerful than TNT; however, the hierarchy of products plays a significant role in the amount of energy calculated to be released. Depending on the method used, errors as large as $\pm 30\%$ can be observed as compared to experimentally determined values (Cooper pg 132). Using this margin of error a more realistic value for the TNT equivalency would be $1.12 \pm 0.34$.
## Temperature of the Gases
A simple method of determining the temperature of the gases inside the tank, $T_V$, is to use,
$$T_V = T_a\lambda$$
where, $T_a$ is the adiabatic flame temperature and $\lambda$ is the ratio of the specific heats of the gases, $\lambda=\frac{C_P}{C_V}$. The adiabatic flame temperature can be found from,
$$Q=n\int_{T_0}^{T_a} \! C_P \, \mathrm{d}T$$
where $C_P$ is the average specific heat of the combustion gases at constant pressure and $n$ is the number of moles of combustion gases. The average heat capacity, $C_P$, of the gases can be calculated using,
$$C_P=\sum_{i=1}^{n}\left(n_i\cdot C_{Pi}\right)$$
To calculate the mole fractions of the product gases we first need the total moles of products,
```python
N_prod = n_h2o+n_co+n_h2
display(Math(r'n_{{total}} = {:.02f~P}'.format(N_prod)))
```
$\displaystyle n_{total} = 24.80 mol$
the mole fractions are then,
```python
n_frac_co = n_co/N_prod
n_frac_h2o = n_h2o/N_prod
n_frac_h2 = n_h2/N_prod
display(Math(r'X_{{CO}} = {:.02f~P}'.format(n_frac_co)))
display(Math(r'X_{{H_2O}} = {:.02f~P}'.format(n_frac_h2o)))
display(Math(r'X_{{H_2}} = {:.02f~P}'.format(n_frac_h2)))
```
$\displaystyle X_{CO} = 0.43$
$\displaystyle X_{H_2O} = 0.21$
$\displaystyle X_{H_2} = 0.36$
From Cooper Table 8.2 the molar heat capacities of carbon monoxide, water, and hydrogen are,
```python
T= symbols('T', positive = True)
Ta= symbols('Ta', positive = True, real = True)
C_P_co = Function('C_P_co')('T')
C_P_h2o = Function('C_P_h2o')('T')
C_P_h2 = Function('C_P_h2')('T')
C_P_co = 6.350 + 1.811e-3*T - 0.2675e-6*T**2
C_P_h2o = 7.136 + 2.640e-3*T + 0.0459e-6*T**2
C_P_h2 = 6.946 - 0.196e-3*T + 0.4757e-6*T**2
display(Math((r"C_{{P-CO}} = %s" % latex(C_P_co))))
display(Math((r"C_{{P-H_2O}} = %s" % latex(C_P_h2o))))
display(Math((r"C_{{P-H2}} = %s" % latex(C_P_h2))))
```
$\displaystyle C_{{P-CO}} = - 2.675 \cdot 10^{-7} T^{2} + 0.001811 T + 6.35$
$\displaystyle C_{{P-H_2O}} = 4.59 \cdot 10^{-8} T^{2} + 0.00264 T + 7.136$
$\displaystyle C_{{P-H2}} = 4.757 \cdot 10^{-7} T^{2} - 0.000196 T + 6.946$
Multiplying each molar heat capacity by its mole fraction and summing, we obtain the average specific heat at constant pressure of the carbon monoxide, water, and hydrogen mixture,
```python
c_p = (n_frac_co.magnitude)*(C_P_co) + (n_frac_h2o.magnitude)*(C_P_h2o) + (n_frac_h2.magnitude)*(C_P_h2)
c_p
display(Math((r"C_{{P-avg}} = %s" % latex(c_p))))
```
$\displaystyle C_{{P-avg}} = 6.50157707972096 \cdot 10^{-8} T^{2} + 0.00127291943223517 T + 6.73150836727287$
Changing the units of the $\Delta H_{exp}$ to calories and integrating from $298\:K$ to $T_a$ we have,
$$Q=n\int_{T_0}^{T_a} \! C_P \, \mathrm{d}T$$
```python
d_H_exp.ito(u.calorie)
q = N_prod*integrate(c_p, (T, 298, Ta))
display(Math(r'\Delta H_{{exp}} = Q = {:.02f~P}'.format(d_H_exp)))
display(Math((r"Q = %s" % latex(q.magnitude))))
```
$\displaystyle \Delta H_{exp} = Q = -500433.52 cal$
$\displaystyle Q = 5.37442033333333 \cdot 10^{-7} Ta^{3} + 0.0157835645 Ta^{2} + 166.934676 Ta - 51162.3997565518$
The only thing we don't know in the equation is $T_a$, so solving for $T_a$ we have,
```python
sol = solve(q.magnitude + d_H_exp.magnitude, Ta)[0]*u.K
display(Math((r"T_a = {:6.6} \:K".format(latex(sol)))))
```
$\displaystyle T_a = 2605.4 \:K$
Plotting $Q$ vs. $T_a$ we have,
```python
sol = solve(q.magnitude + d_H_exp.magnitude, Ta)[0] # this was added because there are no units.
r = [sol, d_H_exp.magnitude]
lam_x = lambdify(Ta, q.magnitude, modules=['numpy'])
x_vals = linspace(0, 3000, 100)
y_vals = lam_x(x_vals)/1000
plt.plot(y_vals, x_vals, -d_H_exp.magnitude/1000, sol, 'ro')
plt.axhline(sol, color = 'red', linestyle = ':')
plt.axvline(-d_H_exp.magnitude/1000, color = 'red', linestyle = ':')
arrowprops = dict(
arrowstyle = "-|>",
connectionstyle = "angle, angleB=45,rad=10")
offset = 400
plt.annotate('$(%.1f, %.1f)$'%(-d_H_exp.magnitude/1000, sol), (-d_H_exp.magnitude/1000, sol),
xytext=(350, 1500), arrowprops=arrowprops)
plt.ylabel(r"$T_a (K)$")
plt.xlabel("Q (kcal)")
plt.show()
```
Now we can use the ratio of specific heats, $\lambda$, for the correction to constant volume. We must calculate $\lambda$ for this mixture of gases using,
$$\lambda_{avg} = \sum_i n_i\cdot \lambda_i$$
where $n_i$ is the mole fraction and $\lambda$ is the ratio of specific heats for each product.
```python
lambda_h2o = 1.324
lambda_co = 1.404
lambda_h2 = 1.410
lambda_avg = n_frac_co*lambda_co + n_frac_h2*lambda_h2 + n_frac_h2o*lambda_h2o
display(Math((r"\lambda_{{avg}} = {:3.3}".format(latex(lambda_avg)))))
```
$\displaystyle \lambda_{avg} = 1.3$
The temperature at constant volume is given by,
$$T_v = T_a\cdot \lambda_{avg}$$
```python
T_v = sol*lambda_avg
display(Math((r"T_{{V}} = {:6.6} \:K".format(latex(T_v*u.K)))))
```
$\displaystyle T_{V} = 3618.8 \:K$
## Pressure of Gases
Assuming pressures above 200 atm, we can no longer use the ideal gas law as the equation of state (EOS). A common EOS, widely used in the field of interior ballistics for pressures in this range, is the Noble-Abel EOS:
$$P\left(V-0.025\cdot N_{{products}}\right)=0.0821\cdot N_{{products}}\cdot T$$
```python
v_tank.ito(u.liter)
P_v = (0.0821*N_prod.magnitude*T_v)/(v_tank.magnitude-0.025*N_prod.magnitude)
display(Math((r"P = {:5.5} \:atm \:(5158.28\:psi)".format(latex(P_v)))))
```
$\displaystyle P = 351.1 \:atm \:(5158.28\:psi)$
This is almost 5 times the burst pressure of a standard 20 lb propane tank and would cause the tank to fail catastrophically.
## References
1. P. Cooper, *Explosives Engineering*. New York, NY: Wiley-VCH, 1996.
| 9cb109ee4275f5a05b04259581e52d35baaa1067 | 160,071 | ipynb | Jupyter Notebook | _jupyter/tank_burst_analysis.ipynb | lightsquared/lightsquared.github.io | d7cb83732d325ad8c76d3328ffd6cd183785bc50 | [
"MIT"
]
| null | null | null | _jupyter/tank_burst_analysis.ipynb | lightsquared/lightsquared.github.io | d7cb83732d325ad8c76d3328ffd6cd183785bc50 | [
"MIT"
]
| null | null | null | _jupyter/tank_burst_analysis.ipynb | lightsquared/lightsquared.github.io | d7cb83732d325ad8c76d3328ffd6cd183785bc50 | [
"MIT"
]
| null | null | null | 143.048257 | 92,770 | 0.85905 | true | 5,060 | Qwen/Qwen-72B | 1. YES
2. YES | 0.897695 | 0.727975 | 0.6535 | __label__eng_Latn | 0.740201 | 0.356631 |
# Extremal linkage networks
This notebook contains code accompanying the paper [extremal linkage networks](https://arxiv.org/abs/1904.01817).
We first implement the network dynamics and then rely on [TikZ](https://github.com/pgf-tikz/pgf) for visualization.
## The Model
We define a random network on an infinite set of layers, each consisting of $N \ge 1$ nodes. The node $i \in \{0, \dots, N - 1\}$ in layer $h \in \mathbb Z$ has a fitness $F_{i, h}$, where we assume the family $\{F_{i, h}\}_{i \in \{0, \dots, N - 1\}, h \in \mathbb Z}$ to be independent and identically distributed (i.i.d.).
Then, the number of nodes on layer $h+1$ that are visible for the $i$th node in layer $h$, which we call the *scope* of $(i,h)$, is given by $\varphi(F_{i, h}) \wedge N$, where
\begin{equation}\label{eqPhiDef}
\varphi(f) = 1 + 2 \lceil f \rceil.
\end{equation}
Now, $(i, h)$ connects to precisely one visible node $(j, h+1)$ in layer $h+1$, namely the one of maximum fitness. In other words,
$$F_{j, h+1} = \max_{j':\, d_N(i, j') \le \lceil F_{i, h}\rceil}F_{j', h+1}.$$
Henceforth, we assume the fitnesses to follow a Fréchet distribution with tail index 1. That is,
$$\mathbb P(F \le s) = \exp(-s^{-1}).$$
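In the simulation code below, these fitnesses are drawn by inverse-transform sampling (a short justification added here for reference): if $U$ is uniform on $(0,1)$, then $F = 1/\log(1/U)$ satisfies
$$\mathbb P(F \le s) = \mathbb P\big(U \le e^{-1/s}\big) = \exp(-s^{-1}),$$
which is exactly the Fréchet distribution above.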
## Simulation of Network Dynamics
```python
def simulate_network(hrange = 250,
layers = 6):
"""Simulation of the network model
# Arguments
hrange: horizontal range of the network
layers: number of layers
# Result
fitnesses and selected edge
"""
#generate fréchet distribution
fits = np.array([1/np.log(1/np.random.rand(hrange)) for _ in range(layers)])
    fits_int = 1 + np.array(fits, dtype=int)
#determine possible neighbors
neighbs = [[(idx + np.arange(-fit, fit + 1)) % hrange
for idx, fit in enumerate(layer) ]
for layer in fits_int]
#determine selected neighbor
sel_edge = [[neighb[np.argmax(fit[neighb])]
for neighb in nb]
for (fit, nb) in zip(np.roll(fits, -1, 0), neighbs)]
return fits, sel_edge
```
Now, we simulate the random network model as described above.
```python
import numpy as np
#seed
seed = 56
np.random.seed(seed)
fits, edges = simulate_network()
```
## Visualization
Now, plot the network in tikz.
```python
def plot_synapses(edges,
idxs = np.arange(102, 131),
layers = 4,
x_scale = .15,
node_scale = .5):
"""Plot relevant synapses
# Arguments
idxs: indexes of layer-0 node
edges: edges in the linkage graph
layers: number of layers to be plotted
x_scale: scaling in x-direction
node_scale: scaling of nodes
# Result
tikz representation of graph
"""
result = []
#horizontal range
hrange = len(edges[0])
#plot layer by layer
for layer in range(layers):
#plot points
result +=["\\fill ({0:1.2f}, {1:1.2f}) circle ({2:1.1f}pt);\n".format((idx % hrange) * x_scale,
layer,
node_scale * np.log(fits)[layer, idx])
for idx in idxs]
#plot edges
string = "\\draw[line width = .5pt]"
string += " ({0:1.2f}, {1:1.2f})--({2:1.2f}, {3:1.2f});\n"
path_unordered = [string.format(idx * x_scale,
layer,
edges[layer][idx] * x_scale,
layer + 1) for idx in idxs]
result += path_unordered
#update indexes
idxs = np.unique([edges[layer][idx] for idx in idxs])
#plot points
result +=["\\fill ({0:1.2f}, {1:1.2f}) circle ({2:1.1f}pt);\n".format((idx % hrange) * x_scale,
layers,
node_scale * np.log(fits)[layer + 1, idx])
for idx in idxs]
tikz = ''.join(result)
return '\\begin{tikzpicture}\n' + tikz + '\\end{tikzpicture}\n'
```
Finally, we write to a file.
```python
fname = 'coalesc.tex'
f = open(fname, "w")
f.write(plot_synapses(edges))  # idxs defaults to the range set in plot_synapses
f.close()
```
```python
!pdflatex evolFig.tex
```
| 4abcc8503d5368a877c105a13c61459eab919fcf | 7,388 | ipynb | Jupyter Notebook | simulation.ipynb | Christian-Hirsch/extremal_linkage | dea32732b2b8ec53d5b356f38c215de1381fa35f | [
"MIT"
]
| null | null | null | simulation.ipynb | Christian-Hirsch/extremal_linkage | dea32732b2b8ec53d5b356f38c215de1381fa35f | [
"MIT"
]
| null | null | null | simulation.ipynb | Christian-Hirsch/extremal_linkage | dea32732b2b8ec53d5b356f38c215de1381fa35f | [
"MIT"
]
| null | null | null | 28.972549 | 349 | 0.458446 | true | 1,222 | Qwen/Qwen-72B | 1. YES
2. YES | 0.841826 | 0.740174 | 0.623098 | __label__eng_Latn | 0.897365 | 0.285995 |
# Pulse stream recovery under additive noise
In the previous notebooks, we considered the pulse stream recovery problem under no noise. Here we investigate the effect of **_additive noise_** on our recovery process. Moreover, we consider:
\begin{align}
y_{meas}[n] = y_{BL}[n] + w[n], \nonumber
\end{align}
where $w[n]$ can be modelled as additive white Gaussian noise with variance $\sigma^2$.
```python
import numpy as np
import matplotlib.pyplot as plt
import plot_settings
import sys
sys.path.append('..')
from frius import create_pulse_param, sample_ideal_project, estimate_fourier_coeff, compute_ann_filt, estimate_time_param, estimate_amplitudes, evaluate_recovered_param
from frius import add_noise # 'us_utils.py'
```
# 1. Oversampling
Our first "mechanism" to deal with noise is to **_oversample_**. This is not too different from the classical bandlimited situation in which a particular choice for the bandwidth may be too "restrictive" as an approximation of our original signal. Therefore, in order to reduce this _approximation error_, i.e. represent our original signal more faithfully as a bandlimited signal, we may need to:
* Increase the bandwidth of our anti-aliasing filter / generation function.
* Increase the sampling rate (i.e. take more samples per unit of time).
These two steps are precisely what will be done for the pulse stream scenario _prior_ to applying a denoising algorithm.
# 2. Exploiting redundancy in extra measurements
Assuming that we are correct in our choice of the number of pulses $K$, oversampling, i.e. taking $N\geq 2K+1$ samples, would yield more measurements than degrees of freedom in the noiseless case: $\rho = 2K$. Under noise, more than $2K$ degrees of freedom are needed to represent our signal. Therefore, it is natural that we need more samples.
For the pulse stream recovery problem, we will define an oversampling factor $\beta > 1$ so that an _integer_ number of samples $N$ is given by:
\begin{align}
N = 2\beta K +1. \nonumber
\end{align}
As a reminder, we need an odd number of measurements ($2K+1$ for critical sampling) in order to use real-valued sampling kernels.
We inevitably have to bring in some more math now to see how we can exploit the redundancy in our extra measurements. In the last notebook, in particular in the Appendix, we saw in more detail what the annihilation filter entailed, namely the discrete convolution between the (equalized) Fourier coefficients and this filter is equal to zero for all indices. We can write this annihilation property (or constraint within the context of denoising) as a matrix-vector product:
\begin{align}
\begin{bmatrix}
\hat{x}[\mathrel{{-M}{+}{K}}]&\hat{x}[\mathrel{{-M}{+}{K}{-}{1}}] & \cdots & \hat{x}[-M]\\
\vdots&\ddots & \ddots & \vdots\\
\hat{x}[1] & \hat{x}[0] & \cdots & \hat{x}[\mathrel{{-K}{+}{1}}] \\
\hat{x}[2] & \hat{x}[1] & \cdots & \hat{x}[\mathrel{{-K}{+}{2}}] \\
\vdots&\vdots& \ddots & \vdots\\
\hat{x}[M] & \hat{x}[\mathrel{{M}{-}{1}}] & \cdots & \hat{x}[\mathrel{{M}{-}{K}}]
\end{bmatrix}
\underbrace{\begin{bmatrix} a[0] \\ a[1] \\ a[2] \\ \vdots\\ a[K]\end{bmatrix}}_{\mathbf{a}}
&=
\mathbf{0}. \nonumber
\end{align}
One thing to notice about the matrix of Fourier coefficients ($\hat{x}[n]$) above is its [**_Toeplitz_**](https://en.wikipedia.org/wiki/Toeplitz_matrix) structure, i.e. descending diagonals from left to right are constant. We can write the above annihilation constraint more concisely as:
\begin{align}
\mathbf{T}(\mathbf{\hat{x}}, \mathrel{{K}{+}{1}}) \hspace{0.05cm}\mathbf{a} = \mathbf{0}, \nonumber
\end{align}
where the operator $ \mathbf{T}(\mathbf{\hat{x}}, \mathrel{{K}{+}{1}}) $ forms the Toeplitz matrix from $ \mathbf{\hat{x}} = \{\hat{x}[m]\}_{m=-M}^{M} $ with $ (K+1 )$ columns.
If there are truly $K$ pulses, another (less intuitive) observation we can make about the matrix $ \mathbf{T}(\mathbf{\hat{x}}, \mathrel{{K}{+}{1}}) $ is that it has a rank of at most $K$. In the [Appendix](#app) of this notebook, we prove this rank property. With our extra measurements, we can then exploit this rank property for recovery under noise!
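Before moving on, here is a quick numerical check of this rank property (a self-contained sketch using the closed-form Fourier coefficients of a Dirac stream, not the `frius` pipeline used elsewhere in this notebook):
```python
import numpy as np
from scipy.linalg import toeplitz

np.random.seed(0)
K, M, period = 4, 10, 1.0
ck = np.random.randn(K)                    # pulse amplitudes
tk = np.sort(np.random.rand(K)) * period   # pulse locations

# noiseless Fourier series coefficients of the Dirac stream, m = -M, ..., M
m = np.arange(-M, M + 1)
fs_coeff = (ck[np.newaxis, :] * np.exp(-2j*np.pi*np.outer(m, tk)/period)).sum(axis=1) / period

# Toeplitz matrix with (K+1) columns: first column and (flipped) first row
T_mat = toeplitz(fs_coeff[K:], r=np.flipud(fs_coeff[:K+1]))
print("Shape:", T_mat.shape)
print("Numerical rank:", np.linalg.matrix_rank(T_mat))   # should equal K
```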
## Total least-squares (TLS)
The simplest "denoising" scheme, although it is more akin to parameter extraction, is to perform the [Singular Value Decomposition](https://en.wikipedia.org/wiki/Singular-value_decomposition) (SVD) of the Toeplitz matrix:
\begin{align}
\mathbf{T}(\mathbf{\hat{x}}, \mathrel{{K}{+}{1}}) = \mathbf{U} \mathbf{\Sigma} \mathbf{V}^H, \nonumber
\end{align}
where $ \mathbf{\Sigma} \in \mathbb{C}^{(2M+1-K)\times(K+1)} $ is a rectangular diagonal matrix of singular values in _decreasing order_ along the diagonal, $ \mathbf{U}\in \mathbb{C}^{(2M+1-K)\times(2M+1-K)} $ is a [unitary](https://en.wikipedia.org/wiki/Unitary_matrix) matrix, and $ \mathbf{V}\in\mathbb{C}^{(K+1)\times(K+1)} $ is another unitary matrix. Since $ \text{rank}\big(\mathbf{T}(\mathbf{\hat{x}}, \mathrel{{K}{+}{1}}) \big)=K $ for noiseless Fourier coefficients, the $ (K+1)^{th} $ singular value along the diagonal of $ \mathbf{\Sigma} $ should ideally be equal to zero!
We can write this as:
\begin{align}
\sigma_{K+1}\big(\mathbf{T}(\mathbf{\hat{x}}, \mathrel{{K}{+}{1}}) \big) = 0, \nonumber
\end{align}
where $ \sigma_K(\mathbf{A}) $ denotes the $ K^{th} $ largest singular value of $ \mathbf{A} $. Therefore, the <a href="https://en.wikipedia.org/wiki/Kernel_(linear_algebra)">null space</a> of $ \mathbf{T}(\mathbf{\hat{x}}, \mathrel{{K}{+}{1}}) $ is spanned by the $ (\mathrel{{K}{+}{1}})^{th} $ column of $ \mathbf{V} $, i.e. the last column.
Mathematically, this implies:
\begin{equation}
\mathbf{T}(\mathbf{\hat{x}}, \mathrel{{K}{+}{1}})\hspace{0.05cm}\mathbf{V}_{(-1)} = \mathbf{0}, \nonumber
\end{equation}
where $ \mathbf{V}_{(-1)}$ extracts the last column of $ \mathbf{V} $. This means that $ \mathbf{V}_{(-1)} $ meets the annihilation constraint we saw earlier: $\mathbf{T}(\mathbf{\hat{x}}, \mathrel{{K}{+}{1}}) \hspace{0.05cm}\mathbf{a} = \mathbf{0}$.
However, _noisy_ Fourier coefficients $ \mathbf{\hat{x}}_{noisy} $ estimated from the measured samples will typically result in a Toeplitz matrix $ \mathbf{T}(\mathbf{\hat{x}}_{noisy}, \mathrel{{K}{+}{1}} ) $ with a rank larger than $ K $. Therefore, the $ (\mathrel{{K}{+}{1}})^{th} $ singular value of $ \mathbf{T}(\mathbf{\hat{x}}_{noisy}, \mathrel{{K}{+}{1}} )$ may not be equal to zero. Nonetheless, if this singular value is significantly smaller than the $ K^{th} $ singular value, $ \mathbf{V}_{(-1)} $ may still be a sufficient candidate for the annihilating filter.
This method for obtaining the annihilating filter with the SVD is often called the **_total least-squares approach_** and abbreviated as TLS. It was first suggested within the context of FRI in [1].
Let's see how this approach works with synthetic data! We will consider additive white Gaussian noise at various _signal-to-noise ratio_ (SNR) values. The variance of the noise will be set according to the signal energy and the desired SNR in dB. Please refer to the function `'add_noise'` in `'frius/us_utils.py'` for the implementation details.
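For reference, a minimal sketch of how such a noise-adding routine might look (the real implementation lives in `frius/us_utils.py`; the function below is only illustrative and is not the library's API):
```python
import numpy as np

def add_noise_sketch(y, snr_db, seed=None):
    """Add white Gaussian noise so that 10*log10(signal power / noise power) = snr_db."""
    rng = np.random.RandomState(seed)
    signal_power = np.mean(np.abs(y)**2)
    noise_power = signal_power / (10**(snr_db / 10))
    return y + rng.randn(*y.shape) * np.sqrt(noise_power)
```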
# 3. Simulations
We first consider critical sampling and recovery as seen in the previous notebooks for the noiseless case _but_ with noisy samples.
```python
# signal parameters
K = 6
period = 1
ck, tk = create_pulse_param(K=K, period=period)
# critical sampling
y_samp, t_samp, fs_ind_base = sample_ideal_project(ck, tk, period=period)
# add noise
snr_db = 50
y_noisy = add_noise(y_samp, snr_db=snr_db)
# recovery
freqs = fs_ind_base/period
fs_coeff_hat = estimate_fourier_coeff(y_noisy, t_samp)
ann_filt = compute_ann_filt(fs_coeff_hat, K)
tk_hat = estimate_time_param(ann_filt, period)
ck_hat = estimate_amplitudes(fs_coeff_hat, freqs, tk_hat, period)
evaluate_recovered_param(ck, tk, ck_hat, tk_hat, viz=True, figsize=(10,5), t_max=period)
plt.legend(loc="lower left");
```
With critical sampling, we can see that we completely miss out on one of the Diracs around $0.45$ seconds and are a little bit off for the Dirac around $0.4$ seconds.
With the same signal parameters and SNR, we will oversample the FRI by a factor of $\beta = 2$ and apply the TLS approach described above.
```python
from scipy.linalg import toeplitz, svd
def total_least_squares(fs_coeff, K):
col1 = fs_coeff[K:]
row1 = np.flipud(fs_coeff[:K+1])
A_top = toeplitz(col1, r=row1)
U, s, Vh = svd(A_top)
print("K/(K+1) singular value = %f\n" % (s[K-1]/s[K]))
return np.conj(Vh[-1, :])
oversample_freq = 2
y_samp, t_samp, fs_ind_base = sample_ideal_project(ck, tk, period=period, oversample_freq=oversample_freq)
# add noise
y_noisy = add_noise(y_samp, snr_db=snr_db)
# recovery
freqs = fs_ind_base/period
fs_coeff_hat = estimate_fourier_coeff(y_noisy, t_samp)
ann_filt = total_least_squares(fs_coeff_hat, K)
# ann_filt = compute_ann_filt(fs_coeff_hat, K, print_ratio=True)
tk_hat = estimate_time_param(ann_filt, period)
ck_hat = estimate_amplitudes(fs_coeff_hat, freqs, tk_hat, period)
evaluate_recovered_param(ck, tk, ck_hat, tk_hat, viz=True, figsize=(10,5), t_max=period)
plt.legend(loc="lower left");
```
And we are able to obtain the pulse parameters quite well!
It is also possible to use the same function `'compute_ann_filt'` from `'frius/fri_utils.py'`, as it will detect if we are in a scenario where we have more Fourier coefficients than $(2K+1)$ and apply TLS if it is the case.
# More noise!
An SNR of $50$ dB corresponds to very little noise. Below we plot the clean and noisy samples from our critical sampling scenario.
```python
y_samp, t_samp, fs_ind_base = sample_ideal_project(ck, tk, period=period)
# add noise
snr_db = 50
y_noisy = add_noise(y_samp, snr_db=snr_db)
# visualize
plt.figure(figsize=(10,5))
plt.plot(t_samp, y_samp, label="Clean")
plt.plot(t_samp, y_noisy, label="Noisy")
plt.xlabel("Time [seconds]")
plt.xlim([0, period])
plt.grid()
plt.legend();
```
The two signals are practically identical! This reflects the very sensitive behavior of the recovery algorithm even to the _slightest_ perturbation. Let's try a smaller SNR, i.e. one that will result in a noticeable perturbation of the clean samples, and with an oversampling factor of $\beta= 7$. Generally, with more noise, we will need to oversample by a larger factor.
```python
oversample_freq = 7
y_samp, t_samp, fs_ind_base = sample_ideal_project(ck, tk, period=period, oversample_freq=oversample_freq)
# add noise
snr_db = 10
y_noisy = add_noise(y_samp, snr_db=snr_db)
# visualize
plt.figure(figsize=(10,5))
plt.plot(t_samp, y_samp, label="Clean")
plt.plot(t_samp, y_noisy, label="Noisy")
plt.xlabel("Time [seconds]")
plt.xlim([0, period])
plt.grid()
plt.legend();
```
And now let's try recovering the pulse parameters.
```python
# recovery
freqs = fs_ind_base/period
fs_coeff_hat = estimate_fourier_coeff(y_noisy, t_samp)
ann_filt = compute_ann_filt(fs_coeff_hat, K, print_ratio=True)
tk_hat = estimate_time_param(ann_filt, period)
ck_hat = estimate_amplitudes(fs_coeff_hat, freqs, tk_hat, period)
evaluate_recovered_param(ck, tk, ck_hat, tk_hat, viz=True, figsize=(10,5), t_max=period)
plt.legend(loc="lower left");
```
Not so great, but we can also notice that the ratio between the $K^{th}$ and $ (\mathrel{{K}{+}{1}})^{th} $ singular value is considerably smaller ($1.34$ compared to $22.7$).
If we increase the oversampling factor to $\beta=10$, we are able to perform a better recovery. However, we still miss out on one Dirac, and the singular value ratio is still not as large as we would expect for the (nearly) rank-$K$ Toeplitz matrix we are after.
```python
oversample_freq = 10
y_samp, t_samp, fs_ind_base = sample_ideal_project(ck, tk, period=period, oversample_freq=oversample_freq)
# add noise
y_noisy = add_noise(y_samp, snr_db=snr_db)
# recovery
freqs = fs_ind_base/period
fs_coeff_hat = estimate_fourier_coeff(y_noisy, t_samp)
ann_filt = compute_ann_filt(fs_coeff_hat, K, print_ratio=True)
tk_hat = estimate_time_param(ann_filt, period)
ck_hat = estimate_amplitudes(fs_coeff_hat, freqs, tk_hat, period)
evaluate_recovered_param(ck, tk, ck_hat, tk_hat, viz=True, figsize=(10,5), t_max=period)
plt.legend(loc="lower left");
```
This difficulty for lower SNRs motivates another denoising approach, which we will consider in the following notebook, called **_Cadzow's iterative denoising_**. It is also suggested by [1] and again exploits the low rank property of the Toeplitz matrix of Fourier coefficients in order to perform a "model matching" step prior to TLS.
As a small teaser, below we employ Cadzow's iterative denoising to (quite) successfully recover the pulse parameters for an oversampling factor of $\beta=7$.
```python
from frius import cadzow_denoising
oversample_freq = 7
snr_db = 10
y_samp, t_samp, fs_ind_base = sample_ideal_project(ck, tk, period=period, oversample_freq=oversample_freq)
# add noise
y_noisy = add_noise(y_samp, snr_db=snr_db)
# recovery
freqs = fs_ind_base/period
fs_coeff_hat = estimate_fourier_coeff(y_noisy, t_samp)
fs_coeff_hat = cadzow_denoising(fs_coeff_hat, K, n_iter=2)
ann_filt = compute_ann_filt(fs_coeff_hat, K, print_ratio=True)
tk_hat = estimate_time_param(ann_filt, period)
ck_hat = estimate_amplitudes(fs_coeff_hat, freqs, tk_hat, period)
evaluate_recovered_param(ck, tk, ck_hat, tk_hat, viz=True, figsize=(10,5), t_max=period)
plt.legend(loc="lower left");
```
That's certainly a more representative singular value ratio for a Toeplitz matrix with rank $K$!
<a id='app'></a>
# Appendix: Rank of Toeplitz matrix from pulse stream Fourier coefficients
Assuming that we are correct in our choice of $ K $ pulses and that we have obtained noiseless Fourier coefficients $ \{\hat{x}[m]\}_{m=-M}^{M} $ for $ M \geq K $, the rectangular Toeplitz matrix
\begin{align}
\mathbf{T}(\mathbf{\hat{x}}, \mathrel{{K}{+}{1}}) = \begin{bmatrix}
\hat{x}[\mathrel{{-M}{+}{K}}]&\hat{x}[\mathrel{{-M}{+}{K}{-}{1}}] & \cdots & \hat{x}[-M]\\
\vdots&\ddots & \ddots & \vdots\\
\hat{x}[1] & \hat{x}[0] & \cdots & \hat{x}[\mathrel{{-K}{+}{1}}] \\
\hat{x}[2] & \hat{x}[1] & \cdots & \hat{x}[\mathrel{{-K}{+}{2}}] \\
\vdots&\vdots& \ddots & \vdots\\
\hat{x}[M] & \hat{x}[\mathrel{{M}{-}{1}}] & \cdots & \hat{x}[\mathrel{{M}{-}{K}}]
\end{bmatrix}, \nonumber
\end{align}
of size $ (\mathrel{{2M}{+}{1}{-}{K}})\times(\mathrel{{K}{+}{1}}) $ has a rank of $ K $. This rank property can be verified by reminding ourselves of the expression for each $ \{\hat{x}[m]\}_{m=-M}^{M} $ for the pulse stream model (through the Fourier Series representation):
\begin{align}
\hat{x}[m] &= \sum_{k=0}^{K-1} c_k \hspace{0.05cm} \exp(-j2\pi m t_k/T) \nonumber\\
&= \sum_{k=0}^{K-1} c_k \hspace{0.05cm} u_k^m, \nonumber
\end{align}
where $u_k = \exp(-j2\pi t_k/T)$. We can therefore write $ \mathbf{T}(\mathbf{\hat{x}}, \mathrel{{K}{+}{1}})$ as:
\begin{align}
\mathbf{T}(\mathbf{\hat{x}}, \mathrel{{K}{+}{1}})&=
\underbrace{
\begin{bmatrix}
u_0^{-M+K}&u_1^{-M+K} & \cdots & u_{K-1}^{-M+K} \\
u_0^{-M+K-1}&u_1^{-M+K-1} & \cdots & u_{K-1}^{-M+K-1} \\
\vdots&\vdots & \ddots & \vdots\\[0.2cm]
u_0^{M}&u_1^{M} & \cdots & u_{K-1}^{M}
\end{bmatrix}}_{\mathbf{C}}
\cdot
\underbrace{
% for same height
\vphantom{ \begin{bmatrix}
u_0^{-M+K}&u_1^{-M+K} & \cdots & u_{K-1}^{-M+K} \\
u_0^{-M+K-1}&u_1^{-M+K-1} & \cdots & u_{K-1}^{-M+K-1} \\
\vdots&\vdots & \ddots & \vdots\\[0.2cm]
u_0^{M}&u_1^{M} & \cdots & u_{K-1}^{M}
\end{bmatrix}}
\begin{bmatrix}
c_0&0 & \cdots & 0 \\
0&c_1& \cdots & 0\\
\vdots&\vdots & \ddots & \vdots\\[0.1cm]
0&0& \cdots & c_{K-1}
\end{bmatrix}}_{\mathbf{D}}
\cdot
\underbrace{\begin{bmatrix}
1&u_0^{-1} & \cdots & u_{0}^{-K} \\
1&u_1^{-1} & \cdots & u_{1}^{-K} \\
\vdots&\vdots & \ddots & \vdots\\[0.2cm]
1 & u_{K-1}^{-1} &\cdots & u_{K-1}^{-K}.
\end{bmatrix}}_{\mathbf{E}}, \nonumber
\end{align}
where $ \mathbf{C}\in \mathbb{C}^{ (2M+1-K)\times K} $, $ \mathbf{D} \in \mathbb{C}^{K\times K} $, and $ \mathbf{E}\in \mathbb{C}^{K\times (K+1)}$. The diagonal matrix $ \mathbf{D} $ certainly has a rank of $ K $ so using the following rank properties (where $ \mathbf{A} \in \mathbb{C}^{m\times n}$):
\begin{align}
\text{rank}(\mathbf{A}) &\leq \text{min}(m,n), \nonumber\\
\text{rank}(\mathbf{A}\mathbf{B}) &\leq \min\Big(\text{rank}(\mathbf{A}), \text{rank}(\mathbf{B})\Big), \nonumber
\end{align}
we know that $ \text{rank}\big(\mathbf{T}(\mathbf{\hat{x}})\big) \leq K$ since both dimensions of $ \mathbf{C} $ and $ \mathbf{E} $ are $ \geq K $. Moreover, as $ \mathbf{C} $ and $ \mathbf{E} $ have a Vandermonde structure _and_ the $ u_k $'s are distinct (as we are assuming $ K $ different pulses), both $ \mathbf{C} $ and $ \mathbf{E} $ also have rank $ K $, i.e. their smallest dimension. Therefore, $ \text{rank}\big(\mathbf{T}(\mathbf{\hat{x}}, \mathrel{{K}{+}{1}})\big) = K$.
Even if we chose to create a Toeplitz with more columns, i.e. $ \mathbf{T}(\mathbf{\hat{x}}, L) $ with $ L > (K+1) $, the rank would still be $ K $. This can be shown by adding more rows to $ \mathbf{C} $ and columns to $ \mathbf{E} $ to the decomposition above. Nevertheless, the rank would remain $ K $ as $ \mathbf{D} $ would remain unchanged, having a rank of $K$.
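As a quick numerical sanity check of this rank property (with a randomly generated pulse stream; the values below are chosen only for illustration):

```python
K_chk, M_chk, T_chk = 4, 10, 1.0
rng = np.random.RandomState(0)
ck_chk = rng.randn(K_chk)
tk_chk = np.sort(rng.rand(K_chk)) * T_chk
m_idx = np.arange(-M_chk, M_chk + 1)
x_hat = np.array([np.sum(ck_chk * np.exp(-2j * np.pi * m * tk_chk / T_chk)) for m in m_idx])
T_mat = toeplitz(x_hat[K_chk:], r=np.flipud(x_hat[:K_chk + 1]))  # (2M+1-K) x (K+1)
print(T_mat.shape, np.linalg.matrix_rank(T_mat))  # expect rank K = 4
```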
# References
[1] T. Blu, P. L. Dragotti, M. Vetterli, P. Marziliano and L. Coulot, "Sparse Sampling of Signal Innovations," in IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 31-40, March 2008.
| 48faa760852e494fc643c6a1f9ee1368d397dfec | 220,040 | ipynb | Jupyter Notebook | notebooks/fri_part3_additive_noise.ipynb | ebezzam/frius | c3acc98288c949085b7dea08ef3708581f86ce25 | [
"MIT"
]
| null | null | null | notebooks/fri_part3_additive_noise.ipynb | ebezzam/frius | c3acc98288c949085b7dea08ef3708581f86ce25 | [
"MIT"
]
| null | null | null | notebooks/fri_part3_additive_noise.ipynb | ebezzam/frius | c3acc98288c949085b7dea08ef3708581f86ce25 | [
"MIT"
]
| 1 | 2018-11-26T10:10:33.000Z | 2018-11-26T10:10:33.000Z | 298.561737 | 57,936 | 0.907785 | true | 5,656 | Qwen/Qwen-72B | 1. YES
2. YES | 0.885631 | 0.863392 | 0.764647 | __label__eng_Latn | 0.925001 | 0.614863 |
# Monte Carlo Methods: Lab 1
Take a look at Chapter 10 of Newman's *Computational Physics with Python* where much of this material is drawn from.
```
from IPython.core.display import HTML
css_file = '../ipython_notebook_styles/ngcmstyle.css'
HTML(open(css_file, "r").read())
```
<link href='http://fonts.googleapis.com/css?family=Open+Sans:100,300,400,500,700,800,900,100italic,300italic,400italic,500italic,700italic,800italic,900italic' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=Arvo:400,700,400italic' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=PT+Mono' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=Shadows+Into+Light' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=Nixie+One' rel='stylesheet' type='text/css'>
<style>
@font-face {
font-family: "Computer Modern";
src: url('http://mirrors.ctan.org/fonts/cm-unicode/fonts/otf/cmunss.otf');
}
#notebook_panel { /* main background */
background: rgb(245,245,245);
}
div.cell { /* set cell width */
width: 1000px;
}
div #notebook { /* centre the content */
background: #fff; /* white background for content */
width: 1200px;
margin: auto;
padding-left: 0em;
}
#notebook li { /* More space between bullet points */
margin-top:0.8em;
}
/* draw border around running cells */
div.cell.border-box-sizing.code_cell.running {
border: 1px solid #111;
}
/* Put a solid color box around each cell and its output, visually linking them*/
div.cell.code_cell {
background-color: rgb(256,256,256);
border-radius: 0px;
padding: 0.5em;
margin-left:1em;
margin-top: 1em;
}
div.text_cell_render{
font-family: 'Open Sans' sans-serif;
line-height: 140%;
font-size: 125%;
font-weight: 400;
width:900px;
margin-left:auto;
margin-right:auto;
}
/* Formatting for header cells */
.text_cell_render h1 {
font-family: 'Arvo', serif;
font-style:regular;
font-weight: 400;
font-size: 45pt;
line-height: 100%;
color: rgb(0,51,102);
margin-bottom: 0.5em;
margin-top: 0.5em;
display: block;
}
.text_cell_render h2 {
font-family: 'Arvo', serif;
font-weight: 400;
font-size: 30pt;
line-height: 100%;
color: rgb(0,51,102);
margin-bottom: 0.1em;
margin-top: 0.3em;
display: block;
}
.text_cell_render h3 {
font-family: 'Arvo', serif;
margin-top:16px;
font-size: 22pt;
font-weight: 600;
margin-bottom: 3px;
font-style: regular;
color: rgb(102,102,0);
}
.text_cell_render h4 { /*Use this for captions*/
font-family: 'Arvo', serif;
font-size: 14pt;
text-align: center;
margin-top: 0em;
margin-bottom: 2em;
font-style: regular;
}
.text_cell_render h5 { /*Use this for small titles*/
font-family: 'Arvo', sans-serif;
font-weight: 400;
font-size: 16pt;
color: rgb(163,0,0);
font-style: italic;
margin-bottom: .1em;
margin-top: 0.8em;
display: block;
}
.text_cell_render h6 { /*use this for copyright note*/
font-family: 'PT Mono', sans-serif;
font-weight: 300;
font-size: 9pt;
line-height: 100%;
color: grey;
margin-bottom: 1px;
margin-top: 1px;
}
.CodeMirror{
font-family: "PT Mono";
font-size: 90%;
}
</style>
## Integration
If we have an ugly function, say
$$
\begin{equation}
f(x) = \sin^2 \left(\frac{1}{x (2-x)}\right),
\end{equation}
$$
then it can be very difficult to integrate. To see this, just do a quick plot.
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import rcParams
rcParams['font.family'] = 'serif'
rcParams['font.size'] = 16
rcParams['figure.figsize'] = (12,6)
```
```
def f(x):
return np.sin(1.0/(x*(2.0-x)))**2
```
```
x = np.linspace(0.0, 2.0, 10000)
plt.plot(x, f(x))
plt.xlabel(r"$x$")
plt.ylabel(r"$\sin^2([x(x-2)]^{-1})$");
```
We see that as the function oscillates *infinitely often*, integrating this with standard methods is going to be very inaccurate.
However, we note that the function is bounded, so the integral (given by the shaded area below) must itself be bounded - less than the total area in the plot, which is $2$ in this case.
```
plt.fill_between(x, f(x))
plt.xlabel(r"$x$")
plt.ylabel(r"$\sin^2([x(x-2)]^{-1})$");
```
So if we scattered (using a *uniform* random distribution) a large number of points within this box, the fraction of them falling *below* the curve is approximately the integral we want to compute, divided by the area of the box:
$$
\begin{equation}
I = \int_a^b f(x) \, dx \quad \implies \quad I \simeq \frac{k A}{N}
\end{equation}
$$
where $N$ is the total number of points considered, $k$ is the number falling below the curve, and $A$ is the area of the box. We can choose the box, but we need $y \in [\min_{x \in [a, b]} (f(x)), \max_{x \in [a, b]} (f(x))] = [c, d]$, giving $A = (d-c)(b-a)$.
So let's apply this technique to the function above, where the box in $y$ is $[0,1]$.
```
def mc_integrate(f, domain_x, domain_y, N = 10000):
"""
Monte Carlo integration function: to be completed. Result, for the given f, should be around 1.46.
"""
import numpy.random
return I
```
```
```
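For reference, one possible hit-or-miss implementation of the function above:

```
def mc_integrate_solution(f, domain_x, domain_y, N=10000):
    """Possible hit-or-miss implementation (reference sketch)."""
    a, b = domain_x
    c, d = domain_y
    x = np.random.uniform(a, b, N)
    y = np.random.uniform(c, d, N)
    k = np.count_nonzero(y < f(x))   # points falling below the curve
    A = (b - a) * (d - c)            # area of the bounding box
    return k * A / N

print(mc_integrate_solution(f, (0.0, 2.0), (0.0, 1.0)))   # roughly 1.45 for the f above
```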
### Accuracy
To check the accuracy of the method, let's apply this to calculate $\pi$.
The area of a circle of radius $2$ is $4\pi$, so the area of the *quarter* circle in $x, y \in [0, 2]$ is just $\pi$:
$$
\begin{equation}
\pi = \int_0^2 \sqrt{4 - x^2} \, dx.
\end{equation}
$$
Check the convergence of the Monte Carlo integration with $N$. (I suggest using $N = 100 \times 2^i$ for $i = 0, \dots, 19$; you should find the error scales roughly as $N^{-1/2}$)
```
```
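A possible convergence experiment (the exact error values will vary from run to run):

```
def quarter_circle(x):
    return np.sqrt(4.0 - x**2)

def pi_estimate(N):
    # hit-or-miss in the box [0,2] x [0,2]
    x = np.random.uniform(0.0, 2.0, N)
    y = np.random.uniform(0.0, 2.0, N)
    return 4.0 * np.count_nonzero(y < quarter_circle(x)) / N

Ns = 100 * 2**np.arange(20)
errors = [abs(pi_estimate(N) - np.pi) for N in Ns]
plt.loglog(Ns, errors, 'o-', label="observed error")
plt.loglog(Ns, 1.0 / np.sqrt(Ns), '--', label=r"$N^{-1/2}$ reference")
plt.xlabel(r"$N$")
plt.ylabel("absolute error")
plt.legend();
```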
## Mean Value Method
Monte Carlo integration is pretty inaccurate, as seen above: it converges slowly, and has poor accuracy at all $N$. An alternative is the *mean value* method, where we note that *by definition* the average value of $f$ over the interval $[a, b]$ is precisely the integral divided by the width of the interval.
Hence we can just choose our $N$ random points in $x$ as above, but now just compute
$$
\begin{equation}
I \simeq \frac{b-a}{N} \sum_{i=1}^N f(x_i).
\end{equation}
$$
```
def mv_integrate(f, domain_x, N = 10000):
"""
Mean value Monte Carlo integration: to be completed
"""
import numpy.random
return I
```
Let's look at the accuracy of this method again applied to computing $\pi$.
```
```
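Again for reference, one possible mean value implementation, checked on the quarter-circle integrand:

```
def mv_integrate_solution(f, domain_x, N=10000):
    """Possible mean value implementation (reference sketch)."""
    a, b = domain_x
    x = np.random.uniform(a, b, N)
    return (b - a) * np.mean(f(x))

print(mv_integrate_solution(lambda x: np.sqrt(4.0 - x**2), (0.0, 2.0)))   # close to pi
```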
The convergence *rate* is roughly the same, but the Mean Value method is *expected* to be better in terms of its absolute error.
### Dimensionality
Compared to standard integration methods (Gauss quadrature, Simpson's rule, etc) the convergence rate for Monte Carlo methods is very slow. However, there is one crucial advantage: as you change dimension, the amount of calculation required is *unchanged*, whereas for standard methods it grows geometrically with the dimension.
Try to compute the volume of an $n$-dimensional unit *hypersphere*, which is the object in $\mathbb{R}^n$ such that
$$
\begin{equation}
\sum_{i=1}^n x_i^2 \le 1.
\end{equation}
$$
The volume of the hypersphere [can be found in closed form](http://en.wikipedia.org/wiki/Volume_of_an_n-ball#The_volume), but can rapidly be computed using the Monte Carlo method above, by counting the $k$ points that randomly fall within the hypersphere and using the standard formula $I \simeq V k / N$.
```
def mc_integrate_multid(f, domain, N = 10000):
"""
Monte Carlo integration in arbitrary dimensions (read from the size of the domain): to be completed
"""
return I
```
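One possible way to complete the multidimensional version, assuming `domain` is a list of `(low, high)` pairs (one per dimension) and `f` is an indicator function for the region of interest:

```
def mc_integrate_multid_solution(f, domain, N=10000):
    """Possible completion: hit-or-miss over the hyper-box described by `domain`."""
    domain = np.asarray(domain, dtype=float)
    ndim = domain.shape[0]
    pts = np.random.uniform(domain[:, 0], domain[:, 1], size=(N, ndim))
    k = np.count_nonzero([f(p) for p in pts])    # points inside the region
    V_box = np.prod(domain[:, 1] - domain[:, 0]) # volume of the hyper-box
    return V_box * k / N
```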
```
import scipy.special
```
```
def volume_hypersphere(ndim=3):
return np.pi**(float(ndim)/2.0) / scipy.special.gamma(float(ndim)/2.0 + 1.0)
```
Now let us repeat this across multiple dimensions.
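For example, using the reference implementation sketched above and the sphere indicator $\sum_i x_i^2 \le 1$:

```
inside_unit_ball = lambda p: np.dot(p, p) <= 1.0

for ndim in range(2, 8):
    domain = [(-1.0, 1.0)] * ndim
    est = mc_integrate_multid_solution(inside_unit_ball, domain, N=100000)
    exact = volume_hypersphere(ndim)
    print("n = {}: estimate = {:.4f}, exact = {:.4f}".format(ndim, est, exact))
```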
The errors clearly vary over a range, but the convergence remains roughly as $N^{-1/2}$ independent of the dimension; using other techniques such as Gauss quadrature would see the points required scaling geometrically with the dimension.
## Importance sampling
Consider the integral (which arises, for example, in the theory of Fermi gases)
$$
\begin{equation}
I = \int_0^1 \frac{x^{-1/2}}{e^x + 1} \, dx.
\end{equation}
$$
This has a finite value, but the integrand diverges as $x \to 0$. This *may* cause a problem for Monte Carlo integration when a single value may give a spuriously large contribution to the sum.
We can get around this by changing the points at which the integrand is sampled. Choose a *weighting* function $w(x)$. Then a weighted average of any function $g(x)$ can be
$$
\begin{equation}
<g>_w = \frac{\int_a^b w(x) g(x) \, dx}{\int_a^b w(x) \, dx}.
\end{equation}
$$
As our integral is
$$
\begin{equation}
I = \int_a^b f(x) \, dx
\end{equation}
$$
we can, by setting $g(x) = f(x) / w(x)$ get
$$
\begin{equation}
I = \int_a^b f(x) \, dx = \left< \frac{f(x)}{w(x)} \right>_w \int_a^b w(x) \, dx.
\end{equation}
$$
This gives
$$
\begin{equation}
I \simeq \frac{1}{N} \sum_{i=1}^N \frac{f(x_i)}{w(x_i)} \int_a^b w(x) \, dx,
\end{equation}
$$
where the points $x_i$ are now chosen from a *non-uniform* probability distribution with pdf
$$
\begin{equation}
p(x) = \frac{w(x)}{\int_a^b w(x) \, dx}.
\end{equation}
$$
This is a generalization of the mean value method - we clearly recover the mean value method when the weighting function $w(x) \equiv 1$. A careful choice of the weighting function can mitigate problematic regions of the integrand; e.g., in the example above we could choose $w(x) = x^{-1/2}$, giving $p(x) = x^{-1/2}/2$.
So, let's try to solve the integral above. The expected solution is around 0.84.
```
```
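One possible implementation with $w(x) = x^{-1/2}$, for which $p(x) = x^{-1/2}/2$ can be sampled by squaring a uniform variate (inverse-CDF method) and $\int_0^1 w(x)\,dx = 2$:

```
def importance_sampling_integral(N=1000000):
    u = np.random.random(N)
    x = u**2                        # samples from p(x) = x^{-1/2}/2
    g = 1.0 / (np.exp(x) + 1.0)     # f(x)/w(x)
    return 2.0 * np.mean(g)         # times the integral of w over [0, 1]

print(importance_sampling_integral())   # expected around 0.84
```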
| c34600a807d90c17543ee404f2587b385571f3e7 | 101,939 | ipynb | Jupyter Notebook | FEEG6016 Simulation and Modelling/2014/Monte Carlo Lab 1.ipynb | ngcm/training-public | e5a0d8830df4292315c8879c4b571eef722fdefb | [
"MIT"
]
| 7 | 2015-06-23T05:50:49.000Z | 2016-06-22T10:29:53.000Z | FEEG6016 Simulation and Modelling/2014/Monte Carlo Lab 1.ipynb | Jhongesell/training-public | e5a0d8830df4292315c8879c4b571eef722fdefb | [
"MIT"
]
| 1 | 2017-11-28T08:29:55.000Z | 2017-11-28T08:29:55.000Z | FEEG6016 Simulation and Modelling/2014/Monte Carlo Lab 1.ipynb | Jhongesell/training-public | e5a0d8830df4292315c8879c4b571eef722fdefb | [
"MIT"
]
| 24 | 2015-04-18T21:44:48.000Z | 2019-01-09T17:35:58.000Z | 162.323248 | 47,311 | 0.861574 | true | 2,924 | Qwen/Qwen-72B | 1. YES
2. YES | 0.665411 | 0.817574 | 0.544023 | __label__eng_Latn | 0.950145 | 0.102277 |
<hr style="height:2px;border:none"/>
<H1 align='center'> Image Interpolation </H1>
<H3> INF-285 Computación Científica </H3>
<H3> Author: Francisco Andrades</H3>
Language: Python
Topics:
- Image Interpolation
- Bicubic Interpolation
- Lagrange, Newton, Spline
<hr style="height:2px;border:none"/>
```python
import numpy as np
import sympy as sp
from PIL import Image
from scipy import interpolate
import matplotlib.pyplot as plt
```
## Introduction
In this assignment we will study an interpolation method called **bicubic interpolation**, frequently used on images. We will apply the method to increase the resolution of an image while trying to preserve the properties of the original version.
## Context
Suppose you know $f$ and the derivatives $f_x$, $f_y$ and $f_{xy}$ at the coordinates $(0,0),(0,1),(1,0)$ and $(1,1)$ of a unit square. The surface that interpolates these 4 points is:
$$
p(x,y) = \sum\limits_{i=0}^3 \sum_{j=0}^3 a_{ij} x^i y^j.
$$
As can be seen, the interpolation problem reduces to determining the 16 coefficients $a_{ij}$; to do so, a total of $16$ equations are generated using the known values of $f$, $f_x$, $f_y$ and $f_{xy}$. For example, the first $4$ equations are:
$$
\begin{aligned}
f(0,0)&=p(0,0)=a_{00},\\
f(1,0)&=p(1,0)=a_{00}+a_{10}+a_{20}+a_{30},\\
f(0,1)&=p(0,1)=a_{00}+a_{01}+a_{02}+a_{03},\\
f(1,1)&=p(1,1)=\textstyle \sum \limits _{i=0}^{3}\sum \limits _{j=0}^{3}a_{ij}.
\end{aligned}
$$
For the remaining $12$ equations we must use:
$$
\begin{aligned}
f_{x}(x,y)&=p_{x}(x,y)=\textstyle \sum \limits _{i=1}^{3}\sum \limits _{j=0}^{3}a_{ij}ix^{i-1}y^{j},\\
f_{y}(x,y)&=p_{y}(x,y)=\textstyle \sum \limits _{i=0}^{3}\sum \limits _{j=1}^{3}a_{ij}x^{i}jy^{j-1},\\
f_{xy}(x,y)&=p_{xy}(x,y)=\textstyle \sum \limits _{i=1}^{3}\sum \limits _{j=1}^{3}a_{ij}ix^{i-1}jy^{j-1}.
\end{aligned}
$$
Once the equations are set up, the coefficients can be obtained by solving the problem $A\alpha=x$, where $\alpha=\left[\begin{smallmatrix}a_{00}&a_{10}&a_{20}&a_{30}&a_{01}&a_{11}&a_{21}&a_{31}&a_{02}&a_{12}&a_{22}&a_{32}&a_{03}&a_{13}&a_{23}&a_{33}\end{smallmatrix}\right]^T$ and ${\displaystyle x=\left[{\begin{smallmatrix}f(0,0)&f(1,0)&f(0,1)&f(1,1)&f_{x}(0,0)&f_{x}(1,0)&f_{x}(0,1)&f_{x}(1,1)&f_{y}(0,0)&f_{y}(1,0)&f_{y}(0,1)&f_{y}(1,1)&f_{xy}(0,0)&f_{xy}(1,0)&f_{xy}(0,1)&f_{xy}(1,1)\end{smallmatrix}}\right]^{T}}$.
In a more applied context, we can use bicubic interpolation to increase the resolution of an image. Suppose we have the following image of size $5 \times 5$:
We can take $2 \times 2$ segments of the image as follows:
For each segment we can generate an interpolating surface using the bicubic interpolation algorithm. For the previous example we would be generating $16$ different interpolating surfaces. The idea is to use these surfaces to estimate the pixel values of a larger image. For example, the $5 \times 5$ image can be converted into a $9 \times 9$ image by adding a pixel between every pair of original pixels, plus one in the center so that no gap remains.
Here the green pixels are the same as in the original image, and the blue ones are obtained by evaluating each interpolating surface. Note that there are blue pixels that can be obtained from two different interpolating surfaces; in those cases the pixel values can be averaged, or one of the two can simply be kept.
To work with bicubic interpolation we need to know the values of $f_x$, $f_y$ and $f_{xy}$. In the case of images we only have access to the value of each pixel, so we will have to estimate these values. To estimate $f_x$ we will do the following:
To estimate the value of $f_x$ at each pixel we will interpolate with the known algorithms, using three pixels along the row direction, then differentiate the resulting polynomial and finally evaluate it at the position of interest. The same idea applies to $f_y$, except that now we interpolate along the column direction.
For example, if we want to obtain the value of $f_x$ at position $(0,0)$ (left image), we perform a Lagrange interpolation using the pixels $(0,0),(0,1)$ and $(0,2)$, differentiate the interpolating polynomial and evaluate it at $(0,0)$. On the other hand, if we want the value of $f_y$ at position $(0,0)$ (right image), we interpolate the pixels $(0,0),(1,0)$ and $(2,0)$, then differentiate the interpolating polynomial and evaluate it at $(0,0)$.
To obtain $f_{xy}$ we follow the same idea, except that this time the values of $f_y$ are used and these are interpolated along the row direction.
# Questions
## 1. Bicubic interpolation
### 1.1 Obtaining derivatives (30 points)
Implement the function `derivativeValues` that receives as input an array of values, the interpolation method, and whether Chebyshev points are used. The function must return an array of the same dimension with the derivative values obtained at those points.
The interpolation methods will be represented by the following values:
* Lagrange interpolation: `'lagrange'`
* Newton's divided differences: `'newton'`
* Cubic spline: `'spline3'`
```python
def chebyshevNodes(n):
i = np.arange(1, n+1)
t = (2*i - 1) * np.pi / (2 * n)
return np.cos(t)
def newtonDD(x_i, y_i):
n = x_i.shape[-1]
pyramid = np.zeros((n, n)) # Create a square matrix to hold pyramid
pyramid[:,0] = y_i # first column is y
for j in range(1,n):
for i in range(n-j):
# create pyramid by updating other columns
pyramid[i][j] = (pyramid[i+1][j-1] - pyramid[i][j-1]) / (x_i[i+j] - x_i[i])
a = pyramid[0] # f[ ... ] coefficients
N = lambda x: a[0] + np.dot(a[1:], np.array([np.prod(x - x_i[:i]) for i in range(1, n)]))
return N
def calcular(values1,values2,values3, method, cheb,number):
y = np.array((values1,values2,values3))
x = np.array((0,1,2))
if cheb:
x = chebyshevNodes(3)
x.sort()
xS = sp.symbols('x', reals=True)
if(method == 'lagrange'):
L = interpolate.lagrange(x,y)
deriv = np.polyder(L)
return deriv(x[number])
if(method == 'newton'):
Pn = newtonDD(x, y)
L = Pn(xS)
deriv = sp.diff(L,xS)
if(method=='spline3'):
deriv = interpolate.CubicSpline(x, y)
deriv = deriv.derivative()
return deriv(x[number])
return deriv.evalf(subs = {xS : x[number]})
calcular_v = np.vectorize(calcular)
# receives a 1-dimensional row
def derivativeValues(fila,method,cheb):
"""
Parameters
----------
values: (int array) points values
method: (string) interpolation method
cheb: (boolean) if chebyshev points are used
Returns
-------
d: (float array) derivative value of interpolated points
"""
shape = fila.shape
nuevo = np.zeros(shape)
nuevo[1:shape[0]-1] = calcular_v(fila[0:shape[0]-2],fila[1:shape[0]-1],fila[2:shape[0]],method,cheb,1)
nuevo[0] = calcular_v(fila[0],fila[1],fila[2], method, cheb,0)
nuevo[shape[0]-1] = calcular_v(fila[shape[0]-3],fila[shape[0]-2],fila[shape[0]-1], method, cheb,2)
return nuevo
```
### 1.2 Image interpolation (50 points)
Implement the function `bicubicInterpolation` that receives as input the image matrix, how many extra pixels to add between the original pixels, and the interpolation algorithm to use. The function must return the matrix with the image of the new dimensions. Note that the interpolation method must be applied to each RGB channel separately.
```python
def obtain_all_derivatives(image,method,cheb):
shape = image.shape
nuevo_x = np.zeros(shape)
nuevo_y = np.zeros(shape)
nuevo_xy = np.zeros(shape)
for i in range(shape[2]):
nuevo_y[:,:,i] = np.array([derivativeValues(n, method, cheb) for n in image[:,:,i].T]).T
nuevo_x[:,:,i] = np.array([derivativeValues(n, method, cheb) for n in image[:,:,i]])
nuevo_xy[:,:,i] = np.array([derivativeValues(n, method, cheb) for n in nuevo_y[:,:,i]])
return nuevo_x,nuevo_y,nuevo_xy
def bicubicInterpolation(image, interiorPixels, method,cheb):
"""
Parameters
----------
image: (nxnx3 array) image array in RGB format
interiorPixels: (int) interpolation method
method: (string) interpolation method
cheb: (boolean) if chebyshev points are used
Returns
-------
newImage: (nxnx3 array) image array in RGB format
"""
matriz = np.array(((1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0),(-3,3,0,0,-2,-1,0,0,0,0,0,0,0,0,0,0),(2,-2,0,0,1,1,0,0,0,0,0,0,0,0,0,0),
(0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0),(0,0,0,0,0,0,0,0,-3,3,0,0,-2,-1,0,0),
(0,0,0,0,0,0,0,0,2,-2,0,0,1,1,0,0),(-3,0,3,0,0,0,0,0,-2,0,-1,0,0,0,0,0),(0,0,0,0,-3,0,3,0,0,0,0,0,-2,0,-1,0),
(9,-9,-9,9,6,3,-6,-3,6,-6,3,-3,4,2,2,1),(-6,6,6,-6,-3,-3,3,3,-4,4,-2,2,-2,-2,-1,-1),(2,0,-2,0,0,0,0,0,1,0,1,0,0,0,0,0),
(0,0,0,0,2,0,-2,0,0,0,0,0,1,0,1,0),(-6,6,6,-6,-4,-2,4,2,-3,3,-3,3,-2,-1,-2,-1),(4,-4,-4,4,2,2,-2,-2,2,-2,2,-2,1,1,1,1)))
shape = image.shape
nueva_imagen = np.zeros((shape[0]*(interiorPixels+1)-interiorPixels,shape[1]*(interiorPixels+1)-interiorPixels,shape[2]),dtype=image.dtype)
nuevo_x, nuevo_y, nuevo_xy = obtain_all_derivatives(image,method,cheb)
for j in range(shape[0]-1):
for i in range(shape[0]-1):
for rgb in range(shape[2]):
array = np.array((image[i,j,rgb],image[i+1,j,rgb],image[i,j+1,rgb]
,image[i+1,j+1,rgb],nuevo_x[i,j,rgb],nuevo_x[i+1,j,rgb]
,nuevo_x[i,j+1,rgb],nuevo_x[i+1,j+1,rgb],nuevo_y[i,j,rgb]
,nuevo_y[i+1,j,rgb],nuevo_y[i,j+1,rgb],nuevo_y[i+1,j+1,rgb]
,nuevo_xy[i,j,rgb],nuevo_xy[i+1,j,rgb],nuevo_xy[i,j+1,rgb],nuevo_xy[i+1,j+1,rgb]))
a = matriz.dot(array.T)
P = lambda x,y: np.sum([a[i]*(x**(i%4))*y**(int(i/4)) for i in range(16)])
numero_fila = (interiorPixels + 1)*i
numero_columna = (interiorPixels+1)*j
#rellenar
for cont in range(interiorPixels+2):
for cont1 in range(interiorPixels+2):
value = P(cont1/(interiorPixels+1),cont/(interiorPixels+1))
if(value > 255):
value = 255
if(value < 0):
value = 0
if(nueva_imagen[numero_fila+cont1,numero_columna+cont,rgb] != 0):
value = (nueva_imagen[numero_fila+cont1,numero_columna+cont,rgb]+value)/2
nueva_imagen[numero_fila+cont1,numero_columna+cont,rgb] = value
return nueva_imagen
img = Image.open('sunset.png')
img = img.convert('RGB')
array=np.array(img)
array_nuevo = bicubicInterpolation(array, 4, 'spline3',False)
#original
plt.imshow(img)
plt.show()
#interpolated
plt.imshow(array_nuevo)
plt.show()
```
```python
print("Tamaño Original: ",array.shape)
print("Interpolada: ", array_nuevo.shape)
```
Tamaño Original: (100, 100, 3)
Interpolada: (496, 496, 3)
## 2. Algorithm evaluation
### 2.1 Execution time
Implement the function `timeInterpolation` that measures the interpolation time of an image given the interpolation algorithm, in seconds. (5 points)
```python
import time
def timeInterpolation(image, interiorPixels, method,cheb):
"""
Parameters
----------
image: (nxnx3 array) image array in RGB format
interiorPixels: (int) interpolation method
method: (string) interpolation method
cheb: (boolean) if chebyshev points are used
Returns
-------
time: (float) time in seconds
"""
time1 = time.time()
bicubicInterpolation(image, interiorPixels, method,cheb)
time2 = time.time()
return time2-time1
```
***Question: Which method is the fastest in general? (5 points)***
'spline3' is the fastest method.
### 2.2 Error calculation
Implement the function `errorInterpolation`, which must obtain the error of the resulting image by comparing it with a reference one. The error must be calculated using the SSIM (Structural Similarity) index. (5 points)
```python
from skimage import metrics
def errorInterpolation(original,new):
"""
Parameters
----------
image: (nxn array) original image array in RGB format
new: (nxn array) new image array in RGB format obtained from interpolation
Returns
-------
error: (float) difference between images
"""
s = metrics.structural_similarity(original, new, multichannel = True)
return 1-s
```
***Question: Which method has the lowest error? (5 points)***
It depends.
For gradient with 1 interior pixel, 'lagrange'.
For gradient with 4 interior pixels, 'spline3'.
For sunset with 1 interior pixel, 'lagrange'.
For sunset with 2 interior pixels, 'lagrange'.
Note that the errors are very similar, differing by between 10^-5 and 10^-6.
References:
chebyshevNodes(), newtonDD() taken from the course Jupyter notebook.
| 8ddc148b12f54fb6f9ca68f471d1643db821879f | 180,747 | ipynb | Jupyter Notebook | Otros/BicubicInterpolation.ipynb | franciscoandrades/Portafolio | 69a538b16ee2a6e8aa000c2e13ce1803f8c9f636 | [
"Apache-2.0"
]
| null | null | null | Otros/BicubicInterpolation.ipynb | franciscoandrades/Portafolio | 69a538b16ee2a6e8aa000c2e13ce1803f8c9f636 | [
"Apache-2.0"
]
| null | null | null | Otros/BicubicInterpolation.ipynb | franciscoandrades/Portafolio | 69a538b16ee2a6e8aa000c2e13ce1803f8c9f636 | [
"Apache-2.0"
]
| null | null | null | 326.258123 | 82,856 | 0.922206 | true | 4,421 | Qwen/Qwen-72B | 1. YES
2. YES | 0.76908 | 0.754915 | 0.58059 | __label__spa_Latn | 0.791565 | 0.187236 |
```python
import pandas as pd
from matplotlib import pyplot as plt
import numpy as np
import sympy
import matplotlib as mpl
```
```python
mpl.rc("text", usetex=False)
```
# Part A #
```python
p=pd.read_csv("DataSet_Resolution_50",header=None)
```
```python
fig1,ax1=plt.subplots(figsize=(8,7))
(n1, bins1, patches1)=ax1.hist(p[0],bins=60,range=(5, 14))
ax1.set_xlabel("logE",fontsize=12)
ax1.set_ylabel("Counts",fontsize=12)
#ax.legend("Mean"+str(p.mean()))
ax1.set_title("Raw Counts",fontsize=15,pad=15)
plt.savefig("Raw_Counts_1.png")
```
```python
cost1=2*np.pi*3.15 * 10**7
```
```python
fig2,ax2=plt.subplots(figsize=(8,7))
(n2, bins2, patches2)=ax2.hist(p[0],bins=60,range=(5, 14),weights=(np.zeros_like(p)+1)/cost1)
ax2.set_xlabel("logE",fontsize=12)
ax2.set_ylabel("Counts",fontsize=12)
#ax.legend("Mean"+str(p.mean()))
ax2.set_title("Raw Counts 2",fontsize=15,pad=15)
plt.savefig("Raw_Counts_2.png")
```
```python
data1=[n2[i-1]/(10**bins1[i]-10**bins1[i-1]) for i in range(1,len(bins1))]
```
```python
fig3,ax3=plt.subplots(figsize=(8,7))
ax3.plot(bins1[:-1],data1)
#ax.plot(energy[:-1],n5)
#ax.set_xscale('log')
ax3.set_yscale("log")
ax3.set_ylabel("J (1/(km^2*yr*sr*eV))",fontsize=12)
ax3.set_xlabel("logE",fontsize=12)
ax3.set_title("Raw Spectrum",fontsize=15,pad=15)
plt.savefig("Raw_Spectrum_plot.png")
```
```python
fig4,ax4=plt.subplots(figsize=(8,7))
(n4, bins4, patches4)=ax4.hist(p[0],bins=60,range=(8, 14), rwidth=0.85,color='#0504aa')
ax4.set_xlabel("logE",fontsize=12)
ax4.set_ylabel("Counts",fontsize=12)
ax4.set_yscale("log")
ax4.grid(axis='y', alpha=0.75)
#ax4.set_ylim(0,10**(-9))
#ax.legend("Mean"+str(p.mean()))
ax4.set_title("Raw Binned Event Rate",fontsize=15,pad=15)
plt.savefig("Raw Binned Event Rate.png")
```
```python
fig5,ax5=plt.subplots(figsize=(8,7))
(n5, bins5, patches5)=ax5.hist(p[0],bins=60,range=(8, 14),weights=(np.zeros_like(p)+1)/cost1, rwidth=0.85,color='#0504aa')
ax5.set_xlabel("logE",fontsize=12)
ax5.set_ylabel("Flux [1/(m^2*s*sr)]",fontsize=12)
ax5.set_yscale("log")
ax5.grid(axis='y', alpha=0.75)
#ax.legend("Mean"+str(p.mean()))
ax5.set_title("Integral Flux",fontsize=15,pad=15)
plt.savefig("Integral Flux.png")
```
```python
data2=[n5[i-1]/(10**bins5[i]-10**bins5[i-1]) for i in range(1,len(bins5))]
```
```python
fig6,ax6=plt.subplots(figsize=(8,7))
(n6, bins6, patches6)=ax6.hist(bins5[:-1],bins=60,range=(8, 14),weights=data2, rwidth=0.85,color='#0504aa')
ax6.set_xlabel("logE",fontsize=12)
ax6.set_ylabel("Flux [1/(m^2*s*sr*eV)]",fontsize=12)
ax6.set_yscale("log")
ax6.grid(axis='y', alpha=0.75)
#ax.legend("Mean"+str(p.mean()))
ax6.set_title("J Raw",fontsize=15,pad=15)
plt.savefig("J Raw.png")
```
# Part C #
```python
J0=6.57523*10**(-9)
gamma=-2.29989
exposure=np.pi*10**7*2*np.pi
```
## True Binned Event Rate ##
```python
def counts_fit(x):
return J0*pow(10,gamma*(x-8))*exposure
```
```python
real_counts=[counts_fit((bins4[i]+bins4[i-1])/2)*(10**bins4[i]-10**bins4[i-1]) for i in range(1,len(bins4))]
```
```python
fig7,ax7=plt.subplots(figsize=(8,7))
(n7, bins7, patches7)=ax7.hist(bins5[:-1],bins=60,range=(8, 14),weights=real_counts, rwidth=0.85,color='red')
ax7.set_xlabel("logE",fontsize=12)
ax7.set_ylabel("Counts",fontsize=12)
ax7.set_yscale("log")
ax7.grid(axis='y', alpha=0.75)
#ax.legend("Mean"+str(p.mean()))
ax7.set_title("True Binned Event Rate",fontsize=15,pad=15)
plt.savefig("True Binned Event Rate.png")
```
```python
fig8,ax8=plt.subplots(figsize=(8,7))
ax8.hist(bins5[:-1],bins=60,range=(8, 14),weights=real_counts, rwidth=0.85,color='red',alpha=0.8,label="True")
ax8.hist(p[0],bins=60,range=(8, 14), rwidth=0.85,color='#0504aa',alpha=0.8,label="Raw")
ax8.set_xlabel("logE",fontsize=12)
ax8.set_ylabel("Counts",fontsize=12)
ax8.set_yscale("log")
ax8.grid(axis='y', alpha=0.75)
ax8.legend()
ax8.set_title("Binned Event Rate",fontsize=15,pad=15)
plt.savefig("Binned Event Rate.png")
```
## True Spectrum ##
```python
def J_fit(x):
return J0*pow(10,gamma*(x-8))
```
```python
real_J=[J_fit((bins4[i]+bins4[i-1])/2) for i in range(1,len(bins4))]
```
```python
energy=[10**((bins4[i]+bins4[i-1])/2) for i in range(1,len(bins4))]
```
```python
fig7,ax7=plt.subplots(figsize=(7,7))
ax7.plot(energy,real_J)
ax7.set_xscale('log')
ax7.set_yscale("log")
ax7.set_ylabel("J (1/(km^2*yr*sr*eV))",fontsize=12)
ax7.set_xlabel("Energy [eV]",fontsize=12)
ax7.set_title("True Spectrum",fontsize=15,pad=15)
plt.savefig("True Spectrum.png")
```
# Part D #
```python
fig8,ax8=plt.subplots(figsize=(7,7))
ax8.plot(energy,real_J,label="True")
ax8.plot(energy,data2,label="Raw")
ax8.set_xscale('log')
ax8.set_yscale("log")
ax8.set_ylabel("J (1/(m^2*s*sr*eV))",fontsize=12)
ax8.set_xlabel("Energy [eV]",fontsize=12)
ax8.set_title("Spectrum",fontsize=15,pad=15)
ax8.legend()
plt.savefig("Spectrum.png")
```
```python
true_raw_fraction=[real_J[i]/data2[i] for i in range(0,len(n5))]
```
/home/andry/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:1: RuntimeWarning: divide by zero encountered in double_scalars
"""Entry point for launching an IPython kernel.
```python
fig9,ax9=plt.subplots(figsize=(6,6))
ax9.plot(energy,true_raw_fraction)
ax9.set_xscale('log')
#ax9.set_yscale('log')
ax9.set_ylim(0.8,1.1)
ax9.set_xlabel("Energy [eV]",fontsize=12)
ax9.set_ylabel("True J/Raw J",fontsize=12)
plt.savefig("True J_Raw J.png")
```
# Part E #
```python
x = sympy.symbols('x')
```
```python
y=J0*pow(10,gamma*(x-8))*2*np.pi
```
```python
c=sympy.integrate(y,(x,9,9.1))
```
```python
c.evalf()*10**(9.05)
```
$\displaystyle 0.0180413613773858$
```python
J0*pow(10,gamma*(9.05-8))*2*np.pi*(10**9.1-10**9)
```
0.041151056811310156
```python
```
| 71f58c8f52962cb7095ef9fca3795420a4d25716 | 200,224 | ipynb | Jupyter Notebook | Data_Analysis/Fenu_Maldera/Esercizio_2.ipynb | andreasemeraro/MPM_Space_Sciences | b9171f8b926f6ab355c4d87b6f715944b29b05ec | [
"MIT"
]
| null | null | null | Data_Analysis/Fenu_Maldera/Esercizio_2.ipynb | andreasemeraro/MPM_Space_Sciences | b9171f8b926f6ab355c4d87b6f715944b29b05ec | [
"MIT"
]
| null | null | null | Data_Analysis/Fenu_Maldera/Esercizio_2.ipynb | andreasemeraro/MPM_Space_Sciences | b9171f8b926f6ab355c4d87b6f715944b29b05ec | [
"MIT"
]
| null | null | null | 330.40264 | 26,772 | 0.936271 | true | 2,064 | Qwen/Qwen-72B | 1. YES
2. YES | 0.863392 | 0.782662 | 0.675744 | __label__eng_Latn | 0.109482 | 0.408311 |
# PageRank Algorithm
This notebook implements the PageRank algorithm, prepared as a homework in BLG202E - Numerical Methods in CE class at ITU, Spring 2020.
```python
!pip install mechanize
```
Defaulting to user installation because normal site-packages is not writeable
Requirement already satisfied: mechanize in /home/marif/.local/lib/python3.6/site-packages (0.4.5)
Requirement already satisfied: html5lib>=0.999999999 in /home/marif/.local/lib/python3.6/site-packages (from mechanize) (1.0.1)
Requirement already satisfied: six>=1.9 in /home/marif/.local/lib/python3.6/site-packages (from html5lib>=0.999999999->mechanize) (1.14.0)
Requirement already satisfied: webencodings in /home/marif/.local/lib/python3.6/site-packages (from html5lib>=0.999999999->mechanize) (0.5.1)
```python
import numpy as np #For arrays and linear algebra routines
from mechanize import Browser #For getting the names of the papers from arXiv in part 3 results
#Not necessary, but to use it, above cell must be run to install mechanize library
```
```python
eps = 1e-10
```
# Motivating Example: Academic Ranking System
Our motivating example uses the paper citation network of Arxiv High Energy Physics Theory category. We will rank the papers by their influence. Data is represented as a directed graph, and we have the list of edges in a text file. We will start by loading the text file that contains the directed graph into a NumPy array.
```python
array = np.genfromtxt("cit-HepTh.txt", dtype=int)
```
Let's see the first 10 edges.
```python
print(array[0:10])
```
[[ 1001 9304045]
[ 1001 9308122]
[ 1001 9309097]
[ 1001 9311042]
[ 1001 9401139]
[ 1001 9404151]
[ 1001 9407087]
[ 1001 9408099]
[ 1001 9501030]
[ 1001 9503124]]
We will create a transition matrix from the list that contains links between the nodes of the graph. A transition matrix shows transitions from nodes to other nodes and the probability that transition happens.
These weights or normalization factors are calculated as $\frac{1}{\textrm{(no. of outgoing links from that node)}}$. The following code defines a function with parameter **arr** (a multidimensional array of edges) that returns **labels** (an array of the distinct nodes) and **weights** (a dictionary mapping these nodes to their weights).
```python
def createWeights(arr):
labels, counts = np.unique(arr[:,0], return_counts=True) ##Finds the unique entries in first column, returns their values
#and their counts to calculate weights
weight_dict = dict()
for index, label in enumerate(labels): ##Creates a dictionary that holds the normalization factor,
weight_dict[label] = 1 / counts[index] # 1/(number of outgoing links) for every paper ID in labels
return labels, weight_dict
```
```python
labels, weights = createWeights(array)
```
The following code initializes the transition matrix, finds the cells that represent transitions using a for loop that examines each node and the links from that node, and fills in each such cell with the weight specified in the weights dictionary created above.
```python
def createTransitionMatrix(arr, labels, weight_dict):
transitionMatrix = np.zeros((labels.size,labels.size))
for i, label in enumerate(labels):
links_from_label = arr[np.nonzero(label == arr[:,0]),1][0]
# if links_from_label.shape == (0,): ##For dangling nodes, nodes that have no outgoing links
# weight_dict[label] = 1 / labels.size
# links_to_label = labels
transitionMatrix[np.searchsorted(labels, links_from_label),i] = weight_dict[label]
return transitionMatrix
```
```python
rankMatrix = createTransitionMatrix(array, labels, weights)
```
We expect our transition matrix to be column-stochastic, and we also expect it to be a sparse matrix, since it is mostly populated by zeros. Operations on sparse matrices can be done by faster methods, so sparsity is an advantage in speed.
Now we will define and run two functions to test these attributes of our matrix. Results will show that our expectations hold.
```python
def checkStochastic(matrix):
eps = 1e-10
print("Sums of columns that do not add to 1 in a reasonable error margin {} will be shown.".format(eps))
for i in matrix.sum(0):
        if abs(i - 1) > eps:
print(i)
print("Calculation finished, average value of sums of all columns is {}.".format(np.mean(matrix.sum(0))))
def checkSparsity(matrix):
zeros = np.count_nonzero(matrix==0)
elements = matrix.size
sparse_rate = np.divide(zeros,elements)
print("There are {} elements in total, {} of them are zero.".format(elements, zeros))
print("Sparsity rate of this matrix is %{}".format(sparse_rate*100))
return sparse_rate
```
```python
checkStochastic(rankMatrix)
sparsity = checkSparsity(rankMatrix)
```
Sums of columns that do not add to 1 in a reasonable error margin 1e-10 will be shown.
Calculation finished, average value of sums of all columns is 0.9958169865008099.
There are 627953481 elements in total, 627601470 of them are zero.
Sparsity rate of this matrix is %99.94394314059069
---
Now that we have the required matrix, we can solve the equation
\begin{equation}
A x = x
\end{equation}
where A is the matrix, and x is the result vector that contains the rank. We will solve this by **power method**, by repeatedly multiplying an arbitrary vector* by our matrix until the difference in resulting vectors of two iterations is smaller than epsilon.
*While any arbitrary vector should work, it is better practice to use an all ones vector normalized by the size of itself, so initially every rank is equal and vector sums up to one. We will follow this practice.
```python
def solveRank(rankMatrix):
eps = 1e-7
v0 = np.ones(rankMatrix.shape[0]) / rankMatrix.shape[0]
# v0 = np.random.random(rankMatrix.shape[0] / rankMatrix.shape[0])
counter = 1
while True:
v = np.dot(rankMatrix, v0)
v = v / np.linalg.norm(v)
if (np.mean(np.abs(np.subtract(v,v0))) < 2*eps):
break
# print("Error: {}".format(np.mean(np.abs(np.subtract(v,v0))))) ##Uncomment this line to print error in each step
#If this function is taking too long, printing the error may be a good idea for debugging
counter += 1
v0 = v
print("Appropriate vector found in {} iterations, final difference between two iteration result vectors was less than {}.".format(counter,eps))
return v
```
```python
final = solveRank(rankMatrix)
```
Appropriate vector found in 80 iterations, final difference between two iteration result vectors was less than 1e-07.
Finally, we will rank our papers according to the resulting vector, and show the first 10 papers.
```python
def rankPagesDescending(labels, final):
return labels[final.argsort()][::-1]
```
```python
rankPagesDescending(labels, final)[0:10]
```
array([9201015, 9207016, 9206003, 209015, 9205071, 9202067, 9201047,
9205038, 9202018, 9205006])
Using the mechanize library, we can collect the information on these papers.
```python
ranking = rankPagesDescending(labels, final)
br = Browser()
for index,paper_id in enumerate(ranking[0:10]):
str_id = str(paper_id)
page_url = "https://arxiv.org/abs/hep-th/"
while(len(str_id) < 7):
str_id = '0' + str_id
page_url += str_id
br.open(page_url)
paper_title = br.title()[17:]
print("{}. paper ID is {}.".format(index + 1, paper_id))
print("Name of the paper is: {}".format(paper_title))
print(page_url)
```
1. paper ID is 9201015.
Name of the paper is: An Algorithm to Generate Classical Solutions for String Effective Action
https://arxiv.org/abs/hep-th/9201015
2. paper ID is 9207016.
Name of the paper is: Noncompact Symmetries in String Theory
https://arxiv.org/abs/hep-th/9207016
3. paper ID is 9206003.
Name of the paper is: From Form Factors to Correlation Functions: The Ising Model
https://arxiv.org/abs/hep-th/9206003
4. paper ID is 209015.
Name of the paper is: Advances in String Theory in Curved Space Times
https://arxiv.org/abs/hep-th/0209015
5. paper ID is 9205071.
Name of the paper is: Novel Symmetry of Non-Einsteinian Gravity in Two Dimensions
https://arxiv.org/abs/hep-th/9205071
6. paper ID is 9202067.
Name of the paper is: Stringy Domain Walls and Other Stringy Topological Defects
https://arxiv.org/abs/hep-th/9202067
7. paper ID is 9201047.
Name of the paper is: Duality-Invariant Gaugino Condensation and One-Loop Corrected Kahler Potentials in String Theory
https://arxiv.org/abs/hep-th/9201047
8. paper ID is 9205038.
Name of the paper is: Recent Developments in Classical and Quantum Theories of Connections, Including General Relativity
https://arxiv.org/abs/hep-th/9205038
9. paper ID is 9202018.
Name of the paper is: Jones Polynomials for Intersecting Knots as Physical States of Quantum Gravity
https://arxiv.org/abs/hep-th/9202018
10. paper ID is 9205006.
Name of the paper is: Stabilized Quantum Gravity: Stochastic Interpretation and Numerical Simulation
https://arxiv.org/abs/hep-th/9205006
In some edge cases, the transition matrix above may not satisfy the conditions the algorithm requires. To account for those situations, a **damping factor** is defined for the algorithm, which takes a weighted average of our transition matrix with a matrix of all ones. While the transition matrix represents probabilities of going from one node to another, adding this damping factor gives us a chance to randomly jump from any node to any other.
```python
def addDamping(transitionMatrix, labels, p = 0.15):
if(p < 0 or p > 1):
print("Please try again with a damping factor in interval [0,1].")
return None
rankMatrix = (1 - p) * transitionMatrix + p * ((1/labels.size) * np.ones((labels.size,labels.size)))
return rankMatrix
```
```python
rankMatrix = addDamping(rankMatrix, labels)
checkStochastic(rankMatrix)
sparsity = checkSparsity(rankMatrix)
```
Sums of columns that do not add to 1 in a reasonable error margin 1e-10 will be shown.
Calculation finished, average value of sums of all columns is 0.9964444385253646.
There are 627953481 elements in total, 0 of them are zero.
Sparsity rate of this matrix is %0.0
We see that while the stochastic structure of the matrix holds, it is no longer a sparse matrix after adding the damping factor. This is because every node now has a nonzero probability of going to any other node, even if that probability is very small. So we no longer have any 0 cells in our matrix.
Now, we will solve our matrix again.
```python
final = solveRank(rankMatrix)
ranking = rankPagesDescending(labels, final)
br = Browser()
for index,paper_id in enumerate(ranking[0:10]):
str_id = str(paper_id)
page_url = "https://arxiv.org/abs/hep-th/"
while(len(str_id) < 7):
str_id = '0' + str_id
page_url += str_id
br.open(page_url)
paper_title = br.title()[17:]
print("{}. paper ID is {}.".format(index + 1, paper_id))
print("Name of the paper is: {}".format(paper_title))
print(page_url)
```
Appropriate vector found in 30 iterations, final difference between two iteration result vectors was less than 1e-07.
1. paper ID is 9201015.
Name of the paper is: An Algorithm to Generate Classical Solutions for String Effective Action
https://arxiv.org/abs/hep-th/9201015
2. paper ID is 9207016.
Name of the paper is: Noncompact Symmetries in String Theory
https://arxiv.org/abs/hep-th/9207016
3. paper ID is 9205071.
Name of the paper is: Novel Symmetry of Non-Einsteinian Gravity in Two Dimensions
https://arxiv.org/abs/hep-th/9205071
4. paper ID is 209015.
Name of the paper is: Advances in String Theory in Curved Space Times
https://arxiv.org/abs/hep-th/0209015
5. paper ID is 9202067.
Name of the paper is: Stringy Domain Walls and Other Stringy Topological Defects
https://arxiv.org/abs/hep-th/9202067
6. paper ID is 9201047.
Name of the paper is: Duality-Invariant Gaugino Condensation and One-Loop Corrected Kahler Potentials in String Theory
https://arxiv.org/abs/hep-th/9201047
7. paper ID is 9202018.
Name of the paper is: Jones Polynomials for Intersecting Knots as Physical States of Quantum Gravity
https://arxiv.org/abs/hep-th/9202018
8. paper ID is 9205006.
Name of the paper is: Stabilized Quantum Gravity: Stochastic Interpretation and Numerical Simulation
https://arxiv.org/abs/hep-th/9205006
9. paper ID is 9205038.
Name of the paper is: Recent Developments in Classical and Quantum Theories of Connections, Including General Relativity
https://arxiv.org/abs/hep-th/9205038
10. paper ID is 9206048.
Name of the paper is: Conformally Exact Results for SL(2,R)\times SO(1,1)^{d-2}/SO(1,1) Coset Models
https://arxiv.org/abs/hep-th/9206048
---
Since our dataset is very big, it is not easy to say whether our algorithm chooses the rankings correctly. So we will test the same functions on a smaller dataset below, and compare the results with the expected ranking, which can be computed by hand.
```python
def calculateRankVector(data, damping=False):
labels, weights = createWeights(data)
rankMatrix = createTransitionMatrix(data, labels, weights)
if damping:
rankMatrix = addDamping(rankMatrix, labels)
resultVector = solveRank(rankMatrix)
return resultVector
```
```python
filename = "testset.txt" #Change the filename to test it with another file
data = np.genfromtxt(filename, dtype = str)
resultVector = calculateRankVector(data, damping=True)
ranking = rankPagesDescending(labels, resultVector)
print(ranking)
print(resultVector.reshape(resultVector.size,1))
```
Appropriate vector found in 20 iterations, final difference between two iteration result vectors was less than 1e-07.
['A' 'D' 'C' 'B']
[[0.63395272]
[0.34267715]
[0.444879 ]
[0.53175087]]
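As an additional sanity check on the small test graph, the power-method result can be compared against the dominant eigenvector computed by a dense eigensolver (feasible only because this matrix is tiny):

```python
chk_labels, chk_weights = createWeights(data)
M = addDamping(createTransitionMatrix(data, chk_labels, chk_weights), chk_labels)
eigvals, eigvecs = np.linalg.eig(M)
dominant = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
dominant = np.abs(dominant) / np.linalg.norm(dominant)   # same normalization as solveRank
print(rankPagesDescending(chk_labels, dominant))
print(dominant.reshape(-1, 1))
```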
| a9535e3ff61bf761c668565600181c90ec44bfef | 24,166 | ipynb | Jupyter Notebook | pagerank.ipynb | marifdemirtas/pagerank | 0d5796c5720a35aa84b2aa1ef98343a28016d390 | [
"MIT"
]
| null | null | null | pagerank.ipynb | marifdemirtas/pagerank | 0d5796c5720a35aa84b2aa1ef98343a28016d390 | [
"MIT"
]
| null | null | null | pagerank.ipynb | marifdemirtas/pagerank | 0d5796c5720a35aa84b2aa1ef98343a28016d390 | [
"MIT"
]
| null | null | null | 32.568733 | 455 | 0.583175 | true | 3,739 | Qwen/Qwen-72B | 1. YES
2. YES | 0.92523 | 0.843895 | 0.780797 | __label__eng_Latn | 0.972892 | 0.652386 |
## Numerical method
Here we will solve few problems by numerical method using Lagrangian grid which deformed and moved together with system.
We will use Wilkins Method.
### Numerical implementation of boundary conditions
#### Basics of the system
Lets consider tho slabs collision problem. Suppose slabs have areas $AB$ and $CD$ on $x$ axes:
we call $A$, $B$, $C$ and $D$ *side bars*
We mark here and following formulas in varible $v^n_i$ top index $n$ as time and $i$ as index of *inner bars* in the grid.
*Bar* has always integer index $k$ and *node* has index $k\pm1/2$
For *inner bars* we will use:
for speeds:
$\begin{align}
v^{n+1/2}_{i} = v^{n-1/2}_{i} + 2 \Delta t \frac{\Sigma^n_{i+1/2} - \Sigma^n_{i-1/2}}{\phi^n_{i+1/2} + \phi^n_{i - 1/2}}
\end{align}$
for positions:
$\begin{align}
x^{n+1}_i = x^{n}_i + \Delta t v^{n+1/2}_i \\
\end{align}$
where $\Delta t$ is the time step and $\phi^n_{i+1/2} = \rho^n_{i+1/2}(x^n_{i+1} - x^n_{i})$, with $\rho$ the density.
We suppose that the values of all variables are known at time $t^n$ and before it.
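A minimal sketch of one update step for the inner bars, following the two formulas above (the array layout and names are assumptions made for illustration):

```python
import numpy as np

def update_inner_bars(x, v_half, sigma, rho, dt):
    """
    One explicit step (sketch). Assumed layout:
      x[i]      -- bar positions at time n           (length N)
      v_half[i] -- bar velocities at time n - 1/2    (length N)
      sigma[k]  -- tension in node k + 1/2           (length N - 1)
      rho[k]    -- density in node k + 1/2           (length N - 1)
    Returns (x at n + 1, v at n + 1/2); side bars are left untouched here.
    """
    phi = rho * (x[1:] - x[:-1])                  # phi_{i+1/2} = rho_{i+1/2} (x_{i+1} - x_i)
    v_new = v_half.copy()
    v_new[1:-1] = v_half[1:-1] + 2.0 * dt * (sigma[1:] - sigma[:-1]) / (phi[1:] + phi[:-1])
    x_new = x + dt * v_new                        # x^{n+1}_i = x^n_i + dt * v^{n+1/2}_i
    return x_new, v_new
```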
#### Collision
The slabs collide at time $t^n$; we merge $B$ and $C$ and set $\Sigma_B = \Sigma_C$, where $\Sigma$ is the tension.
The boundary conditions for the *side bars* at time $t^n$ are found as follows. Let's add a *mock node* $i - 1/2$ with *bars* $i - 1$ and $i$, used only for the computation.
This *mock node* is placed at the left corner of the right slab.
And we suppose the following for the *mock node*:
$\begin{align}
\Sigma^n_{i-1/2} = 0 ; \phi^n_{i-1/2} = 0
\end{align}$
```python
```
| 3f8d4a041d82d07de032e39deea309da29a8657f | 3,240 | ipynb | Jupyter Notebook | numerical_method.ipynb | CorpGlory/codebang | 69aff8e91ec661318397f684106d9fd1cc51df57 | [
"MIT"
]
| null | null | null | numerical_method.ipynb | CorpGlory/codebang | 69aff8e91ec661318397f684106d9fd1cc51df57 | [
"MIT"
]
| null | null | null | numerical_method.ipynb | CorpGlory/codebang | 69aff8e91ec661318397f684106d9fd1cc51df57 | [
"MIT"
]
| null | null | null | 30.857143 | 188 | 0.549074 | true | 534 | Qwen/Qwen-72B | 1. YES
2. YES | 0.899121 | 0.689306 | 0.619769 | __label__eng_Latn | 0.990774 | 0.278262 |
```python
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as stats
import random
%matplotlib inline
```
**Note:** This Jupyter notebook is a slightly shortened version of "HT>.ipynb" found at this GitHub repository __[here](https://github.com/craw-daddy/Introductory-DS)__.
## Statistical Hypothesis Testing
Hypothesis testing typically involves comparing two datasets to look for a statistical relationship between the two datasets. In other cases, testing will involve a dataset versus an "idealized" dataset.
### Examples of hypothesis testing
1. Is the mean height of Scandinavian men the same as the mean height of other European (non-Scandinavian) men?
2. Are the number of males/females fined for speeding in New York "significantly different" than male/female arrests for speeding in Massachusetts in 2017? (This is essentially asking if the number of fines is independent of gender and location.)
3. A certain machine for manufacturing cardboard boxes is supposed to manufacture ones that are 1/5" thick. A quality control technician in the company checks a sample of the output to test if the thickness is smaller than this desired target (meaning the boxes will be too weak).
4. When making tea, is there is a difference in taste whether the tea is added to the cup first, or the milk is added to the cup first? (A somewhat famous example described by Ron Fisher who helped lay the foundations of statistical hypothesis testing. Dr. Muriel Bristol, a colleague of Fisher, claimed to be able to tell which was added first.)
### Null hypothesis and alternative hypothesis
$H_O$: The null hypothesis, assumed to be true.
$H_A$: The alternative hypothesis, accepted if the samples/observations support this hypothesis "with sufficient evidence".
For some of the examples given above, $H_O$ and $H_A$ might be stated as follows:
1. $H_O$: $\mu_S = \mu_E$ where $\mu_S$ is the mean height of Scandinavian men and $\mu_E$ is the mean height of other European (non-Scandinavian) men.
$H_A$: $\mu_S \not= \mu_E$
<br>
2. $H_O$: The frequency of speeding fines is independent of gender and location.
$H_A$: The frequency of speeding fines is not independent of gender and/or location.
<br>
3. $H_O$: $\mu = 0.20$ where $\mu$ is the mean thickness of the boxes in inches.
$H_A$: $\mu < 0.20$
### Common assumptions about samples/observations
1. Samples come from a normal distribution, or at least one that is symmetric. Alternatively, the number of samples should be high enough (at least $30$) so that the Central Limit Theorem applies.
2. Samples are independent from one another.
3. For multiple datasets, it is typically assumed they have a common variance. (That itself could constitute another hypothesis test.)
(Other assumptions might be used if data is categorical in nature.)
### Test statistic
The test statistic summarizes the dataset(s) into one value that can be used to (try to) distinguish the null hypothesis from the alternative hypothesis.
Common test statistic distributions: Student's t distribution, normal distribution (using Law of Large Numbers with known variance), $\chi^2$ distribution
### Significance level
The significance level, $\alpha$, is the probability threshold below which the null hypothesis will be rejected. What this means is the following: Assume the null hypothesis is true, and let $T$ denote the test statistic that will be used in the test. The significance level partitions the possible values of $T$ into regions where the null hypothesis will be rejected (the _critical region_), and those where it will not be rejected. (E.g. for a normal distribution, the critical region will be the tail(s) of the distribution.) The probability of the critical region is equal to $\alpha$. Typical values for $\alpha$ are $0.05$ and $0.01$.
### Procedure of the test
1. Having chosen the test statistic $T$, and the significance level $\alpha$, we compute the observed value of $T$ using the samples/observations in our dataset. Call this value $t_{obs}$.
2. Check if $t_{obs}$ lies in the critical region. If so, we reject the null hypothesis in favor of the alternative hypothesis. If not, we cannot reject the null hypothesis.
3. Equivalently, the observed statistic corresponds to a $p$-value: the probability, computed under the null hypothesis, of obtaining a test statistic at least as extreme as $t_{obs}$. If the $p$-value is smaller than or equal to the significance level, we reject the null hypothesis in favor of the alternative hypothesis. (A minimal computation is sketched below.)
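As a minimal illustration (the observed statistic and degrees of freedom below are made-up values), the two-sided $p$-value for an observed $t$ statistic can be computed directly from the $t$ distribution:
```python
from scipy import stats

# Hypothetical observed statistic and degrees of freedom
t_obs, df = -2.1, 24
p_value = 2 * stats.t.sf(abs(t_obs), df)  # two-sided p-value from the t distribution
print(p_value, p_value <= 0.05)           # reject H_O at the 5% level?
```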
### Errors in hypothesis testing
It is possible that we make an error in hypothesis testing, e.g. rejecting the null hypothesis incorrectly or failing
to reject the null hypothesis when it is not the truth (but the samples we have do not support rejecting it).
A common analogy used in hypothesis testing is to a criminal trial. In most countries, a defendant on trial is presumed to be innocent (the "null hypothesis"). The evidence presented during the course of a trial is analogous to the samples taken in a statistical test. If the evidence is sufficiently persuasive, the jury can find the defendant guilty (the "alternative hypothesis"). If the evidence is not persuasive, the defendant is found "not guilty". It's possible, of course, that the jury's decision is wrong.
<table border="2" style="text-align:center; font-size:14px">
  <tr><th></th><th colspan="2" style="text-align:center">Truth</th></tr>
  <tr style="text-align:center">
    <th>(Jury) Decision</th>
    <td style="text-align:center"><i>Not Guilty</i><br><i>(Null Hypothesis)</i></td>
    <td style="text-align:center"><i>Guilty</i><br><i>(Alternative Hypothesis)</i></td>
  </tr>
  <tr>
    <th>Not Guilty<br>(Accept Null Hypothesis)</th>
    <td style="text-align:center">Ok</td>
    <td style="text-align:center">Error<br>(<b>Type II Error</b>)</td>
  </tr>
  <tr>
    <th>Guilty<br>(Accept Alternative Hypothesis)</th>
    <td style="text-align:center">Error<br>(<b>Type I Error</b>)</td>
    <td style="text-align:center">Ok</td>
  </tr>
</table>
In hypothesis testing, as in jury trials, we want to minimize the "conviction of an innocent person", or the incorrect rejection of the null hypothesis (the "Type I Error"). There is an asymmetry in that lowering the chances of a Type I error magnifies the chances of a Type II error occurring.
**Note: The significance level $\alpha$ is the (highest) probability of a Type I Error occurring.**
```python
# Cardboard box samples
samples=[0.1803, 0.2160, 0.1622, 0.2277, 0.2253, 0.1809, 0.1765, 0.1861, 0.1814, 0.1698,
0.1853, 0.2086, 0.1839, 0.1783, 0.1814, 0.1565, 0.2127, 0.1811, 0.1718, 0.2089,
0.2067, 0.1614, 0.1690, 0.1812, 0.2172, 0.1555, 0.1623, 0.1887, 0.2069, 0.1676,
0.1845, 0.1859, 0.1917, 0.2170, 0.1943, 0.1813, 0.2017, 0.2097, 0.1737, 0.2076]
print(len(samples))
print(np.mean(samples))
```
40
0.188465
Since we only have the samples in hand, with no before-hand knowledge about the variance
of the distribution, we perform a one-sample, one-sided t-test.
In this case we compute $(\bar{X}-\mu)/(s/\sqrt{n})$, where $\bar{X}$ is the sample mean, $s$ is the sample standard deviation, $n$ is the number of samples, and $\mu$ is the (assumed) mean of the distribution (from the null hypothesis). We compare this to the Student's $t$-distribution with $n-1$ degrees of freedom.
```python
alpha = 0.05
# Perform the t-test.
(statistic, p_value) = stats.ttest_1samp(samples, 0.20)
```
```python
# Note that, by default, Python performs a two-sided t-test.
# To get the one-sided test we want, we reject the null hypothesis if and only if
# the test statistic is negative (based on the alternative hypothesis),
# and we have p_value/2 < alpha.
print(statistic)
print(p_value/2 < alpha)
```
-3.72303812077
True
So in this case we reject the null hypothesis in favor of the alternative hypothesis ($\mu < 0.20$).
Alternatively, we can find the "critical value", where if the test statistic is less than this value, we reject the null hypothesis. Since this is a one-sided test, we want the value of the test statistic where 5% of the cdf is below this value.
```python
t_var = stats.t(len(samples)-1)
t_critical = t_var.ppf(0.05) # "Invert" the cdf to find the critical value.
print(t_critical)
print(statistic < t_critical)
```
-1.6848751195
True
Since the value of the test statistic is less than this critical value, we reject the null hypothesis.
<a id="intro"></a>
## Bayes' Theorem (Introduction)
Bayes' Theorem (or Bayes' Rule) is used to describe the probability of an event, given other knowledge related to that event. It is a way to update our (probabilistic) beliefs as new information is revealed.
As a simple example, consider the case of a mother who has two children.
Assuming that it is equally likely that each child is a girl or a boy, consider these questions:
- What is the probability that both of the mother's children are girls?
- If I tell you that at least one of the children is a girl, what is the probability that both children are girls?
- If I tell you that the eldest child is a girl, what is the probability that both children are girls?
These answers are, respectively, $\frac{1}{4}$, $\frac{1}{3}$, and $\frac{1}{2}$, so additional information about the mother's children changes our assessment of these probabilities. (See [answers](#mother) at the end of this lesson if you are unfamiliar with this solution.)
### Bayesian inference
Bayesian inference is the use of Bayes' Theorem to update the probability of a hypothesis as more evidence becomes available. It is also used to infer the values of parameters of probability distributions from observations.
### Applications of Bayes' Theorem and Bayesian Inference
1. Medical testing
2. Spam (email) detection
3. Weather prediction
4. Estimating parameters for probability distributions
5. Cracking the Enigma Code
6. The famous "Monty Hall" problem (do you win the car or the goat?)
#### An example
Suppose after a horrible week of upset stomachs, little sleep, pain, and weak knees, you go to the doctor. After running a variety of tests, the doctor tells you that you have a life-threatening disease. Should you be worried?
You need more information to decide how concerned you should be, so you ask the doctor, and she tells you the following about this disease and the testing method:
- For people who have the disease, the test will correctly detect this 99% of the time. (The _sensitivity_ is 99%.)
- For people without the disease, the test will correctly conclude this 99% of the time. (The _specificity_ is 99%.)
- It is believed (estimated) that 0.1% of people have this disease.
Letting D denote the event that you have the disease, and P denoting the event that you test positive for having the disease, we are trying to determine this quantity:
<div class="alert alert-block alert-warning">$Pr(D \mid P)$ = the probability that you have the disease, given that you test positive for it</div>
Before getting to the statement of Bayes' Theorem, let's solve this problem "by hand".
- In a population of 1000 people, we expect there will be one person with this disease, i.e. $1000 \times 0.001 = 1$.
- For this one person with the disease, the test will be correct with probability $1\times 0.99 \approx 1$, so the test will correctly identify this person.
- Out of the 999 people without the disease, the test will incorrectly test positive for (about) $999\times 0.01 \approx 10$ of them.
Thus, about eleven people test positive, but only one of them has the disease, so the probability that you have the disease, given that you test positive, is about $1/11 \approx 9\%$.
**Conclusion:** If it was me, I would be worried enough to get tested again (by another doctor, who uses a different lab to process the test).
## Bayes' Theorem
Let A and B denote two events, where $Pr(B) > 0$. Bayes' Theorem allows us to express $Pr(A\mid B)$ (the probability that A happens, given that B has happened) in terms of other probabilities.
$$Pr(A \mid B) = \frac{Pr(B \mid A)\cdot Pr(A)}{Pr(B)}$$
### Some terminology and assumptions
$Pr(A \mid B)$, the conditional probability that the event $A$ happens given the evidence that $B$ has occurred, is typically called the _posterior probability_. The quantity $Pr(A)$ is usually called the _prior probability_, and $Pr(B\mid A)$ is often called the _likelihood_.
Implicit in the use of Bayes' Theorem are the assumptions that the prior probability and the likelihood are known quantities (or that we have very good estimates of them). If we gain additional knowledge that the prior probability or the likelihood has somehow changed, we would use the new values to update $Pr(A\mid B)$ accordingly.
#### Our example
We want to compute $Pr(D\mid P)$.
To apply Bayes' Theorem we need our other probabilities, $Pr(P \mid D)$, $Pr(D)$, and $Pr(P)$.
```python
# The likelihood Pr(P | D)
p_Pos_given_Disease = 0.99
# The prior probability Pr(D)
p_Disease = 0.001
```
Finding $Pr(P)$ is the slightly tricky one here. It uses the so-called "Law of Total Probability": $Pr(P) = Pr(P \mid D)\,Pr(D) + Pr(P \mid D^c)\,Pr(D^c)$, where $D^c$ is the event that you do not have the disease.
```python
p_Pos = 0.001*0.99 + 0.999*0.01
```
Then we apply Bayes' Theorem.
```python
p_Disease_given_Pos = p_Pos_given_Disease * p_Disease / p_Pos
print(p_Disease_given_Pos)
```
0.09016393442622951
<a id="second-test"></a>
### The second test
Suppose having tested positive for the disease, you go for a second test, which is positive again.
What's your estimation now of the probability you have the disease? (See [end](#second-test-answer) for answer.)
### Bayesian inference
Bayes' Theorem can be used to estimate parameters of probability distributions, given an assumption about the underlying distribution.
For example, coin flipping can be represented by a Bernoulli random variable with a (possibly unknown) parameter $P$, which equals the probability of obtaining a "heads" in one flip. Our goal might be to estimate the parameter $P$ based on observed evidence of coin flips.
Recall that repeated Bernoulli trials corresponds to the binomial distribution, giving the probability of $k$ "successes" in $n$ trials, where success happens with probability $P$. The binomial probability mass function has this expression:
$$Pr(k;n,P) = {n \choose k}P^k(1-P)^{n-k}.$$
As stated, we want to use Bayes' Theorem to help us estimate $P$. Bayes' Theorem tells us that
$$Pr(P=p\mid heads) \propto Pr(heads \mid P=p)\cdot Pr(P=p)$$
where "$\propto$" means "proportional to". We know that $Pr(heads \mid P=p) = p$, so if we know (or could "guess") a possible distribution for the random variable $P$, we could update our knowledge about $P$ given new evidence of an observed coin flip.
In this case the "right" probability distribution to try for $P$ is the Beta distribution, denoted $\mathbb{B}(\alpha,\beta)$, which is a distribution on the interval $[0,1]$ with the probability density function
$$p(x) = \frac{x^{\alpha-1}(1-x)^{\beta-1}}{B(\alpha,\beta)}.$$
Here $B(\alpha,\beta)$ denotes the Beta function, which is the normalizing constant that makes this a probability distribution.
Our application of Bayes' Theorem becomes:
$$
\begin{align}
Pr(P=p\mid heads) & \propto\,\, Pr(heads \mid P=p)\cdot Pr(P=p) \\
& \propto\,\, p\cdot p^{\alpha-1}(1-p)^{\beta-1} \\
& \propto\,\, p^{\alpha}(1-p)^{\beta-1}.
\end{align}
$$
It isn't too difficult to show that
$$Pr(P=p \mid heads) = \mathbb{B}(\alpha+1,\beta),$$
and similarly that
$$Pr(P=p \mid tails) = \mathbb{B}(\alpha, \beta+1).$$
In other words, if $P$ (the prior) is distributed according to the Beta distribution $\mathbb{B}(\alpha,\beta)$, then the _new_ distribution of $P$ (the posterior) is a Beta distribution that depends upon observing a "heads" or "tails" of the new coin flip, either $\mathbb{B}(\alpha+1,\beta)$ or $\mathbb{B}(\alpha,\beta+1)$, respectively.
**Terminology:** With the assumption that the prior of $P$ is the Beta distribution, and the posterior for $P$ is another Beta distribution (with different parameters) we say that the Beta distribution is a "conjugate prior" for Bernoulli trials.
From the properties of the Beta distribution, we know that $\mathbb{E}(\mathbb{B}(\alpha,\beta))=\frac{\alpha}{\alpha+\beta}$.
Therefore, we have that our prior probability is $P=Pr(heads) = \frac{\alpha}{\alpha+\beta}$. Observations make us readjust this belief about $P$ for every coin flip we observe.
If we observe a "heads", we want to adjust $P$ upwards, and observing "tails" makes us adjust $P$ downwards. We do this by changing the parameters of the Beta distribution as noted above.
As the number of observations increases, our estimate for P should get better.
```python
P = 0.8
alpha = 1
beta = 1
l = []
dists = []
for k in range(101):
estP = alpha/(alpha+beta)
if k % 10 == 0:
l.append([alpha, beta, round(estP,4)])
dists.append(stats.beta(a=alpha,b=beta))
r = random.random()
if r <= P:
alpha = alpha+1
else:
beta = beta+1
print(l)
print(round(alpha/(alpha+beta),4))
```
[[1, 1, 0.5], [10, 2, 0.8333], [18, 4, 0.8182], [26, 6, 0.8125], [36, 6, 0.8571], [45, 7, 0.8654], [53, 9, 0.8548], [61, 11, 0.8472], [70, 12, 0.8537], [77, 15, 0.837], [84, 18, 0.8235]]
0.8155
```python
plt.figure()
for k, d in enumerate(dists[0::len(dists)//5]):
ax=plt.subplot(2,3,k+1)
xPoints = np.arange(0,1,1/200)
ax.plot(xPoints, d.pdf(xPoints))
ax.set_title("Beta(a={a}, b={b})".format(**d.kwds))
plt.tight_layout()
```
<a id="coin-example"></a>
#### Estimating the probability of "heads"
Suppose we are examining another coin. What's the estimate for p, the probability of a "heads" for this coin, when we are given this list of 150 coin tosses? (See answer [below](#coin-example-answer).)
```python
ht = ['H','T','H','H','T','T','T','H','H','T','T','T','H','T','T',
'T','T','H','H','H','T','H','T','T','T','T','H','H','H','T',
'T','T','H','H','T','H','T','T','H','H','H','H','T','T','H',
'H','H','T','T','H','H','H','T','T','T','T','T','H','T','H',
'H','T','H','T','T','T','T','T','H','T','T','T','H','T','T',
'H','T','T','H','T','T','H','T','T','T','T','H','T','T','H',
'T','H','H','T','H','T','T','T','H','T','T','T','T','H','H',
'T','T','T','H','H','T','T','T','H','H','T','H','T','H','T',
'T','T','H','T','H','T','T','T','T','T','H','T','T','T','T',
'H','H','T','T','T','H','H','T','H','T','T','H','T','T','H']
```
### Answers to questions asked
<a id="mother"></a>
#### A mother's children (solution)
There are four possibilities for the mother's two children, GG, GB, BG, and BB (given in birth order). Assuming that B and G are equally likely, then these four possibilities are also equally likely. Hence, with no information, we see the probability that the mother has two girls is $1/4$.
If we are told that at least one child is a girl, this eliminates the BB option, leaving three equally likely options, so the probability that she has two girls is $1/3$.
Finally, if we are told that the eldest child is a girl, this leaves only the two choices $GG$ and $GB$, each equally likely, so the probability she has two girls is $1/2$.
[[Back to Introduction to Bayes' Thm](#intro)]
<a id="second-test-answer"></a>
#### The second test (solution)
Suppose having tested positive for the disease, you go for a second test, which is positive again.
What's your estimation now of the probability you have the disease?
The first test (effectively) alters the prior probability $Pr(D)$, from 0.1% to (about) 9%. This isn't quite accurate, but is close enough for a good estimate.
```python
# New prior
p_Disease = 0.09
# New denominator, since the prior has been updated (given the positive test)
p_Pos = 0.09*0.99+0.91*0.01
p_Disease_given_Pos = p_Pos_given_Disease * p_Disease / p_Pos
print(p_Disease_given_Pos)
```
0.9073319755600815
So now you're about 91% positive that you have the disease.
[[Back to "The second test"](#second-test)]
<a id="coin-example-answer"></a>
#### Estimating the probability of "heads" (solution)
```python
ht = ['H','T','H','H','T','T','T','H','H','T','T','T','H','T','T',
'T','T','H','H','H','T','H','T','T','T','T','H','H','H','T',
'T','T','H','H','T','H','T','T','H','H','H','H','T','T','H',
'H','H','T','T','H','H','H','T','T','T','T','T','H','T','H',
'H','T','H','T','T','T','T','T','H','T','T','T','H','T','T',
'H','T','T','H','T','T','H','T','T','T','T','H','T','T','H',
'T','H','H','T','H','T','T','T','H','T','T','T','T','H','H',
'T','T','T','H','H','T','T','T','H','H','T','H','T','H','T',
'T','T','H','T','H','T','T','T','T','T','H','T','T','T','T',
'H','H','T','T','T','H','H','T','H','T','T','H','T','T','H']
```
As before, we use the Beta distribution as a prior, and update this distribution for each "heads" or "tails" we see. In this case, we start with $\alpha = \beta = 1$ and increase the appropriate variable for "heads" or "tails".
```python
alpha = 1 + sum(1 for x in ht if x == "H")
beta = 1 + sum(1 for x in ht if x == "T")
print(alpha, beta)
print("Estimate for p: ", alpha/(alpha+beta))
```
60 92
Estimate for p: 0.39473684210526316
| e701088e514bbfcadf755173b9497caa3040f25a | 54,086 | ipynb | Jupyter Notebook | HT&BT-Short.ipynb | craw-daddy/Introductory-DS | 77590ef50a1e8fb9311daac3a0e65ddcc0559988 | [
"MIT"
]
| 1 | 2020-10-17T12:25:22.000Z | 2020-10-17T12:25:22.000Z | HT&BT-Short.ipynb | craw-daddy/Introductory-DS | 77590ef50a1e8fb9311daac3a0e65ddcc0559988 | [
"MIT"
]
| null | null | null | HT&BT-Short.ipynb | craw-daddy/Introductory-DS | 77590ef50a1e8fb9311daac3a0e65ddcc0559988 | [
"MIT"
]
| 1 | 2019-12-10T07:01:19.000Z | 2019-12-10T07:01:19.000Z | 73.686649 | 23,984 | 0.753929 | true | 6,099 | Qwen/Qwen-72B | 1. YES
2. YES | 0.819893 | 0.855851 | 0.701707 | __label__eng_Latn | 0.995122 | 0.468631 |
# Chapter 3 Exercises
In this notebook we will go through the exercises of chapter 3 of Introduction to Stochastic Processes with R by Robert Dobrow.
```python
import numpy as np
```
## 3.1
Consider a Markov chain with transition Matrix
$$P=\left(\begin{array}{cc}
1/2 & 1/4 & 0 & 1/4 \\
0 & 1/2 & 1/2 & 0\\
1/4 & 1/4 & 1/2 & 0 \\
0 & 1/4 & 1/2 & 1/4
\end{array}\right)$$
Find the stationary distribution without using technology
### Answer
We have 5 equations with 4 variables:
$$\begin{align}
1/2 \pi_1 +1/4\pi_3 &= \pi_1 \\
1/4 \pi_1 +1/2\pi_2+1/4\pi_3+1/4\pi_4 &= \pi_2 \\
1/2\pi_2+1/2\pi_3+1/2\pi_4 &= \pi_3 \\
1/4 \pi_1 +1/4\pi_4 &= \pi_4 \\
\pi_1+\pi_2+\pi_3+\pi_4=1
\end{align}$$
$=>$
$$\begin{align}
\pi_1 &= 3\pi_4 \\
\pi_3 &= 6\pi_4 \\
\pi_2 &= 5\pi_4 \\
\pi_4 &= 1/15 \\
\pi_2 &= 1/3 \\
\pi_3 &= 2/5 \\
\pi_1 &= 1/5 \\
\end{align}$$
then $\pi = (1/5, 1/3, 2/5, 1/15)$
```python
pi = np.array([1/5, 1/3, 2/5, 1/15])
P = np.matrix([[1/2 , 1/4 , 0 , 1/4],
[0 , 1/2 , 1/2 , 0],
[1/4 , 1/4 , 1/2 , 0],
[0 , 1/4 , 1/2 , 1/4]])
pi*P, pi
```
(matrix([[0.2 , 0.33333333, 0.4 , 0.06666667]]),
array([0.2 , 0.33333333, 0.4 , 0.06666667]))
## 3.2
A stochastic matrix is called *doubly stochastic* if its rows and columns sum to 1. Show that a Markov chain whose transition matrix is doubly stochastic has a stationary distribution, which is uniform on the state space.
### Answer
Let's remember that then the distribution at $X_n$ is $\alpha P^n$, let's see what happens at $n=1$
$n=1$
$$X_1=\alpha P = \left(\begin{array}{cc}
1/k &1/k&...& 1/k
\end{array}\right)\left(\begin{array}{cc}
p_{1,1} & p_{1,2} & ... & p_{1,k} \\
p_{2,1} & p_{2,2} & ... & p_{2,k} \\
... \\
p_{k,1} & p_{k,2} & ... & p_{k,k} \\
\end{array}\right)=\left(\begin{array}{cc}
p_{1,1}*1/k + p_{2,1}*1/k + ... + p_{k,1}*1/k \\
p_{1,2}*1/k + p_{2,2}*1/k + ... + p_{k,2}*1/k \\
... \\
p_{1,k}*1/k + p_{2,k}*1/k + ... + p_{k,k}*1/k \\
\end{array}\right)^T=\left(\begin{array}{cc}
(p_{1,1} + p_{2,1} + ... + p_{k,1})*1/k\\
(p_{1,2} + p_{2,2} + ... + p_{k,2})*1/k \\
... \\
(p_{1,k} + p_{2,k} + ... + p_{k,k})*1/k \\
\end{array}\right)^T = \left(\begin{array}{cc}
(1)*1/k\\
(1)*1/k \\
... \\
(1)*1/k \\
\end{array}\right)^T=\alpha
$$
This last step follows from the matrix being doubly stochastic, i.e. from each of its columns summing to 1.
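A quick numerical check on a small doubly stochastic matrix (the entries below are just an illustrative example):
```python
import numpy as np

P = np.array([[0.2, 0.5, 0.3],
              [0.5, 0.2, 0.3],
              [0.3, 0.3, 0.4]])   # rows and columns all sum to 1
alpha = np.ones(3) / 3            # uniform distribution on the state space
print(P.sum(axis=0), P.sum(axis=1))
print(alpha @ P)                  # returns the uniform vector again
```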
## 3.3
Consider Markov chains with the following transition matrices. Determine whether each is regular.
$$P=\left(\begin{array}{cc}
0.4 & 0.6 & 0 & 0 \\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0
\end{array}\right)$$
$$Q=\left(\begin{array}{cc}
0 & 1 \\
p & 1-p
\end{array}\right)$$
$$R=\left(\begin{array}{cc}
0 & 1 & 0 \\
0.25 & 0.5 & 0.25 \\
1 & 0 & 0
\end{array}\right)$$
### Answer
A chain is regular if some power of its transition matrix has all entries strictly positive. Both $P$ and $R$ are irreducible and have a state with a self-loop (state 1 in $P$, state 2 in $R$), so they are aperiodic and hence regular. $Q$ is regular exactly when $0 < p < 1$: if $p=0$ the chain is absorbed in state 2, and if $p=1$ the chain alternates deterministically between the two states (period 2), so in neither case can a power of $Q$ be strictly positive.
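Regularity can also be checked numerically by looking for a power of the matrix with all entries strictly positive (the exponents below were chosen by hand):
```python
import numpy as np

P = np.array([[0.4, 0.6, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]])
R = np.array([[0, 1, 0],
              [0.25, 0.5, 0.25],
              [1, 0, 0]])
print((np.linalg.matrix_power(P, 6) > 0).all())  # True -> P is regular
print((np.linalg.matrix_power(R, 4) > 0).all())  # True -> R is regular
```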
## 3.4
Consider the Markov Chain with transition Matrix:
$$R=\left(\begin{array}{cc}
1-a & a & 0 \\
0 & 1-b & b \\
c & 0 & 1-c
\end{array}\right)$$
with $0< a,b,c <1$. Find the stationary distribution
### Answer
We have 4 equations with 3 variables:
$$\begin{align}
(1-a) \pi_1 + c \pi_3 &= \pi_1 \\
(1-b)\pi_2+a\pi_1 &= \pi_2 \\
b \pi_2 +(1-c)\pi_3 &= \pi_3 \\
\pi_1+\pi_2+\pi_3=1
\end{align}$$
$=>$
$$\begin{align}
\pi_3 &= a/c\pi_1 \\
\pi_2 &= a/b\pi_1 \\
\pi_1+a/b\pi_1+a/c\pi_1 &= 1 \\
=> \\
\pi_1 &= \frac{bc}{ab+ac+bc} \\
\pi_2 &= \frac{ac}{ab+ac+bc} \\
\pi_3 &= \frac{ab}{ab+ac+bc} \\
\end{align}$$
then $\pi = (\frac{bc}{ab+ac+bc}, \frac{ac}{ab+ac+bc}, \frac{ab}{ab+ac+bc})$
```python
# An example with a=0.4, b=0.2, c=0.7
pi = np.array([.2*.7/(.4*.2+.2*.7+.4*.7), .4*.7/(.4*.2+.2*.7+.4*.7), .2*.4/(.4*.2+.2*.7+.4*.7)])
P = np.matrix([
[.6 , .4 , 0],
[0 , .8 , .2 ],
[.7 , 0 , .3]])
pi*P, pi
```
(matrix([[0.28, 0.56, 0.16]]), array([0.28, 0.56, 0.16]))
## 3.5
Consider a Markov chain with transition Matrix
$$P=\left(\begin{array}{cc}
0 & 1/4 & 0 & 0 & 3/4 \\
3/4 & 0 & 0 & 0 & 1/4\\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
1/4 & 3/4 & 0 & 0 & 0
\end{array}\right)$$
(a) Describe the set of the stationary distributions for the chain.
(b) Use technology to find $\lim_{n\to \infty}P^n$. Explain the long-term behavior of the chain.
(c) Explain why the chain does not have a limiting distribution, and why this does not contradict the existence of a limiting matrix as in (b)
### Answer
(a)
There is the same probability of catching into state 1,2 and 5 (because of the transition matrix is a double stochastic matrix), so the set of all stationary distributions are:
$(a,a,b,c,a)$ where 3a+b+c=1, although it is necessary to state that you can never reach any other state if you start at state b or c so if you start on this states then these are 1 since you do not move on those states.
```python
# (b)
P = np.matrix([[0 , 1/4 , 0 , 0 , 3/4],
[3/4 , 0 , 0 , 0 , 1/4],
[0, 0 , 1 , 0 , 0],
[0, 0 , 0 , 1 , 0],
[1/4 , 3/4 , 0 , 0 , 0]])
P**100
```
matrix([[0.33333333, 0.33333333, 0. , 0. , 0.33333333],
[0.33333333, 0.33333333, 0. , 0. , 0.33333333],
[0. , 0. , 1. , 0. , 0. ],
[0. , 0. , 0. , 1. , 0. ],
[0.33333333, 0.33333333, 0. , 0. , 0.33333333]])
This means that if the chain starts at state 3 or 4 it stays there forever, whereas if it starts at state 1, 2 or 5 it ends up at each of those three states with probability 1/3 in the long run (the restricted matrix on $\{1,2,5\}$ is doubly stochastic).
(c) There is no limiting distribution because the long-run behavior depends on the starting state. This does not contradict the existence of the limiting matrix in (b): each row of that matrix is the limiting distribution for one particular starting state. The chain simply splits into three distinct closed communicating classes.
## 3.6
Consider a Markov Chain with transition matrix
$$P=\left(\begin{array}{cc}
1/2 & 1/2 & 0 & 0 & .. \\
2/3 & 0 & 1/3 & 0 & ..\\
3/4 & 0 & 0 & 1/4 & .. \\
4/5 & 0 & 0 & 0 & .. \\
. & . & . & . & .
\end{array}\right)$$
defined by
$$ P_{i,j}= \left\{
\begin{array}{ll}
i/(i+1) & \text{if j = 1} \\
1/(i+1) & \text{if j = i+1} \\
0& \text{otherwise} \\
\end{array}
\right. $$
(a) Does the chain have an stationary distribution? If yes, exhibit its behavior, if no, explain why
(b) Classify the states of the chain
(c) Repeat part (a) with the row entries of $\textbf{P}$ switched. That is, let
$$ P_{i,j}= \left\{
\begin{array}{ll}
1/(i+1) & \text{if j = 1} \\
i/(i+1) & \text{if j = i+1} \\
0& \text{otherwise} \\
\end{array}
\right. $$
### Answer
(a) Yes. Since state $j+1$ can only be entered from state $j$, any stationary distribution must satisfy $\pi_{j+1} = \pi_j/(j+1)$, hence $\pi_j = \pi_1/j!$. Normalizing, $\pi_1\sum_{j\ge 1} 1/j! = \pi_1(e-1) = 1$, so $\pi_j = \frac{1}{(e-1)\,j!}$ for $j=1,2,\dots$
(b) The chain is irreducible (state 1 is reachable from every state and every state is reachable from state 1), aperiodic ($P_{1,1}=1/2>0$) and positive recurrent: the expected return time to state 1 is $\sum_{n\ge 0}P(T>n)=\sum_{n\ge 0}1/(n+1)! = e-1 < \infty$.
(c) With the rows switched, the same balance argument gives $\pi_{j+1} = \pi_j\cdot j/(j+1)$, i.e. $\pi_j = \pi_1/j$. Since $\sum_j 1/j$ diverges, no stationary distribution exists. The chain is still irreducible and recurrent, but only null recurrent: now $P(T>n)=1/(n+1)$, so the expected return time to state 1 is infinite.
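A numerical sanity check of (a), truncating the infinite chain at $N=20$ states (an approximation; the last state is sent back to state 1 so that the rows still sum to one):
```python
import numpy as np
from math import e, factorial

N = 20
P = np.zeros((N, N))
for i in range(1, N + 1):
    P[i - 1, 0] = i / (i + 1)        # jump back to state 1
    if i < N:
        P[i - 1, i] = 1 / (i + 1)    # climb to state i+1
P[N - 1, 0] = 1.0                    # truncation at state N

pi_numeric = np.linalg.matrix_power(P, 200)[0]   # any row of P^200 approximates pi
pi_theory = np.array([1 / ((e - 1) * factorial(j)) for j in range(1, N + 1)])
print(np.round(pi_numeric[:5], 5))
print(np.round(pi_theory[:5], 5))
```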
## 3.7
A Markov chain has n states. If the chain is at state k, a coin is flipped, whose heads probability is $p$. If the coin lands heads, the chain stays at $k$. if the coin lands tails, the chaiin moves to a different state uniformly at random. Exhibit the transition matrix and find the stationary distribution.
### Answer
Let's see the matrix:
$$P=\left(\begin{array}{cc}
p & q/n & q/n & q/n & .. \\
q/n & p & q/n & q/n & .. \\
q/n & q/n & p & q/n & .. \\
q/n & q/n & q/n & p & .. \\
.&.&.&.&.
\end{array}\right)$$
It is clear that the rows and columns are equal, hence, double stochastic. Then the stationary distribution is uniform.
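A small check with hypothetical values $n=4$ and $p=0.3$:
```python
import numpy as np

n, p = 4, 0.3
P = np.full((n, n), (1 - p) / (n - 1))
np.fill_diagonal(P, p)
pi = np.ones(n) / n                 # claimed stationary distribution
print(P.sum(axis=0))                # columns sum to 1 -> doubly stochastic
print(pi @ P, pi)                   # pi P equals pi
```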
| c9d1d0ca25f353ab2be72ffd60a665fbba8e12c3 | 12,665 | ipynb | Jupyter Notebook | Chapter03_py.ipynb | larispardo/StochasticProcessR | a2f8b6c41f2fe451629209317fc32f2c28e0e4ee | [
"MIT"
]
| null | null | null | Chapter03_py.ipynb | larispardo/StochasticProcessR | a2f8b6c41f2fe451629209317fc32f2c28e0e4ee | [
"MIT"
]
| null | null | null | Chapter03_py.ipynb | larispardo/StochasticProcessR | a2f8b6c41f2fe451629209317fc32f2c28e0e4ee | [
"MIT"
]
| null | null | null | 30.890244 | 314 | 0.471931 | true | 3,087 | Qwen/Qwen-72B | 1. YES
2. YES | 0.944177 | 0.810479 | 0.765235 | __label__eng_Latn | 0.96421 | 0.616231 |
```python
import os.path
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from traitlets import traitlets
from IPython.display import display
from ipywidgets import HBox, VBox, BoundedFloatText, BoundedIntText, Text, Layout, Button
```
# Processing demographic data
We have retrieved demographic data from the data portal of the **National Centre for Statistics & Information** of the *Sultanate of Oman* (https://data.gov.om/)
Data are formatted in a two-columns file, where<br>
* The __first__ column contains _years_.<br>
* The __second__ column, which must be named *Population*, contains integer numbers and reports the total population in the Muscat region per year [2010-2019]
```python
if os.path.exists("./playgrounds/data/JNNP_2020/Epidemiology_Oman.txt"):
epi_file = "./playgrounds/data/JNNP_2020/Epidemiology_Oman.txt"
elif not os.path.exists("./data/JNNP_2020/Epidemiology_Oman.txt"):
!git clone https://github.com/mazzalab/playgrounds.git
epi_file = "./playgrounds/data/JNNP_2020/Epidemiology_Oman.txt"
else:
epi_file = "./data/JNNP_2020/Epidemiology_Oman.txt"
df = pd.read_csv(epi_file, sep='\t', index_col=0)
print(df)
```
Population
Year
2010 407006
2011 418652
2012 435149
2013 451652
2014 470085
2015 487592
2016 512039
2017 528327
2018 543930
## Predicting Muscat population growth
The observed period of time starts in **2014** and ends in the year specified below.
```python
style = {'description_width': 'initial'}
simulation_end_text = BoundedIntText(
min=2015,
max=2100,
step=1,
value=2050,
description='Simulate from 2014 until:', style=style)
class GenerateTimeButton(Button):
def __init__(self, value=None, *args, **kwargs):
super(GenerateTimeButton, self).__init__(*args, **kwargs)
# Create the value attribute.
self.add_traits(value=traitlets.Any(value))
# Generate time period (list of years) to be simulated
def on_generate_time_button_clicked(button):
button.value = np.arange(2014, simulation_end_text.value+1, 1).reshape((-1, 1))
print("Simulation time points generated")
generate_time_button = GenerateTimeButton(
description="Generate",
button_style='info',
tooltip='Generate simulation time points'
)
generate_time_button.value=np.array([])
generate_time_button.on_click(on_generate_time_button_clicked)
hbox_time = HBox([simulation_end_text, generate_time_button])
display(hbox_time)
```
HBox(children=(BoundedIntText(value=2050, description='Simulate from 2014 until:', max=2100, min=2015, style=D…
Simulation time points generated
Linear regression analysis is conducted on demographic data using the *LinearRegression* module from the Python **sklearn** package. The typical linear regression equation: \begin{align}y & = mx + b\end{align} is fitted and *coefficient of determination ($r^2$)*, *intercept* ($b$) and *slope* ($m$) are inferred.
```python
if not generate_time_button.value.any():
simulation_end_text.value=2050
generate_time_button.click()
x_new = generate_time_button.value
#######################################
%matplotlib inline
x = df.index.values.reshape((-1, 1))
y = df.Population
model = LinearRegression().fit(x, y)
r_sq = model.score(x, y)
print(" ")
print('coefficient of determination:', np.round(r_sq, 3))
print('intercept (b):', np.round(model.intercept_, 3))
print('slope (m):', np.round(model.coef_, 3))
print('')
# Predict response of known data
y_pred = model.predict(x)
print('time:', np.transpose(x)[0], sep='\t\t\t\t')
print('predicted response:', np.around(y_pred), sep='\t\t')
# Plot outputs
axis_scale = 1e5
plt.scatter(x, y/axis_scale, color='black', label="act. population")
plt.plot(x, y_pred/axis_scale, color='blue',
linewidth=3, label="Interpolation")
plt.title('Actual vs Predicted')
plt.xlabel('years')
plt.ylabel('population growth (1e5)')
plt.legend(loc='upper left', frameon=False)
plt.savefig('linear_regression.svg', format='svg', dpi=600)
# Predict response of future data [2018-2020]
y_pred = np.round(model.predict(x_new))
print('predicted response [2018]', y_pred[4], sep='\t')
print('actual data [Dec. 2018]:\t{}'.format(df.iloc[8]['Population']))
print('predicted response [2019]', y_pred[5], sep='\t')
print('actual data [Dec. 2019]:\t{}'.format(567851))
print('predicted response [2020]', y_pred[6], sep='\t')
print('actual data [Feb. 2020]:\t{}'.format(570196))
```
### Theoretical response with plateau and inflection point at 2030, 2040 and 2050
Population growth was simulated considering a plateau in 2030, 2040 and 2050 through the exponential function: $y = Y_M-(Y_M-Y_0)\cdot(e^{-ax})$, where $Y_M$ is the maximum at which the plateau ends up, $Y_0$ is the starting population and $a$ is a rate constant that governs how fast it gets there. Here, $Y_0 = 5.44 \cdot 1e5$ (actual population in 2018), while $Y_M$ equals $7.57$, $9.35$ and $11.13$ ($\times 10^5$ individuals), as inferred by linear regression for 2030, 2040 and 2050, respectively.
```python
import math
axis_scale = 1e5
y_pred = model.predict(x_new)/axis_scale
fig = plt.figure(figsize=(3,3))
# Plot outputs
plt.scatter(x, y/axis_scale, color='black', label="actual", s=10)
plt.plot(x_new, y_pred, color='#984ea3', linewidth=3, label="linear")
def plateau(x, M, M0, a):
return M - (M-M0)*math.exp(-a*x)
y_pl = np.vectorize(plateau)
M_2018 = y_pred[np.where(x_new==2018)[0][0]]
M_2030 = y_pred[np.where(x_new==2030)[0][0]]
M_2040 = y_pred[np.where(x_new==2040)[0][0]]
M_2050 = y_pred[np.where(x_new==2050)[0][0]]
y_2030= y_pl(x=np.arange(0,33), M=M_2030, M0=M_2018, a=.08)
y_2040= y_pl(x=np.arange(0,33), M=M_2040, M0=M_2018, a=.045)
y_2050= y_pl(x=np.arange(0,33), M=M_2050, M0=M_2018, a=.035)
plt.plot(x_new[4:], y_2030, color='#377eb8',
linewidth=3, label="2030")
plt.plot(x_new[4:], y_2040, color='#ff7f00',
linewidth=3, label="2040")
plt.plot(x_new[4:], y_2050, color='#4daf4a',
linewidth=3, label="2050")
plt.axvline(x=2030, linewidth=.5, linestyle='dashdot', color="#377eb8", ymax=.40)
plt.axvline(x=2040, linewidth=.5, linestyle='dashdot', color="#ff7f00", ymax=.55)
plt.axvline(x=2050, linewidth=.5, linestyle='dashdot', color="#4daf4a", ymax=.72)
plt.xlabel('years')
plt.ylabel('population growth (1e5)')
plt.legend(loc='upper left', frameon=False)
plt.rcParams.update({'font.size': 11})
plt.savefig('fitted_vs_predicted.svg', format='svg', dpi=1200)
```
# Markov Chain design
A **discrete-time Markov chain (DTMC)** is designed to model the *death* event of HD patients and *onset* of the disease among inhabitants of Muscat. To do that, *birth/death* events were inferred from observational data collected from 2013 to 2019 in the Muscat population.
## Set the transition rates
The modeled process is stochastic and *memoryless*, in that predictions are made based solely on its present state. The process passes through three states: **Healthy**, **HD Alive** and **HD Dead** as a result of *birth* and *death* events, driven in turn by: <b>I</b> = incidence rate of the disease and <b>D</b> = death rate due to the disease.<br/>
where:<br/>
* $I_{avg}$:  the average HD incidence rate in the world population (min=**0.38** per 100,000 per year, [https://doi.org/10.1002/mds.25075]; max=**0.9** per 100,000 per year, [https://doi.org/10.1186/s12883-019-1556-3])
* $I_{Muscat}$: the actual HD incidence rate in Muscat (**0.56** per 100,000 per year [2013-2019])
* $D_{avg}$:  the HD death rate (min=**1.55** per million population registered in England-Wales in 1960-1973, [https://pubmed.ncbi.nlm.nih.gov/6233902/]; max=**2.27** per million population registered in USA in 1971-1978, [https://doi.org/10.1212/wnl.38.5.769])
* $D_{Muscat}$: the actual HD death rate in Muscat (**1.82** per million population [2013-2019])
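Putting these rates together, each simulation cell below applies, for every trace and every year $t$, the stochastic update
$$HD_{t+1} = HD_{t} + \mathrm{Poisson}\left(\frac{I \cdot pop_t}{10^5}\right) - \mathrm{Poisson}\left(\frac{D \cdot pop_t}{10^6}\right)$$
where $pop_t$ is the simulated Muscat population in year $t$, the first Poisson draw gives the number of new (incident) cases and the second the number of HD deaths.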
```python
# estimated incidence rate world-wide
## min = 0.38/year per 100,000 [https://doi.org/10.1002/mds.25075]
## max = 0.9/year per 100,000 [https://doi.org/10.1186/s12883-019-1556-3]
est_inc_rates = [0.38, 0.56, 0.7, 0.9]
# actual incidence rate (0.56 per 100,000 per year) ~ 3 new HD patients in Muscat in 2018 [https://data.gov.om/OMPOP2016/population?indicator=1000140®ion=1000020-muscat&nationality=1000010-omani]
act_inc_rate = 0.56
# estimated death rate world-wide
## min = 1.55 per million (England-Wales in 1960-1973, https://www.ncbi.nlm.nih.gov/pubmed/6233902)
## max = 2.27 per million (United States in 1971-1978, https://doi.org/10.1212/wnl.38.5.769)
est_death_rates = [1.55, 1.819, 2.27]
# actual death rate ~ 1 patient in 2018 per 100.000 per year in Muscat
act_death_rate = 1.819
# starting HD individuals in Muscat in 2018 (32)
hd_2018 = 32
```
### Set the number of simulation traces
In order to approximate the posterior distribution of the HD prevalence, one performs a *Monte Carlo* simulation with $1000$ independent simulation runs, from which the *average number of incident cases* per year is calculated.
```python
simulation_traces = BoundedIntText(
min=1,
max=5000,
step=1,
value=1000,
description='Simulation traces:', style=style)
display(simulation_traces)
```
BoundedIntText(value=1000, description='Simulation traces:', max=5000, min=1, style=DescriptionStyle(descripti…
### Initialize state vectors
For each simulation step, $6$ vectors are updated:<br/>
* ***est_inc*** stores the variation of incidence over time according to $I_{avg}$
* ***act_inc*** stores the variation of incidence over time according to $I_{Muscat}$
* ***est_death*** stores the numbers of deaths over time according to $D_{avg}$
* ***act_death*** stores the numbers of deaths over time according to $D_{Muscat}$
* ***est_alive_HD*** stores the number of alive HD patients over time, based on ***est_inc*** and ***est_death***
* ***act_alive_HD*** stores the number of alive HD patients over time, based on ***act_inc*** and ***act_death***
```python
# record the number of simulation steps
sim_time = len(x_new[4:])
# get the number of simulation traces from the textfield above
num_traces = simulation_traces.value
# get number of possible Iavg and Davg
num_iavg = len(est_inc_rates)
num_davg = len(est_death_rates)
est_inc = np.zeros((num_iavg, num_davg, num_traces, sim_time), dtype=int)
act_inc = np.zeros((num_traces, sim_time), dtype=int)
est_death = np.zeros((num_iavg, num_davg, num_traces, sim_time), dtype=int)
act_death = np.zeros((num_traces, sim_time), dtype=int)
est_alive_HD = np.zeros((num_iavg, num_davg, num_traces, sim_time), dtype=int)
est_alive_HD[:, :, :, 0] = hd_2018
act_alive_HD = np.zeros((num_traces, sim_time), dtype=int)
act_alive_HD[:, 0] = hd_2018
```
### Trigger the Monte Carlo simulation
All combinations of the candidate estimates of $I_{avg}$ and $D_{avg}$ are considered here and, for each combination, 1000 independent simulations will be launched. For each simulation and for each time step, the number of alive HD patients is calculated as the sum of the number of *currently alive* HD patients and that of *new* HD patients, minus the number of currently *deceased* patients. This computation is performed for:
#### Linear estimate (regression)
```python
for iavg_idx, iavg_value in enumerate(est_inc_rates):
for davg_idx, davg_value in enumerate(est_death_rates):
print("Evaluating Iavg={} and Davg={}".format(iavg_value, davg_value))
for rep in range(num_traces):
for t in range(sim_time-1):
curr_pop = y_pred[t]*100000 # <-------
if(act_inc[rep, t] == 0):
act_inc[rep, t] = np.random.poisson((act_inc_rate * curr_pop)/100000)
act_death[rep, t] = np.random.poisson((act_death_rate * curr_pop)/1000000)
act_alive_HD[rep, t+1] = act_alive_HD[rep, t] - act_death[rep, t] + act_inc[rep, t]
est_inc[iavg_idx, davg_idx, rep, t] = np.random.poisson(
(iavg_value * curr_pop)/100000)
est_death[iavg_idx, davg_idx, rep, t] = np.random.poisson(
(davg_value * curr_pop)/1000000)
est_alive_HD[iavg_idx, davg_idx, rep, t+1] = est_alive_HD[iavg_idx, davg_idx, rep, t] - \
est_death[iavg_idx, davg_idx, rep, t] + est_inc[iavg_idx, davg_idx, rep, t]
```
Evaluating Iavg=0.38 and Davg=1.55
Evaluating Iavg=0.38 and Davg=1.819
Evaluating Iavg=0.38 and Davg=2.27
Evaluating Iavg=0.56 and Davg=1.55
Evaluating Iavg=0.56 and Davg=1.819
Evaluating Iavg=0.56 and Davg=2.27
Evaluating Iavg=0.7 and Davg=1.55
Evaluating Iavg=0.7 and Davg=1.819
Evaluating Iavg=0.7 and Davg=2.27
Evaluating Iavg=0.9 and Davg=1.55
Evaluating Iavg=0.9 and Davg=1.819
Evaluating Iavg=0.9 and Davg=2.27
##### Plot HD population predicted to be alive
N.b., __est. HD population__ accounts for the population that is predicted to be alive every year according to *incidence* and *death* rates taken from the world averages; __act. HD population__ refers instead to the actual rates calculated directly on the actual Muscat population records, as previously described.
A random plot (i.e., a random trace among 1000) will be generated for each combination of $I_{avg}$ and $D_{avg}$.
```python
# Select a random trace
rtrace = np.random.randint(0,num_traces)
# Plot a random trace of est/act. alive HD cases in the Muscat region
# for each combination of Iavg and Davg
fig, ax = plt.subplots(len(est_inc_rates), len(est_death_rates), figsize=(12, 9))
for iavg_idx, iavg_value in enumerate(est_inc_rates):
for davg_idx, davg_value in enumerate(est_death_rates):
ax[iavg_idx, davg_idx].plot(x_new[4:], est_alive_HD[iavg_idx, davg_idx, rtrace, :],
lw=2, label='est. {}, {}'.format(iavg_value, davg_value))
ax[iavg_idx, davg_idx].plot(x_new[4:], act_alive_HD[rtrace, :],
lw=2, label='act. {}, {}'.format(act_inc_rate, str(round(act_death_rate, 2))))
ax[iavg_idx, davg_idx].legend(loc='upper left', frameon=False)
plt.savefig('alive_population.svg', format='svg', dpi=1200)
```
#### Inflection at 2030
```python
# record the number of simulation steps
sim_time_2030 = len(y_2030)
# get the number of simulation traces from the textfield above
num_traces_2030 = simulation_traces.value
# get number of possible Iavg and Davg
num_iavg_2030 = len(est_inc_rates)
num_davg_2030 = len(est_death_rates)
est_inc_2030 = np.zeros((num_iavg_2030, num_davg_2030, num_traces_2030, sim_time_2030), dtype=int)
act_inc_2030 = np.zeros((num_traces_2030, sim_time_2030), dtype=int)
est_death_2030 = np.zeros((num_iavg_2030, num_davg_2030, num_traces_2030, sim_time_2030), dtype=int)
act_death_2030 = np.zeros((num_traces_2030, sim_time_2030), dtype=int)
est_alive_HD_2030 = np.zeros((num_iavg_2030, num_davg_2030, num_traces_2030, sim_time_2030), dtype=int)
est_alive_HD_2030[:, :, :, 0] = hd_2018
act_alive_HD_2030 = np.zeros((num_traces_2030, sim_time_2030), dtype=int)
act_alive_HD_2030[:, 0] = hd_2018
```
```python
for iavg_idx, iavg_value in enumerate(est_inc_rates):
for davg_idx, davg_value in enumerate(est_death_rates):
print("Evaluating Iavg={} and Davg={}".format(iavg_value, davg_value))
for rep in range(num_traces_2030):
for t in range(sim_time_2030-1):
curr_pop = y_2030[t]*100000 # <-------
if(act_inc_2030[rep, t] == 0):
act_inc_2030[rep, t] = np.random.poisson((act_inc_rate * curr_pop)/100000)
act_death_2030[rep, t] = np.random.poisson((act_death_rate * curr_pop)/1000000)
act_alive_HD_2030[rep, t+1] = act_alive_HD_2030[rep, t] - act_death_2030[rep, t] + act_inc_2030[rep, t]
est_inc_2030[iavg_idx, davg_idx, rep, t] = np.random.poisson(
(iavg_value * curr_pop)/100000)
est_death_2030[iavg_idx, davg_idx, rep, t] = np.random.poisson(
(davg_value * curr_pop)/1000000)
est_alive_HD_2030[iavg_idx, davg_idx, rep, t+1] = est_alive_HD_2030[iavg_idx, davg_idx, rep, t] - \
est_death_2030[iavg_idx, davg_idx, rep, t] + est_inc_2030[iavg_idx, davg_idx, rep, t]
```
Evaluating Iavg=0.38 and Davg=1.55
Evaluating Iavg=0.38 and Davg=1.819
Evaluating Iavg=0.38 and Davg=2.27
Evaluating Iavg=0.56 and Davg=1.55
Evaluating Iavg=0.56 and Davg=1.819
Evaluating Iavg=0.56 and Davg=2.27
Evaluating Iavg=0.7 and Davg=1.55
Evaluating Iavg=0.7 and Davg=1.819
Evaluating Iavg=0.7 and Davg=2.27
Evaluating Iavg=0.9 and Davg=1.55
Evaluating Iavg=0.9 and Davg=1.819
Evaluating Iavg=0.9 and Davg=2.27
##### Plot HD population predicted to be alive
```python
# Select a random trace
rtrace = np.random.randint(0,num_traces_2030)
# Plot a random trace of est/act. alive HD cases in the Muscat region
# for each combination of Iavg and Davg
fig, ax = plt.subplots(len(est_inc_rates), len(est_death_rates), figsize=(12, 9))
for iavg_idx, iavg_value in enumerate(est_inc_rates):
for davg_idx, davg_value in enumerate(est_death_rates):
ax[iavg_idx, davg_idx].plot(x_new[4:], est_alive_HD_2030[iavg_idx, davg_idx, rtrace, :],
lw=2, label='est. {}, {}'.format(iavg_value, davg_value))
ax[iavg_idx, davg_idx].plot(x_new[4:], act_alive_HD_2030[rtrace, :],
lw=2, label='act. {}, {}'.format(act_inc_rate, str(round(act_death_rate, 2))))
ax[iavg_idx, davg_idx].legend(loc='upper left', frameon=False)
plt.savefig('alive_population_y2030.svg', format='svg', dpi=1200)
```
#### Inflection at 2040
```python
# record the number of simulation steps
sim_time_2040 = len(y_2040)
# get the number of simulation traces from the textfield above
num_traces_2040 = simulation_traces.value
# get number of possible Iavg and Davg
num_iavg_2040 = len(est_inc_rates)
num_davg_2040 = len(est_death_rates)
est_inc_2040 = np.zeros((num_iavg_2040, num_davg_2040, num_traces_2040, sim_time_2040), dtype=int)
act_inc_2040 = np.zeros((num_traces_2040, sim_time_2040), dtype=int)
est_death_2040 = np.zeros((num_iavg_2040, num_davg_2040, num_traces_2040, sim_time_2040), dtype=int)
act_death_2040 = np.zeros((num_traces_2040, sim_time_2040), dtype=int)
est_alive_HD_2040 = np.zeros((num_iavg_2040, num_davg_2040, num_traces_2040, sim_time_2040), dtype=int)
est_alive_HD_2040[:, :, :, 0] = hd_2018
act_alive_HD_2040 = np.zeros((num_traces_2040, sim_time_2040), dtype=int)
act_alive_HD_2040[:, 0] = hd_2018
```
```python
for iavg_idx, iavg_value in enumerate(est_inc_rates):
for davg_idx, davg_value in enumerate(est_death_rates):
print("Evaluating Iavg={} and Davg={}".format(iavg_value, davg_value))
for rep in range(num_traces_2040):
for t in range(sim_time_2040-1):
curr_pop = y_2040[t]*100000 # <-------
if(act_inc_2040[rep, t] == 0):
act_inc_2040[rep, t] = np.random.poisson((act_inc_rate * curr_pop)/100000)
act_death_2040[rep, t] = np.random.poisson((act_death_rate * curr_pop)/1000000)
act_alive_HD_2040[rep, t+1] = act_alive_HD_2040[rep, t] - act_death_2040[rep, t] + act_inc_2040[rep, t]
est_inc_2040[iavg_idx, davg_idx, rep, t] = np.random.poisson(
(iavg_value * curr_pop)/100000)
est_death_2040[iavg_idx, davg_idx, rep, t] = np.random.poisson(
(davg_value * curr_pop)/1000000)
est_alive_HD_2040[iavg_idx, davg_idx, rep, t+1] = est_alive_HD_2040[iavg_idx, davg_idx, rep, t] - \
est_death_2040[iavg_idx, davg_idx, rep, t] + est_inc_2040[iavg_idx, davg_idx, rep, t]
```
Evaluating Iavg=0.38 and Davg=1.55
Evaluating Iavg=0.38 and Davg=1.819
Evaluating Iavg=0.38 and Davg=2.27
Evaluating Iavg=0.56 and Davg=1.55
Evaluating Iavg=0.56 and Davg=1.819
Evaluating Iavg=0.56 and Davg=2.27
Evaluating Iavg=0.7 and Davg=1.55
Evaluating Iavg=0.7 and Davg=1.819
Evaluating Iavg=0.7 and Davg=2.27
Evaluating Iavg=0.9 and Davg=1.55
Evaluating Iavg=0.9 and Davg=1.819
Evaluating Iavg=0.9 and Davg=2.27
##### Plot HD population predicted to be alive
```python
# Select a random trace
rtrace = np.random.randint(0,num_traces_2040)
# Plot a random trace of est/act. alive HD cases in the Muscat region
# for each combination of Iavg and Davg
fig, ax = plt.subplots(len(est_inc_rates), len(est_death_rates), figsize=(12, 9))
for iavg_idx, iavg_value in enumerate(est_inc_rates):
for davg_idx, davg_value in enumerate(est_death_rates):
ax[iavg_idx, davg_idx].plot(x_new[4:], est_alive_HD_2040[iavg_idx, davg_idx, rtrace, :],
lw=2, label='est. {}, {}'.format(iavg_value, davg_value))
ax[iavg_idx, davg_idx].plot(x_new[4:], act_alive_HD_2040[rtrace, :],
lw=2, label='act. {}, {}'.format(act_inc_rate, str(round(act_death_rate, 2))))
ax[iavg_idx, davg_idx].legend(loc='upper left', frameon=False)
plt.savefig('alive_population_y2040.svg', format='svg', dpi=1200)
```
#### Inflection at 2050
```python
# record the number of simulation steps
sim_time_2050 = len(y_2050)
# get the number of simulation traces from the textfield above
num_traces_2050 = simulation_traces.value
# get number of possible Iavg and Davg
num_iavg_2050 = len(est_inc_rates)
num_davg_2050 = len(est_death_rates)
est_inc_2050 = np.zeros((num_iavg_2050, num_davg_2050, num_traces_2050, sim_time_2050), dtype=int)
act_inc_2050 = np.zeros((num_traces_2050, sim_time_2050), dtype=int)
est_death_2050 = np.zeros((num_iavg_2050, num_davg_2050, num_traces_2050, sim_time_2050), dtype=int)
act_death_2050 = np.zeros((num_traces_2050, sim_time_2050), dtype=int)
est_alive_HD_2050 = np.zeros((num_iavg_2050, num_davg_2050, num_traces_2050, sim_time_2050), dtype=int)
est_alive_HD_2050[:, :, :, 0] = hd_2018
act_alive_HD_2050 = np.zeros((num_traces_2050, sim_time_2050), dtype=int)
act_alive_HD_2050[:, 0] = hd_2018
```
```python
for iavg_idx, iavg_value in enumerate(est_inc_rates):
for davg_idx, davg_value in enumerate(est_death_rates):
print("Evaluating Iavg={} and Davg={}".format(iavg_value, davg_value))
for rep in range(num_traces_2050):
for t in range(sim_time_2050-1):
curr_pop = y_2050[t]*100000 # <-------
if(act_inc_2050[rep, t] == 0):
act_inc_2050[rep, t] = np.random.poisson((act_inc_rate * curr_pop)/100000)
act_death_2050[rep, t] = np.random.poisson((act_death_rate * curr_pop)/1000000)
act_alive_HD_2050[rep, t+1] = act_alive_HD_2050[rep, t] - act_death_2050[rep, t] + act_inc_2050[rep, t]
est_inc_2050[iavg_idx, davg_idx, rep, t] = np.random.poisson(
(iavg_value * curr_pop)/100000)
est_death_2050[iavg_idx, davg_idx, rep, t] = np.random.poisson(
(davg_value * curr_pop)/1000000)
est_alive_HD_2050[iavg_idx, davg_idx, rep, t+1] = est_alive_HD_2050[iavg_idx, davg_idx, rep, t] - \
est_death_2050[iavg_idx, davg_idx, rep, t] + est_inc_2050[iavg_idx, davg_idx, rep, t]
```
Evaluating Iavg=0.38 and Davg=1.55
Evaluating Iavg=0.38 and Davg=1.819
Evaluating Iavg=0.38 and Davg=2.27
Evaluating Iavg=0.56 and Davg=1.55
Evaluating Iavg=0.56 and Davg=1.819
Evaluating Iavg=0.56 and Davg=2.27
Evaluating Iavg=0.7 and Davg=1.55
Evaluating Iavg=0.7 and Davg=1.819
Evaluating Iavg=0.7 and Davg=2.27
Evaluating Iavg=0.9 and Davg=1.55
Evaluating Iavg=0.9 and Davg=1.819
Evaluating Iavg=0.9 and Davg=2.27
##### Plot HD population predicted to be alive
```python
# Select a random trace
rtrace = np.random.randint(0,num_traces_2050)
# Plot a random trace of est/act. alive HD cases in the Muscat region
# for each combination of Iavg and Davg
fig, ax = plt.subplots(len(est_inc_rates), len(est_death_rates), figsize=(12, 9))
for iavg_idx, iavg_value in enumerate(est_inc_rates):
for davg_idx, davg_value in enumerate(est_death_rates):
ax[iavg_idx, davg_idx].plot(x_new[4:], est_alive_HD_2050[iavg_idx, davg_idx, rtrace, :],
lw=2, label='est. {}, {}'.format(iavg_value, davg_value))
ax[iavg_idx, davg_idx].plot(x_new[4:], act_alive_HD_2050[rtrace, :],
lw=2, label='act. {}, {}'.format(act_inc_rate, str(round(act_death_rate, 2))))
ax[iavg_idx, davg_idx].legend(loc='upper left', frameon=False)
plt.savefig('alive_population_y2050.svg', format='svg', dpi=1200)
```
# Calculate average estimates of prevalence over traces
The **prevalence** of HD is calculated year by year until the end of simulation and for each simulation trace.
```python
# act_alive_HD_mu = np.around(np.mean(act_alive_HD, axis=0) / y_pred, 2)
# act_alive_HD_sigma = np.around(np.std(act_alive_HD, axis=0) / y_pred, 2)
# act_alive_HD_2030_mu = np.around(np.mean(act_alive_HD_2030 / y_2030, axis=0), 2)
# act_alive_HD_2030_sigma = np.around(np.std(act_alive_HD_2030 / y_2030, axis=0), 2)
# act_alive_HD_2040_mu = np.around(np.mean(act_alive_HD_2040 / y_2040, axis=0), 2)
# act_alive_HD_2040_sigma = np.around(np.std(act_alive_HD_2040 / y_2040, axis=0), 2)
# act_alive_HD_2050_mu = np.around(np.mean(act_alive_HD_2050 / y_2050, axis=0), 2)
# act_alive_HD_2050_sigma = np.around(np.std(act_alive_HD_2050 / y_2050, axis=0), 2)
# Iavg = [0.38, 0.56, 0.7, 0.9]
# Davg = [1.55, 1.819, 2.27]
# Select simulations with Iavg=0.56 (index=1) and Davg=2.27 (index=2) estimates
est_alive_HD_mu = np.around(np.mean(est_alive_HD[1,2,:,:], axis=0) / y_pred[4:], 2)
est_alive_HD_sigma = np.around(np.std(est_alive_HD[1,2,:,:], axis=0) / y_pred[4:], 2)
est_alive_HD_2030_mu = np.around(np.mean(est_alive_HD_2030[1,2,:,:]/ y_2030, axis=0), 2)
est_alive_HD_2030_sigma = np.around(np.std(est_alive_HD_2030[1,2,:,:]/ y_2030, axis=0), 2)
est_alive_HD_2040_mu = np.around(np.mean(est_alive_HD_2040[1,2,:,:]/ y_2040, axis=0), 2)
est_alive_HD_2040_sigma = np.around(np.std(est_alive_HD_2040[1,2,:,:]/ y_2040, axis=0), 2)
est_alive_HD_2050_mu = np.around(np.mean(est_alive_HD_2050[1,2,:,:]/ y_2050, axis=0), 2)
est_alive_HD_2050_sigma = np.around(np.std(est_alive_HD_2050[1,2,:,:]/ y_2050, axis=0), 2)
print("Linear regression (est.): {}".format(est_alive_HD_mu))
# print("Linear regression (act.): {}\n".format(act_alive_HD_mu))
print("flection at 2030 (est.): {}".format(est_alive_HD_2030_mu))
# print("flection at 2030 (act.): {}\n".format(act_alive_HD_2030_mu))
print("flection at 2040 (est.): {}".format(est_alive_HD_2040_mu))
# print("flection at 2040 (act.): {}\n".format(act_alive_HD_2040_mu))
print("flection at 2050 (est.): {}".format(est_alive_HD_2050_mu))
# print("flection at 2050 (act.): {}".format(act_alive_HD_2050_mu))
```
Linear regression (est.): [ 5.88 5.98 6.08 6.19 6.29 6.38 6.49 6.6 6.73 6.85 6.97 7.11
7.22 7.34 7.49 7.61 7.75 7.89 8.02 8.16 8.32 8.46 8.61 8.76
8.91 9.05 9.19 9.33 9.46 9.6 9.74 9.89 10.04]
flection at 2030 (est.): [ 5.88 6.05 6.21 6.4 6.59 6.81 7.01 7.24 7.45 7.68 7.9 8.14
8.41 8.66 8.91 9.16 9.44 9.72 9.99 10.27 10.54 10.83 11.11 11.39
11.68 11.98 12.27 12.57 12.87 13.18 13.49 13.81 14.11]
flection at 2040 (est.): [ 5.88 6.02 6.17 6.32 6.49 6.68 6.86 7.06 7.26 7.46 7.66 7.88
8.1 8.31 8.52 8.75 8.99 9.22 9.46 9.69 9.92 10.16 10.41 10.64
10.89 11.14 11.39 11.63 11.9 12.17 12.42 12.69 12.95]
flection at 2050 (est.): [ 5.88 6. 6.13 6.28 6.44 6.6 6.77 6.92 7.1 7.27 7.46 7.65
7.83 8.02 8.21 8.39 8.59 8.8 9.01 9.23 9.45 9.68 9.91 10.13
10.36 10.58 10.81 11.05 11.29 11.52 11.76 12. 12.25]
### Plot prevalence estimates
Calculate the **average prevalence** values ($\mu$) for each year and over all simulation traces, together with the **standard deviation** ($\sigma$) values.<br/>
Make a line plot with bands, with *years* on the X-axis and the *prevalence* values on the Y-axis.
```python
def plot_double_prevalence(splot_idx: int, ax, x_vector: list, y1_vector_mu: list, y1_vector_sigma:
list, y2_vector_mu: list, y2_vector_sigma: list,
enable_y_axis_label: bool = False):
markers_on = [len(y1_vector_mu)-1]
ax[splot_idx].fill_between(x_vector.flatten(), y1_vector_mu+y1_vector_sigma, y1_vector_mu-y1_vector_sigma,
facecolor='#377eb8', alpha=0.1)
ax[splot_idx].fill_between(x_vector.flatten(), y2_vector_mu+y2_vector_sigma, y2_vector_mu-y2_vector_sigma,
facecolor='#ff7f00', alpha=0.1)
ax[splot_idx].plot(x_vector, y1_vector_mu, '-gD', lw=2,
label=r'$I_{avg}=0.56$, $D_{avg}=1.82$', markevery=markers_on, color='#377eb8')
ax[splot_idx].set_xticks([2014, 2015, 2025, 2035, 2045], minor=True)
ax[splot_idx].plot(x_vector, y2_vector_mu, '-gD', lw=2,
label=r'$I_{avg}=0.56$, $D_{avg}=2.27$', markevery=markers_on, color="#ff7f00")
if enable_y_axis_label:
ax[splot_idx].set_ylabel('Est. prevalence')
ax[splot_idx].set_xticks([2014, 2015, 2025, 2035, 2045], minor=True)
ax[splot_idx].spines['top'].set_visible(False)
ax[splot_idx].spines['right'].set_visible(False)
ax[splot_idx].spines['bottom'].set_visible(True)
ax[splot_idx].spines['left'].set_visible(True)
ax[splot_idx].annotate(y1_vector_mu[-1],
(2050, y1_vector_mu[-1]),
textcoords="offset points",
xytext=(-30, 10),
ha='left')
ax[splot_idx].annotate(y2_vector_mu[-1],
(2050, y2_vector_mu[-1]),
textcoords="offset points",
xytext=(-30, -20),
ha='left')
def plot_prevalence(ax, x_vector: list, y1_vector_mu: list, y1_vector_sigma: list,
y2_vector_mu: list, y2_vector_sigma: list,
y3_vector_mu: list, y3_vector_sigma: list,
y4_vector_mu: list, y4_vector_sigma: list):
markers_on = [len(y1_vector_mu)-1]
ax.fill_between(x_vector.flatten(), y1_vector_mu+y1_vector_sigma,
y1_vector_mu-y1_vector_sigma, facecolor='#377eb8', alpha=0.1)
ax.plot(x_vector, y1_vector_mu, '-gD', lw=2, label=r'2030',
markevery=markers_on, color="#377eb8")
ax.fill_between(x_vector.flatten(), y2_vector_mu+y2_vector_sigma,
y2_vector_mu-y2_vector_sigma, facecolor='#ff7f00', alpha=0.1)
ax.plot(x_vector, y2_vector_mu, '-gD', lw=2, label=r'2040',
markevery=markers_on, color="#ff7f00")
ax.fill_between(x_vector.flatten(), y3_vector_mu+y3_vector_sigma,
y3_vector_mu-y3_vector_sigma, facecolor='#4daf4a', alpha=0.1)
ax.plot(x_vector, y3_vector_mu, '-gD', lw=2, label=r'2050',
markevery=markers_on, color="#4daf4a")
ax.fill_between(x_vector.flatten(), y4_vector_mu+y4_vector_sigma,
y4_vector_mu-y4_vector_sigma, facecolor='#984ea3', alpha=0.1)
ax.plot(x_vector, y4_vector_mu, '-gD', lw=2, label=r'linear',
markevery=markers_on, color="#984ea3")
ax.set_ylabel('Est. prevalence')
ax.set_xticks([2014, 2015, 2025, 2035, 2045], minor=True)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(True)
ax.spines['left'].set_visible(True)
ax.annotate(y1_vector_mu[-1],
(2050, y1_vector_mu[-1]),
textcoords="offset points",
xytext=(10, 0),
ha='left')
ax.annotate(y2_vector_mu[-1],
(2050, y2_vector_mu[-1]),
textcoords="offset points",
xytext=(10, 0),
ha='left')
ax.annotate(y3_vector_mu[-1],
(2050, y3_vector_mu[-1]),
textcoords="offset points",
xytext=(10, -3),
ha='left')
ax.annotate(y4_vector_mu[-1],
(2050, y4_vector_mu[-1]),
textcoords="offset points",
xytext=(10, -5),
ha='left')
fig, ax = plt.subplots(1, 1, figsize=(2.5, 3))
plot_prevalence(ax, x_new[4:], est_alive_HD_2030_mu, est_alive_HD_2030_sigma, est_alive_HD_2040_mu, est_alive_HD_2040_sigma,
est_alive_HD_2050_mu, est_alive_HD_2050_sigma, est_alive_HD_mu, est_alive_HD_sigma)
lines, labels = fig.axes[-1].get_legend_handles_labels()
fig.legend(lines, labels, bbox_to_anchor=(
0.2, 0.9), loc='upper left', frameon=False)
fig.savefig('prevalence_est_I056_D227.svg', format='svg', dpi=1200)
```
### Generate prevalence tables
```python
prev_df = pd.DataFrame({'Years':x_new[4:].flatten(),
'Linear (avg)':est_alive_HD_mu, 'Linear (std)':est_alive_HD_sigma,
'2030 (avg)':est_alive_HD_2030_mu, '2030 (std)':est_alive_HD_2030_sigma,
'2040 (avg)':est_alive_HD_2040_mu, '2040 (std)':est_alive_HD_2040_sigma,
'2050 (avg)':est_alive_HD_2050_mu, '2050 (std)':est_alive_HD_2050_sigma
})
prev_df.to_excel("prevalence_estimates.xlsx", index=False)
print(prev_df.head())
```
Years Linear (avg) Linear (std) 2030 (avg) 2030 (std) 2040 (avg) \
0 2018 5.88 0.00 5.88 0.00 5.88
1 2019 5.96 0.33 6.02 0.37 6.03
2 2020 6.05 0.48 6.17 0.52 6.16
3 2021 6.14 0.56 6.35 0.63 6.32
4 2022 6.25 0.64 6.54 0.71 6.51
2040 (std) 2050 (avg) 2050 (std)
0 0.00 5.88 0.00
1 0.37 5.99 0.36
2 0.52 6.13 0.51
3 0.62 6.26 0.61
4 0.71 6.39 0.71
### Miscellaneous plots
Create two plots of the **frequency** of *adult-* and *juvenile-onset* HD subjects and of *at-risk* subjects.
```python
bars_file = "./data/JNNP_2020/Bar_plots.xlsx"
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(6, 2.5), gridspec_kw={'width_ratios': [2, 1]})
######## Panel A ########
dfA = pd.read_excel(bars_file, sheet_name="Figure A") #index_col=0,
print(dfA)
print("")
labels = dfA.Patients
x = np.arange(len(labels))
adult_onset = dfA.iloc[:,1]
juvenile_onset = dfA.iloc[:,2]
width = 0.35*2 # the width of the bars
rects1 = ax1.bar(x, adult_onset, width, label='Adult onset',
color='white', edgecolor='black')
rects2 = ax1.bar(x, juvenile_onset, width, label='Juvenile onset',
color='lightgray', edgecolor='black', hatch="//////")
ax1.set_ylabel('HD subjects')
ax1.set_xticks(x)
ax1.set_yticks([10, 30, 50], minor=True)
ax1.set_xticklabels(labels)
ax1.legend(loc='upper center', bbox_to_anchor=(0.5, 1.25),ncol=2, prop={"size":8}, frameon=False)
ax1.spines['top'].set_visible(False)
ax1.spines['right'].set_visible(False)
ax1.spines['bottom'].set_visible(True)
ax1.spines['left'].set_visible(True)
######## Panel B ########
dfB = pd.read_excel(bars_file, sheet_name="Figure B")
print(dfB)
print("")
labels = dfB.Patients
more50 = dfB.iloc[:,1]
less50 = dfB.iloc[:,2]
x = np.arange(len(labels))
width = 0.35*2
rects1 = ax2.bar(x, more50, width, label='>50% risk',
color='white', edgecolor='black')
rects2 = ax2.bar(x, less50, width, label='≤50% risk',
color='lightgray', edgecolor='black', hatch="xXX")
ax2.set_ylabel('At-risk subjects')
ax2.set_xticks(x)
ax2.set_xticklabels(labels)
ax2.legend(loc='upper center', bbox_to_anchor=(0.5, 1.25),ncol=2, prop={"size":8}, frameon=False)
ax2.spines['top'].set_visible(False)
ax2.spines['right'].set_visible(False)
ax2.spines['bottom'].set_visible(True)
ax2.spines['left'].set_visible(True)
##### Plotting #####
fig.tight_layout()
plt.rcParams.update({'font.size': 10})
plt.show()
def on_save_misc_button(but):
fig.savefig('misc_plots.svg', format='svg', dpi=1200)
print('Figure saved')
save_misc_button = Button(
description="Save SVG",
button_style='info',
tooltip='Save to SVG file'
)
save_misc_button.on_click(on_save_misc_button)
display(save_misc_button)
```
# Print system and required packages information
```python
%load_ext watermark
%watermark -v -m -p numpy,pandas,matplotlib,sklearn,traitlets,IPython,ipywidgets
# date
print(" ")
%watermark -u -n -t -z
```
CPython 3.7.6
IPython 7.12.0
numpy 1.18.1
pandas 1.0.1
matplotlib 3.1.3
sklearn 0.22.1
traitlets 4.3.3
IPython 7.12.0
ipywidgets 7.5.1
compiler : MSC v.1916 64 bit (AMD64)
system : Windows
release : 10
machine : AMD64
processor : Intel64 Family 6 Model 60 Stepping 3, GenuineIntel
CPU cores : 8
interpreter: 64bit
last updated: Sun May 24 2020 12:30:06 W. Europe Summer Time
| c058443d4c98d3b6592730ba734fb386b6896500 | 543,501 | ipynb | Jupyter Notebook | HD_prevalence_JNNP_2020.ipynb | mazzalab/playgrounds | ead719e92abe1f20e6d83c25d61aebbd24ae1663 | [
"MIT"
]
| null | null | null | HD_prevalence_JNNP_2020.ipynb | mazzalab/playgrounds | ead719e92abe1f20e6d83c25d61aebbd24ae1663 | [
"MIT"
]
| null | null | null | HD_prevalence_JNNP_2020.ipynb | mazzalab/playgrounds | ead719e92abe1f20e6d83c25d61aebbd24ae1663 | [
"MIT"
]
| 1 | 2021-04-26T18:04:48.000Z | 2021-04-26T18:04:48.000Z | 377.43125 | 104,444 | 0.924898 | true | 12,101 | Qwen/Qwen-72B | 1. YES
2. YES | 0.865224 | 0.841826 | 0.728368 | __label__eng_Latn | 0.3971 | 0.530574 |
# Computer lab 2 - Automatic control 2
$ \newcommand{\mexp}[1]{\mathrm{e}^{#1}} $
$ \newcommand{\transp}{ ^{\mathrm{T}} }$
## Preparations
### Exercise 1 - the spectral factorization theorem
Determine a filter
\begin{equation}
H(z) = \frac{b}{z+a}
\end{equation}
that generates a signal with spectral density
\begin{equation}
\phi(\omega) = \frac{0.75}{1.25 - \cos\omega}
\end{equation}
Hint: Set up a state-space model and use the Lyapunov equation to find the variance.
\begin{align}
x(k+1) &= -a x(k) + bu(k)\\
y(k) &= x(k)
\end{align}
The covariance
\begin{equation}
P(k) = E\tilde{x}(k)\tilde{x}\transp(k) = E \big(x(k) - m(k)\big)\big(x^{\mathrm{T}}(k) - m^{\mathrm{T}}(k)\big)
\end{equation}
is governed by the difference equation
\begin{equation}
P(k+1) = \Phi P(k) \Phi^{\mathrm{T}} + b^2R_1 = a^2P(k) + b^2,
\end{equation}
with initial condition
\begin{equation}
P(0) = r_0.
\end{equation}
The solution is
\begin{align}
P(1) &= a^2r_0 + b^2\\
P(2) &= a^2P(1) + b^2 = a^4r_0 + a^2b^2 + b^2\\
P(3) &= a^2P(2) + b^2 = a^6r_0 + a^4b^2 + a^2b^2 + b^2 = a^6r_0 + (1 + a^2 + a^4)b^2\\
& \vdots\\
P(k) &= a^{2k}r_0 + \big(1 + a^2 + \cdots + a^{2(k-1)} \big)b^2 = a^{2k}r_0 + \frac{1-a^{2k}}{1-a^2}b^2,
\end{align}
where we have made use of the properties of finite geometric sums in the last equality.
For stable system ($|a| < 1$) we get
\begin{equation}
P(k) \to \frac{b^2}{1-a^2}
\end{equation}
The covariance function for $x$ is
\begin{equation}
r_x(k+\tau,k) = E\tilde{x}(k+\tau)\tilde{x}\transp(k) = a^\tau P(k),
\end{equation}
hence,
\begin{equation}
r_x(k+\tau,k) \to \frac{b^2a^{|\tau|}}{1-a^2}
\end{equation}
Hence, the variance is
\begin{equation}
\mathrm{Var} x = r(0) = \frac{b^2}{1-a^2}.
\end{equation}
Identify the parameters $a$ and $b$:
Note that the spectral density is
\begin{equation}
\phi(\omega) 2\pi = H(\mexp{i\omega})H(\mexp{-i\omega}) \phi_u(\omega) = \frac{b^2}{\big(\mexp{i\omega}+a\big)\big(\mexp{-i\omega}+a\big)} = \frac{b^2}{1 + a\mexp{i\omega}+a\mexp{-i\omega} + a^2} = \frac{b^2}{1+a^2+2a\cos\omega} = \frac{\frac{b^2}{-2a}}{\frac{1+a^2}{-2a} - \cos\omega}.
\end{equation}
Setting coefficients equal in this expression and the given spectral density gives
\begin{align}
\frac{b^2}{-4\pi a} &= \frac{3}{4} \quad \Rightarrow \quad b^2 = -3a\pi\\
\frac{1+a^2}{-2a} &= \frac{5}{4} \quad \Rightarrow \quad a^2 + \frac{5}{2}a + 1 = 0
\end{align}
so,
\begin{align}
a &= \begin{cases} -\frac{5}{4} + \frac{1}{2}\sqrt{\frac{25}{4}-4} = -\frac{1}{2} & \text{stable}\\
-\frac{5}{4} - \frac{1}{2}\sqrt{\frac{25}{4}-4} = -2 & \text{unstable}\\
\end{cases}\\
b &= \sqrt{\frac{3\pi}{2}}.
\end{align}
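Before running the provided plotting helper below, a quick numerical sanity check of the identified parameters can be done directly (a minimal sketch, self-contained and independent of the helper functions):
```python
import numpy as np

# Identified parameters from the spectral factorization
a = -0.5
b = np.sqrt(3*np.pi/2)

# Compare the filter spectrum b^2 / |e^{iw} + a|^2 / (2*pi) with the target spectral density
w = np.linspace(0.01, np.pi, 200)
H2 = b**2 / np.abs(np.exp(1j*w) + a)**2
phi_model = H2 / (2*np.pi)
phi_target = 0.75 / (1.25 - np.cos(w))

print(np.max(np.abs(phi_model - phi_target)))  # should be close to zero
```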
```python
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import control.matlab as cm
import sympy as sy
from scipy import signal
```
```python
def d_spectrum(A, B, sigma2):
lgw1 = -2
w = np.logspace(lgw1,np.log10(np.pi), 300)
(wp,Hp) = signal.freqz(B,A,w)
wm = -w
(wm,Hm) = signal.freqz(B,A,wm)
return (w, sigma2*Hp*Hm/(2*np.pi))
def lab2a(a,b):
A = [1, a]
B = b
# Compute spectrum
(w_ab, fi_ab)=d_spectrum(A,B,1)
# True spectrum
fi_0 = 0.75 / (1.25-np.cos(w_ab))
# Plot spectra
plt.loglog(w_ab, fi_ab, 'r')
plt.loglog(w_ab, fi_0, '--k')
plt.title('Output spectrum')
plt.xlabel('\omega')
plt.legend('Model', 'Correct solution')
a = -0.5
b = np.sqrt(3*np.pi/2)
lab2a(a,b)
```
## Preparation exercise 3
Covariance function of MA(1) process
\begin{align}
x(k+1) &= \begin{bmatrix} 0 & 0 \\ c & 0 \end{bmatrix} x(k) + \begin{bmatrix}1\\1\end{bmatrix} \epsilon(k)\\
y(k) &= \begin{bmatrix} 0 & 1 \end{bmatrix} x(k) = x_2(k)
\end{align}
with $\epsilon(k)$ being a sequence of white noise with unit variance and zero mean.
The state covariance $r_x(k) = P(k) = \mathrm{E}x(k)x\transp(k)$ is governed by the equation
\begin{equation}
P(k+1) = \Phi P(k) \Phi\transp + R_1.
\end{equation}
In steady state we have $P(k+1) = P(k)=P$, so $P$ can be found by solving the Lyapunov equation
\begin{equation}
P = \Phi P \Phi\transp + R_1,
\end{equation}
where, in the case here
\begin{equation}
R_1 = \begin{bmatrix}1\\1\end{bmatrix} \begin{bmatrix}1 & 1\end{bmatrix} = \begin{bmatrix} 1 & 1\\1 & 1\end{bmatrix}
\end{equation}
According to [Wikipedia on the Lyapunov equation](https://en.wikipedia.org/wiki/Lyapunov_equation), the solution can be written as an infinite sum
\begin{equation}
P = \sum_{k=0}^{\infty} \Phi^k R_1 \big(\Phi\transp\big)^k.
\end{equation}
Here we have
\begin{align}
\Phi &= \begin{bmatrix} 0 & 0 \\ c & 0 \end{bmatrix}\\
\Phi^2 &= \begin{bmatrix} 0 & 0 \\ c & 0 \end{bmatrix}\begin{bmatrix} 0 & 0 \\ c & 0 \end{bmatrix} = \begin{bmatrix} 0 & 0\\0 & 0\end{bmatrix}\\
\Phi^3 &= 0\\
&\vdots
\end{align}
This gives a very short series:
\begin{equation}
P = R_1 + \begin{bmatrix} 0 & 0 \\ c & 0 \end{bmatrix}R_1\begin{bmatrix} 0 & c \\ 0 & 0 \end{bmatrix} = R_1 + \begin{bmatrix} 0 & 0\\0 & c^2\end{bmatrix}
\end{equation}
See verification below
```python
c = sy.symbols('c')
Phi = sy.Matrix([[0,0],[c,0]])
Phi2 = Phi*Phi
print Phi2
R1 = sy.Matrix([[1,1],[1,1]])
P = R1 + Phi*R1*Phi.T
P
```
Matrix([[0, 0], [0, 0]])
Matrix([
[1, 1],
[1, c**2 + 1]])
The variance of $y$ is the variance of $x_2$, so
\begin{equation}
\mathrm{var}\; y = 1 + c^2.
\end{equation}
To get the covariance function, note that
\begin{equation}
r_x(k+\tau,k) = \mathrm{E} x(k+\tau)x\transp = \mathrm{E}\big(\Phi^\tau x(k) + \Gamma \epsilon(k+\tau) + \Phi B \epsilon(k+\tau-1) + \Phi^2B \epsilon(k+\tau-2) + \cdots + \Phi^\tau B \epsilon(k) \big)x\transp(k) = \Phi^\tau P(k),
\end{equation}
since $x(k)$ is independent of future noise terms.
Since $\Phi^k = 0$ for $k>1$, we have
\begin{align}
r_x(0) &= P = \begin{bmatrix} 1 & 1\\1 & 1+c^2\end{bmatrix}\\
r_x(1) &= \Phi P = \begin{bmatrix} 0 & 0\\c & c \end{bmatrix}\\
r_x(\tau) &= 0, \; \tau>1
\end{align}
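A quick empirical check of these expressions (a minimal sketch; the value of the coefficient is an arbitrary choice for illustration):
```python
import numpy as np

# y(k) = eps(k-1) + c*eps(k-2) is an MA(1) sequence, so var(y) = 1 + c^2,
# the lag-1 covariance equals c, and all higher lags are zero
c = 0.6
N = 200000
eps = np.random.normal(0, 1, N)
y = eps[1:] + c*eps[:-1]

print(np.var(y))               # approximately 1 + c^2
print(np.mean(y[1:]*y[:-1]))   # approximately c
print(np.mean(y[2:]*y[:-2]))   # approximately 0
```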
## Exercise 2
Simulate ARMA(1) process
```python
a = np.arange(24)
print len(a[2:])
print len(a[:-2])
print np.cov(a[2:], a[:-2])
```
22
22
[[ 42.16666667 42.16666667]
[ 42.16666667 42.16666667]]
```python
import pdb
def xcov(a,b,nlags):
""" Cross correlation function for given number of lags. Works for 1-dimensional vectors a and b
    OBS: Unlike Matlab's xcov function, this does not compute values for negative lags (it assumes a
    symmetric function).
"""
aa = a.ravel()
bb = b.ravel()
return np.array( [np.cov(aa,aa)] + [np.cov(aa[:-tau],bb[tau:]) for tau in range(1,nlags) ] )
def lab2b(C,A,N=100,tau_max=50, nr=1):
lam = 1.0
a = A[1]
c = C[1]
tau = np.arange(tau_max)
e = np.random.normal(0,1,(N,nr))
sys = (C, A, 1)
r_e = []
for i in range(nr):
(t_out, yn) = signal.dlsim(sys, e[:,i])
cv = xcov(yn,yn, tau_max)
r_e.append(cv[:,1,0])
r_e = np.array(r_e)
NN = min(N,50)
r_t = lam/(1-a**2) * np.hstack( (1+c**2-2*a*c, (c-a)*(1-a*c)*(-a)**(tau[1:]-1) ) )
m = np.mean(r_e, axis=0)
st = np.std(r_e, axis=0)
#1/0
plt.plot(tau, r_t, 'r-')
plt.plot(tau, r_e.T, 'b--')
plt.legend(['True', 'Estimated'])
plt.xlabel('\tau')
#lab2b([1, 0], [1, -0.4], N=400, nr=4)
lab2b([1, 0.5], [1, 0], N=400, nr=4)
```
```python
pdb.pm()
```
> <ipython-input-30-e7a3b6b244f2>(31)lab2b()
-> 1/0
(Pdb) r_e.shape
(1, 50)
(Pdb) cv.shape
(50, 2, 2)
(Pdb) cv[:4,:,:]
array([[[ 5.15203502, 5.15203502],
[ 5.15203502, 5.15203502]],
[[ 5.15240258, 4.6261038 ],
[ 4.6261038 , 5.15243297]],
[[ 5.15229716, 4.15816722],
[ 4.15816722, 5.15292142]],
[[ 5.15215686, 3.71376601],
[ 3.71376601, 5.15336661]]])
(Pdb) q
## Exercise 3
Pole-placement for observer
Discrete time system
\begin{align}
x(t+1) &= Fx(t) + Gu(t) + Nv_1(t)\\
y(t) &= Hx(t) + v_2(t),
\end{align}
with
\begin{align}
F &= \begin{bmatrix} 0.3 & -0.5\\ 1 & 0\end{bmatrix},\\
G &= \begin{bmatrix} 1\\0\end{bmatrix},\\
H &= \begin{bmatrix} 0 & 1\end{bmatrix}, \\
N &= \begin{bmatrix} 1\\ 1 \end{bmatrix}
\end{align}
Consider observer
\begin{equation}
\hat{x}(t+1) = F\hat{x}(t) + Gu(t) + K\big(y - H\hat{x}(t)\big),
\end{equation}
with state estimation error $\tilde{x}(t) = x(t) - \hat{x}(t)$ governed by
\begin{equation}
\tilde{x}(t+1) = \big(F-KH\big)\tilde{x}(t) + Nv_1(t) - Kv_2(t).
\end{equation}
Find observer gain $K$ so that the poles of the observer are placed in
\begin{equation}
p = -0.4 \pm 0.3j
\end{equation}
```python
F = np.array([[0.3, -0.5], [1, 0]])
G = np.array([[1],[0]])
H = np.array([[0, 1]])
N = np.array([[1],[1]])
K = cm.place(F.T, H.T, [-0.4+0.3*1j, -0.4-0.3*1j]).T
print K
np.linalg.eig(F-np.dot(K,H))
```
[[ 0.08]
[ 1.1 ]]
(array([-0.4+0.3j, -0.4-0.3j]),
array([[ 0.55689010+0.23866719j, 0.55689010-0.23866719j],
[ 0.79555728+0.j , 0.79555728-0.j ]]))
The covariance matrix of the state estimation error is given by the solution to the Lyapunov equation
\begin{equation}
P = APA\transp + NN\transp \sigma_1^2 + KK\transp \sigma_2^2,
\end{equation}
which can be written
\begin{equation}
APA\transp - P + Q = 0
\end{equation}
where
\begin{equation}
Q = NN\transp \sigma_1^2 + KK\transp \sigma_2^2
\end{equation}
```python
sigma_1 = 1
sigma_2 = 0.5
Ao = F - np.dot(K,H)
Qo = np.dot(N,N.T)*sigma_1**2 + np.dot(K,K.T)*sigma_2**2
Xo = cm.dlyap(Ao, Qo)
print Xo
```
[[ 1.25764228 1.33617886]
[ 1.33617886 1.80691057]]
Find the observer gain for a steady-state Kalman filter instead.
Note that in the Kalman filter we have the recursions
\begin{align}
\bar{P}_k &= FP_{k-1}F\transp + Q\\
P_k &= (1-K_kH)\bar{P}_k,
\end{align}
with Kalman gain
\begin{equation}
K_k = \bar{P}_kH\transp\big(H\bar{P}_kH\transp + R\big)^{-1}.
\end{equation}
Here $Q=NN\transp \sigma_1^2$ is the covariance matrix of the process noise.
In steady state we have $\bar{P}_{k+1}= \bar{P}_k = \bar{P}$, so the two recursions can be combined to yield the algebraic Riccati equation
\begin{equation}
\bar{P} = F\bar{P}F\transp + Q - F\bar{P}H\transp\big(H\bar{P}H\transp + R\big)^{-1}H\bar{P}F\transp.
\end{equation}
The solution $\bar{P}$ of this equation can then be used to compute the steady-state Kalman gain as
\begin{equation}
K = F \bar{P}H\transp\big(H\bar{P}H\transp + R\big)^{-1},
\end{equation}
where $R=\sigma_2^2$ is the covariance matrix of the measurement noise.
Now the Kalman update on observer form is given by
\begin{equation}
\hat{x}(t+1) = F\hat{x}(t) + Gu(t) + K\big(y(t) - H\hat{x}(t)\big)
\end{equation}
```python
A = F.T
B = H.T
(X,L,G) = cm.dare(F.T, H.T, sigma_1**2*np.dot(N,N.T), sigma_2**2)
K = np.dot(F,G.T)
Kk = np.dot(X, np.dot(H.T, np.linalg.inv(np.dot(H, np.dot(X, H.T)) + sigma_2**2) ))
Gtest = np.dot( np.linalg.inv( np.dot(B.T, np.dot(X, B) ) + sigma_2**2 ), np.dot(B.T, X) )
print Kk
print G
print Gtest
print np.dot(F,Kk)
print np.linalg.eig(F-np.dot(Kk, H))
print np.linalg.eig(A-np.dot(B,G))
print L
Ak = F - np.dot(G.T,H)
Qk = np.dot(N,N.T)*sigma_1**2 + np.dot(Kk,Kk.T)*sigma_2**2
Xk = cm.dlyap(Ak, Qk)
print Xk
print Xo
print X
```
[[ 0.63417447]
[ 0.8469135 ]]
[[-0.23320441 0.63417447]]
[[ 0.63417447 0.8469135 ]]
[[-0.23320441]
[ 0.63417447]]
(array([-0.27345675+0.89739725j, -0.27345675-0.89739725j]), array([[ 0.72899571+0.j , 0.72899571-0.j ],
[ 0.36859189-0.5768061j, 0.36859189+0.5768061j]]))
(array([-0.16708724+0.22051101j, -0.16708724-0.22051101j]), matrix([[ 0.88847810+0.j , 0.88847810-0.j ],
[-0.41499678+0.1959192j, -0.41499678-0.1959192j]]))
[-0.16708724+0.22051101j -0.16708724-0.22051101j]
[[ 1.11076086 1.16498821]
[ 1.16498821 1.35903982]]
[[ 1.25764228 1.33617886]
[ 1.33617886 1.80691057]]
[[ 1.03984474 1.03564729]
[ 1.03564729 1.38306366]]
```python
L
```
array([-0.16708724+0.22051101j, -0.16708724-0.22051101j])
```python
```
| e673858960d3ece1003b8a12da2e6513087a7a25 | 50,799 | ipynb | Jupyter Notebook | state-space/notebooks/Spectral-factorization-example.ipynb | kjartan-at-tec/mr2007-computerized-control | 16e35f5007f53870eaf344eea1165507505ab4aa | [
"MIT"
]
| 2 | 2020-11-07T05:20:37.000Z | 2020-12-22T09:46:13.000Z | state-space/notebooks/Spectral-factorization-example.ipynb | alfkjartan/control-computarizado | 5b9a3ae67602d131adf0b306f3ffce7a4914bf8e | [
"MIT"
]
| 4 | 2020-06-12T20:44:41.000Z | 2020-06-12T20:49:00.000Z | state-space/notebooks/Spectral-factorization-example.ipynb | alfkjartan/control-computarizado | 5b9a3ae67602d131adf0b306f3ffce7a4914bf8e | [
"MIT"
]
| 1 | 2019-09-25T20:02:23.000Z | 2019-09-25T20:02:23.000Z | 75.369436 | 19,376 | 0.758322 | true | 5,049 | Qwen/Qwen-72B | 1. YES
2. YES | 0.849971 | 0.826712 | 0.702681 | __label__eng_Latn | 0.315179 | 0.470895 |
### Problem A
```python
a,b = map(int, input().split())
print('Yay!' if max(a, b) <= 8 else ':(')
```
10 6
:(
### Problem B
```python
a,b =map(int,input().split())
if a == 0:
print(b)
elif a == 1:
print(b*100**1)
elif a == 2:
print(b*100**2)
else:
pass
```
2 100
1000000
```python
a,b =map(int,input().split())
if b==100:
print((b+1)*100**a)
else:
print(b*100**a)
```
1 100
10100
### Problem C
```python
N = int(input())
A = list(map(int,input().split()))
```
10
1 2 3 4 5 6 7 8 9 10
```python
def cout(num, d):
if num%2==0:
return d+1
else:
return d
count=0
N = int(input())
A = list(map(int,input().split()))
for i in A:
j = i
while j%2==0:
count = cout(j,count)
j=j/2
print(count)
```
8
Let's imitate the fastest submitted code.
```python
# https://abc100.contest.atcoder.jp/submissions/2677920
# kyunaさん
input()
print(sum(bin(x)[::-1].index('1')for x in map(int,input().split())))
```
The first `input()` simply discards the number that is not needed (N).<br>
The line below counts, for each value, how many times it is divisible by 2.
```python
#bin converts the number into its binary representation (a string)
#forward order
print(bin(A),bin(A).index('1'))
#reverse the string
print(bin(A)[::-1],bin(A)[::-1].index('1'))
#the index of the first '1' in the reversed string tells how many factors of 2 the number contains
#why?
for i in range(100):
print(i,bin(i))
#written in binary, the number of trailing zeros seems to match how many factors of 2 the number has
```
0b10000 2
00001b0 4
0 0b0
1 0b1
2 0b10
3 0b11
4 0b100
5 0b101
6 0b110
7 0b111
8 0b1000
9 0b1001
10 0b1010
11 0b1011
12 0b1100
13 0b1101
14 0b1110
15 0b1111
16 0b10000
17 0b10001
18 0b10010
19 0b10011
20 0b10100
21 0b10101
22 0b10110
23 0b10111
24 0b11000
25 0b11001
26 0b11010
27 0b11011
28 0b11100
29 0b11101
30 0b11110
31 0b11111
32 0b100000
33 0b100001
34 0b100010
35 0b100011
36 0b100100
37 0b100101
38 0b100110
39 0b100111
40 0b101000
41 0b101001
42 0b101010
43 0b101011
44 0b101100
45 0b101101
46 0b101110
47 0b101111
48 0b110000
49 0b110001
50 0b110010
51 0b110011
52 0b110100
53 0b110101
54 0b110110
55 0b110111
56 0b111000
57 0b111001
58 0b111010
59 0b111011
60 0b111100
61 0b111101
62 0b111110
63 0b111111
64 0b1000000
65 0b1000001
66 0b1000010
67 0b1000011
68 0b1000100
69 0b1000101
70 0b1000110
71 0b1000111
72 0b1001000
73 0b1001001
74 0b1001010
75 0b1001011
76 0b1001100
77 0b1001101
78 0b1001110
79 0b1001111
80 0b1010000
81 0b1010001
82 0b1010010
83 0b1010011
84 0b1010100
85 0b1010101
86 0b1010110
87 0b1010111
88 0b1011000
89 0b1011001
90 0b1011010
91 0b1011011
92 0b1011100
93 0b1011101
94 0b1011110
95 0b1011111
96 0b1100000
97 0b1100001
98 0b1100010
99 0b1100011
This is because, when a binary number is converted back to decimal, it can be written as
\begin{align}
(\text{decimal}) = a\cdot 2^n + a\cdot 2^{n-1} + \cdots + a\cdot 2^{2} + a\cdot 2^{1} + a\cdot 2^{0}
\end{align}
where each coefficient $a$ is either 0 or 1.
If the coefficients below position $k$ (with $n > k > 0$) are all 0, then
\begin{align}
(\text{decimal}) =& a\cdot 2^n + a\cdot 2^{n-1} + \cdots + a\cdot 2^k + \cdots + 0\cdot 2^{2} + 0\cdot 2^{1} + 0\cdot 2^{0}\\
=& 2^k\big(a\cdot 2^{n-k} + a\cdot 2^{n-1-k} + \cdots + a\cdot 2^0\big)
\end{align}
so the number is a multiple of $2^k$.
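A small check that the two ways of counting factors of 2 agree (an illustrative sketch):
```python
from random import randint

# Compare the loop-based count of factors of 2 with the position of the
# first '1' in the reversed binary string
for _ in range(5):
    n = randint(1, 10**9)
    count = 0
    j = n
    while j % 2 == 0:
        count += 1
        j //= 2
    print(n, count, bin(n)[::-1].index('1'), count == bin(n)[::-1].index('1'))
```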
### Problem D
Even the general approach was a mystery to me.<br>
After reading the editorial I understood the approach, but I did not know how to implement it, so I copy the fastest submitted code.
```python
# https://abc100.contest.atcoder.jp/submissions/2683381
# moirさん
n,m = map(int, input().split())
ppp = []
ppm = []
pmp = []
mpp = []
for _ in range(n):
x,y,z = map(int, input().split())
ppp.append(x+y+z)
ppm.append(x+y-z)
pmp.append(x-y+z)
mpp.append(-x+y+z)
else:
p0 = abs(sum(sorted(ppp)[:m]))
p1 = abs(sum(sorted(ppm)[:m]))
p2 = abs(sum(sorted(pmp)[:m]))
p3 = abs(sum(sorted(mpp)[:m]))
p4 = sum(sorted(ppp,reverse=True)[:m])
p5 = sum(sorted(ppm,reverse=True)[:m])
p6 = sum(sorted(pmp,reverse=True)[:m])
p7 = sum(sorted(mpp,reverse=True)[:m])
print(max(p0,p1,p2,p3,p4,p5,p6,p7))
```
ppp:plus plus plus<br>
ppm:plus plus minus<br>
pmp:plus minus plus<br>
mpp:minus plus plus<br>
These appear to be the definitions. By symmetry these four combinations are enough (the official AtCoder editorial does a full search over all eight patterns).<br>
For p0–p3, the sums are sorted in increasing order (from the negative side) and the first m values are added up.<br>
Adding reverse sorts the sums from the largest values instead, again adding up the first m values.<br>
Taking the maximum over all of these totals gives the value we are looking for.
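The key fact behind this approach is that $|X| + |Y| + |Z| = \max(\pm X \pm Y \pm Z)$ over all sign patterns; a quick empirical check of that identity (an illustrative sketch, not part of the contest solution):
```python
from itertools import product
from random import randint

# |X| + |Y| + |Z| equals the maximum of (s1*X + s2*Y + s3*Z) over all sign patterns,
# which is why maximising each signed sum separately and taking the overall maximum works
for _ in range(5):
    X, Y, Z = (randint(-100, 100) for _ in range(3))
    best = max(s1*X + s2*Y + s3*Z for s1, s2, s3 in product([1, -1], repeat=3))
    print(abs(X) + abs(Y) + abs(Z) == best)
```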
#### Testing sorted
```python
abc=[1,5,10,2,4]
print(abc)
```
[1, 5, 10, 2, 4]
```python
print(sorted(abc)[:2])
```
[1, 2]
```python
print(sorted(abc, reverse=True)[:2])
```
[10, 5]
```python
```
| e512e2dbb874533e507530070f1185421995aa54 | 9,452 | ipynb | Jupyter Notebook | ABC100.ipynb | ryosukehata/ABC_practice | a35ba66c6af28752fcea9f409ec66b685e67e40a | [
"MIT"
]
| null | null | null | ABC100.ipynb | ryosukehata/ABC_practice | a35ba66c6af28752fcea9f409ec66b685e67e40a | [
"MIT"
]
| null | null | null | ABC100.ipynb | ryosukehata/ABC_practice | a35ba66c6af28752fcea9f409ec66b685e67e40a | [
"MIT"
]
| null | null | null | 19.691667 | 82 | 0.446995 | true | 2,285 | Qwen/Qwen-72B | 1. YES
2. YES | 0.70253 | 0.712232 | 0.500365 | __label__yue_Hant | 0.09991 | 0.000843 |
```
# default_exp definition.interval
```
# definition.interval
```
#hide
from mathbook.utility.markdown import *
from mathbook.configs import *
# uncomment for editing.
# DESTINATION = 'notebook'
# ORIGIN = 'notebook'
```
## Interval
```
#export
if __name__ == '__main__':
embed_markdown_file('definition.interval.md',
destination=DESTINATION, origin=ORIGIN)
```
[(65, 92)]
For real numbers $a,b$, an **interval** between $a$ and $b$ is a [subset](https://hyunjongkimmath.github.io/mathbook/definition.subset.html) of $\mathbb{R}$[^1] containing all of the real numbers between $a$ and $b$. An interval is of one of the following forms, depending on which of the two endpoints (i.e., $a$ and $b$) it contains:
1. The **open interval**, denoted $(a,b)$, is the set of real numbers containing all real numbers between $a$ and $b$, not including $a$ and $b$:
$$
\begin{align}
(a,b) := \{x \in \mathbb{R}: a < x < b \}
\end{align}
$$
2. The **closed interval**, denoted $[a,b]$, is the set of real numbers containing all real numbers between $a$ and $b$, including both $a$ and $b$:
$$
\begin{align}
[a,b] := \{x \in \mathbb{R}: a \leq x \leq b \}
\end{align}
$$
3. The **half open intervals**, denoted $(a,b]$ and $[a,b)$, are the sets of real numbers containing all real numbers between $a$ and $b$, including one of $a$ and $b$ and not the other:
$$
\begin{align}
(a,b] &:= \{x \in \mathbb{R}: a < x \leq b \} \\
[a,b) &:= \{x \in \mathbb{R}: a \leq x < b \}
\end{align}
$$
[^1]:https://hyunjongkimmath.github.io/mathbook/notation.basic.html#$\mathbb{R}$
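The three cases can be illustrated with simple membership tests (an illustrative sketch; these helper functions are not part of the embedded definition file):
```
# Hypothetical helpers illustrating the three kinds of intervals for real numbers a <= b
def in_open(x, a, b):
    return a < x < b        # (a, b)

def in_closed(x, a, b):
    return a <= x <= b      # [a, b]

def in_half_open(x, a, b, include_left=True):
    return (a <= x < b) if include_left else (a < x <= b)   # [a, b) or (a, b]

print(in_open(0, 0, 1), in_closed(0, 0, 1), in_half_open(0, 0, 1))
```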
| c5a6681762caf94a00bd73b5ac17913c19f3c425 | 2,955 | ipynb | Jupyter Notebook | nbs/definition.interval.ipynb | hyunjongkimmath/mathbook | 058f1b804824198ab35e1273ad9091e66985fde6 | [
"Apache-2.0"
]
| null | null | null | nbs/definition.interval.ipynb | hyunjongkimmath/mathbook | 058f1b804824198ab35e1273ad9091e66985fde6 | [
"Apache-2.0"
]
| null | null | null | nbs/definition.interval.ipynb | hyunjongkimmath/mathbook | 058f1b804824198ab35e1273ad9091e66985fde6 | [
"Apache-2.0"
]
| null | null | null | 28.142857 | 348 | 0.503892 | true | 509 | Qwen/Qwen-72B | 1. YES
2. YES | 0.752013 | 0.754915 | 0.567706 | __label__eng_Latn | 0.960042 | 0.1573 |
# Overview of the Devito domain specific language
```python
from sympy import *
from devito import *
```
## From equations to code in a few lines of Python

The main objective of this notebook is to demonstrate how Devito and its [SymPy](http://www.sympy.org/en/index.html)-powered symbolic API can be used to solve partial differential equations using the finite difference method with highly optimized stencils in a few lines of Python.
## Defining the physical domain
A `Grid` object stores, among other things:
* the physical `extent` (the size) of our domain, and
* how many points we want to use in each dimension to discretise our data.
```python
grid = Grid(shape=(5, 6), extent=(1., 1.))
grid
```
## Functions, data, and expressions
To express our equation in symbolic form and discretise it using finite differences, Devito provides a set of `Function` types. A `Function` object also carries data.
```python
f = Function(name='f', grid=grid)
f
```
```python
f.data
```
By default, Devito `Function` objects use the spatial dimensions `(x, y)` for 2D grids and `(x, y, z)` for 3D grids. To solve a PDE over several timesteps a time dimension is also required by our symbolic function. For this Devito provides an additional function type, the `TimeFunction`, which incorporates the correct dimension along with some other intricacies needed to create a time stepping scheme.
```python
g = TimeFunction(name='g', grid=grid)
g
```
Since the default time order of a `TimeFunction` is `1`, the shape of `g` is `(2, 5, 6)`, i.e. Devito has allocated two buffers to represent `g(t, x, y)` and `g(t + dt, x, y)`:
```python
g.shape
```
We can also create `Function` objects with custom `Dimension`'s.
```python
x, y = grid.dimensions
d = Dimension(name='d')
```
```python
u1 = Function(name='u', dimensions=(d, x, y), shape=(3,) + grid.shape)
u1
```
```python
u2 = Function(name='u', dimensions=(y, x, d), shape=(6, 5, 3))
u2
```
`Function`'s are used to construct expressions. There is virtually no limit to the complexity an expression can have, but there's a rule -- it must be possible to construct an ordering of `Dimension`'s. In practice, this is never an issue.
```python
cos(g)*f + sin(u1) # OK, Devito can compile this expression
```
```python
cos(g)*f + sin(u2) # Not OK, Devito will complain because it sees both `x, y` and `y, x` as Function dimensions
```
## Derivatives of symbolic functions
Devito provides a set of shorthand expressions (implemented as Python properties) that allow us to generate finite differences in symbolic form. For example, the property `f.dx` denotes $\frac{\partial}{\partial x} f(x, y)$ - only that Devito has already discretised it with a finite difference expression.
```python
f.dx
```
We can express derivatives of arbitrary order, but for this we'll need to define a `Function` with a suitable spatial order. For example, the shorthand for the second derivative in `x` is `.dx2`, for the third order derivative `.dx3`, and so on.
```python
h = Function(name='h', grid=grid, space_order=2)
h.dx2
```
We may also want to take a look at the stencil Devito will generate based on the chosen discretisation.
```python
f.dx.evaluate
```
```python
h.dx2.evaluate
```
A similar set of expressions exist for each spatial dimension defined on our grid, for example `f.dy` and `f.dyl` (here the `l` represents the left derivative). Obviously, one can also take derivatives in time of `TimeFunction` objects. For example, to take the first derivative in time of `g` you can simply write:
```python
g.dt
```
There also exist convenient shortcuts to express the forward and backward stencil points, `g(t+dt, x, y)` and `g(t-dt, x, y)`.
```python
g.forward
```
```python
g.backward
```
And of course, there's nothing to stop us taking derivatives on these objects:
```python
g.forward.dt
```
```python
g.forward.dy
```
There also are shortcuts for classic differential operators
```python
h.laplace
```
```python
h.dx2 + h.dy2 # Equivalent to h.laplace
```
## Some advanced features
More generally, we can take **derivatives of arbitrary expressions**
```python
(g.dt + h.laplace + f.dy).dx2
```
Which can, depending on the chosen discretisation, lead to fairly complex stencils:
```python
(g.dt + h.laplace + f.dy).dx2.evaluate
```
The DSL also extends naturally to **tensorial objects**
```python
A = TensorFunction(name='A', grid=grid, space_order=2)
A
```
```python
v = VectorFunction(name='v', grid=grid, space_order=2)
v
```
```python
b = A*v
b
```
```python
div(b)
```
## A linear convection operator
**Note:** The following example is derived from [step 5](http://nbviewer.ipython.org/github/barbagroup/CFDPython/blob/master/lessons/07_Step_5.ipynb) in the excellent tutorial series [CFD Python: 12 steps to Navier-Stokes](http://lorenabarba.com/blog/cfd-python-12-steps-to-navier-stokes/).
In this simple example we will show how to derive a very simple convection operator from a high-level description of the governing equation.
The governing equation we want to implement is the **linear convection equation**. We start off defining some parameters, such as the computational grid. We also initialize our velocity `u` with a smooth field:
```python
from examples.cfd import init_smooth, plot_field
grid = Grid(shape=(81, 81), extent=(2., 2.))
u = TimeFunction(name='u', grid=grid, space_order=8)
# We can now set the initial condition and plot it
init_smooth(field=u.data[0], dx=grid.spacing[0], dy=grid.spacing[1])
init_smooth(field=u.data[1], dx=grid.spacing[0], dy=grid.spacing[1])
plot_field(u.data[0])
```
In particular, the linear convection equation that we want to implement is
$$\frac{\partial u}{\partial t}+c\frac{\partial u}{\partial x} + c\frac{\partial u}{\partial y} = 0.$$
Using the Devito shorthand notation, we can express the governing equations as:
```python
c = 1. # Value for c
eq = Eq(u.dt + c * u.dxl + c * u.dyl, 0)
```
We now need to rearrange our equation so that the term `u(t+dt, x, y)` is on the left-hand side, since it represents the next point in time for our state variable $u$. Here, we use the Devito built-in `solve` function to create a valid stencil for our update to `u(t+dt, x, y)`:
```python
update = Eq(u.forward, solve(eq, u.forward))
update
```
Once we have created this `update` expression, we can create a Devito `Operator`. This `Operator` will basically behave like a Python function that we can call to apply the created stencil over our associated data.
```python
op = Operator(update) # Triggers compilation into C !
```
```python
nt = 100 # Number of timesteps
dt = 0.2 * 2. / 80 # Timestep size (sigma=0.2)
op(t=nt+1, dt=dt)
plot_field(u.data[0])
```
Note that the real power of Devito is hidden within `Operator`, it will automatically generate and compile the optimized C code. We can look at this code (noting that this is not a requirement of executing it) via:
```python
print(op.ccode)
```
## What is not covered by this notebook
* Mechanisms to expression injection and interpolation at grid points ("sparse operations")
* Subdomains and Conditionals
* Boundary conditions (w/ and w/o subdomains)
* Custom stencil coefficients
* Staggered grids
* ...
## How do I get parallel code?
```python
op = Operator(update, language='openmp')
print(op)
```
```python
op = Operator(update, language='openacc', platform='nvidiaX')
print(op)
```
| e3c036a98c57385fa7eb8755251229a5cd37ba4a | 14,655 | ipynb | Jupyter Notebook | presentations/devito-dsl.ipynb | devitocodes/devitocodes.github.io | be1828f200d96a1c477a187372fc5d445ccffd75 | [
"Apache-2.0"
]
| 2 | 2018-12-18T18:58:14.000Z | 2020-01-22T20:07:57.000Z | presentations/devito-dsl.ipynb | devitoproject/devitoproject.github.io | 93599e9a7f58e2a5c6cf81f84e4e29a813fedad8 | [
"Apache-2.0"
]
| 6 | 2018-06-15T14:50:40.000Z | 2019-09-19T08:56:03.000Z | presentations/devito-dsl.ipynb | devitoproject/devitoproject.github.io | 93599e9a7f58e2a5c6cf81f84e4e29a813fedad8 | [
"Apache-2.0"
]
| 3 | 2018-06-22T07:03:42.000Z | 2020-01-22T20:08:13.000Z | 24.343854 | 410 | 0.562129 | true | 1,973 | Qwen/Qwen-72B | 1. YES
2. YES | 0.853913 | 0.766294 | 0.654348 | __label__eng_Latn | 0.992264 | 0.3586 |
Universidade Federal do Rio Grande do Sul (UFRGS)
Programa de Pós-Graduação em Engenharia Civil (PPGEC)
# Pre-introduction to vibration theory
## Lecture 4 - dynamic equilibrium for undamped free vibration
### *Daniel Barbosa Mapurunga Matos (PPGEC/UFRGS student)*
```python
import numpy as np
import matplotlib.pyplot as plt
```
## 1. Natural frequency of vibration
The natural frequency of vibration can be obtained from the principle of conservation of energy.
\begin{align}
\frac{KA^2}{2} = \frac{mv^2}{2}
\end{align}
Based on angular kinematics, the velocity can be written in terms of the natural frequency and the amplitude of the motion, $v = \omega A$, so that:
\begin{align}
\frac{KA^2}{2} = \frac{m(\omega_n A)^2}{2}
\end{align}
Simplifying the equation gives the following expression for the natural frequency:
\begin{align}
\omega_n = \sqrt{\frac{K}{m}}
\end{align}
Note that, in a linear elastic system, the natural frequency of vibration depends only on **characteristics of the system (K and m)**.
## 2. Harmonic functions
A harmonic function is one whose second derivative is proportional to the original function, for example the sine and cosine functions, as shown below.
\begin{align}
Y &= A\sin{ \omega t}\\
Y' &= \omega A\cos{ \omega t}\\
Y'' &= -\omega^2A\sin{ \omega t} = -\omega^2 Y
\end{align}
## 3. Dynamic equilibrium
In undamped free vibration, two forces act on the body: the **inertial** force and the **restoring** force, as illustrated in the figure below:
The equilibrium equation can therefore be written as:
\begin{align}
m\ddot{x} + Kx = 0
\end{align}
Dividing both terms by the mass and recalling that $\frac{K}{m} = \omega_n^2$, the expression can be rewritten as:
\begin{align*}
\ddot{x} + \omega_n^2x &= 0 \\
\ddot{x} &=- \omega_n^2x
\end{align*}
It is therefore clear that this is a harmonic function, so $x$ can be represented by a trigonometric function:
\begin{align}
x(t) = a\cos{\omega t} + b\sin{\omega t}
\end{align}
To determine the constants $a$ and $b$, we use the initial displacement $x_0$ and the initial velocity $v_0$.
\begin{align}
x_0 &= a\cos{\omega_n 0} + b\sin{\omega_n 0}\\
a &= x_0\\
v_0 &= -\omega_n a\sin{\omega_n 0} + \omega_n b\cos{\omega_n 0}\\
b &= \frac{v_0}{\omega_n}
\end{align}
The final equation is therefore given by:
\begin{align}
x(t) = x_0\cos{\omega_n t} + \frac{v_0}{\omega_n}\sin{\omega_n t}
\end{align}
## Example
```python
x0 = 2 # initial displacement [m]
v0 = 0 # initial velocity [m/s]
m = 90 # mass [kg]
K = 150 # stiffness [N/m]
w = np.sqrt(K/m) # natural frequency [rad/s]
t = np.linspace(0,100,1000) # discretised time
x = x0*np.cos(w*t)+v0/w*np.sin(w*t) # equation of motion
plt.figure(1,figsize=(12,4))
plt.plot(t,x,'black')
plt.xlabel("time"); plt.ylabel("displacement")
plt.grid(True)
```
## 4. Solving the equilibrium equation with the Laplace transform
The Laplace transform allows the dynamic equilibrium equation to be solved in a simple way, without having to work directly with differential equations. In this lecture we will not go into the details of the transform itself; it is only necessary to present a few of the required transform pairs.
To solve the undamped free-vibration equation, the following transforms of the trigonometric functions will be needed:
\begin{align}
\mathscr{L} \left\{ \sin (\omega t) \right\} &= \frac{\omega}{s^2 + \omega^2} \\
\mathscr{L} \left\{ \cos (\omega t) \right\} &= \frac{s}{s^2 + \omega^2}
\end{align}
And the transforms of the first and second derivatives:
\begin{align}
\mathscr{L} \left\{ \dot{f}(t) \right\} &= -f(0) + s \bar{f}(s)\\
\mathscr{L} \left\{ \ddot{f}(t) \right\} &= -s f(0) - \dot{f}(0) + s^2 \bar{f}(s)
\end{align}
Now let us solve the dynamic equilibrium equation using these transforms.
\begin{align}
m\ddot{x} + Kx = 0
\end{align}
Dividing both terms by the mass gives:
\begin{align}
\ddot{x} + \omega_n^2x &= 0 \\
\end{align}
Applying the Laplace transform, and noting that the transform is a linear operator, we obtain the following relation:
\begin{align}
\mathscr{L} \left\{ \ddot{x} \right\} + \omega_n^2\mathscr{L}\left\{x\right\} &= 0 \\
\end{align}
Applying the transform of the second derivative:
\begin{align}
-s x(0) - \dot{x}(0) + s^2 \bar{x}(s) + \omega_n^2 \bar{x}(s) = 0
\end{align}
Defining $x(0) = x_0$ and $\dot{x}(0) = v_0$, the equation is rewritten as:
\begin{align}
-s x_0 - v_0 + s^2 \bar{x}(s) + \omega_n^2 \bar{x}(s) = 0
\end{align}
Now isolating the term $\bar{x}(s)$:
\begin{align}
\bar{x}(s) = x_0\frac{s}{s^2 +\omega_n^2} + \frac{v_0}{\omega_n}\frac{\omega_n}{s^2+\omega_n^2}
\end{align}
Returning to the time domain and using the sine and cosine transforms above, we arrive at the equation of undamped free vibration:
\begin{align}
x(t) = x_0\cos{\omega_n t} + \frac{v_0}{\omega_n}\sin{\omega_n t}
\end{align}
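The closed-form solution can also be checked symbolically (a minimal sketch using SymPy, which is not imported at the top of this notebook):
```python
import sympy as sp

t, wn = sp.symbols('t omega_n', positive=True)
x0, v0 = sp.symbols('x_0 v_0')
x = sp.Function('x')

# Solve x'' + omega_n^2 x = 0 with x(0) = x0 and x'(0) = v0
ode = sp.Eq(x(t).diff(t, 2) + wn**2*x(t), 0)
sol = sp.dsolve(ode, x(t), ics={x(0): x0, x(t).diff(t).subs(t, 0): v0})

# The difference with the expression obtained above should simplify to zero
print(sp.simplify(sol.rhs - (x0*sp.cos(wn*t) + v0/wn*sp.sin(wn*t))))
```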
```python
```
| 482c8ac1564e779d308a052a3eab569cc91b042a | 58,806 | ipynb | Jupyter Notebook | Aula 4- VIbracao livre.ipynb | danielbmmatos/Pre-Vibracoes | 2b62c532fa78060c05be9bf5c4a3330f0751d966 | [
"MIT"
]
| 2 | 2020-03-25T01:25:10.000Z | 2020-05-25T14:44:08.000Z | Aula 4- VIbracao livre.ipynb | danielbmmatos/Pre-Vibracoes | 2b62c532fa78060c05be9bf5c4a3330f0751d966 | [
"MIT"
]
| null | null | null | Aula 4- VIbracao livre.ipynb | danielbmmatos/Pre-Vibracoes | 2b62c532fa78060c05be9bf5c4a3330f0751d966 | [
"MIT"
]
| 4 | 2021-03-10T18:05:51.000Z | 2021-04-12T01:14:59.000Z | 226.176923 | 50,448 | 0.909227 | true | 1,807 | Qwen/Qwen-72B | 1. YES
2. YES | 0.79053 | 0.849971 | 0.671928 | __label__por_Latn | 0.993288 | 0.399445 |
Probabilistic Programming
=====
and Bayesian Methods for Hackers
========
Original content ([this Jupyter notebook](https://nbviewer.jupyter.org/github/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/blob/master/Chapter1_Introduction/Ch1_Introduction_PyMC2.ipynb)) created by Cam Davidson-Pilon ([`@Cmrn_DP`](https://twitter.com/Cmrn_DP))
Ported to Julia by ([`@Fifthist`](https://github.com/Fifthist)).
___
Welcome to *Bayesian Methods for Hackers*. The full Github repository is available at [github/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers](https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers). The other chapters can be found on the project's [homepage](https://camdavidsonpilon.github.io/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/). We hope you enjoy the book, and we encourage any contributions!
---
### Table of Contents
- Dependencies & Prerequisites
- The Philosophy of Bayesian Inference
- The Bayesian state of mind
- Bayesian Inference in Practice
- Are frequentist methods incorrect then?
- Our Bayesian framework
- Example: Mandatory coin-flip example
- Example: Bug, or just sweet, unintended feature?
- Probability Distributions
- Discrete Case
- Continuous Case
- But what is $\lambda \;$?
- Example: Inferring behaviour from text-message data
- Introducing our first hammer: Gen
- Specify the model
- Specify the inference algorithm
- Sample from the posterior
- Plot the Results
- Interpretation
- Exercises
- References
### Dependencies & Prerequisites
<div class="alert alert-success">
If you're running this notebook in Jupyter on your own machine (and you have already installed Julia), you can use the following
<br>
<ul>
<li> To install the Gen package with the Julia package manager type <code>]</code> to enter the Pkg REPL mode and then run: <code>pkg> add https://github.com/probcomp/Gen</code></li>
<li> To install using Docker: <a style="color: #355C70">https://github.com/probcomp/gen-quickstart</a> </li>
</ul>
</div>
```julia
using Gen;
using PyPlot;
using DelimitedFiles;
using Statistics: mean;
```
Chapter 1
======
***
The Philosophy of Bayesian Inference
------
> You are a skilled programmer, but bugs still slip into your code. After a particularly difficult implementation of an algorithm, you decide to test your code on a trivial example. It passes. You test the code on a harder problem. It passes once again. And it passes the next, *even more difficult*, test too! You are starting to believe that there may be no bugs in this code...
If you think this way, then congratulations, you already are thinking Bayesian! Bayesian inference is simply updating your beliefs after considering new evidence. A Bayesian can rarely be certain about a result, but he or she can be very confident. Just like in the example above, we can never be 100% sure that our code is bug-free unless we test it on every possible problem; something rarely possible in practice. Instead, we can test it on a large number of problems, and if it succeeds we can feel more *confident* about our code, but still not certain. Bayesian inference works identically: we update our beliefs about an outcome; rarely can we be absolutely sure unless we rule out all other alternatives.
### The Bayesian state of mind
Bayesian inference differs from more traditional statistical inference by preserving *uncertainty*. At first, this sounds like a bad statistical technique. Isn't statistics all about deriving *certainty* from randomness? To reconcile this, we need to start thinking like Bayesians.
The Bayesian world-view interprets probability as measure of *believability in an event*, that is, how confident we are in an event occurring. In fact, we will see in a moment that this is the natural interpretation of probability.
For this to be clearer, we consider an alternative interpretation of probability: *Frequentist*, known as the more *classical* version of statistics, assume that probability is the long-run frequency of events (hence the bestowed title). For example, the *probability of plane accidents* under a frequentist philosophy is interpreted as the *long-term frequency of plane accidents*. This makes logical sense for many probabilities of events, but becomes more difficult to understand when events have no long-term frequency of occurrences. Consider: we often assign probabilities to outcomes of presidential elections, but the election itself only happens once! Frequentists get around this by invoking alternative realities and saying across all these realities, the frequency of occurrences defines the probability.
Bayesians, on the other hand, have a more intuitive approach. Bayesians interpret a probability as measure of *belief*, or confidence, of an event occurring. Simply, a probability is a summary of an opinion. An individual who assigns a belief of 0 to an event has no confidence that the event will occur; conversely, assigning a belief of 1 implies that the individual is absolutely certain of an event occurring. Beliefs between 0 and 1 allow for weightings of other outcomes. This definition agrees with the probability of a plane accident example, for having observed the frequency of plane accidents, an individual's belief should be equal to that frequency, excluding any outside information. Similarly, under this definition of probability being equal to beliefs, it is meaningful to speak about probabilities (beliefs) of presidential election outcomes: how confident are you candidate *A* will win?
Notice in the paragraph above, I assigned the belief (probability) measure to an *individual*, not to Nature. This is very interesting, as this definition leaves room for conflicting beliefs between individuals. Again, this is appropriate for what naturally occurs: different individuals have different beliefs of events occurring, because they possess different *information* about the world. The existence of different beliefs does not imply that anyone is wrong. Consider the following examples demonstrating the relationship between individual beliefs and probabilities:
- I flip a coin, and we both guess the result. We would both agree, assuming the coin is fair, that the probability of Heads is 1/2. Assume, then, that I peek at the coin. Now I know for certain what the result is: I assign probability 1.0 to either Heads or Tails (whichever it is). Now what is *your* belief that the coin is Heads? My knowledge of the outcome has not changed the coin's results. Thus we assign different probabilities to the result.
- Your code either has a bug in it or not, but we do not know for certain which is true, though we have a belief about the presence or absence of a bug.
- A medical patient is exhibiting symptoms $x$, $y$ and $z$. There are a number of diseases that could be causing all of them, but only a single disease is present. A doctor has beliefs about which disease, but a second doctor may have slightly different beliefs.
This philosophy of treating beliefs as probability is natural to humans. We employ it constantly as we interact with the world and only see partial truths, but gather evidence to form beliefs. Alternatively, you have to be *trained* to think like a frequentist.
To align ourselves with traditional probability notation, we denote our belief about event $A$ as $P(A)$. We call this quantity the *prior probability*.
John Maynard Keynes, a great economist and thinker, said "When the facts change, I change my mind. What do you do, sir?" This quote reflects the way a Bayesian updates his or her beliefs after seeing evidence. Even — especially — if the evidence is counter to what was initially believed, the evidence cannot be ignored. We denote our updated belief as $P(A |X )$, interpreted as the probability of $A$ given the evidence $X$. We call the updated belief the *posterior probability* so as to contrast it with the prior probability. For example, consider the posterior probabilities (read: posterior beliefs) of the above examples, after observing some evidence $X$:
1\. $P(A): \;\;$ the coin has a 50 percent chance of being Heads. $P(A | X):\;\;$ You look at the coin, observe a Heads has landed, denote this information $X$, and trivially assign probability 1.0 to Heads and 0.0 to Tails.
2\. $P(A): \;\;$ This big, complex code likely has a bug in it. $P(A | X): \;\;$ The code passed all $X$ tests; there still might be a bug, but its presence is less likely now.
3\. $P(A):\;\;$ The patient could have any number of diseases. $P(A | X):\;\;$ Performing a blood test generated evidence $X$, ruling out some of the possible diseases from consideration.
It's clear that in each example we did not completely discard the prior belief after seeing new evidence $X$, but we *re-weighted the prior* to incorporate the new evidence (i.e. we put more weight, or confidence, on some beliefs versus others).
By introducing prior uncertainty about events, we are already admitting that any guess we make is potentially very wrong. After observing data, evidence, or other information, we update our beliefs, and our guess becomes *less wrong*. This is the alternative side of the prediction coin, where typically we try to be *more right*.
### Bayesian Inference in Practice
If frequentist and Bayesian inference were programming functions, with inputs being statistical problems, then the two would be different in what they return to the user. The frequentist inference function would return a number, representing an estimate (typically a summary statistic like the sample average etc.), whereas the Bayesian function would return *probabilities*.
For example, in our debugging problem above, calling the frequentist function with the argument "My code passed all $X$ tests; is my code bug-free?" would return a *YES*. On the other hand, asking our Bayesian function "Often my code has bugs. My code passed all $X$ tests; is my code bug-free?" would return something very different: probabilities of *YES* and *NO*. The function might return:
> *YES*, with probability 0.8; *NO*, with probability 0.2
This is very different from the answer the frequentist function returned. Notice that the Bayesian function accepted an additional argument: *"Often my code has bugs"*. This parameter is the *prior*. By including the prior parameter, we are telling the Bayesian function to include our belief about the situation. Technically this parameter in the Bayesian function is optional, but we will see excluding it has its own consequences.
#### Incorporating evidence
As we acquire more and more instances of evidence, our prior belief is *washed out* by the new evidence. This is to be expected. For example, if your prior belief is something ridiculous, like "I expect the sun to explode today", and each day you are proved wrong, you would hope that any inference would correct you, or at least align your beliefs better. Bayesian inference will correct this belief.
Denote $N$ as the number of instances of evidence we possess. As we gather an *infinite* amount of evidence, say as $N \rightarrow \infty$, our Bayesian results (often) align with frequentist results. Hence for large $N$, statistical inference is more or less objective. On the other hand, for small $N$, inference is much more *unstable*: frequentist estimates have more variance and larger confidence intervals. This is where Bayesian analysis excels. By introducing a prior, and returning probabilities (instead of a scalar estimate), we *preserve the uncertainty* that reflects the instability of statistical inference of a small $N$ dataset.
One may think that for large $N$, one can be indifferent between the two techniques since they offer similar inference, and might lean towards the computationally-simpler, frequentist methods. An individual in this position should consider the following quote by Andrew Gelman (2005)[1], before making such a decision:
> Sample sizes are never large. If $N$ is too small to get a sufficiently-precise estimate, you need to get more data (or make more assumptions). But once $N$ is "large enough," you can start subdividing the data to learn more (for example, in a public opinion poll, once you have a good estimate for the entire country, you can estimate among men and women, northerners and southerners, different age groups, etc.). $N$ is never enough because if it were "enough" you'd already be on to the next problem for which you need more data.
### Are frequentist methods incorrect then?
**No.**
Frequentist methods are still useful or state-of-the-art in many areas. Tools such as least squares linear regression, LASSO regression, and expectation-maximization algorithms are all powerful and fast. Bayesian methods complement these techniques by solving problems that these approaches cannot, or by illuminating the underlying system with more flexible modeling.
#### A note on *Big Data*
Paradoxically, big data's predictive analytic problems are actually solved by relatively simple algorithms [2]. Thus we can argue that big data's prediction difficulty does not lie in the algorithm used, but instead on the computational difficulties of storage and execution on big data. (One should also consider Gelman's quote from above and ask "Do I really have big data?")
The much more difficult analytic problems involve *medium data* and, especially troublesome, *really small data*. Using a similar argument as Gelman's above, if big data problems are *big enough* to be readily solved, then we should be more interested in the *not-quite-big enough* datasets.
### Our Bayesian framework
We are interested in beliefs, which can be interpreted as probabilities by thinking Bayesian. We have a *prior* belief in event $A$, beliefs formed by previous information, e.g., our prior belief about bugs being in our code before performing tests.
Secondly, we observe our evidence. To continue our buggy-code example: if our code passes $X$ tests, we want to update our belief to incorporate this. We call this new belief the *posterior* probability. Updating our belief is done via the following equation, known as Bayes' Theorem, after its discoverer Thomas Bayes:
\begin{align}
P( A | X ) = & \frac{ P(X | A) P(A) } {P(X) } \\\\[5pt]
& \propto P(X | A) P(A)\;\; (\propto \text{is proportional to })
\end{align}
The above formula is not unique to Bayesian inference: it is a mathematical fact with uses outside Bayesian inference. Bayesian inference merely uses it to connect prior probabilities $P(A)$ with an updated posterior probabilities $P(A | X )$.
##### Example: Mandatory coin-flip example
Every statistics text must contain a coin-flipping example, I'll use it here to get it out of the way. Suppose, naively, that you are unsure about the probability of heads in a coin flip (spoiler alert: it's 50%). You believe there is some true underlying ratio, call it $p$, but have no prior opinion on what $p$ might be.
We begin to flip a coin, and record the observations: either $H$ or $T$. This is our observed data. An interesting question to ask is how our inference changes as we observe more and more data? More specifically, what do our posterior probabilities look like when we have little data, versus when we have lots of data.
Below we plot a sequence of updating posterior probabilities as we observe increasing amounts of data (coin flips).
```julia
probs_of_heads = range(0., stop=1., length=100)
@gen function binomial_conjugate_model(n::Int64)
# Flip a fair coin $n times
flips = map(i -> @trace(bernoulli(0.5), (:flip, i)), 1:n)
heads = sum(flips)
# Get probabilities of the bernoulli parameter from a beta distribution
betalogpdf = x -> logpdf(beta, x, 1 + heads, 1 + n - heads)
log_observed_probs_heads = betalogpdf.(probs_of_heads)
observed_probs_heads = exp.(log_observed_probs_heads)
return observed_probs_heads
end;
```
```julia
"""
The book uses a custom matplotlibrc file, which provides the unique styles for
matplotlib plots. If executing this book, and you wish to use the book's
styling, provided are two options:
1. Overwrite your own matplotlibrc file with the rc-file provided in the
book's styles/ dir. See http://matplotlib.org/users/customizing.html
2. Also in the styles is bmh_matplotlibrc.json file. This can be used to
update the styles in only this notebook. Try running the following code:
import JSON
using PyCall
s = open("../styles/bmh_matplotlibrc.json") do file
read(file, String)
end
rcParams = PyCall.PyDict(matplotlib."rcParams")
merge!(rcParams, JSON.parse(s))
"""
function render_trace(trace)
# Pull out xs from the trace
n = get_args(trace)[1]
heads = sum(trace[(:flip, i)] for i=1:n)
# Pull out observed probabilities
observed_probs_heads = Gen.get_retval(trace)
# Draw distribution
ax = gca()
setp(ax.get_yticklabels(), visible=false)
plot(probs_of_heads, observed_probs_heads, label="observe $n tosses,\n $heads heads")
fill_between(probs_of_heads, 0, observed_probs_heads, color="#348ABD", alpha=0.4)
vlines(0.5, 0, 4, color="k", linestyles="--", lw=1)
leg = legend()
leg.get_frame().set_alpha(0.4)
autoscale(tight=true)
end;
```
```julia
function grid(renderer::Function, traces; ncols=2)
figure(figsize=(11, 9))
nrows = length(traces)/2
for (i, trace) in enumerate(traces)
subplot(nrows, ncols, i)
xlabel(if i in [1, length(traces)] "\$p\$, probability of heads" else nothing end)
renderer(trace)
end
suptitle("Bayesian updating of posterior probabilities",
y=1.02,
fontsize=14)
tight_layout()
end;
```
```julia
n_trials = [1, 2, 3, 4, 5, 6, 8, 15, 50, 500]
traces = [Gen.simulate(binomial_conjugate_model, (n,)) for n in n_trials]
grid(render_trace, traces)
```
The posterior probabilities are represented by the curves, and our uncertainty is proportional to the width of the curve. As the plot above shows, as we start to observe data our posterior probabilities start to shift and move around. Eventually, as we observe more and more data (coin-flips), our probabilities will tighten closer and closer around the true value of $p=0.5$ (marked by a dashed line).
Notice that the plots are not always *peaked* at 0.5. There is no reason it should be: recall we assumed we did not have a prior opinion of what $p$ is. In fact, if we observe quite extreme data, say 8 flips and only 1 observed heads, our distribution would look very biased *away* from lumping around 0.5 (with no prior opinion, how confident would you feel betting on a fair coin after observing 8 tails and 1 head?). As more data accumulates, we would see more and more probability being assigned at $p=0.5$, though never all of it.
The next example is a simple demonstration of the mathematics of Bayesian inference.
##### Example: Bug, or just sweet, unintended feature?
Let $A$ denote the event that our code has **no bugs** in it. Let $X$ denote the event that the code passes all debugging tests. For now, we will leave the prior probability of no bugs as a variable, i.e. $P(A) = p$.
We are interested in $P(A|X)$, i.e. the probability of no bugs, given our debugging tests $X$. To use the formula above, we need to compute some quantities.
What is $P(X | A)$, i.e., the probability that the code passes $X$ tests *given* there are no bugs? Well, it is equal to 1, for a code with no bugs will pass all tests.
$P(X)$ is a little bit trickier: The event $X$ can be divided into two possibilities, event $X$ occurring even though our code *indeed has* bugs (denoted $\sim A\;$, spoken *not $A$*), or event $X$ without bugs ($A$). $P(X)$ can be represented as:
\begin{align}
P(X ) & = P(X \text{ and } A) + P(X \text{ and } \sim A) \\\\[5pt]
& = P(X|A)P(A) + P(X | \sim A)P(\sim A)\\\\[5pt]
& = P(X|A)p + P(X | \sim A)(1-p)
\end{align}
We have already computed $P(X|A)$ above. On the other hand, $P(X | \sim A)$ is subjective: our code can pass tests but still have a bug in it, though the probability there is a bug present is reduced. Note this is dependent on the number of tests performed, the degree of complication in the tests, etc. Let's be conservative and assign $P(X|\sim A) = 0.5$. Then
\begin{align}
P(A | X) & = \frac{1\cdot p}{ 1\cdot p +0.5 (1-p) } \\\\
& = \frac{ 2 p}{1+p}
\end{align}
This is the posterior probability. What does it look like as a function of our prior, $p \in [0,1]$?
```julia
figure(figsize=(12.5, 4))
p = range(0, stop=1, length=50)
plot(p, 2 .* p ./ (1 .+ p), color="#348ABD", lw=3)
scatter(0.2, 2*(0.2)/1.2, s=140, c="#348ABD")
xlim(0, 1)
ylim(0, 1)
xlabel("Prior, \$P(A) = p\$")
ylabel("Posterior, \$P(A|X)\$, with \$P(A) = p\$")
title("Are there bugs in my code?");
```
We can see the biggest gains if we observe the $X$ tests passed when the prior probability, $p$, is low. Let's settle on a specific value for the prior. I'm a strong programmer (I think), so I'm going to give myself a realistic prior of 0.20, that is, there is a 20% chance that I write code bug-free. To be more realistic, this prior should be a function of how complicated and large the code is, but let's pin it at 0.20. Then my updated belief that my code is bug-free is 0.33.
Recall that the prior is a probability: $p$ is the prior probability that there *are no bugs*, so $1-p$ is the prior probability that there *are bugs*.
Similarly, our posterior is also a probability, with $P(A | X)$ the probability there is no bug *given we saw all tests pass*, hence $1-P(A|X)$ is the probability there is a bug *given all tests passed*. What does our posterior probability look like? Below is a chart of both the prior and the posterior probabilities.
```julia
figure(figsize=(12.5, 4))
colours = ["#348ABD", "#A60628"]
prior = [0.20, 0.80]
posterior = [1. / 3, 2. / 3]
bar([0, .7], prior, alpha=0.70, width=0.25,
color=colours[1], label="prior distribution",
lw="3", edgecolor=colours[1])
bar([0+0.25, .7+0.25], posterior, alpha=0.7,
width=0.25, color=colours[2],
label="posterior distribution",
lw="3", edgecolor=colours[2])
xticks([0.125, .825], ["Bugs Absent", "Bugs Present"])
title("Prior and Posterior probability of bugs present")
ylabel("Probability")
legend(loc="upper left");
```
Notice that after we observed $X$ occur, the probability of bugs being absent increased. By increasing the number of tests, we can approach confidence (probability 1) that there are no bugs present.
This was a very simple example of Bayesian inference and Bayes rule. Unfortunately, the mathematics necessary to perform more complicated Bayesian inference only becomes more difficult, except for artificially constructed cases. We will later see that this type of mathematical analysis is actually unnecessary. First we must broaden our modeling tools. The next section deals with *probability distributions*. If you are already familiar, feel free to skip (or at least skim), but for the less familiar the next section is essential.
_______
## Probability Distributions
**Let's quickly recall what a probability distribution is:** Let $Z$ be some random variable. Then associated with $Z$ is a *probability distribution function* that assigns probabilities to the different outcomes $Z$ can take. Graphically, a probability distribution is a curve where the probability of an outcome is proportional to the height of the curve. You can see examples in the first figure of this chapter.
We can divide random variables into three classifications:
- **$Z$ is discrete**: Discrete random variables may only assume values on a specified list. Things like populations, movie ratings, and number of votes are all discrete random variables. Discrete random variables become more clear when we contrast them with...
- **$Z$ is continuous**: Continuous random variables can take on arbitrarily exact values. For example, temperature, speed, time, and color are all modeled as continuous variables because you can progressively make the values more and more precise.
- **$Z$ is mixed**: Mixed random variables assign probabilities to both discrete and continuous random variables, i.e. it is a combination of the above two categories.
### Discrete Case
If $Z$ is discrete, then its distribution is called a *probability mass function*, which measures the probability $Z$ takes on the value $k$, denoted $P(Z=k)$. Note that the probability mass function completely describes the random variable $Z$, that is, if we know the mass function, we know how $Z$ should behave. There are popular probability mass functions that consistently appear: we will introduce them as needed, but let's introduce the first very useful probability mass function. We say $Z$ is *Poisson*-distributed if:
$$P(Z = k) =\frac{ \lambda^k e^{-\lambda} }{k!}, \; \; k=0,1,2, \dots $$
$\lambda$ is called a parameter of the distribution, and it controls the distribution's shape. For the Poisson distribution, $\lambda$ can be any positive number. By increasing $\lambda$, we add more probability to larger values, and conversely by decreasing $\lambda$ we add more probability to smaller values. One can describe $\lambda$ as the *intensity* of the Poisson distribution.
Unlike $\lambda$, which can be any positive number, the value $k$ in the above formula must be a non-negative integer, i.e., $k$ must take on values 0,1,2, and so on. This is very important, because if you wanted to model a population you could not make sense of populations with 4.25 or 5.612 members.
If a random variable $Z$ has a Poisson mass distribution, we denote this by writing
$$Z \sim \text{Poi}(\lambda) $$
One useful property of the Poisson distribution is that its expected value is equal to its parameter, i.e.:
$$E\large[ \;Z\; | \; \lambda \;\large] = \lambda $$
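(For the curious, this follows in one line from the definition of the expected value; the derivation is optional and not needed for anything that follows.)

$$E[\, Z \mid \lambda \,] = \sum_{k=0}^{\infty} k \, \frac{\lambda^k e^{-\lambda}}{k!} = \lambda e^{-\lambda} \sum_{k=1}^{\infty} \frac{\lambda^{k-1}}{(k-1)!} = \lambda e^{-\lambda} e^{\lambda} = \lambda$$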
We will use this property often, so it's useful to remember. Below, we plot the probability mass distribution for different $\lambda$ values. The first thing to notice is that by increasing $\lambda$, we add more probability of larger values occurring. Second, notice that although the graph ends at 15, the distributions do not. They assign positive probability to every non-negative integer.
```julia
figure(figsize=(12.5, 4))
a = 0:16
lambda_ = [1.5, 4.25]
colours = ["#348ABD", "#A60628"]
poisson_pdf = (x, λ) -> exp(logpdf(poisson, x, λ))
bar(a, poisson_pdf.(a, lambda_[1]), color=colours[1],
label="λ = $(lambda_[1])", alpha=0.60,
edgecolor=colours[1], lw="3")
bar(a, poisson_pdf.(a, lambda_[2]), color=colours[2],
label="λ = $(lambda_[2])", alpha=0.60,
edgecolor=colours[2], lw="3")
xticks(a, a)
legend(loc="upper right");
ylabel("probability of \$k\$")
xlabel("\$k\$")
title("Probability mass function of a Poisson random variable; differing λ values");
```
### Continuous Case
Instead of a probability mass function, a continuous random variable has a *probability density function*. This might seem like unnecessary nomenclature, but the density function and the mass function are very different creatures. An example of continuous random variable is a random variable with *exponential density*. The density function for an exponential random variable looks like this:
$$f_Z(z | \lambda) = \lambda e^{-\lambda z }, \;\; z\ge 0$$
Like a Poisson random variable, an exponential random variable can take on only non-negative values. But unlike a Poisson variable, the exponential can take on *any* non-negative values, including non-integral values such as 4.25 or 5.612401. This property makes it a poor choice for count data, which must be an integer, but a great choice for time data, temperature data (measured in Kelvins, of course), or any other precise *and positive* variable. The graph below shows two probability density functions with different $\lambda$ values.
When a random variable $Z$ has an exponential distribution with parameter $\lambda$, we say *$Z$ is exponential* and write
$$Z \sim \text{Exp}(\lambda)$$
Given a specific $\lambda$, the expected value of an exponential random variable is equal to the inverse of $\lambda$, that is:
$$E[\; Z \;|\; \lambda \;] = \frac{1}{\lambda}$$
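(Again, an optional one-line check, this time via integration by parts.)

$$E[\, Z \mid \lambda \,] = \int_0^{\infty} z \, \lambda e^{-\lambda z} \, dz = \Big[ -z e^{-\lambda z} \Big]_0^{\infty} + \int_0^{\infty} e^{-\lambda z} \, dz = 0 + \frac{1}{\lambda}$$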
```julia
figure(figsize=(12, 4))
a = range(0, stop=4, length=100)
lambda_ = [0.5, 1]
exponential_pdf = (x, λ) -> exp(logpdf(exponential, x, λ))
for (l, c) in zip(lambda_, colours)
plot(a, exponential_pdf.(a, l), lw=3,
color=c, label="λ=$l")
fill_between(a, exponential_pdf.(a, l), color=c, alpha=.33)
end
autoscale(tight=true)
legend(loc="upper right")
ylabel("PDF at \$z\$")
xlabel("\$z\$")
ylim(0,1.2)
title("Probability density function of an Exponential random variable; differing λ");
```
### But what is $\lambda \;$?
**This question is what motivates statistics**. In the real world, $\lambda$ is hidden from us. We see only $Z$, and must go backwards to try and determine $\lambda$. The problem is difficult because there is no one-to-one mapping from $Z$ to $\lambda$. Many different methods have been created to solve the problem of estimating $\lambda$, but since $\lambda$ is never actually observed, no one can say for certain which method is best!
Bayesian inference is concerned with *beliefs* about what $\lambda$ might be. Rather than try to guess $\lambda$ exactly, we can only talk about what $\lambda$ is likely to be by assigning a probability distribution to $\lambda$.
This might seem odd at first. After all, $\lambda$ is fixed; it is not (necessarily) random! How can we assign probabilities to values of a non-random variable? Ah, we have fallen for our old, frequentist way of thinking. Recall that under Bayesian philosophy, we *can* assign probabilities if we interpret them as beliefs. And it is entirely acceptable to have *beliefs* about the parameter $\lambda$.
##### Example: Inferring behaviour from text-message data
Let's try to model a more interesting example, one that concerns the rate at which a user sends and receives text messages:
> You are given a series of daily text-message counts from a user of your system. The data, plotted over time, appears in the chart below. You are curious to know if the user's text-messaging habits have changed over time, either gradually or suddenly. How can you model this? (This is in fact my own text-message data. Judge my popularity as you wish.)
```julia
figure(figsize=(12.5, 3.5))
count_data_matrix = round.(Int64, readdlm("data/txtdata.csv"))
# Convert 1xn matrix to n-vector
count_data = [(count_data_matrix...,) ...]
n_count_data = length(count_data)
days = 1:n_count_data
bar(days, count_data, color="#348ABD")
xlabel("Time (days)")
ylabel("count of text-msgs received")
title("Did the user's texting habits change over time?")
xlim(0, n_count_data);
autoscale(tight=true);
```
Before we start modeling, see what you can figure out just by looking at the chart above. Would you say there was a change in behaviour during this time period?
How can we start to model this? Well, as we have conveniently already seen, a Poisson random variable is a very appropriate model for this type of *count* data. Denoting day $i$'s text-message count by $C_i$,
$$ C_i \sim \text{Poisson}(\lambda) $$
We are not sure what the value of the $\lambda$ parameter really is, however. Looking at the chart above, it appears that the rate might become higher late in the observation period, which is equivalent to saying that $\lambda$ increases at some point during the observations. (Recall that a higher value of $\lambda$ assigns more probability to larger outcomes. That is, there is a higher probability of many text messages having been sent on a given day.)
How can we represent this observation mathematically? Let's assume that on some day during the observation period (call it $\tau$), the parameter $\lambda$ suddenly jumps to a higher value. So we really have two $\lambda$ parameters: one for the period before $\tau$, and one for the rest of the observation period. In the literature, a sudden transition like this would be called a *switchpoint*:
$$
\lambda =
\begin{cases}
\lambda_1 & \text{if } t \lt \tau \cr
\lambda_2 & \text{if } t \ge \tau
\end{cases}
$$
If, in reality, no sudden change occurred and indeed $\lambda_1 = \lambda_2$, then the $\lambda$s posterior distributions should look about equal.
We are interested in inferring the unknown $\lambda$s. To use Bayesian inference, we need to assign prior probabilities to the different possible values of $\lambda$. What would be good prior probability distributions for $\lambda_1$ and $\lambda_2$? Recall that $\lambda$ can be any positive number. As we saw earlier, the *exponential* distribution provides a continuous density function for positive numbers, so it might be a good choice for modeling $\lambda_i$. But recall that the exponential distribution takes a parameter of its own, so we'll need to include that parameter in our model. Let's call that parameter $\alpha$.
\begin{align}
&\lambda_1 \sim \text{Exp}( \alpha ) \\\
&\lambda_2 \sim \text{Exp}( \alpha )
\end{align}
$\alpha$ is called a *hyper-parameter* or *parent variable*. In literal terms, it is a parameter that influences other parameters. Our initial guess at $\alpha$ does not influence the model too strongly, so we have some flexibility in our choice. A good rule of thumb is to set the exponential parameter equal to the inverse of the average of the count data. Since we're modeling $\lambda$ using an exponential distribution, we can use the expected value identity shown earlier to get:
$$\frac{1}{N}\sum_{i=0}^N \;C_i \approx E[\; \lambda \; |\; \alpha ] = \frac{1}{\alpha}$$
An alternative, and something I encourage the reader to try, would be to have two priors: one for each $\lambda_i$. Creating two exponential distributions with different $\alpha$ values reflects our prior belief that the rate changed at some point during the observations.
What about $\tau$? Because of the noisiness of the data, it's difficult to pick out a priori when $\tau$ might have occurred. Instead, we can assign a *uniform prior belief* to every possible day. This is equivalent to saying
\begin{align}
& \tau \sim \text{DiscreteUniform(1,70) }\\\\
& \Rightarrow P( \tau = k ) = \frac{1}{70}
\end{align}
So after all this, what does our overall prior distribution for the unknown variables look like? Frankly, *it doesn't matter*. What we should understand is that it's an ugly, complicated mess involving symbols only a mathematician could love. And things will only get uglier the more complicated our models become. Regardless, all we really care about is the posterior distribution.
We next turn to [Gen](https://probcomp.github.io/Gen/), a Julia library for performing Bayesian analysis that is undaunted by the mathematical monster we have created.
Introducing our first hammer: Gen
-----
Gen is a rapidly evolving, general-purpose probabilistic programming system developed by the [Probabilistic Computing Lab](http://probcomp.csail.mit.edu/) at MIT.
Since Gen is relatively new, documentation is lacking in certain areas, especially those that bridge the gap between beginner and hacker. One of this book's main goals is to solve that problem, and also to demonstrate why Gen is so cool.
We will model the problem above using Gen. This type of programming is called *probabilistic programming*, an unfortunate misnomer that invokes ideas of randomly-generated code and has likely confused and frightened users away from this field. The code is not random; it is probabilistic in the sense that we create probability models using programming variables as the model's components.
B. Cronin [5] has a very motivating description of probabilistic programming:
> Another way of thinking about this: unlike a traditional program, which only runs in the forward directions, a probabilistic program is run in both the forward and backward direction. It runs forward to compute the consequences of the assumptions it contains about the world (i.e., the model space it represents), but it also runs backward from the data to constrain the possible explanations. In practice, many probabilistic programming systems will cleverly interleave these forward and backward operations to efficiently home in on the best explanations.
Because of the confusion engendered by the term *probabilistic programming*, I'll refrain from using it. Instead, I'll simply say *programming*, since that's what it really is.
Gen code is easy to read. The only novel thing should be the syntax. Simply remember that we are representing the model's components ($\tau, \lambda_1, \lambda_2$ ) as variables.
### Specify the model
We'll assume the data is a consequence of the following generative model:
$$\begin{align*}
\lambda_{1} &\sim \text{Exponential}(\text{rate}=\alpha) \\
\lambda_{2} &\sim \text{Exponential}(\text{rate}=\alpha) \\
\tau &\sim \text{Uniform}(\text{low}=1,\text{high}=70) \\
\text{for } i &= 1\ldots N: \\
\lambda_i &= \begin{cases} \lambda_{1}, & \tau > i \\ \lambda_{2}, & \text{otherwise}\end{cases}\\
C_i &\sim \text{Poisson}(\text{rate}=\lambda_i)
\end{align*}$$
Happily, this model can be easily implemented using Gen and Gen's distributions:
The code below creates a new variable `λ`, but really we can think of it as a random variable: the random variable $\lambda$ from above. We assign `λ_1` or `λ_2` as the value of `λ`, depending on which side of `τ` we are on. The values of `λ` up until `τ` are `λ_1` and the values afterwards are `λ_2`.
Note that because `λ_1`, `λ_2` and `τ` are random, `λ` will be random. We are **not** fixing any variables yet.
```julia
@gen function model(ys::Vector{Int64})
# hyperparameter for the exponential distribution
α = 1.0 / mean(ys)
# average text message rate during the 'low' period
λ_1 = @trace(exponential(α), :λ_1)
# average text message rate during the 'high' period
λ_2 = @trace(exponential(α), :λ_2)
# increase in message counts after this day
τ = @trace(uniform_discrete(1, length(ys)), :τ)
# a day's message count is Poisson distributed
for i in 1:length(ys)
# According to the data, average number of messages seems to change
λ = τ > i ? λ_1 : λ_2
@trace(poisson(λ), "y-$i")
end
end;
```
Notice that the implementation is arguably very close to being a 1:1 translation of the mathematical model.
### Specify the inference algorithm
The code below will be explained in Chapter 3, but we show it here so you can see where our results come from. One can think of it as a *learning* step. The machinery being employed is called *Markov Chain Monte Carlo* (MCMC), which we also delay explaining until Chapter 3. This technique returns thousands of random variables from the posterior distributions of $\lambda_1, \lambda_2$ and $\tau$. We can plot a histogram of the random variables to see what the posterior distributions look like. Below, we collect the samples (called *traces* in the MCMC literature) into histograms.
```julia
function inference(ys::Vector{Int64}, num_iters::Int)
# Create a set of constraints fixing the
# y coordinates to the observed y values
constraints = choicemap()
for (i, y) in enumerate(ys)
constraints["y-$i"] = y
end
# Run the model, constrained by `constraints`,
# to get an initial execution trace
    (trace, _) = generate(model, (ys,), constraints)
traces = Trace[]
# Iteratively update parameters,
# using Gen's metropolis_hastings operator.
for iter=1:num_iters
(trace, _) = metropolis_hastings(trace, select(:λ_1))
(trace, _) = metropolis_hastings(trace, select(:λ_2))
(trace, _) = metropolis_hastings(trace, select(:τ))
push!(traces, trace)
end
# Return all of the history of the inference
return traces
end;
```
### Sample from the posterior
```julia
traces = inference(count_data, 100000)
λ_1s = Float64[]
λ_2s = Float64[]
τ_s = Int64[]
for trace in traces
# Read out parameters and store for rendering
(λ_1, λ_2, τ) = (trace[:λ_1], trace[:λ_2], trace[:τ])
push!(λ_1s, λ_1)
push!(λ_2s, λ_2)
push!(τ_s, τ)
end;
```
## Plot the Results
```julia
figure(figsize=(12.5, 10))
#histogram of the samples:
ax = subplot(311)
ax.set_autoscaley_on(false)
hist(λ_1s, histtype="stepfilled", bins=500, alpha=0.85,
label="posterior of \$λ_{1}\$", color="#A60628", normed=true)
legend(loc="upper left")
title("Posterior distributions of the variables
\$λ_{1}\$, \$λ_{2}\$, \$τ\$")
xlim([15, 30])
xlabel("\$λ_{1}\$ value")
ax = subplot(312)
ax.set_autoscaley_on(false)
hist(λ_2s, histtype="stepfilled", bins=100, alpha=0.85,
label="posterior of \$λ_{2}\$", color="#7A68A6", normed=true)
legend(loc="upper left")
xlim([15, 30])
xlabel("\$λ_{2}\$ value")
subplot(313)
w = 1.0 ./ (length(τ_s) .* ones(Float64, length(τ_s)))
hist(τ_s, bins=n_count_data, alpha=1,
label="posterior of τ",
color="#467821", weights=w, rwidth=0.15)
xticks(1:n_count_data)
legend(loc="upper left")
ylim([0, 0.75])
xlim([36, length(count_data)-19])
xlabel("τ (in days)")
ylabel("probability");
```
## Interpretation
Recall that Bayesian methodology returns a *distribution*. Hence we now have distributions to describe the unknown $\lambda$s and $\tau$. What have we gained? Immediately, we can see the uncertainty in our estimates: the wider the distribution, the less certain our posterior belief should be. We can also see what the plausible values for the parameters are: $\lambda_1$ is around 18 and $\lambda_2$ is around 23. The posterior distributions of the two $\lambda$s are clearly distinct, indicating that it is indeed likely that there was a change in the user's text-message behaviour.
What other observations can you make? If you look at the original data again, do these results seem reasonable?
Notice also that the posterior distributions for the $\lambda$s do not look like exponential distributions, even though our priors for these variables were exponential. In fact, the posterior distributions are not really of any form that we recognize from the original model. But that's OK! This is one of the benefits of taking a computational point of view. If we had instead done this analysis using mathematical approaches, we would have been stuck with an analytically intractable (and messy) distribution. Our use of a computational approach makes us indifferent to mathematical tractability.
Our analysis also returned a distribution for $\tau$. Its posterior distribution looks a little different from the other two because it is a discrete random variable, so it doesn't assign probabilities to intervals. We can see that near day 46, there was a 50% chance that the user's behaviour changed. Had no change occurred, or had the change been gradual over time, the posterior distribution of $\tau$ would have been more spread out, reflecting that many days were plausible candidates for $\tau$. By contrast, in the actual results we see that only three or four days make any sense as potential transition points.
## Why would I want samples from the posterior, anyways?
We will deal with this question for the remainder of the book, and it is an understatement to say that it will lead us to some amazing results. For now, let's end this chapter with one more example.
We'll use the posterior samples to answer the following question: what is the expected number of texts at day $t, \; 1 \le t \le 70$ ? Recall that the expected value of a Poisson variable is equal to its parameter $\lambda$. Therefore, the question is equivalent to *what is the expected value of $\lambda$ at time $t$*?
In the code below, let $i$ index samples from the posterior distributions. Given a day $t$, we average over all possible $\lambda_i$ for that day $t$, using $\lambda_i = \lambda_{1,i}$ if $t \lt \tau_i$ (that is, if the behaviour change has not yet occurred), else we use $\lambda_i = \lambda_{2,i}$.
```julia
figure(figsize=(12.5, 5))
# τ_s, λ_1s, λ_2s contain
# N samples from the corresponding posterior distribution
N = length(τ_s)
expected_texts_per_day = zeros(n_count_data)
for day in 1:n_count_data
# ix is a bool index of all τ samples corresponding to
# the switchpoint occurring prior to value of 'day'
ix = day .< τ_s
# Each posterior sample corresponds to a value for τ.
# for each day, that value of τ indicates whether we're "before"
# (in the λ_1 "regime") or
# "after" (in the λ_2 "regime") the switchpoint.
# by taking the posterior sample of λ1/2 accordingly, we can average
# over all samples to get an expected value for λ on that day.
# As explained, the "message count" random variable is Poisson distributed,
# and therefore lambda (the poisson parameter) is the expected value of
# "message count".
expected_texts_per_day[day] = (sum(λ_1s[ix]) + sum(λ_2s[.!ix])) / N
end
plot(1:n_count_data, expected_texts_per_day, lw=4, color="#E24A33",
label="expected number of text-messages received")
xlim(1, n_count_data)
xlabel("Day")
ylabel("Expected # text-messages")
title("Expected number of text-messages received")
ylim(0, 60)
bar(1:n_count_data, count_data, color="#348ABD", alpha=0.65,
label="observed texts per day")
legend(loc="upper left");
```
Our analysis shows strong support for believing the user's behavior did change ($\lambda_{1}$ would have been close in value to $\lambda_2$ had this not been true), and that the change was sudden rather than gradual (as demonstrated by $\tau$'s strongly peaked posterior distribution). We can speculate what might have caused this: a cheaper text-message rate, a recent weather-to-text subscription, or perhaps a new relationship. (In fact, the 45th day corresponds to Christmas, and I moved away to Toronto the next month, leaving a girlfriend behind.)
## Exercises
1. Using `λ_1s` and `λ_2s`, what is the mean of the posterior distributions of $\lambda_1$ and $\lambda_2$?
```julia
#type your code here.
```
2. What is the expected percentage increase in text-message rates? `hint:` compute the mean of `λ_1s/λ_2s`. Note that this quantity is very different from `mean(λ_1s) / mean(λ_2s)`
```julia
#type your code here.
```
3. What is the mean of $\lambda_1$ **given** that we know $\tau$ is less than 46? That is, suppose we have been given new information that the change in behaviour occurred prior to day 46. What is the expected value of $\lambda_1$ now? (You do not need to redo the Gen part. Just consider all instances where `τ_s < 46`.)
```julia
#type your code here.
```
## References
[1] Gelman, Andrew. N.p.. Web. 22 Jan 2013. [N is never large enough](http://andrewgelman.com/2005/07/31/n_is_never_larg)
[2] Norvig, Peter. 2009. [The Unreasonable Effectiveness of Data](http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35179.pdf).
[3] Jimmy Lin and Alek Kolcz. Large-Scale Machine Learning at Twitter. Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data (SIGMOD 2012), pages 793-804, May 2012, Scottsdale, Arizona.
*(Dataset metadata for the notebook above: Chapter1_Introduction/Ch1_Introduction_Gen.ipynb, repository Fifthist/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers, MIT license.)*
```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import (MultipleLocator, FormatStrFormatter,
AutoMinorLocator)
```
# Superposition of two waves in perpendicular directions
\begin{equation}
x = a \sin (2\pi f_1 t)\\
y=b \sin (2\pi f_2 t - \phi)
\end{equation}
```python
a = [10,30] # amplitude of first wave
f1 = [1,2,4,8,12,16] # frequency of first wave
b=[10,30] # amplitude of second wave
f2=[1,2,4,8,12,16] # frequency of second wave
phi=[0,np.pi/4,np.pi/2] # phase angles (0, 45 and 90 degrees)
t = np.arange(0,8.0,0.01) # time
```
## Example
### 1. Same amplitude, zero phase, different frequencies
Lissajous figures
```python
plt.figure(figsize = [6,36])
for i in range(len(f2)):
plt.subplot(len(f2),1,i+1)
ax = plt.gca()
ax.set_facecolor('k') # backgound color
ax.grid(color='xkcd:sky blue') # grid color
x = a[0]*np.sin(2*np.pi*f1[2]*t)
y = b[0]*np.sin(2*np.pi*f2[i]*t-phi[0])
plt.plot(x,y, color ='g',label='f1='+str(f1[2])+',f2='+str(f2[i]))
plt.xlabel("x",color='r',fontsize=14)
plt.ylabel("y",color='r',fontsize=14)
ax.xaxis.set_minor_locator(AutoMinorLocator()) ##
ax.yaxis.set_minor_locator(AutoMinorLocator()) ###
ax.tick_params(which='both', width=2)
ax.tick_params(which='major', length=9)
ax.tick_params(which='minor', length=4)
plt.legend()
plt.subplots_adjust(wspace = 0.5, hspace = 0.5)
plt.show()
```
### 2. Same frequency and amplitude, different phase
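Before plotting, note the closed-form shapes for this equal-frequency case; with $f_1 = f_2$ the two special phase values below give (the standard Lissajous result, stated here for reference):

$$\phi = 0:\ \ y = \frac{b}{a}x \ \text{(a straight line)}, \qquad \phi = \frac{\pi}{2}:\ \ \frac{x^2}{a^2} + \frac{y^2}{b^2} = 1 \ \text{(an ellipse, a circle when } a = b\text{)}$$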
```python
plt.figure(figsize = [6,24])
for i in range(len(phi)):
plt.subplot(len(phi),1,i+1)
ax = plt.gca()
ax.set_facecolor('xkcd:sky blue')
ax.grid(color='g')
x = a[0]*np.sin(2*np.pi*f1[2]*t)
y = b[0]*np.sin(2*np.pi*f2[2]*t-phi[i])
plt.plot(x,y, color ='purple',label='phase='+str(phi[i]*180/np.pi))
plt.xlabel("x",color='g',fontsize=14)
plt.ylabel("y",color='g',fontsize=14)
plt.legend()
plt.subplots_adjust(wspace = 0.5, hspace = 0.5)
plt.show()
```
### 3. Same frequency, different phase and amplitude
```python
plt.figure(figsize = [8,24])
for i in range(len(phi)):
plt.subplot(len(phi),1,i+1)
ax = plt.gca()
ax.grid(color='tab:brown')
x = a[0]*np.sin(2*np.pi*f1[2]*t)
y = b[1]*np.sin(2*np.pi*f2[2]*t-phi[i])
plt.plot(x,y, color ='r',label='phase='+str(phi[i]*180/np.pi))
plt.xlabel("x",color='r',fontsize=14)
plt.ylabel("y",color='r',fontsize=14)
plt.legend()
plt.subplots_adjust(wspace = 0.5, hspace = 0.5)
plt.show()
```
# Superposition of two waves in the same direction
\begin{equation}
y_1 = a \sin (2\pi f_1 t)\\
y_2=b \sin (2\pi f_2 t - \phi)
\end{equation}
## Example
### 1. Same amplitude and phase, different frequencies
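For equal amplitudes and zero phase difference, the sum-to-product identity explains the beat pattern visible in the plots below:

$$y_1 + y_2 = a\left[\sin(2\pi f_1 t) + \sin(2\pi f_2 t)\right] = 2a\,\sin\!\big(\pi (f_1 + f_2) t\big)\cos\!\big(\pi (f_1 - f_2) t\big)$$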
```python
f1=[1,2,3,4,5]
f2=[1,2,3,4,5]
plt.figure(figsize = [12,16])
for i in range(len(f2)):
plt.subplot(len(f2),1,i+1)
    ax = plt.gca() # get the current axes
ax.set_facecolor('k')
ax.grid(False)
y1 = a[0]*np.sin(2*np.pi*f1[0]*t)
y2 = a[0]*np.sin(2*np.pi*f2[i]*t-phi[0])
y=y1+y2
plt.plot(t,y,color ='tab:olive',label='f1='+str(f1[0])+',f2='+str(f2[i]))
plt.xlabel("t",color='r',fontsize=14)
plt.ylabel("y",color='r',fontsize=14)
plt.legend()
plt.subplots_adjust(wspace = 0.5, hspace = 0.5)
plt.show()
```
```python
```
| ba6341d873ccba0073742e9b90416a6a7f3a72bb | 389,107 | ipynb | Jupyter Notebook | Lissajous.ipynb | AmbaPant/NPS | 0500f39f6708388d5c3f2b8d3e5ee5e56a1f646f | [
"MIT"
]
| 1 | 2020-09-16T03:21:55.000Z | 2020-09-16T03:21:55.000Z | Lissajous.ipynb | AmbaPant/NPS | 0500f39f6708388d5c3f2b8d3e5ee5e56a1f646f | [
"MIT"
]
| null | null | null | Lissajous.ipynb | AmbaPant/NPS | 0500f39f6708388d5c3f2b8d3e5ee5e56a1f646f | [
"MIT"
]
| 2 | 2020-08-10T12:17:21.000Z | 2020-09-13T14:31:02.000Z | 1,435.819188 | 180,712 | 0.956793 | true | 1,159 | Qwen/Qwen-72B | 1. YES
2. YES | 0.955319 | 0.885631 | 0.846061 | __label__eng_Latn | 0.33651 | 0.804016 |
# Working with numerical features
We have to prepare our data before we can work with ML algorithms. In the case of numerical values there are several methods we should consider applying first, including:
- Imputation
- Handling outliers
- Feature scaling
- Feature transformation
- Binning
- Log transform

We have already seen how to handle outliers and null values, and the techniques for imputing NaN values. In this lesson we are going to focus on scaling, binning and log transformation.
We discussed previously that the scale of the features is an important consideration when building machine learning models. Briefly:
Feature magnitude matters because:
- The regression coefficients of linear models are directly influenced by the scale of the variable.
- Variables with bigger magnitude / larger value range dominate over those with smaller magnitude / value range.
- Gradient descent converges faster when features are on similar scales.
- Feature scaling helps decrease the time to find support vectors for SVMs.
- Euclidean distances are sensitive to feature magnitude.
- Some algorithms, like PCA, require the features to be centered at 0.

The machine learning models affected by the feature scale are:
- Linear and Logistic Regression
- Neural Networks
- Support Vector Machines
- KNN
- K-means clustering
- Linear Discriminant Analysis (LDA)
- Principal Component Analysis (PCA)
## Feature Scaling
Feature scaling refers to the methods or techniques used to normalize the range of independent variables in our data, or in other words, the methods to set the feature value range within a similar scale. Feature scaling is generally the last step in the data preprocessing pipeline, performed just before training the machine learning algorithms.
## Feature Scaling: Z-Score Standardization and Min-Max Scaling
- [About standardization](#About-standardization)
- [About Min-Max scaling / "normalization"](#About-Min-Max-scaling-normalization)
- [Standardization or Min-Max scaling?](#Standardization-or-Min-Max-scaling?)
- [Standardizing and normalizing - how it can be done using scikit-learn](#Standardizing-and-normalizing---how-it-can-be-done-using-scikit-learn)
- [Bottom-up approaches](#Bottom-up-approaches)
- [The effect of standardization on PCA in a pattern classification task](#The-effect-of-standardization-on-PCA-in-a-pattern-classification-task)
<br>
<br>
### About standardization
The result of **standardization** (or **Z-score normalization**) is that the features will be rescaled so that they'll have the properties of a standard normal distribution with
$\mu = 0$ and $\sigma = 1$
where $\mu$ is the mean (average) and $\sigma$ is the standard deviation from the mean; standard scores (also called ***z*** scores) of the samples are calculated as follows:
$z = \frac{x - \mu}{\sigma}$
Standardizing the features so that they are centered around 0 with a standard deviation of 1 is not only important if we are comparing measurements that have different units, but it is also a general requirement for many machine learning algorithms. Intuitively, we can think of gradient descent as a prominent example
(an optimization algorithm often used in logistic regression, SVMs, perceptrons, neural networks etc.); with features being on different scales, certain weights may update faster than others since the feature values $x_j$ play a role in the weight updates
$\Delta w_j = - \eta \frac{\partial J}{\partial w_j} = \eta \sum_i (t^{(i)} - o^{(i)})x^{(i)}_{j}$,
so that
$w_j := w_j + \Delta w_j$
where $\eta$ is the learning rate, $t$ the target class label, and $o$ the actual output.
Other intuitive examples include K-Nearest Neighbor algorithms and clustering algorithms that use, for example, Euclidean distance measures -- in fact, tree-based classifier are probably the only classifiers where feature scaling doesn't make a difference.
To quote from the [`scikit-learn`](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) documentation:
*"Standardization of a dataset is a common requirement for many machine learning estimators: they might behave badly if the individual feature do not more or less look like standard normally distributed data (e.g. Gaussian with 0 mean and unit variance)."*
<br>
<br>
<a id='About-Min-Max-scaling-normalization'></a>
### About Min-Max scaling
[[back to top](#Sections)]
An alternative approach to Z-score normalization (or standardization) is the so-called **Min-Max scaling** (often also simply called "normalization" - a common cause for ambiguities).
In this approach, the data is scaled to a fixed range - usually 0 to 1.
The cost of having this bounded range - in contrast to standardization - is that we will end up with smaller standard deviations, which can suppress the effect of outliers.
A Min-Max scaling is typically done via the following equation:
\begin{equation} X_{norm} = \frac{X - X_{min}}{X_{max}-X_{min}} \end{equation}
<br>
<br>
### Z-score standardization or Min-Max scaling?
[[back to top](#Sections)]
*"Standardization or Min-Max scaling?"* - There is no obvious answer to this question: it really depends on the application.
For example, in clustering analyses, standardization may be especially crucial in order to compare similarities between features based on certain distance measures. Another prominent example is the Principal Component Analysis, where we usually prefer standardization over Min-Max scaling, since we are interested in the components that maximize the variance (depending on the question and if the PCA computes the components via the correlation matrix instead of the covariance matrix; [but more about PCA in my previous article](http://sebastianraschka.com/Articles/2014_pca_step_by_step.html)).
However, this doesn't mean that Min-Max scaling is not useful at all! A popular application is image processing, where pixel intensities have to be normalized to fit within a certain range (i.e., 0 to 255 for the RGB color range). Also, typical neural network algorithms require data on a 0-1 scale.
<br>
<br>
## Standardizing and normalizing - how it can be done using scikit-learn
[[back to top](#Sections)]
Of course, we could make use of NumPy's vectorization capabilities to calculate the z-scores for standardization and to normalize the data using the equations that were mentioned in the previous sections. However, there is an even more convenient approach using the preprocessing module from one of Python's open-source machine learning libraries, [scikit-learn](http://scikit-learn.org).
<br>
<br>
For the following examples and discussion, we will have a look at the free "Wine" Dataset that is deposited on the UCI machine learning repository
(http://archive.ics.uci.edu/ml/datasets/Wine).
<br>
<font size="1">
**Reference:**
Forina, M. et al, PARVUS - An Extendible Package for Data
Exploration, Classification and Correlation. Institute of Pharmaceutical
and Food Analysis and Technologies, Via Brigata Salerno,
16147 Genoa, Italy.
Bache, K. & Lichman, M. (2013). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science.
</font>
The Wine dataset consists of 3 different classes where each row correspond to a particular wine sample.
The class labels (1, 2, 3) are listed in the first column, and the columns 2-14 correspond to 13 different attributes (features):
1) Alcohol
2) Malic acid
...
#### Loading the wine dataset
```python
import pandas as pd
import numpy as np
df = pd.io.parsers.read_csv(
'https://raw.githubusercontent.com/rasbt/pattern_classification/master/data/wine_data.csv',
header=None,
usecols=[0,1,2]
)
df.columns=['Class label', 'Alcohol', 'Malic acid']
df.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Class label</th>
<th>Alcohol</th>
<th>Malic acid</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>1</td>
<td>14.23</td>
<td>1.71</td>
</tr>
<tr>
<th>1</th>
<td>1</td>
<td>13.20</td>
<td>1.78</td>
</tr>
<tr>
<th>2</th>
<td>1</td>
<td>13.16</td>
<td>2.36</td>
</tr>
<tr>
<th>3</th>
<td>1</td>
<td>14.37</td>
<td>1.95</td>
</tr>
<tr>
<th>4</th>
<td>1</td>
<td>13.24</td>
<td>2.59</td>
</tr>
</tbody>
</table>
</div>
<br>
<br>
As we can see in the table above, the features **Alcohol** (percent/volume) and **Malic acid** (g/l) are measured on different scales, so that ***Feature Scaling*** is necessary prior to any comparison or combination of these data.
<br>
<br>
#### Standardization and Min-Max scaling
```python
from sklearn import preprocessing
std_scale = preprocessing.StandardScaler().fit(df[['Alcohol', 'Malic acid']])
np_std = std_scale.transform(df[['Alcohol', 'Malic acid']]) # standardized
minmax_scale = preprocessing.MinMaxScaler().fit(df[['Alcohol', 'Malic acid']])
np_minmax = minmax_scale.transform(df[['Alcohol', 'Malic acid']]) # min-max scaled
```
```python
std_scale.mean_
```
array([13.00061798, 2.33634831])
```python
type(std_scale)
```
sklearn.preprocessing._data.StandardScaler
```python
type(np_std)
```
numpy.ndarray
```python
type(minmax_scale)
```
sklearn.preprocessing._data.MinMaxScaler
```python
type(np_minmax)
```
numpy.ndarray
```python
np_std[0:5, :]
```
array([[ 1.51861254, -0.5622498 ],
[ 0.24628963, -0.49941338],
[ 0.19687903, 0.02123125],
[ 1.69154964, -0.34681064],
[ 0.29570023, 0.22769377]])
```python
np_minmax[0:5, :]
```
array([[0.84210526, 0.1916996 ],
[0.57105263, 0.2055336 ],
[0.56052632, 0.3201581 ],
[0.87894737, 0.23913043],
[0.58157895, 0.36561265]])
```python
print('Mean after standardization:\nAlcohol={:.2f}, Malic acid={:.2f}'
.format(np_std[:,0].mean() , np_std[:,1].mean()))
print('\nStandard deviation after standardization:\nAlcohol={:.2f}, Malic acid={:.2f}'
.format( np_std[:,0].std() , np_std[:,1].std()))
```
Mean after standardization:
Alcohol=-0.00, Malic acid=-0.00
Standard deviation after standardization:
Alcohol=1.00, Malic acid=1.00
```python
print('Min-value after min-max scaling:\nAlcohol={:.2f}, Malic acid={:.2f}'
.format(np_minmax[:,0].min(), np_minmax[:,1].min()))
print('\nMax-value after min-max scaling:\nAlcohol={:.2f}, Malic acid={:.2f}'
.format(np_minmax[:,0].max(), np_minmax[:,1].max()))
```
Min-value after min-max scaling:
Alcohol=0.00, Malic acid=0.00
Max-value after min-max scaling:
Alcohol=1.00, Malic acid=1.00
<br>
<br>
#### Plotting
```python
%matplotlib inline
```
```python
from matplotlib import pyplot as plt
def plot():
plt.figure(figsize=(8,6))
plt.scatter(df['Alcohol'], df['Malic acid'],
color='green', label='input scale', alpha=0.5)
plt.scatter(np_std[:,0], np_std[:,1], color='red',
label='Standardized [$N (\mu=0, \; \sigma=1)$]', alpha=0.3)
plt.scatter(np_minmax[:,0], np_minmax[:,1],
color='blue', label='min-max scaled [min=0, max=1]', alpha=0.3)
plt.title('Alcohol and Malic Acid content of the wine dataset')
plt.xlabel('Alcohol')
plt.ylabel('Malic Acid')
plt.legend(loc='upper left')
    plt.grid()  # draw a background grid
    plt.tight_layout()  # adjust spacing so everything fits nicely
plot()  # call the function we just defined
```
<br>
<br>
The plot above includes the wine datapoints on all three different scales: the input scale where the alcohol content was measured in volume-percent (green), the standardized features (red), and the normalized features (blue).
In the following plot, we will zoom in into the three different axis-scales.
A quick NumPy refresher
```python
# in NumPy, a ONE-DIMENSIONAL array has this shape
np.array([0,1,2,3,4]).shape
# 5 rows and NO columns
```
(5,)
```python
np.array([0,1,2,3,4])
```
array([0, 1, 2, 3, 4])
```python
# in NumPy, a 2D array has this shape
np.array([[0,1,2,3,4], [5,6,7,8,9]])
```
array([[0, 1, 2, 3, 4],
[5, 6, 7, 8, 9]])
```python
np.array([[0,1,2,3,4], [5,6,7,8,9]]).shape
```
(2, 5)
```python
# Important!
# a ONE-DIMENSIONAL array is not the same as a 2D array with only one row or only one column
```
```python
np.array([[0,1,2,3,4]]).shape
```
(1, 5)
```python
np.array([[0,1,2,3,4]])
```
array([[0, 1, 2, 3, 4]])
```python
np.array([[0],[1],[2],[3],[4]]).shape
```
(5, 1)
```python
np.array([[0],[1],[2],[3],[4]])
```
array([[0],
[1],
[2],
[3],
[4]])
```python
# a ONE-DIMENSIONAL array stays the same when transposed
np.array([0,1,2,3,4]).T == np.array([0,1,2,3,4])
```
array([ True, True, True, True, True])
```python
np.array([0,1,2,3,4])
```
array([0, 1, 2, 3, 4])
```python
np.array([0,1,2,3,4]).T
```
array([0, 1, 2, 3, 4])
```python
# a 2D array with a single row or a single column does get transposed
```
```python
np.array([[0,1,2,3,4]]).T
```
array([[0],
[1],
[2],
[3],
[4]])
```python
np.array([[0], [1], [2], [3], [4]]).T
```
array([[0, 1, 2, 3, 4]])
```python
# let's look at Python's zip function
# zip iterates over the tuples in parallel and creates new tuples
for x in zip(('azul', 'rojo', 'verde'), ('perro', 'gato')):
print(x)
```
('azul', 'perro')
('rojo', 'gato')
```python
# when zip receives a single tuple as its argument
for x in zip(('azul', 'rojo', 'verde')):
print(x)
```
('azul',)
('rojo',)
('verde',)
```python
```
We now continue with the preprocessing of the numerical data.
<br>
<br>
```python
fig, ax = plt.subplots(3, figsize=(6,14))
for a,d,l in zip(range(len(ax)),
                 (df[['Alcohol', 'Malic acid']].values, np_std, np_minmax), # .values returns the data as a NumPy array
('Input scale',
'Standardized [$N (\mu=0, \; \sigma=1)$]',
'min-max scaled [min=0, max=1]')
):
    # a is 0, 1, 2
    # d is df[['Alcohol', 'Malic acid']].values, np_std, np_minmax
    # l is 'Input scale', 'Standardized [$N (\mu=0, \; \sigma=1)$]', 'min-max scaled [min=0, max=1]'
for i,c in zip(range(1,4), ('red', 'blue', 'green')):
ax[a].scatter(d[df['Class label'].values == i, 0],
d[df['Class label'].values == i, 1],
alpha=0.5,
color=c,
label='Class %s' %i
)
        # i is 1, 2, 3
        # c is 'red', 'blue', 'green'
ax[a].set_title(l)
ax[a].set_xlabel('Alcohol')
ax[a].set_ylabel('Malic Acid')
ax[a].legend(loc='upper left')
ax[a].grid()
plt.tight_layout()
```
<br>
<br>
```python
# applying .values to a DataFrame
df_ejemplo = pd.DataFrame({'col1': [1,2,3], 'col2': [4,5,6]})
df_ejemplo.values
# .values returns the DataFrame's contents as a NumPy array
```
array([[1, 4],
[2, 5],
[3, 6]], dtype=int64)
## Bottom-up approaches
Of course, we can also code the equations for standardization and 0-1 Min-Max scaling "manually". However, the scikit-learn methods are still useful if you are working with test and training data sets and want to scale them equally.
E.g.,
<pre>
std_scale = preprocessing.StandardScaler().fit(X_train)
X_train = std_scale.transform(X_train)
X_test = std_scale.transform(X_test)
</pre>
Below, we will perform the calculations using "pure" Python code, and an more convenient NumPy solution, which is especially useful if we attempt to transform a whole matrix.
<br>
<br>
Just to recall the equations that we are using:
Standardization: \begin{equation} z = \frac{x - \mu}{\sigma} \end{equation}
with mean:
\begin{equation}\mu = \frac{1}{N} \sum_{i=1}^N (x_i)\end{equation}
and standard deviation:
\begin{equation}\sigma = \sqrt{\frac{1}{N} \sum_{i=1}^N (x_i - \mu)^2}\end{equation}
Min-Max scaling: \begin{equation} X_{norm} = \frac{X - X_{min}}{X_{max}-X_{min}} \end{equation}
```python
# the logic of the for loop inside a list comprehension
suelto = ['caracter. ' + p for p in 'hola']
print(suelto)
```
['caracter. h', 'caracter. o', 'caracter. l', 'caracter. a']
### Pure Python
```python
# Standardization
x = [1,4,5,6,6,2,3]
mean = sum(x)/len(x)
std_dev = (1/len(x) * sum([ (x_i - mean) ** 2 for x_i in x]))**0.5
z_scores = [(x_i - mean)/std_dev for x_i in x]
# Min-Max scaling
minmax = [(x_i - min(x))/ (max(x) - min(x)) for x_i in x]
```
<br>
<br>
### NumPy
```python
import numpy as np
# Standardization
x_np = np.asarray(x) # asarray converts the list into a NumPy array
z_scores_np = (x_np - x_np.mean()) / x_np.std()
# Min-Max scaling
np_minmax = (x_np - x_np.min()) / (x_np.max() - x_np.min())
```
<br>
<br>
### Visualization
Just to make sure that our code works correctly, let us plot the results via matplotlib.
```python
%matplotlib inline
```
```python
from matplotlib import pyplot as plt
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(nrows=2, ncols=2, figsize=(10,5)) # returns fig and a 2x2 grid of axes
y_pos = [0 for i in range(len(x))]
ax1.scatter(z_scores, y_pos, color='g')
ax1.set_title('Python standardization', color='g')
ax2.scatter(minmax, y_pos, color='g')
ax2.set_title('Python Min-Max scaling', color='g')
ax3.scatter(z_scores_np, y_pos, color='b')
ax3.set_title('Python NumPy standardization', color='b')
ax4.scatter(np_minmax, y_pos, color='b')
ax4.set_title('Python NumPy Min-Max scaling', color='b')
plt.tight_layout() # adjust the layout of the figure
for ax in (ax1, ax2, ax3, ax4):
ax.get_yaxis().set_visible(False)
ax.grid()
plt.show()
```
```python
# we see the same results, obtained more easily with the libraries
# standardized: values with zero mean and standard deviation 1
# min-max scaled: values between 0 and 1
```
<br>
<br>
## The effect of standardization on PCA in a pattern classification task
[[back to top](#Sections)]
Earlier, I mentioned the Principal Component Analysis (PCA) as an example where standardization is crucial, since it is "analyzing" the variances of the different features.
Now, let us see how the standardization affects PCA and a following supervised classification on the whole wine dataset.
In the following section, we will go through the following steps:
- Reading in the dataset
- Dividing the dataset into a separate training and test dataset
- Standardization of the features
- Principal Component Analysis (PCA) to reduce the dimensionality
- Training a naive Bayes classifier
- Evaluating the classification accuracy with and without standardization
<br>
<br>
### Reading in the dataset
[[back to top](#Sections)]
```python
import pandas as pd
df = pd.io.parsers.read_csv(
'https://raw.githubusercontent.com/rasbt/pattern_classification/master/data/wine_data.csv',
header=None,
)
```
<br>
<br>
### Dividing the dataset into a separate training and test dataset
[[back to top](#Sections)]
In this step, we will randomly divide the wine dataset into a training dataset and a test dataset where the training dataset will contain 70% of the samples and the test dataset will contain 30%, respectively.
```python
df.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
<th>6</th>
<th>7</th>
<th>8</th>
<th>9</th>
<th>10</th>
<th>11</th>
<th>12</th>
<th>13</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>1</td>
<td>14.23</td>
<td>1.71</td>
<td>2.43</td>
<td>15.6</td>
<td>127</td>
<td>2.80</td>
<td>3.06</td>
<td>0.28</td>
<td>2.29</td>
<td>5.64</td>
<td>1.04</td>
<td>3.92</td>
<td>1065</td>
</tr>
<tr>
<th>1</th>
<td>1</td>
<td>13.20</td>
<td>1.78</td>
<td>2.14</td>
<td>11.2</td>
<td>100</td>
<td>2.65</td>
<td>2.76</td>
<td>0.26</td>
<td>1.28</td>
<td>4.38</td>
<td>1.05</td>
<td>3.40</td>
<td>1050</td>
</tr>
<tr>
<th>2</th>
<td>1</td>
<td>13.16</td>
<td>2.36</td>
<td>2.67</td>
<td>18.6</td>
<td>101</td>
<td>2.80</td>
<td>3.24</td>
<td>0.30</td>
<td>2.81</td>
<td>5.68</td>
<td>1.03</td>
<td>3.17</td>
<td>1185</td>
</tr>
<tr>
<th>3</th>
<td>1</td>
<td>14.37</td>
<td>1.95</td>
<td>2.50</td>
<td>16.8</td>
<td>113</td>
<td>3.85</td>
<td>3.49</td>
<td>0.24</td>
<td>2.18</td>
<td>7.80</td>
<td>0.86</td>
<td>3.45</td>
<td>1480</td>
</tr>
<tr>
<th>4</th>
<td>1</td>
<td>13.24</td>
<td>2.59</td>
<td>2.87</td>
<td>21.0</td>
<td>118</td>
<td>2.80</td>
<td>2.69</td>
<td>0.39</td>
<td>1.82</td>
<td>4.32</td>
<td>1.04</td>
<td>2.93</td>
<td>735</td>
</tr>
</tbody>
</table>
</div>
```python
from sklearn.model_selection import train_test_split
X_wine = df.values[:,1:]
y_wine = df.values[:,0]
X_train, X_test, y_train, y_test = train_test_split(X_wine, y_wine,
test_size=0.30, random_state=12345)
```
```python
```
<br>
<br>
### Feature Scaling - Standardization
[[back to top](#Sections)]
```python
from sklearn import preprocessing
std_scale = preprocessing.StandardScaler().fit(X_train)
X_train_std = std_scale.transform(X_train)
X_test_std = std_scale.transform(X_test)
```
<br>
<br>
### Dimensionality reduction via Principal Component Analysis (PCA)
[[back to top](#Sections)]
Now, we perform a PCA on the standardized and the non-standardized datasets to transform the dataset onto a 2-dimensional feature subspace.
In a real application, a procedure like cross-validation would be done in order to find out what choice of features would yield an optimal balance between "preserving information" and "overfitting" for different classifiers. However, we will omit this step since we don't want to train a perfect classifier here, but merely compare the effects of standardization.
```python
from sklearn.decomposition import PCA
# on non-standardized data
pca = PCA(n_components=2).fit(X_train)
X_train = pca.transform(X_train)
X_test = pca.transform(X_test)
# on standardized data
pca_std = PCA(n_components = 2).fit(X_train_std)
X_train_std = pca_std.transform(X_train_std)
X_test_std = pca_std.transform(X_test_std)
```
Let us quickly visualize what our new feature subspace looks like (note that class labels are not considered in a PCA - in contrast to a Linear Discriminant Analysis - but I will add them in the plot for clarity).
```python
%matplotlib inline
```
```python
from matplotlib import pyplot as plt
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(10,4))
for l,c,m in zip(range(1,4), ('blue', 'red', 'green'), ('^', 's', 'o')):
ax1.scatter(X_train[y_train==l, 0], X_train[y_train==l, 1],
color=c,
label='class %s' %l,
alpha=0.5,
marker=m
)
for l,c,m in zip(range(1,4), ('blue', 'red', 'green'), ('^', 's', 'o')):
ax2.scatter(X_train_std[y_train==l, 0], X_train_std[y_train==l, 1],
color=c,
label='class %s' %l,
alpha=0.5,
marker=m
)
ax1.set_title('Transformed NON-standardized training dataset after PCA')
ax2.set_title('Transformed standardized training dataset after PCA')
for ax in (ax1, ax2):
ax.set_xlabel('1st principal component')
ax.set_ylabel('2nd principal component')
ax.legend(loc='upper right')
ax.grid()
plt.tight_layout()
plt.show()
```
<br>
<br>
### Training a naive Bayes classifier
[[back to top](#Sections)]
We will use a naive Bayes classifier for the classification task. If you are not familiar with it, the term "naive" comes from the assumption that all features are "independent".
All in all, it is a simple but robust classifier based on Bayes' rule
Bayes' Rule:
\begin{equation} P(\omega_j|x) = \frac{p(x|\omega_j) * P(\omega_j)}{p(x)} \end{equation}
where
- ω: class label
- *P(ω|x)*: the posterior probability
- *p(x|ω)*: the likelihood (the class-conditional probability of observing *x* given class ω)
- *P(ω)*: the prior probability of the class
and the **decision rule:**
Decide $ \omega_1 $ if $ P(\omega_1|x) > P(\omega_2|x) $ else decide $ \omega_2 $.
<br>
\begin{equation}
\Rightarrow \frac{p(x|\omega_1) * P(\omega_1)}{p(x)} > \frac{p(x|\omega_2) * P(\omega_2)}{p(x)}
\end{equation}
I don't want to get into more detail about Bayes' rule in this article, but if you are interested in a more detailed collection of examples, please have a look at the [Statistical Pattern Classification](https://github.com/rasbt/pattern_classification#statistical-pattern-recognition-examples) examples in my pattern classification repository.
```python
from sklearn.naive_bayes import GaussianNB
# on non-standardized data
gnb = GaussianNB()
fit = gnb.fit(X_train, y_train)
# on standardized data
gnb_std = GaussianNB()
fit_std = gnb_std.fit(X_train_std, y_train)
```
<br>
<br>
### Evaluating the classification accuracy with and without standardization
[[back to top](#Sections)]
```python
from sklearn import metrics
pred_train = gnb.predict(X_train)
print('\nPrediction accuracy for the training dataset')
print('{:.2%}'.format(metrics.accuracy_score(y_train, pred_train)))
pred_test = gnb.predict(X_test)
print('\nPrediction accuracy for the test dataset')
print('{:.2%}\n'.format(metrics.accuracy_score(y_test, pred_test)))
```
Prediction accuracy for the training dataset
81.45%
Prediction accuracy for the test dataset
64.81%
```python
pred_train_std = gnb_std.predict(X_train_std)
print('\nPrediction accuracy for the training dataset')
print('{:.2%}'.format(metrics.accuracy_score(y_train, pred_train_std)))
pred_test_std = gnb_std.predict(X_test_std)
print('\nPrediction accuracy for the test dataset')
print('{:.2%}\n'.format(metrics.accuracy_score(y_test, pred_test_std)))
```
Prediction accuracy for the training dataset
96.77%
Prediction accuracy for the test dataset
98.15%
As we can see, the standardization prior to the PCA definitely led to a decrease in the empirical error rate when classifying samples from the test dataset.
# Feature transformations
### Normalization and changing distribution
Monotonic feature transformation is critical for some algorithms and has no effect on others. This is one of the reasons for the increased popularity of decision trees and all of their derivative algorithms (random forests, gradient boosting). Not everyone can or wants to tinker with transformations, and these algorithms are robust to unusual distributions.
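As a quick illustration of that robustness (a minimal sketch, not part of the original text, assuming scikit-learn is available): a decision tree fitted on a positive feature and one fitted on its log transform induce the same partition of the samples, because the log is strictly monotonic, so their predictions agree.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(0)
X = rng.lognormal(size=(200, 1))            # one heavy-tailed, strictly positive feature
y = (X[:, 0] > np.median(X)).astype(int)    # toy target: above/below the median

tree_raw = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
tree_log = DecisionTreeClassifier(max_depth=3, random_state=0).fit(np.log(X), y)

# a monotonic transform preserves the ordering of the samples, so the learned
# splits partition the training data identically; this should print True
print((tree_raw.predict(X) == tree_log.predict(np.log(X))).all())
```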
There are also purely engineering reasons: `np.log` is a way of dealing with large numbers that do not fit in `np.float64`. This is an exception rather than a rule; often it's driven by the desire to adapt the dataset to the requirements of the algorithm. Parametric methods usually require a minimum of symmetric and unimodal distribution of data, which is not always given in real data. There may be more stringent requirements; recall [our earlier article about linear models](https://medium.com/open-machine-learning-course/open-machine-learning-course-topic-4-linear-classification-and-regression-44a41b9b5220).
However, data requirements are imposed not only by parametric methods; [K nearest neighbors](https://medium.com/open-machine-learning-course/open-machine-learning-course-topic-3-classification-decision-trees-and-k-nearest-neighbors-8613c6b6d2cd) will predict complete nonsense if features are not normalized e.g. when one distribution is located in the vicinity of zero and does not go beyond (-1, 1) while the other’s range is on the order of hundreds of thousands.
A simple example: suppose that the task is to predict the cost of an apartment from two variables — the distance from city center and the number of rooms. The number of rooms rarely exceeds 5 whereas the distance from city center can easily be in the thousands of meters.
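A minimal sketch of that effect (toy numbers, not a real dataset): with raw units the Euclidean distance is driven almost entirely by the metres, and the number of rooms is effectively ignored until we scale.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# columns: distance from the city centre (metres), number of rooms
apartments = np.array([[12000.0, 2.0],
                       [11900.0, 5.0],
                       [  300.0, 2.0]])

def euclidean(u, v):
    return np.sqrt(((u - v) ** 2).sum())

# raw units: a 3-room difference is invisible next to the metres
print(euclidean(apartments[0], apartments[1]))   # ~100
print(euclidean(apartments[0], apartments[2]))   # ~11700

# after standardization both features contribute on a comparable scale
scaled = StandardScaler().fit_transform(apartments)
print(euclidean(scaled[0], scaled[1]), euclidean(scaled[0], scaled[2]))
```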
The simplest transformation is Standard Scaling (or Z-score normalization):
$$ \large z= \frac{x-\mu}{\sigma} $$
Note that Standard Scaling does not make the distribution normal in the strict sense.
```python
from sklearn.preprocessing import StandardScaler
from scipy.stats import beta # beta is a type of probability distribution
from scipy.stats import shapiro
# shapiro is a test that tells you how plausible it is
# that your data come from a Gaussian distribution
import numpy as np
data = beta(1,10).rvs(1000).reshape(-1,1)
shapiro(data)
```
ShapiroResult(statistic=0.8493459820747375, pvalue=8.4536979627521e-30)
```python
# Value of the statistic, p-value
shapiro(StandardScaler().fit_transform(data))
# we reject H0, they are not Gaussian
```
ShapiroResult(statistic=0.8825300931930542, pvalue=7.698277099526105e-27)
Null hypothesis: $\sf{H_{0}}$
This is the default hypothesis. In this case: the data come from a Gaussian.
The p-value is a statistic that tells us whether the null hypothesis is likely to be true or false.
If p-value <= 0.05, then we reject $\sf{H_{0}}$
But, to some extent, it protects against outliers:
```python
prueba = np.array([1,2,3,4,5]).reshape(-1,1)
prueba
```
array([[1],
[2],
[3],
[4],
[5]])
```python
np.array([1,2,3,4,5]).shape
```
(5,)
```python
prueba.shape
```
(5, 1)
```python
np.array([1,2,3,4,5,6]).reshape(3,2)
```
array([[1, 2],
[3, 4],
[5, 6]])
```python
np.array([1,2,3,4,5,6]).reshape(-3,2)
# passing a negative number tells NumPy: "I don't know, you figure it out"
```
array([[1, 2],
[3, 4],
[5, 6]])
```python
np.array([1,2,3,4,5,6]).reshape(-7,2)
```
array([[1, 2],
[3, 4],
[5, 6]])
```python
np.array([1,2,3,4,5,6]).reshape(1,6).reshape(2,-1)
```
array([[1, 2, 3],
[4, 5, 6]])
```python
data = np.array([1,1,0,-1,2,1,2,3,-2,4,100]).reshape(-1,1).astype(np.float64)
StandardScaler().fit_transform(data)
```
array([[-0.31922662],
[-0.31922662],
[-0.35434155],
[-0.38945648],
[-0.28411169],
[-0.31922662],
[-0.28411169],
[-0.24899676],
[-0.42457141],
[-0.21388184],
[ 3.15715128]])
```python
(data - data.mean())/data.std()
```
array([[-0.31922662],
[-0.31922662],
[-0.35434155],
[-0.38945648],
[-0.28411169],
[-0.31922662],
[-0.28411169],
[-0.24899676],
[-0.42457141],
[-0.21388184],
[ 3.15715128]])
Another fairly popular option is MinMax Scaling, which brings all the points within a predetermined interval (typically (0, 1)).
$$ \large X_{norm}=\frac{X-X_{min}}{X_{max}-X_{min}} $$
```python
from sklearn.preprocessing import MinMaxScaler
MinMaxScaler().fit_transform(data)
```
array([[0.02941176],
[0.02941176],
[0.01960784],
[0.00980392],
[0.03921569],
[0.02941176],
[0.03921569],
[0.04901961],
[0. ],
[0.05882353],
[1. ]])
```python
(data - data.min()) / (data.max() - data.min())
```
array([[0.02941176],
[0.02941176],
[0.01960784],
[0.00980392],
[0.03921569],
[0.02941176],
[0.03921569],
[0.04901961],
[0. ],
[0.05882353],
[1. ]])
StandardScaling and MinMax Scaling have similar applications and are often more or less interchangeable. However, if the algorithm involves the calculation of distances between points or vectors, the default choice is StandardScaling. But MinMax Scaling is useful for visualization by bringing features within the interval (0, 255).
If we assume that some data is not normally distributed but is described by the [log-normal distribution](https://en.wikipedia.org/wiki/Log-normal_distribution), it can easily be transformed to a normal distribution:
```python
from scipy.stats import lognorm
data = lognorm(s=1).rvs(1000) # note: no random seed is being passed here
shapiro(data)
```
ShapiroResult(statistic=0.6178099513053894, pvalue=2.5223372357846707e-42)
```python
shapiro(np.log(data))
```
ShapiroResult(statistic=0.9986675977706909, pvalue=0.6666808724403381)
The lognormal distribution is suitable for describing salaries, price of securities, urban population, number of comments on articles on the internet, etc. However, to apply this procedure, the underlying distribution does not necessarily have to be lognormal; you can try to apply this transformation to any distribution with a heavy right tail. Furthermore, one can try to use other similar transformations, formulating their own hypotheses on how to approximate the available distribution to a normal. Examples of such transformations are [Box-Cox transformation](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.boxcox.html) (logarithm is a special case of the Box-Cox transformation) or [Yeo-Johnson transformation](https://gist.github.com/mesgarpour/f24769cd186e2db853957b10ff6b7a95) (extends the range of applicability to negative numbers). In addition, you can also try adding a constant to the feature — `np.log (x + const)`.
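For example, here is a minimal sketch using scikit-learn's `PowerTransformer` (the Yeo-Johnson variant, which also accepts negative values; `method='box-cox'` requires strictly positive data):
```python
from scipy.stats import lognorm, shapiro
from sklearn.preprocessing import PowerTransformer

data = lognorm(s=1).rvs(1000).reshape(-1, 1)

pt = PowerTransformer(method='yeo-johnson')  # or method='box-cox' for strictly positive data
data_transformed = pt.fit_transform(data)

print(shapiro(data))              # heavy right tail: normality strongly rejected
print(shapiro(data_transformed))  # much closer to a Gaussian
```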
# Binning
```python
import pandas as pd
fcc_survey_df = pd.read_csv('ficheros/fcc_2016_coder_survey_subset.csv', encoding='utf-8', sep=',')
fcc_survey_df[['ID.x', 'EmploymentField', 'Age', 'Income']].head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>ID.x</th>
<th>EmploymentField</th>
<th>Age</th>
<th>Income</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>cef35615d61b202f1dc794ef2746df14</td>
<td>office and administrative support</td>
<td>28.0</td>
<td>32000.0</td>
</tr>
<tr>
<th>1</th>
<td>323e5a113644d18185c743c241407754</td>
<td>food and beverage</td>
<td>22.0</td>
<td>15000.0</td>
</tr>
<tr>
<th>2</th>
<td>b29a1027e5cd062e654a63764157461d</td>
<td>finance</td>
<td>19.0</td>
<td>48000.0</td>
</tr>
<tr>
<th>3</th>
<td>04a11e4bcb573a1261eb0d9948d32637</td>
<td>arts, entertainment, sports, or media</td>
<td>26.0</td>
<td>43000.0</td>
</tr>
<tr>
<th>4</th>
<td>9368291c93d5d5f5c8cdb1a575e18bec</td>
<td>education</td>
<td>20.0</td>
<td>6000.0</td>
</tr>
</tbody>
</table>
</div>
## Fixed-width binning
### Developer age distribution
```python
fig, ax = plt.subplots()
fcc_survey_df['Age'].hist(color='#C5A9D3')
ax.set_title('Developer Age Histogram', fontsize = 12)
ax.set_xlabel('Age', fontsize=12)
ax.set_ylabel('Frequency', fontsize=12)
```
### Binning based on rounding
```
Age Range: Bin
---------------
0 - 9 : 0
10 - 19 : 1
20 - 29 : 2
30 - 39 : 3
40 - 49 : 4
50 - 59 : 5
60 - 69 : 6
... and so on
```
```python
#fcc_survey_df['Age_bin_round'] = np.array(np.floor(np.array(fcc_survey_df['Age']) / 10.))
#fcc_survey_df[['ID.x', 'Age', 'Age_bin_round']].iloc[1071:1076]
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>ID.x</th>
<th>Age</th>
<th>Age_bin_round</th>
</tr>
</thead>
<tbody>
<tr>
<th>1071</th>
<td>6a02aa4618c99fdb3e24de522a099431</td>
<td>17.0</td>
<td>1.0</td>
</tr>
<tr>
<th>1072</th>
<td>f0e5e47278c5f248fe861c5f7214c07a</td>
<td>38.0</td>
<td>3.0</td>
</tr>
<tr>
<th>1073</th>
<td>6e14f6d0779b7e424fa3fdd9e4bd3bf9</td>
<td>21.0</td>
<td>2.0</td>
</tr>
<tr>
<th>1074</th>
<td>c2654c07dc929cdf3dad4d1aec4ffbb3</td>
<td>53.0</td>
<td>5.0</td>
</tr>
<tr>
<th>1075</th>
<td>f07449fc9339b2e57703ec7886232523</td>
<td>35.0</td>
<td>3.0</td>
</tr>
</tbody>
</table>
</div>
```python
fcc_survey_df['Age_bin_round'] = (np.floor((fcc_survey_df['Age']) / 10.))
```
```python
fcc_survey_df['Age_bin_round']
```
0 2.0
1 2.0
2 1.0
3 2.0
4 2.0
...
15615 3.0
15616 2.0
15617 3.0
15618 2.0
15619 2.0
Name: Age_bin_round, Length: 15620, dtype: float64
### Binning based on custom ranges
```
Age Range : Bin
---------------
0 - 15 : 1
16 - 30 : 2
31 - 45 : 3
46 - 60 : 4
61 - 75 : 5
75 - 100 : 6
```
```python
# pd.cut creates the bins
bin_ranges = [0,15,30,45,60,75, 100]
bin_names = [1, 2, 3, 4, 5, 6]
fcc_survey_df['Age_bin_custom_range'] = pd.cut(fcc_survey_df['Age'],
bins=bin_ranges)
fcc_survey_df['Age_bin_custom_label'] = pd.cut(fcc_survey_df['Age'], bins=bin_ranges, labels = bin_names)
fcc_survey_df[['ID.x', 'Age', 'Age_bin_round',
'Age_bin_custom_range', 'Age_bin_custom_label']].iloc[1071:1076]
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>ID.x</th>
<th>Age</th>
<th>Age_bin_round</th>
<th>Age_bin_custom_range</th>
<th>Age_bin_custom_label</th>
</tr>
</thead>
<tbody>
<tr>
<th>1071</th>
<td>6a02aa4618c99fdb3e24de522a099431</td>
<td>17.0</td>
<td>1.0</td>
<td>(15, 30]</td>
<td>2</td>
</tr>
<tr>
<th>1072</th>
<td>f0e5e47278c5f248fe861c5f7214c07a</td>
<td>38.0</td>
<td>3.0</td>
<td>(30, 45]</td>
<td>3</td>
</tr>
<tr>
<th>1073</th>
<td>6e14f6d0779b7e424fa3fdd9e4bd3bf9</td>
<td>21.0</td>
<td>2.0</td>
<td>(15, 30]</td>
<td>2</td>
</tr>
<tr>
<th>1074</th>
<td>c2654c07dc929cdf3dad4d1aec4ffbb3</td>
<td>53.0</td>
<td>5.0</td>
<td>(45, 60]</td>
<td>4</td>
</tr>
<tr>
<th>1075</th>
<td>f07449fc9339b2e57703ec7886232523</td>
<td>35.0</td>
<td>3.0</td>
<td>(30, 45]</td>
<td>3</td>
</tr>
</tbody>
</table>
</div>
## Quantile based binning
```python
fcc_survey_df[['ID.x', 'Age', 'Income']].iloc[4:9]
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>ID.x</th>
<th>Age</th>
<th>Income</th>
</tr>
</thead>
<tbody>
<tr>
<th>4</th>
<td>9368291c93d5d5f5c8cdb1a575e18bec</td>
<td>20.0</td>
<td>6000.0</td>
</tr>
<tr>
<th>5</th>
<td>dd0e77eab9270e4b67c19b0d6bbf621b</td>
<td>34.0</td>
<td>40000.0</td>
</tr>
<tr>
<th>6</th>
<td>7599c0aa0419b59fd11ffede98a3665d</td>
<td>23.0</td>
<td>32000.0</td>
</tr>
<tr>
<th>7</th>
<td>6dff182db452487f07a47596f314bddc</td>
<td>35.0</td>
<td>40000.0</td>
</tr>
<tr>
<th>8</th>
<td>9dc233f8ed1c6eb2432672ab4bb39249</td>
<td>33.0</td>
<td>80000.0</td>
</tr>
</tbody>
</table>
</div>
```python
fig, ax = plt.subplots()
fcc_survey_df['Income'].hist(bins=30, color='#A9C5D3')
ax.set_title('Developer Income Histogram', fontsize=12)
ax.set_xlabel('Developer Income', fontsize=12)
ax.set_ylabel('Frequency', fontsize=12)
```
```python
quantile_list = [0, .25, .5, .75, 1.]
quantiles = fcc_survey_df['Income'].quantile(quantile_list)
quantiles
```
0.00 6000.0
0.25 20000.0
0.50 37000.0
0.75 60000.0
1.00 200000.0
Name: Income, dtype: float64
```python
fig, ax = plt.subplots()
fcc_survey_df['Income'].hist(bins=30, color='#A9C5D3')
for quantile in quantiles:
qvl = plt.axvline(quantile, color='r')
ax.legend([qvl], ['Quantiles'], fontsize=10)
ax.set_title('Developer Income Histogram with quantiles', fontsize=12)
ax.set_xlabel('Developer Income', fontsize=12)
ax.set_ylabel('Frequency', fontsize=12)
```
```python
quantile_labels = ['0-25Q', '25-50Q', '50-75Q', '75-100Q']
fcc_survey_df['Income_quantile_range'] = pd.qcut(fcc_survey_df['Income'],
q=quantile_list)
fcc_survey_df['Income_quantile_label'] = pd.qcut(fcc_survey_df['Income'],
q=quantile_list,
labels=quantile_labels)
fcc_survey_df[['ID.x', 'Age', 'Income',
'Income_quantile_range', 'Income_quantile_label']].iloc[4:9]
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>ID.x</th>
<th>Age</th>
<th>Income</th>
<th>Income_quantile_range</th>
<th>Income_quantile_label</th>
</tr>
</thead>
<tbody>
<tr>
<th>4</th>
<td>9368291c93d5d5f5c8cdb1a575e18bec</td>
<td>20.0</td>
<td>6000.0</td>
<td>(5999.999, 20000.0]</td>
<td>0-25Q</td>
</tr>
<tr>
<th>5</th>
<td>dd0e77eab9270e4b67c19b0d6bbf621b</td>
<td>34.0</td>
<td>40000.0</td>
<td>(37000.0, 60000.0]</td>
<td>50-75Q</td>
</tr>
<tr>
<th>6</th>
<td>7599c0aa0419b59fd11ffede98a3665d</td>
<td>23.0</td>
<td>32000.0</td>
<td>(20000.0, 37000.0]</td>
<td>25-50Q</td>
</tr>
<tr>
<th>7</th>
<td>6dff182db452487f07a47596f314bddc</td>
<td>35.0</td>
<td>40000.0</td>
<td>(37000.0, 60000.0]</td>
<td>50-75Q</td>
</tr>
<tr>
<th>8</th>
<td>9dc233f8ed1c6eb2432672ab4bb39249</td>
<td>33.0</td>
<td>80000.0</td>
<td>(60000.0, 200000.0]</td>
<td>75-100Q</td>
</tr>
</tbody>
</table>
</div>
# Mathematical Transformations
## Log transform
```python
fcc_survey_df['Income_log'] = np.log((1+ fcc_survey_df['Income']))
fcc_survey_df[['ID.x', 'Age', 'Income', 'Income_log']].iloc[4:9]
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>ID.x</th>
<th>Age</th>
<th>Income</th>
<th>Income_log</th>
</tr>
</thead>
<tbody>
<tr>
<th>4</th>
<td>9368291c93d5d5f5c8cdb1a575e18bec</td>
<td>20.0</td>
<td>6000.0</td>
<td>8.699681</td>
</tr>
<tr>
<th>5</th>
<td>dd0e77eab9270e4b67c19b0d6bbf621b</td>
<td>34.0</td>
<td>40000.0</td>
<td>10.596660</td>
</tr>
<tr>
<th>6</th>
<td>7599c0aa0419b59fd11ffede98a3665d</td>
<td>23.0</td>
<td>32000.0</td>
<td>10.373522</td>
</tr>
<tr>
<th>7</th>
<td>6dff182db452487f07a47596f314bddc</td>
<td>35.0</td>
<td>40000.0</td>
<td>10.596660</td>
</tr>
<tr>
<th>8</th>
<td>9dc233f8ed1c6eb2432672ab4bb39249</td>
<td>33.0</td>
<td>80000.0</td>
<td>11.289794</td>
</tr>
</tbody>
</table>
</div>
```python
income_log_mean = np.round(np.mean(fcc_survey_df['Income_log']), 2)
fig, ax = plt.subplots()
fcc_survey_df['Income_log'].hist(bins=30, color='#A9C5D3')
plt.axvline(income_log_mean, color='r')
ax.set_title('Developer Income Histogram after Log Transform',
fontsize=12)
ax.set_xlabel('Developer Income (log scale)', fontsize=12)
ax.set_ylabel('Frequency', fontsize=12)
ax.text(11.5, 450, r'$\mu$='+str(income_log_mean), fontsize=10)
```
## Box–Cox transform
```python
from scipy import stats
# get optimal lambda value from non null income values
income = np.array(fcc_survey_df['Income'])
income_clean = income[~np.isnan(income)] # keep only the non-NaN values
income_boxcox, opt_lambda = stats.boxcox(income_clean) # boxcox returns (transformed data, optimal lambda)
print('Optimal lambda value:', opt_lambda)
```
Optimal lambda value: 0.11799122497648248
```python
fcc_survey_df['Income_boxcox_lambda_0'] = stats.boxcox((1+fcc_survey_df['Income']),
lmbda=0)
fcc_survey_df['Income_boxcox_lambda_opt'] = stats.boxcox(fcc_survey_df['Income'],
lmbda=opt_lambda)
fcc_survey_df[['ID.x', 'Age', 'Income', 'Income_log',
'Income_boxcox_lambda_0', 'Income_boxcox_lambda_opt']].iloc[4:9]
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>ID.x</th>
<th>Age</th>
<th>Income</th>
<th>Income_log</th>
<th>Income_boxcox_lambda_0</th>
<th>Income_boxcox_lambda_opt</th>
</tr>
</thead>
<tbody>
<tr>
<th>4</th>
<td>9368291c93d5d5f5c8cdb1a575e18bec</td>
<td>20.0</td>
<td>6000.0</td>
<td>8.699681</td>
<td>8.699681</td>
<td>15.180667</td>
</tr>
<tr>
<th>5</th>
<td>dd0e77eab9270e4b67c19b0d6bbf621b</td>
<td>34.0</td>
<td>40000.0</td>
<td>10.596660</td>
<td>10.596660</td>
<td>21.115340</td>
</tr>
<tr>
<th>6</th>
<td>7599c0aa0419b59fd11ffede98a3665d</td>
<td>23.0</td>
<td>32000.0</td>
<td>10.373522</td>
<td>10.373522</td>
<td>20.346418</td>
</tr>
<tr>
<th>7</th>
<td>6dff182db452487f07a47596f314bddc</td>
<td>35.0</td>
<td>40000.0</td>
<td>10.596660</td>
<td>10.596660</td>
<td>21.115340</td>
</tr>
<tr>
<th>8</th>
<td>9dc233f8ed1c6eb2432672ab4bb39249</td>
<td>33.0</td>
<td>80000.0</td>
<td>11.289794</td>
<td>11.289794</td>
<td>23.637128</td>
</tr>
</tbody>
</table>
</div>
```python
income_boxcox_mean = np.round(np.mean(fcc_survey_df['Income_boxcox_lambda_opt']), 2)
fig, ax = plt.subplots()
fcc_survey_df['Income_boxcox_lambda_opt'].hist(bins=30, color='#A9C5D3')
plt.axvline(income_boxcox_mean, color='r')
ax.set_title('Developer Income Histogram after Box–Cox Transform', fontsize=12)
ax.set_xlabel('Developer Income (Box–Cox transform)', fontsize=12)
ax.set_ylabel('Frequency', fontsize=12)
ax.text(24, 450, r'$\mu$='+str(income_boxcox_mean), fontsize=10)
```
```python
```
| ffd927a3f856b31687f7b4d925186705d1f7cedb | 353,711 | ipynb | Jupyter Notebook | 4-Machine_Learning/Feature Engineering/Numericas/Practica/1_Numerical_Features - solucion.ipynb | erfederuiz/thebridge_ft_nov21 | 00f7216024ac0cf05e564eb8b1be6e888f277ea4 | [
"MIT"
]
| null | null | null | 4-Machine_Learning/Feature Engineering/Numericas/Practica/1_Numerical_Features - solucion.ipynb | erfederuiz/thebridge_ft_nov21 | 00f7216024ac0cf05e564eb8b1be6e888f277ea4 | [
"MIT"
]
| null | null | null | 4-Machine_Learning/Feature Engineering/Numericas/Practica/1_Numerical_Features - solucion.ipynb | erfederuiz/thebridge_ft_nov21 | 00f7216024ac0cf05e564eb8b1be6e888f277ea4 | [
"MIT"
]
| null | null | null | 85.43744 | 70,386 | 0.818657 | true | 15,159 | Qwen/Qwen-72B | 1. YES
2. YES | 0.92079 | 0.890294 | 0.819774 | __label__eng_Latn | 0.757305 | 0.742942 |
# Derivations and Equation Reference
This guide explains the origin and derivation of the equations used in ``LEGWORK`` functions. Let's go through each of the modules and build up to an equation for the signal-to-noise ratio for a given LISA source.
At the end of this document ([here](#Equation-to-Function-Table)) is a table that relates each of the functions in ``LEGWORK`` to an equation in this document.
## Conversions and Definitions (`utils`)
This section contains a miscellaneous collection of conversions and definitions that are useful in the later derivations. First, the chirp mass of a binary is defined as
\begin{equation}
\mathcal{M}_{c} = \frac{(m_1 m_2)^{3/5}}{(m_1 + m_2)^{1/5}},
\label{eq:chirpmass}
\end{equation}
where $m_1$ and $m_2$ are the primary and secondary mass of the binary. This term shows up in many equations and hence is easier to measure in gravitational wave data analysis than the individual component masses.
Kepler's third law allows one to convert between orbital frequency, $f_{\rm orb}$, and the semi-major axis, $a$, of a binary. For convenience we show it here
\begin{equation}
a = \left(\frac{G(m_1 + m_2)}{(2 \pi f_{\rm orb})^{2}}\right)^{1/3},\qquad f_{\rm orb} = \frac{1}{2 \pi} \sqrt{\frac{G(m_1 + m_2)}{a^3}}.
\label{eq:kepler3rd}
\end{equation}
As we deal with eccentric binaries, the different harmonic frequencies of gravitational wave emission become important. We can write that the relative power radiated into the $n^{\rm th}$ harmonic for a binary with eccentricity $e$ is <cite data-cite="Peters1963"></cite> (Eq. 20)
\begin{equation}
\begin{aligned}
g(n, e) = \frac{n^{4}}{32} & \left\{ \left[ J_{n-2}(n e)-2 e J_{n-1}(n e)+\frac{2}{n} J_{n}(n e)+2 e J_{n+1}(n e)-J_{n+2}(n e)\right]^{2}\right.\\
&\left.+\left(1-e^{2}\right)\left[J_{n-2}(n e)-2 J_{n}(n e)+J_{n+2}(n e)\right]^{2}+\frac{4}{3 n^{2}}\left[J_{n}(n e)\right]^{2}\right\},
\end{aligned}
\label{eq:g(n,e)}
\end{equation}
where $J_{n}(v)$ is the [Bessel function of the first kind](https://mathworld.wolfram.com/BesselFunctionoftheFirstKind.html). Thus, the sum of $g(n, e)$ over all harmonics gives the factor by which the gravitational wave emission is stronger for a binary of eccentricity $e$ over an otherwise identical circular binary. This enhancement factor is defined by Peters as <cite data-cite="Peters1963"></cite> (Eq. 17)
\begin{equation}
F(e) = \sum_{n = 1}^{\infty} g(n, e) = \frac{1 + (73 / 24) e^2 + (37 / 96) e^4}{(1 - e^2)^{7/2}}.
\label{eq:eccentricity_enhancement_factor}
\end{equation}
Note that $F(0) = 1$ as one would expect. A useful number to remember is that $F(0.5) \approx 5.0$, or in words, a binary with eccentricity $0.5$ loses energy to gravitational waves at a rate about $5$ times higher than a similar circular binary.
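As a quick numerical check, $g(n, e)$ and $F(e)$ can be implemented directly from the two equations above (a minimal sketch using `scipy.special.jv` for the Bessel functions), and one can verify that the harmonic sum converges to the closed-form enhancement factor, e.g. $\sum_n g(n, 0.5) \approx F(0.5) \approx 5$:
```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind, J_n

def peters_g(n, e):
    """Relative GW power in the n-th harmonic, Peters & Mathews (1963) Eq. 20."""
    bracket_1 = (jv(n - 2, n * e) - 2 * e * jv(n - 1, n * e)
                 + 2 / n * jv(n, n * e) + 2 * e * jv(n + 1, n * e)
                 - jv(n + 2, n * e))
    bracket_2 = jv(n - 2, n * e) - 2 * jv(n, n * e) + jv(n + 2, n * e)
    return n**4 / 32 * (bracket_1**2 + (1 - e**2) * bracket_2**2
                        + 4 / (3 * n**2) * jv(n, n * e)**2)

def peters_f(e):
    """Closed-form eccentricity enhancement factor F(e), Peters (1964) Eq. 17."""
    return (1 + 73 / 24 * e**2 + 37 / 96 * e**4) / (1 - e**2)**(7 / 2)

e = 0.5
harmonic_sum = sum(peters_g(n, e) for n in range(1, 200))
print(harmonic_sum, peters_f(e))  # both should be close to 5.0
```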
For binary evolution Peters and Mathews introduced two constants that are useful for calculations though without physical meaning. First, from <cite data-cite="Peters1964"></cite> (Eq. 5.9)
\begin{equation}
\beta(m_1, m_2) = \frac{64}{5} \frac{G^3}{c^5} m_1 m_2 (m_1 + m_2).
\label{eq:beta_peters}
\end{equation}
And additionally from <cite data-cite="Peters1964"></cite> (Eq. 5.11)
\begin{equation}
c_0(a_0, e_0) = a_0 \frac{(1 - e_0^2)}{e_0^{12/19}} \left(1 + \frac{121}{304} e_0^2\right)^{-870/2299}
\label{eq:c0_peters}
\end{equation}
where $a_0$ and $e_0$ are the initial semi-major axis and eccentricity respectively.
## Binary Evolution (`evol`)
### Circular binaries
For a circular binary, the evolution can be calculated analytically as the rate at which the binary shrinks can be readily integrated. This gives the semi-major axis of a circular binary as a function of time as <cite data-cite="Peters1964"></cite> (Eq. 5.9)
\begin{equation}
a(t, m_1, m_2) = [a_0^4 - 4 t \beta(m_1, m_2)]^{1/4},
\label{eq:a_over_time_circ}
\end{equation}
where $a_0$ is the initial semi-major axis and $\beta$ is defined in Eq. \eqref{eq:beta_peters}. We can use this to also get the frequency evolution by using Kepler's third law Eq. \eqref{eq:kepler3rd}.
Moreover, we can set the final semi-major axis in Eq. \eqref{eq:a_over_time_circ} equal to zero and solve for the inspiral time (<cite data-cite="Peters1964"></cite> Eq. 5.10)
\begin{equation}
t_{\rm merge, circ} = \frac{a_0^4}{4 \beta}
\label{eq:t_merge_circular}
\end{equation}
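As a rough illustration, Eqs. \eqref{eq:beta_peters} and \eqref{eq:t_merge_circular} can be evaluated directly (a minimal SI-unit sketch with arbitrarily chosen example masses and separation; `legwork.evol.get_t_merge_circ` in the table at the end computes the same quantity):
```python
import numpy as np
from scipy.constants import G, c

M_SUN = 1.989e30   # kg
AU = 1.496e11      # m
YR = 3.156e7       # s

def beta(m_1, m_2):
    """Peters (1964) Eq. 5.9 constant, SI units."""
    return 64 / 5 * G**3 / c**5 * m_1 * m_2 * (m_1 + m_2)

def t_merge_circ(a_0, m_1, m_2):
    """Merger time of a circular binary, Peters (1964) Eq. 5.10."""
    return a_0**4 / (4 * beta(m_1, m_2))

# e.g. a 0.6 + 0.6 Msun double white dwarf at a_0 = 0.01 AU
print(t_merge_circ(0.01 * AU, 0.6 * M_SUN, 0.6 * M_SUN) / YR, "yr")
```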
### Eccentric binaries
Eccentric binaries are more complicated because the semi-major axis and eccentricity both evolve simultaneously and depend on one another. These equations cannot be solved analytically and require numerical integration. Firstly, we can relate $a$ and $e$ with <cite data-cite="Peters1964"></cite> (Eq. 5.11)
\begin{equation}
a(e) = c_0 \frac{e^{12/19}}{(1 - e^2)} \left(1 + \frac{121}{304} e^2\right)^{870/2299},
\label{eq:a_from_e}
\end{equation}
where $c_0$ is defined in Eq. \eqref{eq:c0_peters} -- such that the initial conditions are satisfied. Then we can numerically integrate <cite data-cite="Peters1964"></cite> (Eq. 5.13)
\begin{equation}
\frac{\mathrm{d}e}{\mathrm{d}t} = -\frac{19}{12} \frac{\beta}{c_{0}^{4}} \frac{e^{-29 / 19}\left(1-e^{2}\right)^{3 / 2}}{\left[1+(121 / 304) e^{2}\right]^{1181 / 2299}},
\label{eq:dedt}
\end{equation}
to find $e(t)$ and use this in conjunction with Eq. \eqref{eq:a_from_e} to solve for $a(t)$, which can in turn be converted to $f_{\rm orb}(t)$.
Furthermore, we can invert this to find the inspiral time by using that $e \to 0$ when the binary merges which gives <cite data-cite="Peters1964"></cite> (Eq. 5.14)
\begin{equation}
t_{\rm merge} = \frac{12}{19} \frac{c_{0}^{4}}{\beta} \int_0^{e_0} \frac{\left[1+(121 / 304) e^{2}\right]^{1181 / 2299}}{e^{-29 / 19}\left(1-e^{2}\right)^{3 / 2}} \mathrm{d}e
\label{eq:t_merge_eccentric}
\end{equation}
For very small or very large eccentricities we can approximate this integral using the following expressions (given in unlabelled equations after <cite data-cite="Peters1964"></cite> Eq. 5.14)
\begin{equation}
t_{\rm merge,\, e^2 \ll 1} = \frac{c_0^4}{4 \beta} \cdot e_0^{48 / 19}
\end{equation}
\begin{equation}
t_{\rm merge,\, (1 - e^2) \ll 1} = \frac{768}{425} \frac{a_0^4}{4 \beta} (1 - e_0^2)^{7/2}
\end{equation}
## Gravitational Wave Strains (`strain`)
### Characteristic Strain
The characteristic strain from a binary in the $n^{\rm th}$ harmonic is defined as follows (e.g. <cite data-cite="Barack&Cutler2004"></cite> Eq. 56; <cite data-cite="Flanagan+1998"></cite> Eq. 5.1)
\begin{equation}
h_{c,n}^2 = \frac{1}{(\pi D_L)^2} \left( \frac{2 G}{c^3} \frac{\dot{E_n}}{\dot{f_n}} \right),
\label{eq:char_strain_dedf}
\end{equation}
where $D_L$ is the luminosity distance to the binary, $\dot{E}_n$ is the power radiated in the $n^{\rm th}$ harmonic and $\dot{f}_n$ is the rate of change of the $n^{\rm th}$ harmonic frequency. The power radiated in the $n^{\rm th}$ harmonic is given by <cite data-cite="Peters1963"></cite> (Eq. 19)
\begin{equation}
\dot{E}_n = \frac{32}{5} \frac{G^{4}}{c^5} \frac{m_{1}^{2} m_{2}^{2}\left(m_{1}+m_{2}\right)}{a^{5}} g(n, e),
\label{eq:edot_peters}
\end{equation}
where $m_1$ is the primary mass, $m_2$ is the secondary mass, $a$ is the semi-major axis of the binary and $e$ is the eccentricity. Using Eq. \eqref{eq:chirpmass} and Eq. \eqref{eq:kepler3rd}, we can recast Eq. \eqref{eq:edot_peters} in a form more applicable for gravitational wave detections that is a function of only the chirp mass, orbital frequency and eccentricity.
\begin{align}
\dot{E}_n &= \frac{32}{5} \frac{G^{4}}{c^5} \left(m_{1}^{2} m_{2}^{2}\left(m_{1}+m_{2}\right)\right) g(n, e) \cdot \left(\frac{(2 \pi f_{\rm orb})^{2}}{G(m_1 + m_2)}\right)^{5/3} \\
\dot{E}_n &= \frac{32}{5} \frac{G^{7/3}}{c^5} \frac{m_{1}^{2} m_{2}^{2}}{\left(m_{1}+m_{2}\right)^{2/3}} (2 \pi f_{\rm orb})^{10/3} g(n, e) \\
\dot{E}_n(\mathcal{M}_c, f_{\rm orb}, e) &= \frac{32}{5} \frac{G^{7 / 3}}{c^{5}}\left(2 \pi f_{\mathrm{orb}} \mathcal{M}_{c}\right)^{10 / 3} g(n, e)
\label{eq:edot}
\end{align}
The last term needed to define the characteristic strain in Eq. \eqref{eq:char_strain_dedf} is the rate of change of the $n^{\rm th}$ harmonic frequency. We can first apply the chain rule and note that
\begin{equation}
\dot{f}_{n} = \frac{\mathrm{d}f_{n}}{\mathrm{d} a} \frac{\mathrm{d} a}{\mathrm{d} t}.
\label{eq:fdot_chainrule}
\end{equation}
The frequency of the $n^{\rm th}$ harmonic is simply defined as $f_n = n \cdot f_{\rm orb}$ and therefore we can find an expression for $\mathrm{d} {f_{n}} / \mathrm{d} {a}$ by rearranging and differentiating Eq. \eqref{eq:kepler3rd}
\begin{align}
f_{n} &= \frac{n}{2 \pi} \sqrt{\frac{G(m_1 + m_2)}{a^3}}, \\
\frac{\mathrm{d}f_{n}}{\mathrm{d} a} &= -\frac{3 n}{4 \pi} \frac{\sqrt{G(m_1 + m_2)}}{a^{5/2}}.
\label{eq:dfda}
\end{align}
The rate at which the semi-major axis decreases is <cite data-cite="Peters1964"></cite> (Eq. 5.6)
\begin{equation}
\frac{\mathrm{d} a}{\mathrm{d} t} = -\frac{64}{5} \frac{G^{3} m_{1} m_{2}\left(m_{1}+m_{2}\right)}{c^{5} a^{3}} F(e).
\label{eq:dadt}
\end{equation}
Substituting Eq. \eqref{eq:dfda} and Eq. \eqref{eq:dadt} into Eq. \eqref{eq:fdot_chainrule} gives an expression for $\dot{f}_{n}$
\begin{align}
\dot{f}_n &= -\frac{3 n}{4 \pi} \frac{\sqrt{G(m_1 + m_2)}}{a^{5/2}} \cdot -\frac{64}{5} \frac{G^{3} m_{1} m_{2}\left(m_{1}+m_{2}\right)}{c^{5} a^{3}} F(e), \\
\dot{f}_n &= \frac{48 n}{5 \pi} \frac{G^{7/2}}{c^5} \left(m_1 m_2 (m_1 + m_2)^{3/2}\right) \frac{F(e)}{a^{11/2}},
\end{align}
which, as above with $\dot{E}_n$, we can recast using Kepler's third law and the definition of the chirp mass
\begin{align}
\dot{f}_n &= \frac{48 n}{5 \pi} \frac{G^{7/2}}{c^5} \left(m_1 m_2 (m_1 + m_2)^{3/2}\right) F(e) \cdot \left(\frac{(2 \pi f_{\rm orb})^{2}}{G(m_1 + m_2)}\right)^{11/6}, \\
&= \frac{48 n}{5 \pi} \frac{G^{5/3}}{c^5} \frac{m_1 m_2}{(m_1 + m_2)^{1/3}} \cdot (2 \pi f_{\rm orb})^{11/3} \cdot F(e), \\
\dot{f}_n(\mathcal{M}_c, f_{\rm orb}, e) &= \frac{48 n}{5 \pi} \frac{\left(G \mathcal{M}_c \right)^{5/3}}{c^5} (2 \pi f_{\rm orb})^{11/3} F(e)
\label{eq:fdot}
\end{align}
With definitions of both $\dot{E}_n$ and $\dot{f}_n$, we are now in a position to find an expression for the characteristic strain by plugging Eq. \eqref{eq:edot} and Eq. \eqref{eq:fdot} into Eq. \eqref{eq:char_strain_dedf}:
\begin{align}
h^2_{c,n} &= \frac{1}{(\pi D_L)^2} \left( \frac{2 G}{c^3} \frac{\frac{32}{5} \frac{G^{7 / 3}}{c^{5}}\left(2 \pi f_{\mathrm{orb}} \mathcal{M}_{c}\right)^{10 / 3} g(n, e)}{\frac{48 n}{5 \pi} \frac{\left(G \mathcal{M}_c\right)^{5/3}}{c^5} (2 \pi f_{\rm orb})^{11/3} F(e)} \right) \\
&= \frac{1}{(\pi D_L)^2} \left( \frac{2^{5/3} \pi^{2/3}}{3} \frac{(G \mathcal{M}_c)^{5/3}}{c^3} \frac{1}{f_{\rm orb}^{1/3}} \frac{g(n, e)}{n F(e)} \right)
\end{align}
This gives a final simplified expression for the characteristic strain amplitude of a GW source.
\begin{equation}
h_{c,n}^2(\mathcal{M}_c, D_L, f_{\rm orb}, e) = \frac{2^{5/3}}{3 \pi^{4/3}} \frac{(G \mathcal{M}_c)^{5/3}}{c^3 D_L^2} \frac{1}{f_{\rm orb}^{1/3}} \frac{g(n, e)}{n F(e)}
\label{eq:char_strain}
\end{equation}
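For a quick sense of scale, here is a minimal SI-unit sketch for a quasi-circular binary, where only the $n = 2$ harmonic contributes and $g(2, 0) = F(0) = 1$ (the chirp mass, distance and frequency below are arbitrary example values):
```python
import numpy as np
from scipy.constants import G, c

M_SUN = 1.989e30  # kg
KPC = 3.086e19    # m

def h_c_2_circular(m_chirp, d_l, f_orb):
    """Characteristic strain of a circular binary in the n = 2 harmonic (SI units)."""
    # for e = 0 and n = 2: g(n, e) / (n F(e)) = 1 / 2
    hc_sq = (2**(5 / 3) / (3 * np.pi**(4 / 3))
             * (G * m_chirp)**(5 / 3) / (c**3 * d_l**2)
             / f_orb**(1 / 3) / 2)
    return np.sqrt(hc_sq)

# e.g. a 0.5 Msun chirp-mass binary at 8 kpc with a 2 mHz orbital frequency
print(h_c_2_circular(0.5 * M_SUN, 8 * KPC, 2e-3))
```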
### Strain
The strain can be found by dividing the characteristic strain by the square root of twice the number of cycles. This is explained in <cite data-cite="Finn&Thorne2000"></cite> and in further detail in <cite data-cite="Moore+2015"></cite>. The physical reasoning behind the idea is that the binary will spend a certain amount of time in the vicinity of some frequency $f$ and cause a similar gravitational wave strain. This leads to the signal 'accumulating' and resulting in a larger signal-to-noise ratio. Therefore, the characteristic strain represents the strain measured by the detector over the duration of the mission, whilst the strain is what is emitted by the binary at each instantaneous moment.
Following this logic, we can convert between characteristic strain and strain with (e.g. <cite data-cite="Finn&Thorne2000"></cite> see text before Eq. 2.2)
\begin{equation}
h_{c, n}^2 = \left(\frac{f_n^2}{\dot{f}_n} \right) h_n^2.
\label{eq:strain-charstrain}
\end{equation}
Therefore, using this, in addition to Eq. \eqref{eq:fdot} and Eq. \eqref{eq:char_strain}, we can write an expression for the strain amplitude of gravitational waves in the $n^{\rm th}$ harmonic
\begin{align}
h_n^2 &= \left(\frac{\dot{f}_n}{f_n^2} \right) h_{c, n}^2, \\
h_n^2 &= \left(\frac{48 n}{5 \pi} \frac{\left(G \mathcal{M}_c \right)^{5/3}}{c^5} F(e) \cdot \frac{(2 \pi f_{\rm orb})^{11/3}}{n^2 f_{\rm orb}^2} \right) \left(\frac{2^{5/3}}{3 \pi^{4/3}} \frac{(G \mathcal{M}_c)^{5/3}}{c^3 D_L^2} \frac{1}{n f_{\rm orb}^{1/3}} \frac{g(n, e)}{F(e)} \right),
\end{align}
This gives a final simplified expression for the strain amplitude of a GW source.
\begin{equation}
h_n^2(\mathcal{M}_c, f_{\rm orb}, D_L, e) = \frac{2^{28/3}}{5} \frac{(G \mathcal{M}_c)^{10/3}}{c^8 D_L^2} \frac{g(n, e)}{n^2} \left(\pi f_{\rm orb} \right)^{4/3}
\label{eq:strain}
\end{equation}
### Amplitude modulation for orbit averaged sources
Because the LISA detectors are not stationary and instead follow an Earth-trailing orbit, the antenna pattern of LISA is not isotropically distributed or stationary. For sources that have a known position, inclination, and polarisation, we can consider the amplitude modulation of the strain due to the average motion of LISA's orbit. We closely follow the results of <cite data-cite="Cornish2003"></cite> to write down the amplitude modulation as
\begin{equation}
A_{\rm{mod}}^{2}=\frac{1}{2} \left[\left(1+\cos ^{2} \iota\right)^{2}\left\langle F_{+}^{2}\right\rangle+4 \cos ^{2} \iota\left\langle F_{\times}^{2}\right\rangle\right],
\label{eq:amp_mod}
\end{equation}
where $\left\langle F_{+}^{2}\right\rangle$ and $\left\langle F_{\times}^{2}\right\rangle$, the orbit-averaged detector responses, are defined as
\begin{equation}
\left \langle F_{+}^{2} \right\rangle = \frac{1}{4}\big(\cos ^{2} 2 \psi\left\langle D_{+}^{2}\right\rangle -\sin 4 \psi\left\langle D_{+} D_{\times}\right\rangle +\sin ^{2} 2 \psi\left\langle D_{\times}^{2}\right\rangle\big),
\label{eq:response_fplus}
\end{equation}
\begin{equation}
\left\langle F_{\times}^{2} \right\rangle = \frac{1}{4}\big(\cos^{2} 2 \psi \left \langle D_{\times}^{2} \right\rangle +\sin 4 \psi \left \langle D_{+} D_{\times} \right\rangle +\sin ^{2} 2 \psi \left \langle D_{+}^{2} \right\rangle \big),
\label{eq:response_fcross}
\end{equation}
and
\begin{equation}
\left\langle D_{+} D_{\times} \right\rangle = \frac{243}{512} \cos \theta \sin 2 \phi \left(2 \cos ^{2} \phi-1\right) \left(1+\cos ^{2} \theta\right),
\label{eq:d_plus_cross}
\end{equation}
\begin{equation}
\left\langle D_{\times}^{2} \right\rangle = \frac{3}{512}\big(120 \sin ^{2} \theta +\cos ^{2} \theta + 162 \sin ^{2} 2 \phi \cos ^{2} \theta\big),
\label{eq:d_cross}
\end{equation}
\begin{equation}
\left\langle D_{+}^{2} \right\rangle = \frac{3}{2048}\big[487+158 \cos ^{2} \theta+7 \cos ^{4} \theta -162 \sin ^{2} 2 \phi\left(1+\cos ^{2} \theta\right)^{2}\big].
\label{eq:d_plus}
\end{equation}
In the equations above, the inclination is given by $\iota$, the right ascension and declination are given by $\phi$ and $\theta$ respectively, and the polarisation is given by $\psi$.
The orbital motion of LISA smears the source frequency by roughly $10^{-4}\,\rm{mHz}$ due to the antenna pattern changing as the detector orbits, the Doppler shift from the motion, and the phase modulation from the $+$ and $\times$ polarisations in the antenna pattern. Generally, the modulation reduces the strain amplitude because the smearing in frequency reduces the amount of signal build up at the true source frequency.
We note that since the orbit averaging is carried out in Fourier space, this requires the frequency to be monochromatic and thus is only implemented in `LEGWORK` for quasi-circular binaries. We also note that since the majority of the calculations in `LEGWORK` are carried out for the full position, polarisation, and inclination averages, we place a pre-factor of $5/4$ on the amplitude modulation in the software implementation to undo the factor of $4/5$ which arises from the averaging of Equations \eqref{eq:response_fplus} and \eqref{eq:response_fcross}.
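A direct transcription of Eqs. \eqref{eq:amp_mod}-\eqref{eq:d_plus} into a small numpy helper (a sketch for intuition only; all angles in radians) could look like:
```python
import numpy as np

def amplitude_modulation(theta, phi, psi, inc):
    """Orbit-averaged amplitude modulation A_mod (Cornish 2003); angles in radians.

    Note: this transcribes the equations exactly as written above, i.e. without
    the extra 5/4 pre-factor used in the LEGWORK implementation."""
    d_plus_cross = (243 / 512 * np.cos(theta) * np.sin(2 * phi)
                    * (2 * np.cos(phi)**2 - 1) * (1 + np.cos(theta)**2))
    d_cross = 3 / 512 * (120 * np.sin(theta)**2 + np.cos(theta)**2
                         + 162 * np.sin(2 * phi)**2 * np.cos(theta)**2)
    d_plus = 3 / 2048 * (487 + 158 * np.cos(theta)**2 + 7 * np.cos(theta)**4
                         - 162 * np.sin(2 * phi)**2 * (1 + np.cos(theta)**2)**2)
    F_plus_sq = 0.25 * (np.cos(2 * psi)**2 * d_plus
                        - np.sin(4 * psi) * d_plus_cross
                        + np.sin(2 * psi)**2 * d_cross)
    F_cross_sq = 0.25 * (np.cos(2 * psi)**2 * d_cross
                         + np.sin(4 * psi) * d_plus_cross
                         + np.sin(2 * psi)**2 * d_plus)
    a_mod_sq = 0.5 * ((1 + np.cos(inc)**2)**2 * F_plus_sq
                      + 4 * np.cos(inc)**2 * F_cross_sq)
    return np.sqrt(a_mod_sq)
```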
## Sensitivity Curves (`psd`)
### LISA
For the LISA sensitivity curve, we follow the equations from <cite data-cite="Robson+2019"></cite>, which we list here for your convenience.
The *effective* LISA noise power spectral density is defined as (<cite data-cite="Robson+2019"></cite> Eq. 2)
\begin{equation}
S_{\rm n}(f) = \frac{P_n(f)}{\mathcal{R}(f)} + S_c(f),
\end{equation}
where $P_{\rm n}(f)$ is the power spectral density of the detector noise and $\mathcal{R}(f)$ is the sky and polarisation averaged signal response function of the instrument. Alternatively if we expand out $P_n(f)$, approximate $\mathcal{R}(f)$ and simplify we find (<cite data-cite="Robson+2019"></cite> Eq. 1)
\begin{equation}
S_{\rm n}(f) = \frac{10}{3 L^2} \left(P_{\rm OMS}(f) + \frac{4 P_{\rm acc}(f)}{(2 \pi f)^4} \right) \left(1 + \frac{6}{10} \left(\frac{f}{f_*} \right)^2 \right) + S_c(f)
\label{eq:LISA_Sn}
\end{equation}
where $L = 2.5\,\mathrm{Gm}$ is the detector arm length, $f^* = 19.09 \, \mathrm{mHz}$ is the response frequency,
\begin{equation}
P_{\rm OMS}(f) = \left(1.5 \times 10^{-11} \mathrm{m}\right)^{2}\left(1+\left(\frac{2 \mathrm{mHz}}{f}\right)^{4}\right) \mathrm{Hz}^{-1}
\end{equation}
is the single-link optical metrology noise (<cite data-cite="Robson+2019"></cite> Eq. 10),
\begin{equation}
P_{\rm acc}(f) = \left(3 \times 10^{-15} \mathrm{ms}^{-2}\right)^{2}\left(1+\left(\frac{0.4 \mathrm{mHz}}{f}\right)^{2}\right)\left(1+\left(\frac{f}{8 \mathrm{mHz}}\right)^{4}\right) \mathrm{Hz}^{-1}
\end{equation}
is the single test mass acceleration noise (<cite data-cite="Robson+2019"></cite> Eq. 11) and
\begin{equation}
S_{c}(f)=A f^{-7 / 3} e^{-f^{\alpha}+\beta f \sin (\kappa f)}\left[1+\tanh \left(\gamma\left(f_{k}-f\right)\right)\right] \mathrm{Hz}^{-1}
\end{equation}
is the galactic confusion noise (<cite data-cite="Robson+2019"></cite> Eq. 14), where the amplitude $A$ is fixed as $9 \times 10^{-45}$ and the various parameters change over time:
<center>
| parameter | 6 months | 1 year | 2 years | 4 years|
|:-:|:-:|:-:|:-:|:-:|
|$\alpha$ | 0.133 | 0.171 | 0.165 | 0.138 |
|$\beta$ | 243 | 292 | 299 | -221 |
|$\kappa$ | 482 | 1020 | 611 | 521 |
|$\gamma$ | 917 | 1680 | 1340 | 1680 |
|$f_{k}$ | 0.00258 | 0.00215 | 0.00173 | 0.00113 |
</center>
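The pieces above can be assembled directly; below is a minimal SI-unit sketch of Eq. \eqref{eq:LISA_Sn}, here using the 4-year confusion-noise parameters from the table:
```python
import numpy as np

L = 2.5e9          # arm length [m]
F_STAR = 19.09e-3  # response frequency [Hz]

def p_oms(f):
    """Single-link optical metrology noise."""
    return (1.5e-11)**2 * (1 + (2e-3 / f)**4)

def p_acc(f):
    """Single test mass acceleration noise."""
    return (3e-15)**2 * (1 + (0.4e-3 / f)**2) * (1 + (f / 8e-3)**4)

def s_c(f, A=9e-45, alpha=0.138, beta=-221, kappa=521, gamma=1680, f_k=0.00113):
    """Galactic confusion noise, 4-year mission parameters (Robson+ 2019 Eq. 14)."""
    return (A * f**(-7 / 3) * np.exp(-f**alpha + beta * f * np.sin(kappa * f))
            * (1 + np.tanh(gamma * (f_k - f))))

def s_n(f):
    """Effective LISA noise PSD, Robson+ 2019 Eq. 1."""
    return (10 / (3 * L**2) * (p_oms(f) + 4 * p_acc(f) / (2 * np.pi * f)**4)
            * (1 + 0.6 * (f / F_STAR)**2) + s_c(f))

f = np.logspace(-5, 0, 1000)
h_c_noise = np.sqrt(f * s_n(f))  # characteristic strain of the noise
```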
### TianQin
We additionally allow instruments other than LISA. The TianQin sensitivity curve is built in, for which we use the power spectral density given in <cite data-cite="Huang+2020"></cite> Eq. 13.
\begin{equation}
\begin{split}
S_{N}(f) &= \frac{10}{3 L^{2}}\left[\frac{4 S_{a}}{(2 \pi f)^{4}}\left(1+\frac{10^{-4} H z}{f}\right)+S_{x}\right] \\
& \times\left[1+0.6\left(\frac{f}{f_{*}}\right)^{2}\right]
\end{split}
\label{eq:tianqin}
\end{equation}
Note that this expression includes an extra factor of 10/3 compared to Eq. 13 in <cite data-cite="Huang+2020"></cite>, since <cite data-cite="Huang+2020"></cite> absorbs the factor into the waveform, but we instead follow the same convention as <cite data-cite="Robson+2019"></cite> for consistency and include it in this 'effective' PSD function instead.
## Signal-to-Noise Ratios for a 6-link (3-arm) LISA (`snr`)
Please note that this section draws heavily from <cite data-cite="Flanagan+1998"></cite> Section II C. We go through the same derivations here in more detail than in a paper and hopefully help clarify all of the different stages.
### Defining the general SNR
In order to calculate the signal to noise ratio for a given source of gravitational waves (GWs) in the LISA detector, we need to consider the following parameters:
- position of the source on the sky: ($\theta$, $\phi$)
- direction from the source to the detector: ($\iota$, $\beta$)
- orientation of the source, which fixes the polarisation of the GW: $\psi$
- the distance from the source to the detector: $D_L$
Then, assuming a matched filter analysis of the GW signal $s(t) + n(t)$ (where $s(t)$ is the signal and $n(t)$ is the noise), which relies on knowing the shape of the signal, the signal to noise ratio, $\rho$, is given in the frequency domain as
\begin{equation}
\rho^2(D_L, \theta, \phi, \psi, \iota, \beta) = \frac{\langle s(t)^{\star}s(t)\rangle}{\langle n(t)^{\star}n(t)\rangle} = 2 \int_{-\infty}^{+\infty} \frac{|\tilde{s}(f)|^2}{P_{\rm n}(f)} df = 4 \int_0^{\infty} \frac{|\tilde{s}(f)|^2}{P_{\rm n}(f)} df,
\label{eq:snr_general_start}
\end{equation}
where $\tilde{s}(f)$ is the Fourier transform of the signal, $s(t)$, and $P_{\rm n}(f)$ is the one-sided power spectral density of the noise, defined as $\langle n(t)^{\star}n(t)\rangle = \int_0^{\infty} \frac{1}{2}P_{\rm n}(f) df$ (c.f. <cite data-cite="Robson+2019"></cite> Eq. 2). Here, $\tilde{s}(f)$ is implicitly also dependent on $D_L, \theta, \phi, \psi, \iota,$ and $\beta$ as
\begin{equation}
|\tilde{s}(f)|^2 = |F_+(\theta, \phi, \psi)\tilde{h}_+(t, D_L, \iota, \beta) + F_{\times}(\theta, \phi, \psi)\tilde{h}_{\times}(t, D_L, \iota, \beta)|^2,
\label{eq:signal}
\end{equation}
where $F_{+,\times}$ are the 'plus' and 'cross' antenna patterns of the LISA detector to the 'plus' and 'cross' strains, $h_{+,\times}$. Note that, throughout, any parameter written with the subscript $x_{+,\times}$ refers to both $x_{+}$ and $x_{\times}$.
In LISA's case, when averaged over all angles and polarisations, the antenna patterns are orthogonal thus $\langle F_+ F_{\times}\rangle = 0$. This means we can rewrite Eq. \ref{eq:signal} as
\begin{equation}
|\tilde{s}(f)|^2 = |F_+(\theta, \phi, \psi)\tilde{h}_+(t, D_L, \iota, \beta)|^2 + |F_{\times}(\theta, \phi, \psi)\tilde{h}_{\times}(t, D_L, \iota, \beta)|^2,
\end{equation}
which can then be applied to Eq. \eqref{eq:snr_general_start} as
\begin{equation}
\rho^2(D_L, \theta, \phi, \psi, \iota, \beta) = 4 \int_0^{\infty} \frac{|F_+\tilde{h}_+|^2 + |F_{\times}\tilde{h}_{\times}|^2}{P_{\rm n}(f)} df.
\label{eq:snr_general_simpler}
\end{equation}
### Average over position and polarisation
Now, we can consider averaging over different quantities. In particular, we can average over the sky position and polarisation as
\begin{equation}
\label{eq:position_orientation_ave}
\langle \rho \rangle^2_{\theta,\phi,\psi} = 4 \int_0^{\infty} df \int \frac{d\Omega_{\theta,\phi}}{4\pi} \int \frac{d\psi}{\pi} \frac{|F_+(\theta,\phi,\psi)\,\tilde{h}_+(\iota,\beta)|^2 + |F_{\times}(\theta,\phi,\psi)\,\tilde{h}_{\times}(\iota,\beta)|^2}{P_{\rm n}(f)}.
\end{equation}
From <cite data-cite="Robson+2019"></cite>, we can write the position and polarisation average of the signal response function of the instrument, $\mathcal{R}$, as
\begin{equation}
\label{eq:response}
\mathcal{R} = \langle F_+F^{\star}_+ \rangle = \langle F_{\times}F^{\star}_{\times} \rangle, \,\,\rm{where}\,\,
\langle F_{+,\times}F^{\star}_{+,\times} \rangle = \int \frac{d\Omega_{\theta,\phi}}{4 \pi} \int \frac{d\psi}{\pi} |F_{+,\times}|^2.
\end{equation}
Combining Eq. \eqref{eq:position_orientation_ave} and Eq. \eqref{eq:response}, we then find
\begin{equation}
\label{eq:averaged_antenna_simp}
\langle \rho \rangle^2_{\theta,\phi,\psi} = 4 \int_0^{\infty} df \mathcal{R}(f)\, \left(\frac{|\tilde{h}_+|^2 + |\tilde{h}_{\times}|^2}{P_{\rm n}(f)}\right)
\end{equation}
Note that this is written in <cite data-cite="Flanagan+1998"></cite> for the LIGO response function which is $\mathcal{R} = \langle F_{+,\times} \rangle ^2 = 1/5$.
### Average over orientation
Now, we can average over the orientation of the source: $(\iota, \beta)$, noting that the averaging is independent of the distance $D_L$. Then we can rewrite $|\tilde{h}_+|^2 + |\tilde{h}_{\times}|^2$ in terms of two functions $|\tilde{H}_+|^2$ and $|\tilde{H}_{\times}|^2$, where $\tilde{h}_{+,\times} = \tilde{H}_{+,\times}/D_L$. Then, averaging over the source direction gives
\begin{equation}
\label{eq:averaged_all}
\langle \rho \rangle^2_{(\theta,\phi,\psi),(\iota,\beta)} = \frac{4}{D_L^2} \int_0^{\infty} df \mathcal{R}(f)\,\int \frac{d\Omega_{\iota,\beta}}{4 \pi} \frac{|\tilde{H}_+|^2 + |\tilde{H}_{\times}|^2}{P_{\rm n}(f)},
\end{equation}
where we would like to express $\tilde{H}_{+,\times}(f)^2$ in terms of the energy spectrum of the GW. To do this, we note that the local energy flux of GWs at the detector is given by (e.g. <cite data-cite="Press&Thorne1972"></cite> Eq. 6)
\begin{equation}
\label{eq:energy_flx}
\frac{dE}{dAdt} = \frac{1}{16\pi} \overline{\left[\left(\frac{dh_{+}}{dt}\right)^2 + \left(\frac{dh_{\times}}{dt}\right)^2\right]},
\end{equation}
where the bar indicates an average over several cycles of the wave which is appropriate for LISA sources. We can transform Eq. \eqref{eq:energy_flx} using Parseval's theorem, where we can write
\begin{align}
\int_{-\infty}^{+\infty}dt\int dA \frac{dE}{dAdt} & = \int_{-\infty}^{+\infty}dt\int dA \frac{1}{16\pi} \overline{\left[\left(\frac{dh_{+}}{dt}\right)^2 + \left(\frac{dh_{\times}}{dt}\right)^2\right]} \\
& = \int_{-\infty}^{+\infty}df \int dA \frac{1}{16\pi} \Big[\big((-2\pi if)|\tilde{h}_{+}|\big)^2 + \big((-2\pi if)|\tilde{h}_{\times}|\big)^2\Big] \\
& = \int_{-\infty}^{+\infty}df \int dA \frac{1}{16\pi} (2\pi f)^2 \left(|\tilde{h}_{+}|^2 + |\tilde{h}_{\times}|^2 \right) \\
& = \int_{-\infty}^{+\infty}df \int dA \frac{\pi f^2}{4} \left(|\tilde{h}_{+}|^2 + |\tilde{h}_{\times}|^2 \right) \\
& = \int_{0}^{\infty}df \int dA \frac{\pi f^2}{2} \left(|\tilde{h}_{+}|^2 + |\tilde{h}_{\times}|^2 \right).
\label{eq:Parseval}
\end{align}
Note that we perform a Fourier transform of the square of the time derivatives in the second line. Now, since $A = D_L^2 \Omega$ and $|\tilde{h}_{+,\times}|^2 = |\tilde{H}_{+,\times}|^2 / D_L^2$, we know
\begin{equation}
\label{eq:little_h_to_big}
|\tilde{h}_{+,\times}|^2 dA = |\tilde{H}_{+,\times}|^2 d\Omega_{\iota,\beta},
\end{equation}
then we can write Eq. \eqref{eq:Parseval} in terms of $|H_{+,\times}|^2$ as
\begin{align}
\int_{-\infty}^{+\infty}dt\int dA \frac{dE}{dAdt} & = \int_{0}^{\infty}df \int dA \frac{\pi f^2}{2} \left(|\tilde{h}_{+}|^2 + |\tilde{h}_{\times}|^2 \right) \\
& = \int_{0}^{\infty}df \frac{\pi f^2}{2} \int d\Omega \left(|\tilde{H}_{+}|^2 + |\tilde{H}_{\times}|^2 \right).
\label{eq:Parseval_2}
\end{align}
We can note that by using Eq. \eqref{eq:little_h_to_big} and performing a Fourier transform we also have that
\begin{equation}
\int_{-\infty}^{+\infty}dt\int dA \frac{dE}{dAdt} = \int_{0}^{\infty}df \int d\Omega \frac{dE}{d\Omega df}.
\label{eq:deriv_relation}
\end{equation}
From inspection of Eq. \eqref{eq:Parseval_2} and Eq. \eqref{eq:deriv_relation}, we can write the spectral energy flux as
\begin{equation}
\int d\Omega \frac{dE}{d\Omega df} = \frac{\pi f^2}{2} \int d\Omega \left(|\tilde{H}_{+}|^2 + |\tilde{H}_{\times}|^2 \right) .
\label{eq:se_flux}
\end{equation}
### Fully averaged SNR equation
We are now in a position to write an expression for the fully averaged SNR. Let's take Eq. \eqref{eq:se_flux} and apply it to Eq. \eqref{eq:averaged_all}
\begin{equation}
\langle \rho \rangle^2_{(\theta,\phi,\psi),(\iota,\beta)} = \frac{4}{D_L^2} \int_0^{\infty} df \frac{1}{P_{\rm n}(f)/\mathcal{R}(f)} \int \frac{d\Omega}{4\pi} \frac{dE}{d\Omega df} \frac{2}{\pi f^2}.
\end{equation}
This simplifies nicely to
\begin{equation}
\langle \rho \rangle^2 = \frac{2}{(\pi D_L)^2}\int_0^{\infty}df \frac{dE}{df}\frac{1}{f^2 P_{\rm n}(f)/\mathcal{R}(f)}.
\end{equation}
Finally, noting that $dE/df = dE/dt \times dt/df = \dot{E}/\dot{f}$, we can use the definition of the characteristic strain from Eq. \eqref{eq:char_strain_dedf} (and use $c=G=1$),
\begin{equation}
h_{c}^2 = \frac{1}{(\pi D_L)^2} \left(\frac{2\dot{E}}{\dot{f}}\right),
\end{equation}
to finish up our position, direction, and orientation/polarisation averaged SNR as
\begin{equation}
\langle \rho \rangle^2_{(\theta,\phi,\psi),(\iota,\beta)} = \int_0^{\infty}df \frac{h_{c}^2}{f^2P_{\rm n}(f)/\mathcal{R}(f)} = \int_0^{\infty}df \frac{h_{c}^2}{f^2 S_{\rm n}(f)},
\label{eq:snr_finished_circ}
\end{equation}
where we have used that the effective power spectral density of the noise is defined as $S_{\rm n}(f) = P_{\rm n}(f) / \mathcal{R}(f)$. Note that this definition is the sensitivity for a 6-link (3-arm) LISA detector in the long wavelength limit, which is appropriate for stellar mass binary LISA sources.
It is also important to note that this is only the SNR for a circular binary for which we need only consider the $n = 2$ harmonic. In the general case, a binary could be eccentric and requires a sum over *all* harmonics. Thus we can generalise Eq. \eqref{eq:snr_finished_circ} to eccentric binaries with
\begin{equation}
\langle \rho \rangle^2_{(\theta,\phi,\psi),(\iota,\beta)} = \sum_{n = 1}^{\infty} \langle \rho_n \rangle^2_{(\theta,\phi,\psi),(\iota,\beta)} = \sum_{n = 1}^{\infty} \int_0^{\infty} d f_n \frac{h_{c, n}^2}{f_n^2 S_{\rm n}(f_n)},
\label{eq:snr_general}
\end{equation}
where $f_n = n \cdot f_{\rm orb}$ (with $n$ being the harmonic and $f_{\rm orb}$ the orbital frequency), $h_{c, n}$ is defined in Eq. \eqref{eq:char_strain} and $S_{\rm n}$ in Eq. \eqref{eq:LISA_Sn}.
### Different SNR approximations
Although Eq. \eqref{eq:snr_general} can be used for every binary, it can be useful to consider different cases in which we can avoid unnecessary sums and integrals. There are four possible cases, for which we can use increasingly simple expressions for the signal-to-noise ratio: binaries can be circular or eccentric, and stationary or evolving in frequency space.
- Circular binaries emit only in the $n=2$ harmonic and so the sum over harmonics can be removed
- Stationary binaries have $f_{n, i} \approx f_{n, f}$ and so the small interval allows one to approximate the integral
We refer to non-stationary binaries as 'evolving' here though many also use 'chirping'.
For an evolving and eccentric binary, no approximation can be made and the SNR is found using Eq. \eqref{eq:snr_general}.
For an evolving and circular binary, the sum can be removed and so the SNR is found as
\begin{equation}
\rho^2_{\rm circ, evol} = \int_{f_{2, i}}^{f_{2, f}} \frac{h_{c, 2}^2}{f_2^2 S_{\rm n}(f_2)} \mathrm{d}{f}
\label{eq:snr_chirp_circ}
\end{equation}
For a stationary and eccentric binary we can approximate the integral.
\begin{align}
\rho^2_{\rm ecc, stat} &= \sum_{n=1}^{\infty} \lim_{\Delta f \to 0} \int_{f_{n}}^{f_{n} + \Delta f_n} \frac{h_{c, n}^2}{f_n^2 S_{\rm n}(f_n)} \mathrm{d}{f_n}, \\
&= \sum_{n=1}^{\infty} \frac{\Delta f_n \cdot h_{c, n}^2}{f_n^2 S_{\rm n}(f_n)}, \\
&= \sum_{n=1}^{\infty} \frac{\dot{f}_n \Delta T \cdot h_{c, n}^2}{f_n^2 S_{\rm n}(f_n)}, \\
&= \sum_{n=1}^{\infty} \left(\frac{\dot{f}_n}{f_n^2} h_{c, n}^2 \right) \frac{T_{\rm obs}}{S_{\rm n}(f_n)}, \\
\rho^2_{\rm ecc, stat} &= \sum_{n=1}^{\infty} \frac{h_{n}^2 T_{\rm obs}}{S_{\rm n}(f_n)},
\label{eq:snr_stat_ecc}
\end{align}
where we have applied Eq. \eqref{eq:strain-charstrain} to convert between strains and labelled $\Delta T = T_{\rm obs}$. Finally, for a stationary and circular binary the signal-to-noise ratio is simply
\begin{equation}
\rho^2_{\rm circ, stat} = \frac{h_2^2 T_{\rm obs}}{S_{\rm n}(f_2)}
\label{eq:snr_stat_circ}
\end{equation}
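In practice, ``LEGWORK`` selects and evaluates the appropriate one of these expressions for you through `legwork.source.Source.get_snr()` (see the table below). As a rough usage sketch, noting that the keyword names and units shown here are illustrative assumptions rather than a checked API reference:
```python
import numpy as np
import astropy.units as u
from legwork.source import Source

# a hypothetical quasi-circular double white dwarf
# (keyword names below are illustrative assumptions, not a checked API reference)
source = Source(m_1=np.array([0.6]) * u.Msun,
                m_2=np.array([0.6]) * u.Msun,
                ecc=np.array([0.0]),
                dist=np.array([8.0]) * u.kpc,
                f_orb=np.array([2e-3]) * u.Hz)
snr = source.get_snr()
```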
That's all for the derivations of the equations! If you are confused by something or think there is a mistake please feel free to [open an issue](https://github.com/TeamLEGWORK/LEGWORK/issues/new) on GitHub.
Continue reading for the function table and references!
## Equation to Function Table
The following table gives a list of the functions in the modules and which equation numbers in this document that they come from.
| Quantity | Equation | Function |
|:--------:|:--------:|:--------:|
|$\mathcal{M}_c$ | \ref{eq:chirpmass} | [legwork.utils.chirp_mass()](../api/legwork.utils.chirp_mass.rst) |
|$a$ | \ref{eq:kepler3rd} | [legwork.utils.get_a_from_forb()](../api/legwork.utils.get_a_from_f_orb.rst) |
|$f_{\rm orb}$ | \ref{eq:kepler3rd} | [legwork.utils.get_forb_from_a()](../api/legwork.utils.get_f_orb_from_a.rst) |
|$g(n, e)$ | \ref{eq:g(n,e)} | [legwork.utils.peters_g()](../api/legwork.utils.peters_g.rst) |
|$F(e)$ | \ref{eq:eccentricity_enhancement_factor} | [legwork.utils.peters_f()](../api/legwork.utils.peters_f.rst) |
|$\beta$ | \ref{eq:beta_peters} | [legwork.utils.beta()](../api/legwork.utils.beta.rst) |
|$a_{\rm circ}(t), f_{\rm orb, circ}(t)$ | \ref{eq:a_over_time_circ} | [legwork.evol.evol_circ()](../api/legwork.evol.evol_circ.rst) |
|$t_{\rm merge, circ}$ | \ref{eq:t_merge_circular} | [legwork.evol.get_t_merge_circ()](../api/legwork.evol.get_t_merge_circ.rst) |
|$e(t), a(t), f_{\rm orb}(t)$ | \ref{eq:dedt} | [legwork.evol.evol_ecc()](../api/legwork.evol.evol_ecc.rst) |
|$t_{\rm merge}$ | \ref{eq:t_merge_eccentric} | [legwork.evol.get_t_merge_ecc()](../api/legwork.evol.get_t_merge_ecc.rst) |
|$h_{c,n}$ | \ref{eq:char_strain} | [legwork.strain.h_c_n()](../api/legwork.strain.h_c_n.rst) |
|$h_n$ | \ref{eq:strain} | [legwork.strain.h_0_n()](../api/legwork.strain.h_0_n.rst) |
|$S_{\rm n}(f)$ | \ref{eq:LISA_Sn} | [legwork.psd.power_spectral_density()](../api/legwork.psd.power_spectral_density.rst) |
|$\rho$ | \ref{eq:snr_general} | [legwork.source.Source.get_snr()](../api/legwork.source.Source.rst#legwork.source.Source.get_snr) |
|$\rho_{\rm e, e}$ | \ref{eq:snr_general} | [legwork.snr.snr_ecc_evolving()](../api/legwork.snr.snr_ecc_evolving.rst) |
|$\rho_{\rm c, e}$ | \ref{eq:snr_chirp_circ} | [legwork.snr.snr_circ_evolving()](../api/legwork.snr.snr_circ_evolving.rst) |
|$\rho_{\rm e, s}$ | \ref{eq:snr_stat_ecc} | [legwork.snr.snr_ecc_stationary()](../api/legwork.snr.snr_ecc_stationary.rst) |
|$\rho_{\rm c, s}$ | \ref{eq:snr_stat_circ} | [legwork.snr.snr_circ_stationary()](../api/legwork.snr.snr_circ_stationary.rst) |
## References
| 07f122d86d44e0733ba5cb153a4d1e39f8a6ee4a | 52,691 | ipynb | Jupyter Notebook | docs/notebooks/Derivations.ipynb | arfon/LEGWORK | 91ca299d00ed6892acdf5980f33826421fa348ef | [
"MIT"
]
| 14 | 2021-09-28T21:53:24.000Z | 2022-02-05T14:29:44.000Z | docs/notebooks/Derivations.ipynb | arfon/LEGWORK | 91ca299d00ed6892acdf5980f33826421fa348ef | [
"MIT"
]
| 44 | 2021-10-31T15:04:26.000Z | 2022-03-15T19:01:40.000Z | docs/notebooks/Derivations.ipynb | katiebreivik/LEGWORK | 07c3938697ca622fc39d9617d74f28262ac2b1aa | [
"MIT"
]
| 4 | 2021-11-18T09:20:53.000Z | 2022-03-16T11:30:44.000Z | 37.85273 | 717 | 0.559849 | true | 12,443 | Qwen/Qwen-72B | 1. YES
2. YES | 0.919643 | 0.867036 | 0.797363 | __label__eng_Latn | 0.878983 | 0.690874 |
# Intro to neural net training with autograd
In this notebook, we'll practice
* using the **autograd** Python package to compute gradients
* using gradient descent to train a basic linear regression (a NN with 0 hidden layers)
* using gradient descent to train a basic neural network for regression (NN with 1+ hidden layers)
### Requirements:
Standard `comp135_env`, PLUS the `autograd` package: https://github.com/HIPS/autograd
To install autograd, first activate your `comp135_env`, and then do:
```
pip install autograd
```
### Outline
* Part 1: Autograd for scalar input -> scalar output functions
* Part 2: Autograd for vector input -> scalar output functions
* Part 3: Using autograd inside a simple gradient descent procedure
* Part 4: Using autograd to solve linear regression
```python
import pickle
import copy
import time
```
```python
## Import plotting tools
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
```
```python
## Import numpy
import numpy as np
import pandas as pd
```
```python
## Import autograd
import autograd.numpy as ag_np
import autograd
```
# PART 1: Using autograd's 'grad' function on univariate functions
Suppose we have a mathematical function of interest $f(x)$. For now, we'll work with functions that have a scalar input and scalar output.
Then we can of course ask: what is the derivative (aka *gradient*) of this function:
$$
g(x) \triangleq \frac{\partial}{\partial x} f(x)
$$
Instead of computing this gradient by hand via calculus/algebra, we can use autograd to do it for us.
First, we need to implement the math function $f(x)$ as a **Python function** `f`.
The Python function `f` needs to satisfy the following requirements:
* INPUT 'x': scalar float
* OUTPUT 'f(x)': scalar float
* All internal operations are composed of calls to functions from `ag_np`, the `autograd` version of numpy
**Important:**
* You might be used to importing numpy as `import numpy as np`, and then using this shorthand for `np.cos(0.0)` or `np.square(5.0)` etc.
* For autograd to work, you need to instead use **autograd's** provided numpy wrapper interface: `import autograd.numpy as ag_np`
* The `ag_np` module has the same API as `numpy`, so you can call `ag_np.cos(0.0)`, `ag_np.square(5.0)`, etc.
Now, if `f` meets the above requirements, we can create a Python function `g` to compute derivatives of $f(x)$ by calling `autograd.grad`:
```
g = autograd.grad(f)
```
The symbol `g` is now a **Python function** that takes the same input as `f`, but produces the derivative at a given input.
```python
def f(x):
return ag_np.square(x)
g = autograd.grad(f)
```
```python
f(4.0)
```
16.0
```python
# 'g' is just a function. You can call it as usual, by providing a possible scalar float input
g(0.0)
```
0.0
```python
[g(-1.0), g(1.0)]
```
[-2.0, 2.0]
### Plot to demonstrate the gradient function side-by-side with original function
```python
x_grid_G = np.linspace(-10, 10, 100)
fig_h, subplot_grid = plt.subplots(nrows=1, ncols=2, sharex=True, sharey=True, squeeze=False)
subplot_grid[0,0].plot(x_grid_G, [f(x_g) for x_g in x_grid_G], 'k.-')
subplot_grid[0,0].set_title('f(x) = x^2')
subplot_grid[0,1].plot(x_grid_G, [g(x_g) for x_g in x_grid_G], 'b.-')
subplot_grid[0,1].set_title('gradient of f(x)')
```
## Exercise 1a:
Consider the decaying periodic function below. Can you compute its derivative using autograd and plot the result?
$$
f(x) = e^{-x/10} * cos(x)
$$
```python
def f(x):
return 0.0 # TODO compute the function above, using 'ag_np'
g = f # TODO define g as gradient of f, using autograd's `grad`
# TODO plot the result
x_grid_G = np.linspace(-10, 10, 500)
fig_h, subplot_grid = plt.subplots(nrows=1, ncols=2, sharex=True, sharey=True, squeeze=False)
subplot_grid[0,0].plot(x_grid_G, [f(x_g) for x_g in x_grid_G], 'k.-');
subplot_grid[0,0].set_title('f(x) = exp(-x/10) cos(x)');
subplot_grid[0,1].plot(x_grid_G, [g(x_g) for x_g in x_grid_G], 'b.-');
subplot_grid[0,1].set_title('gradient of f(x)');
```
# PART 2: Using autograd's 'grad' function on functions with multivariate input
Now, imagine the input $x$ could be a vector of size D.
Our mathematical function $f(x)$ will map each input vector to a scalar.
We want the gradient function
\begin{align}
g(x) &\triangleq \nabla_x f(x)
\\
&= [
\frac{\partial}{\partial x_1} f(x)
\quad \frac{\partial}{\partial x_2} f(x)
\quad \ldots \quad \frac{\partial}{\partial x_D} f(x) ]
\end{align}
Instead of computing this gradient by hand via calculus/algebra, we can use autograd to do it for us.
First, we implement the math function $f(x)$ as a **Python function** `f`.
The Python function `f` needs to satisfy the following requirements:
* INPUT 'x': numpy array of float
* OUTPUT 'f(x)': scalar float
* All internal operations are composed of calls to functions from `ag_np`, the `autograd` version of numpy
```python
def f(x_D):
return ag_np.sum(ag_np.square(x_D))
g = autograd.grad(f)
```
```python
x_D = np.zeros(4)
print(x_D)
print(f(x_D))
print(g(x_D))
```
[0. 0. 0. 0.]
0.0
[0. 0. 0. 0.]
```python
x_D = np.asarray([1., 2., 3., 4.])
print(x_D)
print(f(x_D))
print(g(x_D))
```
[1. 2. 3. 4.]
30.0
[2. 4. 6. 8.]
# Part 3: Using autograd gradients within gradient descent to solve multivariate optimization problems
### Helper function: basic gradient descent
Here's a very simple function that will perform many gradient descent steps to optimize a given function.
```python
def run_many_iters_of_gradient_descent(f, g, init_x_D=None, n_iters=100, step_size=0.001):
# Copy the initial parameter vector
x_D = copy.deepcopy(init_x_D)
# Create data structs to track the per-iteration history of different quantities
history = dict(
iter=[],
f=[],
x_D=[],
g_D=[])
for iter_id in range(n_iters):
if iter_id > 0:
x_D = x_D - step_size * g(x_D)
history['iter'].append(iter_id)
history['f'].append(f(x_D))
history['x_D'].append(x_D)
history['g_D'].append(g(x_D))
return x_D, history
```
### Worked Example 3a: Minimize f(x) = sum(square(x))
It's easy to figure out that the vector with smallest L2 norm (smallest sum of squares) is the all-zero vector.
Here's a quick example of showing that using gradient functions provided by autograd can help us solve the optimization problem:
$$
\min_x \sum_{d=1}^D x_d^2
$$
```python
def f(x_D):
return ag_np.sum(ag_np.square(x_D))
g = autograd.grad(f)
# Initialize at x_D = [-3, 4, -5, 6]
init_x_D = np.asarray([-3.0, 4.0, -5.0, 6.0])
```
```python
opt_x_D, history = run_many_iters_of_gradient_descent(f, g, init_x_D, n_iters=1000, step_size=0.01)
```
```python
# Make plots of how x parameter values evolve over iterations, and function values evolve over iterations
# Expected result: f goes to zero; all x values go to zero.
fig_h, subplot_grid = plt.subplots(
nrows=1, ncols=2, sharex=True, sharey=False, figsize=(15,3), squeeze=False)
subplot_grid[0,0].plot(history['iter'], history['x_D'])
subplot_grid[0,0].set_xlabel('iters')
subplot_grid[0,0].set_ylabel('x_d')
subplot_grid[0,1].plot(history['iter'], history['f'])
subplot_grid[0,1].set_xlabel('iters')
subplot_grid[0,1].set_ylabel('f(x)');
```
### Try it Example 3b: Minimize the 'trid' function
Given a 2-dimensional vector $x = [x_1, x_2]$, the trid function is:
$$
f(x) = (x_1-1)^2 + (x_2-1)^2 - x_1 x_2
$$
Background and Picture: <https://www.sfu.ca/~ssurjano/trid.html>
Can you use autograd + gradient descent to find the optimal value $x^*$ that minimizes $f(x)$?
You can initialize your gradient descent at [+1.0, -1.0]
```python
def f(x_D):
return 0.0 # TODO
g = f # TODO
```
```python
# TODO call run_many_iters_of_gradient_descent() with appropriate args
```
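A possible sketch for the two TODO cells above (one way to do it, not a prescribed answer): define the trid objective with `ag_np`, let autograd provide the gradient, and start the descent at [+1.0, -1.0].
```python
def f(x_D):
    # trid function for D=2: (x1-1)^2 + (x2-1)^2 - x1*x2
    return ag_np.sum(ag_np.square(x_D - 1.0)) - x_D[0] * x_D[1]

g = autograd.grad(f)

opt_x_D, history = run_many_iters_of_gradient_descent(
    f, g, np.asarray([1.0, -1.0]), n_iters=1000, step_size=0.01)
print(opt_x_D)  # should approach the known minimum at [2.0, 2.0], where f = -2
```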
```python
# TRID example
# Make plots of how x parameter values evolve over iterations, and function values evolve over iterations
# Expected result: ????
fig_h, subplot_grid = plt.subplots(
nrows=1, ncols=2, sharex=True, sharey=False, figsize=(15,3), squeeze=False)
subplot_grid[0,0].plot(history['iter'], history['x_D'])
subplot_grid[0,0].set_xlabel('iters')
subplot_grid[0,0].set_ylabel('x_d')
subplot_grid[0,1].plot(history['iter'], history['f'])
subplot_grid[0,1].set_xlabel('iters')
subplot_grid[0,1].set_ylabel('f(x)');
```
# Part 4: Solving linear regression with gradient descent + autograd
We observe $N$ examples $(x_i, y_i)$ consisting of D-dimensional 'input' vectors $x_i$ and scalar outputs $y_i$.
Consider the multivariate linear regression model:
\begin{align}
y_i &\sim \mathcal{N}(w^T x_i, \sigma^2), \forall i \in 1, 2, \ldots N
\end{align}
where we assume $\sigma = 0.1$.
One way to train weights would be to just compute the maximum likelihood solution:
\begin{align}
\min_w - \log p(y | w, x)
\end{align}
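Written out under the Gaussian model above (and dropping terms that do not depend on $w$), this objective is just a scaled sum of squared errors, which is what the loss function defined below computes:
\begin{align}
- \log p(y | w, x) = \frac{1}{2\sigma^2} \sum_{i=1}^N \left( y_i - w^\top x_i \right)^2 + \text{const}
\end{align}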
## Toy Data for linear regression task
We'll generate data that comes from an idealized linear regression model.
Each example has D=2 dimensions for x.
The first dimension is weighted by +4.2.
The second dimension is weighted by -4.2
```python
N = 100
D = 2
sigma = 0.1
true_w_D = np.asarray([4.2, -4.2])
true_bias = 0.1
train_prng = np.random.RandomState(0)
x_ND = train_prng.uniform(low=-5, high=5, size=(N,D))
y_N = np.dot(x_ND, true_w_D) + true_bias + sigma * train_prng.randn(N)
```
## Toy Data Visualization: Pairplots for all possible (x_d, y) combinations
You can clearly see the slopes of the lines:
* x1 vs y plot: slope is around +4
* x2 vs y plot: slope is around -4
```python
sns.pairplot(
data=pd.DataFrame(np.hstack([x_ND, y_N[:,np.newaxis]]), columns=['x1', 'x2', 'y']));
```
```python
# Define the optimization problem as an AUTOGRAD-able function wrt the weights w_D
def calc_neg_likelihood_linreg(w_D):
return 0.5 / ag_np.square(sigma) * ag_np.sum(ag_np.square(ag_np.dot(x_ND, w_D) - y_N))
```
```python
## Test the function at an easy initial point
init_w_D = np.zeros(2)
calc_neg_likelihood_linreg(init_w_D)
```
1521585.0576643152
```python
## Test the gradient at that easy point
calc_grad_wrt_w = autograd.grad(calc_neg_likelihood_linreg)
calc_grad_wrt_w(init_w_D)
```
array([-357441.84423006, 367223.20042115])
```python
# Because the gradient's magnitude is very large, use very small step size
opt_w_D, history = run_many_iters_of_gradient_descent(
calc_neg_likelihood_linreg, autograd.grad(calc_neg_likelihood_linreg), init_w_D,
n_iters=300, step_size=0.000001,
)
```
```python
# LinReg worked example
# Make plots of how w_D parameter values evolve over iterations, and function values evolve over iterations
# Expected result: w_d values approach the true weights [4.2, -4.2] and the loss decreases
fig_h, subplot_grid = plt.subplots(
nrows=1, ncols=2, sharex=True, sharey=False, figsize=(15,3), squeeze=False)
subplot_grid[0,0].plot(history['iter'], history['x_D'])
subplot_grid[0,0].set_xlabel('iters')
subplot_grid[0,0].set_ylabel('w_d')
subplot_grid[0,1].plot(history['iter'], history['f'])
subplot_grid[0,1].set_xlabel('iters')
subplot_grid[0,1].set_ylabel('-1 * log p(y | w, x)');
```
## Try it Example 4b: Solve the linear regression problem using a weights-and-bias representation
The above example only uses weights on the dimensions of $x_i$, and thus can only learn linear models that pass through the origin.
Can you instead optimize a model that includes a **bias** term $b>0$?
\begin{align}
y_i &\sim \mathcal{N}(w^T x_i + b, \sigma^2), \forall i \in 1, 2, \ldots N
\end{align}
where we assume $\sigma = 0.1$.
One non-Bayesian way to train weights would be to just compute the maximum likelihood solution:
\begin{align}
\min_{w,b} - \log p(y | w, b, x)
\end{align}
An easy way to do this is to imagine that each observation vector $x_i$ is expanded into a $\tilde{x}_i$ that contains a column of all ones. Then, we can write the corresponding expanded weights as $\tilde{w} = [w_1 w_2 b]$.
\begin{align}
\min_{\tilde{w}} - \log p(y | \tilde{w},\tilde{x})
\end{align}
```python
# Now, each expanded xtilde vector has size E = D+1 = 3
xtilde_NE = np.hstack([x_ND, np.ones((N,1))])
```
```python
# TODO: Define f to minimize that takes a COMBINED weights-and-bias vector wtilde_E of size E=3
```
```python
# TODO: Compute gradient of f
```
```python
# TODO run gradient descent and plot the results
```
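One possible sketch for the TODOs above, using the expanded inputs `xtilde_NE` so the last entry of the weight vector plays the role of the bias (the function name here is just illustrative):
```python
def calc_neg_likelihood_linreg_with_bias(wtilde_E):
    # same scaled sum-of-squares objective, evaluated with the expanded design matrix
    err_N = ag_np.dot(xtilde_NE, wtilde_E) - y_N
    return 0.5 / ag_np.square(sigma) * ag_np.sum(ag_np.square(err_N))

calc_grad_wrt_wtilde = autograd.grad(calc_neg_likelihood_linreg_with_bias)

opt_wtilde_E, history = run_many_iters_of_gradient_descent(
    calc_neg_likelihood_linreg_with_bias, calc_grad_wrt_wtilde,
    np.zeros(3), n_iters=300, step_size=0.000001)
print(opt_wtilde_E)  # expected to end up near [4.2, -4.2, 0.1]
```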
# Part 5 setup: Autograd for functions of data structures of arrays
#### Useful Fact: autograd can take derivatives with respect to DATA STRUCTURES of parameters
This can help us when it is natural to define models in terms of several parts (e.g. NN layers).
We don't need to turn our many model parameters into one giant weights-and-biases vector. We can express our thoughts more naturally.
### Demo 1: gradient of a LIST of parameters
```python
def f(w_list_of_arr):
return ag_np.sum(ag_np.square(w_list_of_arr[0])) + ag_np.sum(ag_np.square(w_list_of_arr[1]))
g = autograd.grad(f)
```
```python
w_list_of_arr = [np.zeros(3), np.arange(5, dtype=np.float64)]
print("Type of the gradient is: ")
print(type(g(w_list_of_arr)))
print("Result of the gradient is: ")
g(w_list_of_arr)
```
Type of the gradient is:
<class 'list'>
Result of the gradient is:
[array([0., 0., 0.]), array([0., 2., 4., 6., 8.])]
### Demo 2: gradient of DICT of parameters
```python
def f(dict_of_arr):
return ag_np.sum(ag_np.square(dict_of_arr['weights'])) + ag_np.sum(ag_np.square(dict_of_arr['bias']))
g = autograd.grad(f)
```
```python
dict_of_arr = dict(weights=np.arange(5, dtype=np.float64), bias=4.2)
print("Type of the gradient is: ")
print(type(g(dict_of_arr)))
print("Result of the gradient is: ")
g(dict_of_arr)
```
Type of the gradient is:
<class 'dict'>
Result of the gradient is:
{'weights': array([0., 2., 4., 6., 8.]), 'bias': array(8.4)}
# Part 5: Neural Networks and Autograd
### Let's use a convenient data structure for NN model parameters
Use a list of dicts of arrays.
Each entry in the list is a dict that represents the parameters of one "layer".
Each layer-specific dict has two named attributes: a weight matrix 'w' and a bias vector 'b'
#### Here's a function to create NN params as a 'list-of-dicts' that match a provided set of dimensions
```python
def make_nn_params_as_list_of_dicts(
n_hiddens_per_layer_list=[5],
n_dims_input=1,
n_dims_output=1,
weight_fill_func=np.zeros,
bias_fill_func=np.zeros):
nn_param_list = []
n_hiddens_per_layer_list = [n_dims_input] + n_hiddens_per_layer_list + [n_dims_output]
# Given full network size list is [a, b, c, d, e]
# For loop should loop over (a,b) , (b,c) , (c,d) , (d,e)
for n_in, n_out in zip(n_hiddens_per_layer_list[:-1], n_hiddens_per_layer_list[1:]):
nn_param_list.append(
dict(
w=weight_fill_func((n_in, n_out)),
b=bias_fill_func((n_out,)),
))
return nn_param_list
```
#### Here's a function to pretty-print any given set of NN parameters to stdout, so we can inspect
```python
def pretty_print_nn_param_list(nn_param_list_of_dict):
""" Create pretty display of the parameters at each layer
"""
for ll, layer_dict in enumerate(nn_param_list_of_dict):
print("Layer %d" % ll)
print(" w | size %9s | %s" % (layer_dict['w'].shape, layer_dict['w'].flatten()))
print(" b | size %9s | %s" % (layer_dict['b'].shape, layer_dict['b'].flatten()))
```
## Example: NN with 0 hidden layers (equivalent to linear regression)
For univariate regression: 1D -> 1D
Will fill all parameters with zeros by default
```python
nn_params = make_nn_params_as_list_of_dicts(n_hiddens_per_layer_list=[], n_dims_input=1, n_dims_output=1)
pretty_print_nn_param_list(nn_params)
```
Layer 0
w | size (1, 1) | [0.]
b | size (1,) | [0.]
## Example: NN with 0 hidden layers (equivalent to linear regression)
For multivariate regression when |x_i| = 2: 2D -> 1D
Will fill all parameters with zeros by default
```python
nn_params = make_nn_params_as_list_of_dicts(n_hiddens_per_layer_list=[], n_dims_input=2, n_dims_output=1)
pretty_print_nn_param_list(nn_params)
```
Layer 0
w | size (2, 1) | [0. 0.]
b | size (1,) | [0.]
## Example: NN with 1 hidden layer of 3 hidden units
```python
nn_params = make_nn_params_as_list_of_dicts(n_hiddens_per_layer_list=[3], n_dims_input=2, n_dims_output=1)
pretty_print_nn_param_list(nn_params)
```
Layer 0
w | size (2, 3) | [0. 0. 0. 0. 0. 0.]
b | size (3,) | [0. 0. 0.]
Layer 1
w | size (3, 1) | [0. 0. 0.]
b | size (1,) | [0.]
## Example: NN with 1 hidden layer of 3 hidden units
Use 'ones' as the fill function for weights
```python
nn_params = make_nn_params_as_list_of_dicts(
n_hiddens_per_layer_list=[3], n_dims_input=2, n_dims_output=1,
weight_fill_func=np.ones)
pretty_print_nn_param_list(nn_params)
```
Layer 0
w | size (2, 3) | [1. 1. 1. 1. 1. 1.]
b | size (3,) | [0. 0. 0.]
Layer 1
w | size (3, 1) | [1. 1. 1.]
b | size (1,) | [0.]
## Example: NN with 1 hidden layer of 3 hidden units
Use random draws from standard normal as the fill function for weights
```python
nn_params = make_nn_params_as_list_of_dicts(
n_hiddens_per_layer_list=[3], n_dims_input=2, n_dims_output=1,
weight_fill_func=lambda size_tuple: np.random.randn(*size_tuple))
pretty_print_nn_param_list(nn_params)
```
Layer 0
w | size (2, 3) | [ 1.24823477 -0.70553662 -0.13712655 0.23659527 -1.72792202 -1.66701658]
b | size (3,) | [0. 0. 0.]
Layer 1
w | size (3, 1) | [ 0.23254128 -1.57423719 -0.26868047]
b | size (1,) | [0.]
## Example: NN with 7 hidden layers of diff sizes
Just shows how generic this framework is!
```python
nn_params = make_nn_params_as_list_of_dicts(
n_hiddens_per_layer_list=[3, 4, 5, 6, 5, 4, 3], n_dims_input=2, n_dims_output=1)
pretty_print_nn_param_list(nn_params)
```
Layer 0
w | size (2, 3) | [0. 0. 0. 0. 0. 0.]
b | size (3,) | [0. 0. 0.]
Layer 1
w | size (3, 4) | [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
b | size (4,) | [0. 0. 0. 0.]
Layer 2
w | size (4, 5) | [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
b | size (5,) | [0. 0. 0. 0. 0.]
Layer 3
w | size (5, 6) | [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.]
b | size (6,) | [0. 0. 0. 0. 0. 0.]
Layer 4
w | size (6, 5) | [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.]
b | size (5,) | [0. 0. 0. 0. 0.]
Layer 5
w | size (5, 4) | [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
b | size (4,) | [0. 0. 0. 0.]
Layer 6
w | size (4, 3) | [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
b | size (3,) | [0. 0. 0.]
Layer 7
w | size (3, 1) | [0. 0. 0.]
b | size (1,) | [0.]
## Setup: Function that performs **prediction**
```python
def predict_y_given_x_with_NN(x=None, nn_param_list=None, activation_func=ag_np.tanh):
""" Predict y value given x value via feed-forward neural net
Args
----
x : array_like, n_examples x n_input_dims
Returns
-------
y : array_like, n_examples
"""
for layer_id, layer_dict in enumerate(nn_param_list):
if layer_id == 0:
if x.ndim > 1:
in_arr = x
else:
if x.size == nn_param_list[0]['w'].shape[0]:
in_arr = x[ag_np.newaxis,:]
else:
in_arr = x[:,ag_np.newaxis]
else:
in_arr = activation_func(out_arr)
out_arr = ag_np.dot(in_arr, layer_dict['w']) + layer_dict['b']
return ag_np.squeeze(out_arr)
```
### Example: Make predictions with 0-layer NN whose parameters are filled with the 'true' params for our toy dataset
```python
true_nn_params = make_nn_params_as_list_of_dicts(n_hiddens_per_layer_list=[], n_dims_input=2, n_dims_output=1)
true_nn_params[0]['w'][:] = true_w_D[:,np.newaxis]
true_nn_params[0]['b'][:] = true_bias
```
```python
yhat_N = predict_y_given_x_with_NN(x_ND, true_nn_params)
assert yhat_N.size == N
plt.plot(yhat_N, y_N, 'k.')
plt.xlabel('predicted y|x')
plt.ylabel('true y')
```
### Example: Make predictions with 0-layer NN whose parameters are filled with all zeros
```python
zero_nn_params = make_nn_params_as_list_of_dicts(n_hiddens_per_layer_list=[], n_dims_input=2, n_dims_output=1)
yhat_N = predict_y_given_x_with_NN(x_ND, zero_nn_params)
assert yhat_N.size == N
plt.plot(yhat_N, y_N, 'k.')
plt.xlabel('predicted y|x')
plt.ylabel('true y')
```
## Setup: Gradient descent implementation that can use list-of-dict parameters (not just arrays)
```python
def run_many_iters_of_gradient_descent_with_list_of_dict(f, g, init_x_list_of_dict=None, n_iters=100, step_size=0.001):
# Copy the initial parameter vector
x_list_of_dict = copy.deepcopy(init_x_list_of_dict)
# Create data structs to track the per-iteration history of different quantities
history = dict(
iter=[],
f=[],
x=[],
g=[])
start_time = time.time()
for iter_id in range(n_iters):
if iter_id > 0:
# Gradient is a list of layer-specific dicts
grad_list_of_dict = g(x_list_of_dict)
for layer_id, x_layer_dict in enumerate(x_list_of_dict):
for key in x_layer_dict.keys():
x_layer_dict[key] = x_layer_dict[key] - step_size * grad_list_of_dict[layer_id][key]
fval = f(x_list_of_dict)
history['iter'].append(iter_id)
history['f'].append(fval)
history['x'].append(copy.deepcopy(x_list_of_dict))
history['g'].append(g(x_list_of_dict))
if iter_id < 3 or (iter_id+1) % 50 == 0:
print("completed iter %5d/%d after %7.1f sec | loss %.6e" % (
iter_id+1, n_iters, time.time()-start_time, fval))
return x_list_of_dict, history
```
# Worked Exercise 5a: Train 0-layer NN via gradient descent on LINEAR toy data
```python
def nn_regression_loss_function(nn_params):
yhat_N = predict_y_given_x_with_NN(x_ND, nn_params)
    return 0.5 / ag_np.square(sigma) * ag_np.sum(ag_np.square(y_N - yhat_N))
```
```python
fromtrue_opt_nn_params, fromtrue_history = run_many_iters_of_gradient_descent_with_list_of_dict(
nn_regression_loss_function,
autograd.grad(nn_regression_loss_function),
true_nn_params,
n_iters=100,
step_size=0.000001)
```
completed iter 1/100 after 0.0 sec | loss 4.343353e+01
completed iter 2/100 after 0.1 sec | loss 4.330311e+01
completed iter 3/100 after 0.1 sec | loss 4.319213e+01
completed iter 50/100 after 2.7 sec | loss 4.242465e+01
completed iter 100/100 after 5.2 sec | loss 4.234312e+01
```python
pretty_print_nn_param_list(fromtrue_opt_nn_params)
```
Layer 0
w | size (2, 1) | [ 4.19568065 -4.19965201]
b | size (1,) | [0.09469614]
```python
plt.plot(fromtrue_history['iter'], fromtrue_history['f'], 'k.-')
```
```python
fromzero_opt_nn_params, fromzero_history = run_many_iters_of_gradient_descent_with_list_of_dict(
nn_regression_loss_function,
autograd.grad(nn_regression_loss_function),
zero_nn_params,
n_iters=100,
step_size=0.000001)
```
completed iter 1/100 after 0.0 sec | loss 1.521585e+06
completed iter 2/100 after 0.1 sec | loss 1.270163e+06
completed iter 3/100 after 0.2 sec | loss 1.060293e+06
completed iter 50/100 after 2.8 sec | loss 2.686671e+02
completed iter 100/100 after 5.4 sec | loss 4.457975e+01
```python
pretty_print_nn_param_list(fromzero_opt_nn_params)
```
Layer 0
w | size (2, 1) | [ 4.19465049 -4.19892163]
b | size (1,) | [0.11288017]
```python
plt.plot(fromzero_history['iter'], fromzero_history['f'], 'k.-')
```
```python
```
# Create more complex non-linear toy dataset
True method *regression from QUADRATIC features*:
$$
y \sim \text{Normal}( w_1 x_1 + w_2 x_2 + w_3 x_1^2 + w_4 x_2^2 + b, \sigma^2)
$$
```python
N = 300
D = 2
sigma = 0.1
wsq_D = np.asarray([-2.0, 2.0])
w_D = np.asarray([4.2, -4.2])
train_prng = np.random.RandomState(0)
x_ND = train_prng.uniform(low=-5, high=5, size=(N,D))
y_N = (
np.dot(np.square(x_ND), wsq_D)
+ np.dot(x_ND, w_D)
+ sigma * train_prng.randn(N))
```
```python
sns.pairplot(
data=pd.DataFrame(np.hstack([x_ND, y_N[:,np.newaxis]]), columns=['x1', 'x2', 'y']));
```
```python
def nonlinear_toy_nn_regression_loss_function(nn_params):
yhat_N = predict_y_given_x_with_NN(x_ND, nn_params)
    return 0.5 / ag_np.square(sigma) * ag_np.sum(ag_np.square(y_N - yhat_N))
```
```python
# Initialize 1-layer, 10 hidden unit network with small random noise on weights
H10_init_nn_params = make_nn_params_as_list_of_dicts(
n_hiddens_per_layer_list=[10], n_dims_input=2, n_dims_output=1,
weight_fill_func=lambda sz_tuple: 0.1 * np.random.randn(*sz_tuple))
```
```python
H10_opt_nn_params, H10_history = run_many_iters_of_gradient_descent_with_list_of_dict(
nonlinear_toy_nn_regression_loss_function,
autograd.grad(nonlinear_toy_nn_regression_loss_function),
H10_init_nn_params,
n_iters=300,
step_size=0.000001)
```
completed iter 1/300 after 0.1 sec | loss 1.195763e+07
completed iter 2/300 after 0.3 sec | loss 1.165899e+07
completed iter 3/300 after 0.5 sec | loss 1.113150e+07
completed iter 50/300 after 9.5 sec | loss 3.516704e+06
completed iter 100/300 after 18.8 sec | loss 1.907578e+06
completed iter 150/300 after 29.2 sec | loss 1.964786e+06
completed iter 200/300 after 39.2 sec | loss 1.749155e+06
completed iter 250/300 after 51.1 sec | loss 1.575341e+06
completed iter 300/300 after 61.6 sec | loss 1.659934e+06
#### Plot objective function vs iters
```python
plt.plot(H10_history['iter'], H10_history['f'], 'k.-')
plt.title('10 hidden units');
```
#### Plot predicted y vs. true y for each example as a scatterplot
```python
yhat_N = predict_y_given_x_with_NN(x_ND, H10_opt_nn_params)
plt.plot(yhat_N, y_N, 'k.');
plt.xlabel('predicted y|x');
plt.ylabel('true y');
```
```python
_, subplot_grid = plt.subplots(nrows=1, ncols=2, sharex=True, sharey=False, figsize=(10,3), squeeze=False)
subplot_grid[0,0].plot(x_ND[:,0], y_N, 'k.');
subplot_grid[0,0].plot(x_ND[:,0], yhat_N, 'b.')
subplot_grid[0,0].set_xlabel('x_0');
subplot_grid[0,1].plot(x_ND[:,1], y_N, 'k.');
subplot_grid[0,1].plot(x_ND[:,1], yhat_N, 'b.')
subplot_grid[0,1].set_xlabel('x_1');
```
## More units! Try 1 layer with H=30 hidden units
```python
# Initialize 1-layer, 30 hidden unit network with small random noise on weights
H30_init_nn_params = make_nn_params_as_list_of_dicts(
n_hiddens_per_layer_list=[30], n_dims_input=2, n_dims_output=1,
weight_fill_func=lambda sz_tuple: 0.1 * np.random.randn(*sz_tuple))
```
```python
H30_opt_nn_params, H30_history = run_many_iters_of_gradient_descent_with_list_of_dict(
nonlinear_toy_nn_regression_loss_function,
autograd.grad(nonlinear_toy_nn_regression_loss_function),
H30_init_nn_params,
n_iters=50,
step_size=0.000001)
```
completed iter 1/50 after 0.1 sec | loss 1.184375e+07
completed iter 2/50 after 0.2 sec | loss 1.087085e+07
completed iter 3/50 after 0.4 sec | loss 9.449709e+06
completed iter 50/50 after 6.7 sec | loss 2.035457e+06
#### Plot objective function vs iterations
```python
plt.plot(H30_history['iter'], H30_history['f'], 'k.-');
plt.title('30 hidden units');
```
#### Plot predicted y value vs true y value for each example
```python
yhat_N = predict_y_given_x_with_NN(x_ND, H30_opt_nn_params)
plt.plot(yhat_N, y_N, 'k.');
plt.xlabel('predicted y|x');
plt.ylabel('true y');
```
```python
_, subplot_grid = plt.subplots(nrows=1, ncols=2, sharex=True, sharey=False, figsize=(10,3), squeeze=False)
subplot_grid[0,0].plot(x_ND[:,0], y_N, 'k.');
subplot_grid[0,0].plot(x_ND[:,0], yhat_N, 'b.')
subplot_grid[0,0].set_xlabel('x_0');
subplot_grid[0,1].plot(x_ND[:,1], y_N, 'k.');
subplot_grid[0,1].plot(x_ND[:,1], yhat_N, 'b.')
subplot_grid[0,1].set_xlabel('x_1');
```
## Even more units! Try 1 layer with H=100 hidden units
```python
# Initialize 1-layer, 100 hidden unit network with small random noise on weights
H100_init_nn_params = make_nn_params_as_list_of_dicts(
n_hiddens_per_layer_list=[100], n_dims_input=2, n_dims_output=1,
weight_fill_func=lambda sz_tuple: 0.05 * np.random.randn(*sz_tuple))
```
```python
H100_opt_nn_params, H100_history = run_many_iters_of_gradient_descent_with_list_of_dict(
nonlinear_toy_nn_regression_loss_function,
autograd.grad(nonlinear_toy_nn_regression_loss_function),
H100_init_nn_params,
n_iters=30,
step_size=0.0000005)
```
completed iter 1/30 after 0.1 sec | loss 1.194856e+07
completed iter 2/30 after 0.2 sec | loss 1.140156e+07
completed iter 3/30 after 0.3 sec | loss 1.064044e+07
```python
yhat_N = predict_y_given_x_with_NN(x_ND, H100_opt_nn_params)
plt.plot(yhat_N, y_N, 'k.');
plt.xlabel('predicted y|x');
plt.ylabel('true y');
```
```python
_, subplot_grid = plt.subplots(nrows=1, ncols=2, sharex=True, sharey=False, figsize=(10,3), squeeze=False)
subplot_grid[0,0].plot(x_ND[:,0], y_N, 'k.');
subplot_grid[0,0].plot(x_ND[:,0], yhat_N, 'b.')
subplot_grid[0,0].set_xlabel('x_0');
subplot_grid[0,1].plot(x_ND[:,1], y_N, 'k.');
subplot_grid[0,1].plot(x_ND[:,1], yhat_N, 'b.')
subplot_grid[0,1].set_xlabel('x_1');
```
# Try it yourself!
* Can you train a prediction network on the non-linear toy data so it has ZERO training error? Is this even possible?
* Can you make the network train faster? What happens if you play with the step_size?
* What if you made the network **deeper** (more layers)?
* What other dataset would you want to try out this regression on?
```python
```
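For the "deeper" question above, one possible starting point (a sketch under the same setup as before; variable names are just illustrative):
```python
# two hidden layers of 30 units each, small random init, same loss and training loop as before
deep_init_nn_params = make_nn_params_as_list_of_dicts(
    n_hiddens_per_layer_list=[30, 30], n_dims_input=2, n_dims_output=1,
    weight_fill_func=lambda sz_tuple: 0.1 * np.random.randn(*sz_tuple))

deep_opt_nn_params, deep_history = run_many_iters_of_gradient_descent_with_list_of_dict(
    nonlinear_toy_nn_regression_loss_function,
    autograd.grad(nonlinear_toy_nn_regression_loss_function),
    deep_init_nn_params,
    n_iters=100,
    step_size=0.000001)
```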
| 512a3363f3d5ab9a09def7380f33d24a4b151a1c | 494,116 | ipynb | Jupyter Notebook | labs/IntroToAutogradAndBackpropForNNets.ipynb | tufts-ml-courses/comp135-19s-assignments | d54f4356e022150d85cfa58ebbf8ccdf66e0f1a9 | [
"MIT"
]
| 8 | 2019-02-23T00:28:06.000Z | 2020-01-28T20:45:57.000Z | labs/IntroToAutogradAndBackpropForNNets.ipynb | tufts-ml-courses/comp135-19s-assignments | d54f4356e022150d85cfa58ebbf8ccdf66e0f1a9 | [
"MIT"
]
| null | null | null | labs/IntroToAutogradAndBackpropForNNets.ipynb | tufts-ml-courses/comp135-19s-assignments | d54f4356e022150d85cfa58ebbf8ccdf66e0f1a9 | [
"MIT"
]
| 18 | 2019-01-24T20:45:04.000Z | 2022-03-21T20:27:11.000Z | 235.517636 | 92,172 | 0.917497 | true | 9,531 | Qwen/Qwen-72B | 1. YES
2. YES | 0.897695 | 0.92079 | 0.826589 | __label__eng_Latn | 0.80206 | 0.758775 |
# Bayesian Inference
## Bayesian inference for the Bernoulli distribution
Concrete example: estimating the probability of heads in a coin toss
```python
import numpy as np
import sympy
import matplotlib.pyplot as plt
from scipy.special import gamma
%matplotlib inline
```
```python
mu = sympy.Symbol("u")
def posterior(D, prior):
global mu
    # likelihood
    likelihood = mu**D[0] * (1-mu)**(D[1]-D[0])
    # posterior
    post = prior * likelihood
    # normalize
post /= sympy.integrate(post, (mu, 0,1))
return post
```
```python
fig = plt.figure(0)
# prior distribution
prior = 1
# plot the distribution
x = np.linspace(0, 1, 100)
y = [1 for j in x]
plt.plot(x, y)
# update the distribution (n: number of updates)
# 0<=th_min<=th_max<=100
n = 5
th_min = 30
th_max = 60
for i in range(n):
    # sampling (range of the heads probability [%])
data = np.random.randint(th_min,th_max)
    # compute the posterior
post = posterior((data, 100), prior)
    # plot the posterior
y = [post.subs(mu, j) for j in x]
plt.plot(x, y)
prior = post
```
---
## Bayesian inference for a univariate Gaussian
```python
# ガウス分布を作成
def makeGaussian(mu, sig):
def gaussian(x):
return np.exp(-(x - mu)**2 / (2*sig)) / np.sqrt(2*np.pi*sig)
return gaussian
# ガンマ分布を作成
def makeGammaDist(a, b):
def gammaDist(x):
return b**a*x**(a-1)*np.exp(-b*x) / gamma(a)
return gammaDist
```
Parameters of the data-generating distribution (a normal distribution)
```python
mu_D, sig_D, N = 2, 9, 10000
```
### Bayesian inference of the mean $\mu$ of a normal distribution with unknown mean and known variance $\sigma^2$
Take a normal distribution as the prior on the mean.
When the data model is a normal distribution, the conjugate prior for the mean is also a normal distribution.
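For reference, conjugacy gives the closed-form update used in the code below: with prior $\mathcal{N}(\mu_{\mathrm{prior}}, \sigma_{\mathrm{prior}}^2)$ and $N_s$ new observations with sample mean $\bar{x}$ and known variance $\sigma_D^2$,
$$
\mu_{\mathrm{post}} = \frac{\sigma_D^2\,\mu_{\mathrm{prior}} + N_s\,\sigma_{\mathrm{prior}}^2\,\bar{x}}{N_s\,\sigma_{\mathrm{prior}}^2 + \sigma_D^2},
\qquad
\sigma_{\mathrm{post}}^2 = \left(\frac{1}{\sigma_{\mathrm{prior}}^2} + \frac{N_s}{\sigma_D^2}\right)^{-1}
$$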
```python
# sample data to visualize
x = np.linspace(-10, 10, 100)
Xm = np.random.normal(mu_D, sig_D**0.5, N)
fig, ax1 = plt.subplots()
ax1.hist(Xm, bins=x, normed=True, color="c") # histogram of the sample data
ax2 = ax1.twinx()
# initialize the prior parameters
mu_init, sig_init = np.random.uniform(-5, 5), np.random.uniform(1, 5)**2
Ns, n = 10, 5 # number of samples per update, number of updates
# compute posterior → update prior
mu_prior, sig_prior = mu_init, sig_init
for i in range(n):
    # sampling
Xs = np.random.normal(mu_D, sig_D**0.5, Ns)
    # compute the posterior
mu_post = sig_D*mu_prior/(Ns*sig_prior+sig_D) + Ns*sig_prior*Xs.mean()/(Ns*sig_prior+sig_D)
sig_post = (1/sig_prior + Ns/sig_D)**-1
dist_post = makeGaussian(mu_post, sig_post)
    # plot the posterior
ax2.plot(x, dist_post(x), linewidth=2)
    # update the prior parameters
mu_prior = mu_post
sig_prior = sig_post
_=ax2.set_ylim(0, )
# estimated distribution
ax1.plot(x, makeGaussian(mu_post, sig_D)(x), "m", linewidth=2)
print("mu", mu_init, "==>", mu_post, "<=>", mu_D)
print("sig", sig_init, "==>", sig_post)
```
### Bayesian inference of the variance $\sigma^2$ of a normal distribution with known mean $\mu$ and unknown variance
Take a Gamma distribution as the prior on the variance (precision).
When the data model is a normal distribution, the conjugate prior for the precision is a Gamma distribution.
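The code below uses the standard conjugate update, written here in terms of the quantities that appear in the code:
$$
a_{\mathrm{post}} = a_{\mathrm{prior}} + \frac{N_s}{2},
\qquad
b_{\mathrm{post}} = b_{\mathrm{prior}} + \frac{N_s\,\widehat{\mathrm{Var}}(x)}{2}
$$
and the point estimate of the variance plotted below is $b_{\mathrm{post}}/a_{\mathrm{post}}$.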
```python
# sample data to visualize
x = np.linspace(-10, 10, 100)
Xm = np.random.normal(mu_D, sig_D**0.5, N)
fig, ax1 = plt.subplots()
ax1.hist(Xm, bins=x, normed=True, color="c") # histogram of the sample data
# prior parameters
a_init, b_init = 0, 0 # parameters of a non-informative prior
Ns, n = 10, 10 # number of samples and number of iterations
# compute posterior → update prior
a_prior, b_prior = a_init, b_init
for i in range(n):
    # sampling
Xs = np.random.normal(mu_D, sig_D**0.5, Ns)
    # compute the posterior
a_post = a_prior + Ns/2
b_post = b_prior + Ns*Xs.var()/2;
dist_post = makeGammaDist(a_post, b_post)
ax1.plot(x, makeGaussian(mu_D, b_post/a_post)(x), linewidth=2)
    # update the prior parameters
a_prior = a_post
b_prior = b_post
_=ax2.set_ylim(0, )
ax1.plot(x, makeGaussian(mu_D, b_post/a_post)(x), "m", linewidth=2)
print("sig:", b_post/a_post, "<==>", sig_D)
```
### Bayesian inference for a normal distribution with unknown mean $\mu$ and unknown variance $\sigma^2$
Let the prior on the mean be a normal distribution and the prior on the variance be a Gamma distribution.
When the data model is a normal distribution, the conjugate prior for the mean and variance is the Normal-Gamma distribution.
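The updates implemented in the code below are:
$$
a_{\mathrm{post}} = a_{\mathrm{prior}} + \frac{N_s}{2},\quad
b_{\mathrm{post}} = b_{\mathrm{prior}} + \frac{\sum_i x_i^2 - N_s\,\mu_{\mathrm{prior}}^2}{2},\quad
\mu_{\mathrm{post}} = \frac{\sum_i x_i + \lambda_{\mathrm{prior}}\,\mu_{\mathrm{prior}}}{N_s + \lambda_{\mathrm{prior}}},\quad
\lambda_{\mathrm{post}} = N_s + \lambda_{\mathrm{prior}}
$$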
```python
# sample data to visualize
x = np.linspace(-20, 20, 100)
Xs = np.random.normal(mu_D, sig_D**0.5, N)
fig, ax1 = plt.subplots()
ax1.hist(Xs, bins=x, normed=True, color="c") # histogram of the sample data
# prior parameters
a_init, b_init, mu_init, l_init = 0,0,0,0
Ns, n = 10, 10 # number of samples and number of iterations
a_prior, b_prior, mu_prior, l_prior = a_init, b_init, mu_init, l_init
Xs = np.zeros(0)
for i in range(n):
Xs = np.random.normal(mu_D, sig_D**0.5, Ns)
a_post = a_prior + Ns/2
b_post = b_prior + ((Xs**2).sum() - Ns*(mu_prior**2)) / 2
mu_post = (Xs.sum() + l_prior*mu_prior) / (Ns + l_prior)
l_post = Ns + l_prior
#plt.subplot(122)
model_dist = makeGaussian(mu_post, b_post/a_post)
plt.plot(np.linspace(-20, 20, 100), model_dist(np.linspace(-20, 20, 100)), )
a_prior = a_post
b_prior = b_post
mu_prior = mu_post
l_prior = l_post
plt.plot(x, makeGaussian(mu_post, b_post/a_post)(x), "m", linewidth=2)
print(mu_D, sig_D)
print(mu_post, b_post/a_post)
```
| c14fdbb5be07c05b01781d8cebe56d9d5c72eee1 | 117,392 | ipynb | Jupyter Notebook | PRML/BayesianInference.ipynb | naktd31/jupyter-notebook | 4fdd4bea40bafd93d647bf09b2b04524f7427960 | [
"MIT"
]
| null | null | null | PRML/BayesianInference.ipynb | naktd31/jupyter-notebook | 4fdd4bea40bafd93d647bf09b2b04524f7427960 | [
"MIT"
]
| null | null | null | PRML/BayesianInference.ipynb | naktd31/jupyter-notebook | 4fdd4bea40bafd93d647bf09b2b04524f7427960 | [
"MIT"
]
| null | null | null | 304.124352 | 32,598 | 0.915045 | true | 2,106 | Qwen/Qwen-72B | 1. YES
2. YES | 0.903294 | 0.658418 | 0.594745 | __label__yue_Hant | 0.088893 | 0.220121 |
# Bayesian classifier
In statistical classification, the Bayes classifier minimizes the probability of misclassification.
```python
import random
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
random.seed(42) # define the seed (important to reproduce the results)
```
```python
#data = pd.read_csv('data/vertebralcolumn-3C.csv', header=(0))
#data = pd.read_csv('data/BreastCancer.csv', header=(0))
data = pd.read_csv('data/Iris.csv', header=(0))
# data = pd.read_csv('data/Vehicle.csv', header=(0))
# data = pd.read_csv(r'data\pima-indians-diabetes.csv', index_col = 0)
data = data.dropna(axis='rows') #remove NaN
# store the class names
classes = np.array(pd.unique(data[data.columns[-1]]), dtype=str)
nrow, ncol = data.shape
print("Matriz de atributos: Número de linhas:", nrow, " colunas: ", ncol)
attributes = list(data.columns)
data.head(10)
```
Matriz de atributos: Número de linhas: 150 colunas: 5
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>sepal_length</th>
<th>sepal_width</th>
<th>petal_length</th>
<th>petal_width</th>
<th>species</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>5.1</td>
<td>3.5</td>
<td>1.4</td>
<td>0.2</td>
<td>setosa</td>
</tr>
<tr>
<th>1</th>
<td>4.9</td>
<td>3.0</td>
<td>1.4</td>
<td>0.2</td>
<td>setosa</td>
</tr>
<tr>
<th>2</th>
<td>4.7</td>
<td>3.2</td>
<td>1.3</td>
<td>0.2</td>
<td>setosa</td>
</tr>
<tr>
<th>3</th>
<td>4.6</td>
<td>3.1</td>
<td>1.5</td>
<td>0.2</td>
<td>setosa</td>
</tr>
<tr>
<th>4</th>
<td>5.0</td>
<td>3.6</td>
<td>1.4</td>
<td>0.2</td>
<td>setosa</td>
</tr>
<tr>
<th>5</th>
<td>5.4</td>
<td>3.9</td>
<td>1.7</td>
<td>0.4</td>
<td>setosa</td>
</tr>
<tr>
<th>6</th>
<td>4.6</td>
<td>3.4</td>
<td>1.4</td>
<td>0.3</td>
<td>setosa</td>
</tr>
<tr>
<th>7</th>
<td>5.0</td>
<td>3.4</td>
<td>1.5</td>
<td>0.2</td>
<td>setosa</td>
</tr>
<tr>
<th>8</th>
<td>4.4</td>
<td>2.9</td>
<td>1.4</td>
<td>0.2</td>
<td>setosa</td>
</tr>
<tr>
<th>9</th>
<td>4.9</td>
<td>3.1</td>
<td>1.5</td>
<td>0.1</td>
<td>setosa</td>
</tr>
</tbody>
</table>
</div>
Let us build the variables $X$ and $y$; the classification process amounts to estimating the function $f$ in the relation $y = f(X) + \epsilon$, where $\epsilon$ is the error, normally distributed with mean zero and variance $\sigma^2$.
We convert the data to NumPy format to make it easier to manipulate.
```python
data = data.to_numpy()
nrow,ncol = data.shape
y = data[:,-1]
X = data[:,0:ncol-1]
```
Let us standardize the data, so as to avoid the effect of attribute scale.
```python
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(X) # z = (x-u)/s
X = scaler.transform(X)
print('Dados transformados:')
print('Media: ', np.mean(X, axis = 0))
print('Desvio Padrao:', np.std(X, axis = 0))
```
Dados transformados:
Media: [-4.73695157e-16 -6.63173220e-16 3.31586610e-16 -2.84217094e-16]
Desvio Padrao: [1. 1. 1. 1.]
## To train the classifier, we need to define the training and test sets.
```python
from sklearn.model_selection import train_test_split
p = 0.8 # fraction of elements in the training set
x_train, x_test, y_train, y_test = train_test_split(X, y, train_size = p, random_state = 42)
```
From this data set, we can perform the classification.
## Bayesian classifier
We consider the parametric case, assuming that each variable is distributed according to a Normal distribution. Other distributions can also be used.
We already selected the training and test sets above. On the training set, we compute the mean and covariance of the attributes for each class. We then classify the data using Bayesian decision theory, that is: $X \in C_i$ if and only if $P(C_i|X) = \max_j P(C_j|X)$.
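Since the evidence $p(X)$ is the same for every class, the code below only computes the unnormalized posterior (likelihood times prior) and takes the argmax:
$$
P(C_i \mid X) \;\propto\; p(X \mid C_i)\, P(C_i)
$$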
```python
from scipy.stats import multivariate_normal
print('\n\ncódigo original\n')
#matrix to store the probabilities
P = pd.DataFrame(data=np.zeros((x_test.shape[0], len(classes))), columns = classes) # probability of each class
Pc = np.zeros(len(classes)) #fraction of elements in each class
# for each class i
for i in np.arange(0, len(classes)):
    # tuple with the positions of the elements belonging to class i
elements = tuple(np.where(y_train == classes[i]))
    # NOTE: len(elements) is the length of the tuple (1), not the number of elements
Pc[i] = len(elements)/len(y_train)
print(f'Tipo Elements:\t{type(elements)}')
print(f'Tamanho Elements:{len(elements)}\nElements:{elements}\n')
    # rows of Z are the training elements of this class, columns are the attributes
    Z = x_train[elements,:][0]
    # mean of the class elements for each attribute
    m = np.mean(Z, axis = 0)
    # covariance between the attributes of this class
    cv = np.cov(np.transpose(Z))
    ## probability computation
    for j in np.arange(0,x_test.shape[0]):
        x = x_test[j,:]
        # probability from the likelihood
        pj = multivariate_normal.pdf(x, mean=m, cov=cv, allow_singular=True)
        # priors
        prior = Pc[i]
        # this works because the set of classes is discrete
        # the (unnormalized) probability is the likelihood times the prior
        P[classes[i]][j] = pj*prior
        # no need to divide by the evidence, since it is the same for every class
```
código original
Tipo Elements: <class 'tuple'>
Tamanho Elements:1
Elements:(array([ 0, 1, 3, 4, 7, 8, 9, 13, 14, 23, 26, 27, 28,
31, 32, 33, 35, 38, 41, 48, 51, 52, 55, 57, 58, 66,
67, 70, 71, 72, 75, 78, 84, 91, 94, 98, 102, 104, 114,
117], dtype=int64),)
Tipo Elements: <class 'tuple'>
Tamanho Elements:1
Elements:(array([ 2, 6, 11, 12, 15, 18, 20, 22, 25, 29, 34, 36, 39,
44, 45, 47, 49, 53, 54, 59, 60, 62, 65, 73, 79, 80,
82, 86, 88, 89, 90, 92, 93, 95, 99, 105, 108, 110, 111,
115, 118], dtype=int64),)
Tipo Elements: <class 'tuple'>
Tamanho Elements:1
Elements:(array([ 5, 10, 16, 17, 19, 21, 24, 30, 37, 40, 42, 43, 46,
50, 56, 61, 63, 64, 68, 69, 74, 76, 77, 81, 83, 85,
87, 96, 97, 100, 101, 103, 106, 107, 109, 112, 113, 116, 119],
dtype=int64),)
```python
from scipy.stats import multivariate_normal
print('código modificado\n')
#matrix to store the probabilities
P = pd.DataFrame(data=np.zeros((x_test.shape[0], len(classes))), columns = classes) # probability of each class
Pc = np.zeros(len(classes)) #fraction of elements in each class
# for each class i
for i in np.arange(0, len(classes)):
    # tuple with the positions of the elements belonging to class i
elements = tuple(np.where(y_train == classes[i]))
    # NOTE: len(elements[0]) now correctly counts the elements of class i
Pc[i] = len(elements[0])/len(y_train)
print(f'Tipo Elements modificado:\t{type(elements[0])}')
print(f'Tamanho Elements modificado:{len(elements[0])}\nElements:{elements[0]}\n')
    # rows of Z are the training elements of this class, columns are the attributes
    Z = x_train[elements,:][0]
    # mean of the class elements for each attribute
    m = np.mean(Z, axis = 0)
    # covariance between the attributes of this class
    cv = np.cov(np.transpose(Z))
    ## probability computation
    for j in np.arange(0,x_test.shape[0]):
        x = x_test[j,:]
        # probability from the likelihood
        pj = multivariate_normal.pdf(x, mean=m, cov=cv, allow_singular=True)
        # prior
        prior = Pc[i]
        # this works because the set of classes is discrete
        # the (unnormalized) probability is the likelihood times the prior
        P[classes[i]][j] = pj*prior
        # no need to divide by the evidence, since it is the same for every class
print(f'Proporção das classes {Pc}\t Soma = {Pc.sum()}')
```
código modificado
Tipo Elements modificado: <class 'numpy.ndarray'>
Tamanho Elements modificado:40
Elements:[ 0 1 3 4 7 8 9 13 14 23 26 27 28 31 32 33 35 38
41 48 51 52 55 57 58 66 67 70 71 72 75 78 84 91 94 98
102 104 114 117]
Tipo Elements modificado: <class 'numpy.ndarray'>
Tamanho Elements modificado:41
Elements:[ 2 6 11 12 15 18 20 22 25 29 34 36 39 44 45 47 49 53
54 59 60 62 65 73 79 80 82 86 88 89 90 92 93 95 99 105
108 110 111 115 118]
Tipo Elements modificado: <class 'numpy.ndarray'>
Tamanho Elements modificado:39
Elements:[ 5 10 16 17 19 21 24 30 37 40 42 43 46 50 56 61 63 64
68 69 74 76 77 81 83 85 87 96 97 100 101 103 106 107 109 112
113 116 119]
Proporção das classes [0.33333333 0.34166667 0.325 ] Soma = 1.0
## Prediction on the test data
```python
y_pred = []
#np.array(test_x.shape[0], dtype=str)
for i in np.arange(0, x_test.shape[0]):
    # c is the index of the largest probability
    c = np.argmax(np.array(P.iloc[[i]]))
    # this is the actual prediction
y_pred.append(classes[c])
y_pred = np.array(y_pred, dtype=str)
print(y_pred)
```
['versicolor' 'setosa' 'virginica' 'versicolor' 'versicolor' 'setosa'
'versicolor' 'virginica' 'virginica' 'versicolor' 'virginica' 'setosa'
'setosa' 'setosa' 'setosa' 'versicolor' 'virginica' 'versicolor'
'versicolor' 'virginica' 'setosa' 'virginica' 'setosa' 'virginica'
'virginica' 'virginica' 'virginica' 'virginica' 'setosa' 'setosa']
```python
from sklearn.metrics import accuracy_score
score = accuracy_score(y_pred, y_test)
print('Accuracy:', score)
```
Accuracy: 0.9666666666666667
Complete code.
```python
import random
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from scipy.stats import multivariate_normal
from sklearn.metrics import accuracy_score
random.seed(42)
data = pd.read_csv('data/Vehicle.csv', header=(0))
classes = np.array(pd.unique(data[data.columns[-1]]), dtype=str)
# Convert to a numpy matrix and vector
data = data.to_numpy()
nrow,ncol = data.shape
y = data[:,-1]
X = data[:,0:ncol-1]
# Transform the data to have zero mean and unit variance
scaler = StandardScaler().fit(X)
X = scaler.transform(X)
# Select the training and test sets
p = 0.8 # fraction of elements in the training set
x_train, x_test, y_train, y_test = train_test_split(X, y, train_size = p, random_state = 42)
#### Perform the classification ####
# Matrix that stores the probabilities for each class
P = pd.DataFrame(data=np.zeros((x_train.shape[0], len(classes))), columns = classes)
Pc = np.zeros(len(classes)) # stores the fraction of elements in each class
for i in np.arange(0, len(classes)): # for each class
    elements = tuple(np.where(y_train == classes[i])) # elements in class i
    Pc[i] = len(elements[0])/len(y_train) # probability of belonging to class i
    Z = x_train[elements,:][0] # elements in the training set
    m = np.mean(Z, axis = 0) # mean vector
    cv = np.cov(np.transpose(Z)) # covariance matrix
    for j in np.arange(0,x_test.shape[0]): # for each observation in the test set
x = x_test[j,:]
        # compute the probability of belonging to each class
pj = multivariate_normal.pdf(x, mean=m, cov=cv, allow_singular=True)
P[classes[i]][j] = pj*Pc[i]
y_pred = [] # Vector with the predicted classes
for i in np.arange(0, x_test.shape[0]):
c = np.argmax(np.array(P.iloc[[i]]))
y_pred.append(classes[c])
y_pred = np.array(y_pred, dtype=str)
# compute the accuracy
score = accuracy_score(y_pred, y_test)
print('Acuracia:', score)
```
Acuracia: 0.8823529411764706
## Non-parametric case
For the one-dimensional case, let $(X_1,X_2, \ldots, X_n)$ be a one-dimensional random sample, identically distributed according to some unknown density $f$. To estimate the shape of $f$, we use a kernel density estimator:
\begin{equation}
\widehat{f}_{h}(x)={\frac {1}{n}}\sum _{i=1}^{n}K_{h}(x-x_{i})={\frac {1}{nh}}\sum _{i=1}^{n}K{\Big (}{\frac {x-x_{i}}{h}}{\Big )},
\end{equation}
where $K$ is the kernel function.
The estimate depends on $h$, a free parameter (the bandwidth) that controls the width of the kernel.
```python
import numpy as np
import matplotlib.pyplot as plt
N = 20
# generate the data
X = np.array([1, 2, 3, 4, 12, 20,21,22,23,24,40,41, 50])
X = X.reshape((len(X), 1))
# show the data
plt.figure(figsize=(12,4))
plt.plot(X[:, 0], 0.001*np.ones(X.shape[0]), 'ok')
# x values at which the densities will be evaluated
X_plot = np.linspace(np.min(X)-5, np.max(X)+5, 1000)[:, np.newaxis]
h=2
fhat = 0 # accumulated estimate
for x in X:
    # normal distribution centered at x
    # the kernel is Gaussian; other kernels could be used
    f = (1/np.sqrt(2*np.pi*h))*np.exp(-((X_plot - x)**2)/(2*h**2))
    fhat = fhat + f # accumulate the kernels
    plt.plot(X_plot,f, '--', color = 'blue', linewidth=1)
# show the estimated density
plt.plot(X_plot,fhat/(len(X)*np.sqrt(h)), color = 'green', linewidth=2)
plt.xlabel('x', fontsize = 20)
plt.ylabel('P(x)', fontsize = 20)
plt.xticks(fontsize=12)
plt.yticks(fontsize=12)
plt.ylim((0, 0.3))
plt.savefig('kernel-ex.eps')
plt.show(True)
```
This result can also be obtained using scikit-learn's KernelDensity function.
```python
import numpy as np
from matplotlib.pyplot import cm
from sklearn.neighbors import KernelDensity
color=['red', 'blue', 'magenta', 'gray', 'green']
N = 20
X = np.array([1, 2, 3, 4, 12, 20,21,22,23,24,40,41, 50])
X = X.reshape((len(X), 1))
plt.figure(figsize=(12,4))
plt.plot(X[:, 0], 0.001*np.ones(X.shape[0]), 'ok')
X_plot = np.linspace(np.min(X)-5, np.max(X)+5, 1000)[:, np.newaxis]
h=2
fhat = 0
for x in X:
f = (1/np.sqrt(2*np.pi*h))*np.exp(-((X_plot - x)**2)/(2*h**2))
fhat = fhat + f
plt.plot(X_plot,f, '--', color = 'blue', linewidth=1)
kde = KernelDensity(kernel='gaussian', bandwidth=h).fit(X)
log_dens = np.exp(kde.score_samples(X_plot)) # score_samples() returns the log density.
plt.plot(X_plot,log_dens, color = 'red', linewidth=2, label = 'h='+str(h))
plt.xlabel('x', fontsize = 20)
plt.ylabel('P(x)', fontsize = 20)
plt.xticks(fontsize=12)
plt.yticks(fontsize=12)
plt.ylim((0, 0.3))
plt.show(True)
```
Note that the shape of the estimate depends on the free parameter $h$.
```python
import numpy as np
from matplotlib.pyplot import cm
color=['red', 'blue', 'gray', 'black', 'green', 'lightblue']
N = 20
X = np.array([1, 2, 3, 4, 12, 20,21,22,23,24,40,41, 50])
X = X.reshape((len(X), 1))
X_plot = np.linspace(np.min(X)-5, np.max(X)+5, 1000)[:, np.newaxis]
plt.figure(figsize=(12,4))
plt.plot(X[:, 0], 0.001*np.ones(X.shape[0]), 'ok')
c = 0
vh = [0.1, 0.5, 1, 2, 5, 10]
for h in vh:
kde = KernelDensity(kernel='gaussian', bandwidth=h).fit(X)
log_dens = np.exp(kde.score_samples(X_plot)) # score_samples() returns the log density.
plt.plot(X_plot,log_dens, color = color[c], linewidth=2, label = 'h='+str(h))
c = c + 1
plt.ylabel('P(x)', fontsize = 20)
plt.xticks(fontsize=12)
plt.yticks(fontsize=12)
#plt.ylim((0, 0.2))
plt.legend(fontsize = 10)
#plt.savefig('kernel.eps')
plt.show(True)
```
Note that this estimate is closely related to estimation using histograms.
```python
import numpy as np
N = 20
X = np.array([1, 2, 3, 4, 12, 20,21,22,23,24,40,41, 50])
X = X.reshape((len(X), 1))
plt.figure(figsize=(10,5))
# Histogram
nbins = 10
plt.hist(X,bins = nbins, density = True, color='gray',alpha=0.7, rwidth=0.95)
#Kernel density estimation
X_plot = np.linspace(np.min(X)-5, np.max(X)+5, 1000)[:, np.newaxis]
kde = KernelDensity(kernel='gaussian', bandwidth=2).fit(X)
log_dens = np.exp(kde.score_samples(X_plot)) # score_samples() returns the log density.
plt.plot(X_plot,log_dens, color = 'blue', linewidth=2)
plt.plot(X[:, 0], 0.001*np.ones(X.shape[0]), 'ok')
plt.xlabel('x', fontsize = 15)
plt.ylabel('P(x)', fontsize = 15)
plt.show(True)
```
Using the *kernel density estimation* method, we can now perform the classification.
```python
import random
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KernelDensity
from sklearn.metrics import accuracy_score
random.seed(42)
data = pd.read_csv('data/Iris.csv', header=(0))
# data = pd.read_csv('data/Vehicle.csv', header=(0))
classes = np.array(pd.unique(data[data.columns[-1]]), dtype=str)
# Convert to a numpy matrix and vector
data = data.to_numpy()
nrow,ncol = data.shape
y = data[:,-1]
X = data[:,0:ncol-1]
# Transform the data to have zero mean and unit variance
scaler = StandardScaler().fit(X)
X = scaler.transform(X)
# Select the training and test sets
p = 0.8 # fraction of elements in the training set
x_train, x_test, y_train, y_test = train_test_split(X, y, train_size = p, random_state = 42)
# Matrix that stores the probabilities for each class
P = pd.DataFrame(data=np.zeros((x_test.shape[0], len(classes))), columns = classes)
Pc = np.zeros(len(classes)) # stores the fraction of elements in each class
h = 2
for i in np.arange(0, len(classes)): # for each class
    elements = tuple(np.where(y_train == classes[i])) # elements in class i
    Pc[i] = len(elements[0])/len(y_train) # probability of belonging to class i
    Z = x_train[elements,:][0] # elements in the training set
    kde = KernelDensity(kernel='gaussian', bandwidth=h).fit(Z)
    for j in np.arange(0,x_test.shape[0]): # for each observation in the test set
x = x_test[j,:]
x = x.reshape((1,len(x)))
        # compute the probability of belonging to each class
pj = np.exp(kde.score_samples(x))
P[classes[i]][j] = pj*Pc[i]
y_pred = [] # Vector with the predicted classes
for i in np.arange(0, x_test.shape[0]):
c = np.argmax(np.array(P.iloc[[i]]))
y_pred.append(classes[c])
y_pred = np.array(y_pred, dtype=str)
# compute the accuracy
score = accuracy_score(y_pred, y_test)
print('Acuracia:', score)
```
Acuracia: 0.9666666666666667
| 2656944e1822e24855b52637981c630279439c61 | 240,045 | ipynb | Jupyter Notebook | bayes_classifier.ipynb | marcelns/data-analysis | 1ea76b4876253f408db7a2c13fdfb8c75eb627dc | [
"Apache-2.0"
]
| null | null | null | bayes_classifier.ipynb | marcelns/data-analysis | 1ea76b4876253f408db7a2c13fdfb8c75eb627dc | [
"Apache-2.0"
]
| null | null | null | bayes_classifier.ipynb | marcelns/data-analysis | 1ea76b4876253f408db7a2c13fdfb8c75eb627dc | [
"Apache-2.0"
]
| null | null | null | 264.950331 | 66,232 | 0.910054 | true | 6,402 | Qwen/Qwen-72B | 1. YES
2. YES | 0.849971 | 0.861538 | 0.732283 | __label__por_Latn | 0.523682 | 0.53967 |
## CCNSS 2018 Module 1: Neurons, synapses and networks
# Tutorial 3: Spike timing dependent plasticity
[source](https://colab.research.google.com/drive/1pE0nERUutXNIjCBQIWD_TdlE-mDLhtR1)
Please execute the cell below to initialise the notebook environment.
```
%autosave 0
import matplotlib.pyplot as plt # import matplotlib
import numpy as np # import numpy
import random # import basic random number generator functions
fig_w, fig_h = (6, 4)
plt.rcParams.update({'figure.figsize': (fig_w, fig_h)})
```
Autosave disabled
##Objectives
In your first (pre-course) tutorial, you implemented a LIF spiking neuron with a noisy input current. In this notebook we extend that model to implement spike timing dependent plasticity.
**Background paper:**
- Song S, Miller K and Abbott L (2000) Competitive Hebbian learning
through spike-timing-dependent synaptic plasticity. Nature Neurosci 3.
**Extra reading:**
- Sjostrom J and Gerstner W (2010) Spike-timing dependent plasticity. Scholarpedia: http://www.scholarpedia.org/article/Spike-timing_dependent_plasticity
## Conductance based LIF model
We will extend the leaky integrate and fire model by adding synaptic conductances and Poisson inputs. The postsynaptic membrane potential is given by:
\begin{align}
&\tau\,\frac{dV}{dt}\ = E_{L} - V(t) + g_{ex}(t)(E_{ex}-V(t)) &\text{if }\quad V(t) \leq V_{th}\\
&V(t) = V_{r} &\text{otherwise}\\
\end{align}
These equations should look familiar (from the LIF neuron assignment). The only difference is that the synaptic input has now been replaced by an excitatory synaptic conductance, which is given by the following decaying exponential:
$$\tau_{syn} \frac{dg_{ex}}{dt} = -g_{ex} $$
When a spike occurs at presynaptic synapse $i$, the conductance will be updated as:
$$ g_{ex}(t) = g_{ex}(t) + \bar g_{max}$$
The variable $\bar g_{max}$ represents the peak amplitude for a unitary spike input. We will later modify this variable to we implement synaptic plasticity.
First, execute the cell below to set the simulation parameters.
```
t_max = 150e-3 # second
dt = 1e-3 # second
tau = 20e-3 # second
el = -60e-3 # volt
e_ex = 0 # volt
vr = -70e-3 # volt
vth = -50e-3 # volt
tau_syn = 5e-3 # second
gbar_max = 0.015
```
**Exercise 1:** Fill in the following function to generate Poisson distributed spike times for arbitrary firing rate $r$. Poisson spike times can be approximated by setting the probability of a spike occurring in a short time bin $\Delta t$ as the product of the firing rate and the time window $r \Delta t$ (for small $\Delta t$). Check your function by generating Poisson spike times for n=2000 "neurons" at 10 Hz each. Calculate the average firing rate across all neurons, and then plot a histogram of the firing rates over the neurons.
* **Note:** In the function `generate_Poisson_spikes`, initialise the presynaptic spike train using `np.zeros((n,len(t)), dtype=np.int)` to speed up the code.
```
random.seed(0)
def generate_Poisson_spikes(t,rate,n):
""" Generates poisson spike trains
Arguments:
t -- time
rate -- firing rate (Hz)
n -- number of spike trains
Returns:
pre_spike_train -- spike train matrix, ith row represents whether
there is a spike in ith spike train over time
(1 if spike, 0 otherwise)
"""
# insert your code here
return pre_spike_train
# insert your code here
```
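One possible way to fill this in (a sketch, not the official solution): draw a uniform random number per neuron per time bin and emit a spike when it falls below $r\,\Delta t$.
```
def generate_Poisson_spikes(t, rate, n):
    # probability of a spike in each small bin is approximately rate*dt
    pre_spike_train = np.zeros((n, len(t)), dtype=np.int)
    pre_spike_train[np.random.rand(n, len(t)) < rate*dt] = 1
    return pre_spike_train

# sanity check: mean rate over 2000 neurons at 10 Hz should be close to 10
spikes = generate_Poisson_spikes(np.arange(0, t_max, dt), 10, 2000)
print(spikes.sum(axis=1).mean() / t_max)
```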
**EXPECTED OUTPUT**
```
10.096666666666668
```
**Exercise 2:** Fill in the function below to simulate the conductance based LIF model for a given set of presynaptic spike time inputs. To do this, you will need to discretise both $V(t)$ and $g_{ex}(t)$. This is equivalent to:
\begin{align}
&V[k+1] = V[k] + \frac{dt}{\tau} \left(E_L - V[k] + g_{ex}[k](E_{ex}-V[k])\right)\\
&g_{ex}[k+1] = g_{ex}[k] -\frac{dt}{\tau_{syn}} g_{ex}[k]
\end{align}
Don't forget to reset $V(t)\to V_r$ once it reaches threshold $V_{th}$, and set $g_{ex}(t) \to g_{ex}(t)+\bar g_{max}$ at every timepoint where there is a presynaptic spike. Then, simulate the model for 300 excitatory inputs, each firing at 10 Hz. Plot $V(t)$ and $g_{ex}(t)$, as well as the number of presynaptic spikes at each time point. Note that the latter two plots should be correlated.
```
random.seed(0)
def simulate_postsynaptic_neuron(t,pre_spike_train):
""" Simulate nonplastic postsynaptic neuron
Arguments:
t -- time
pre_spike_train -- presynaptic spike train matrix, same length as t
Returns:
g_ex -- excitatory conductance
v -- membrane potential
"""
# insert your code here
return g_ex,v
# insert your code here
```
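A minimal sketch of the update loop (one possible way to fill in the function above, using the parameters defined earlier):
```
def simulate_postsynaptic_neuron(t, pre_spike_train):
    v = np.zeros(len(t))
    g_ex = np.zeros(len(t))
    v[0] = el
    for k in range(len(t)-1):
        # conductance decays, and jumps by gbar_max for every presynaptic spike in this bin
        g_ex[k+1] = g_ex[k] - dt/tau_syn * g_ex[k] + gbar_max * pre_spike_train[:, k].sum()
        # membrane potential update, with reset at threshold
        v[k+1] = v[k] + dt/tau * (el - v[k] + g_ex[k]*(e_ex - v[k]))
        if v[k+1] > vth:
            v[k+1] = vr
    return g_ex, v
```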
**EXPECTED OUTPUT**
## Spike timing dependent plasticity
Now we will incorporate STDP into the conductance based model. Models of STDP generally have a biphasic exponential decaying function. This means the change in weights is given by:
\begin{align}
& \Delta W = A_+ e^{ (t_{pre}-t_{post})/\tau_+} \hspace{10mm} \text{if} \hspace{5mm} t_{post} > t_{pre}\\
& \Delta W = -A_- e^{- (t_{pre}-t_{post})/\tau_-} \hspace{7mm} \text{if} \hspace{5mm} t_{post} < t_{pre}
\end{align}
This model captures potentiation when the presynaptic spike time occurs before the postsynaptic spike time, and depression if it occurs after. The parameters $A_+$ and $A_-$ determine the magnitude of LTP and LTD, and $\tau_{+}$ and $\tau_{-}$ determine the temporal window.
Execute the following code to set the STDP parameters and plot the STDP function. For simplicity, we assume $\tau_{+} = \tau_{-} = \tau_{stdp}$.
```
tau_stdp = 20e-3 # second
A_plus = 5e-3
A_minus = A_plus*1.10
# Plot STDP function
time_diff = np.linspace(-5*tau_stdp,5*tau_stdp,50)
plt.figure()
plt.plot([-5*tau_stdp,5*tau_stdp],[0,0],'k',linestyle=':')
plt.plot([0,0],[-A_minus,A_plus],'k',linestyle=':')
for t in range(len(time_diff)):
if time_diff[t] < 0:
plt.plot(time_diff[t],A_plus*np.exp(time_diff[t]/tau_stdp),'C0o')
else:
plt.plot(time_diff[t],-A_minus*np.exp(-time_diff[t]/tau_stdp),'C0o')
plt.xlabel('pre spike time - post spike time',fontsize=15)
plt.ylabel('change in synaptic weight',fontsize=15)
plt.show()
```
## Keeping track of pre and postsynaptic spikes
In order to implement STDP, we first have to keep track of the pre and post synaptic spike times throughout our simulation. A simple way to do this is to define the following equation:
$$\tau_{-} \frac{dM}{dt} = -M$$
Whenever the postsynaptic neuron spikes,
$$M(t) = M(t) - A_{-}$$
Then, $M(t)$ tracks the number of postsynaptic spikes over the timescale $\tau_{-}$. Similarly for each presynaptic spike,
$$\tau_{+} \frac{dP_i}{dt} = -P_i$$
Whenever the $i$th presynaptic neuron spikes,
$$P(t) = P(t) + A_{+}$$
The variables $M(t)$ and $P_i(t)$ are very similar to the equations for the synaptic conductances $g_i(t)$, except that they are used to keep track of pre and postsynaptic spike times on a much longer timescale. Note that $M(t)$ is always negative and $P_i(t)$ is always positive. You can probably already guess that $M$ is used for LTD and $P_i$ for LTP because they are updated by $A_{-}$ and $A_{+}$, respectively. But since the equation for $P_i$ only depends on the presynaptic spike times, we will generate $P_i(t)$ before simulating the postsynaptic neuron and STDP.
**Exercise 3:** Fill in the following function to generate P from the presynaptic spike train. Test that this function works by simulating 5 inputs spiking at 10 Hz each, and plot both the spike times and $P_i(t)$.
```
random.seed(0)
def generate_P(t,pre_spike_train):
""" Generate P to track presynaptic spikes
Arguments:
t -- time
pre_spike_train -- presynaptic spike train matrix, same length as t
Returns:
P -- matrix, ith row is P for the ith presynaptic input
"""
# insert your code here
return P
# insert your code here
```
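A possible sketch for `generate_P` (mirroring the conductance update, but with $\tau_{+}$ and $A_+$):
```
def generate_P(t, pre_spike_train):
    P = np.zeros(pre_spike_train.shape)
    for k in range(len(t)-1):
        # decay towards zero, plus a jump of A_plus whenever that input spikes
        P[:, k+1] = P[:, k] - dt/tau_stdp * P[:, k] + A_plus * pre_spike_train[:, k]
    return P
```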
**EXPECTED OUTPUT**
## Implementation of STDP
Finally, to implement STDP in spiking networks, we will change the value of the peak synaptic conductance based on the pre and post synaptic timing. Previously, we set the peak synaptic conductance to $\bar g_{max}$. Now, each synapse $i$ will have its own peak synaptic conductance $\bar g_i$, which will vary between $[0, \bar g_{max}]$, and will be modified depending on the pre and post synaptic timing. If presynaptic neuron $i$ spikes, its corresponding peak conductance is updated to:
$$\bar g_i = \bar g_i + M(t)\bar g_{max} $$
Note that $M(t)$ tracks the time since the last postsynaptic potential and is negative. So if the postsynaptic neuron spikes shortly before the presynaptic neuron, the above equation means that the peak conductance will decrease. On the other hand, if the postsynaptic neuron spikes, all conductances are updated according to:
$$\bar g_i = \bar g_i + P_i(t)\bar g_{max} $$
Again, $P_i(t)$ tracks the time since presynaptic neuron $i$ last spiked, and is positive. So this equation means that if the presynaptic neuron spikes before the postsynaptic neuron, its peak conductance will increase.
**Exercise 4:** Fill in the following code to implement STDP, by modifying the code from Exercise 2 and calling ``generate_P``. Make sure that $\bar g_i$ never goes outside of its bounds. Check that this works by simulating the plastic postsynaptic neuron with 300 inputs firing at 10 Hz each. Plot $v(t)$, $M(t)$, $P_i(t)$, $\bar g_i(t)$, and $g_{ex}(t)$.
```
random.seed(0)
def simulate_postsynaptic_neuron_plastic(t,pre_spike_train):
"""Simulate a plastic neuron
Arguments:
t -- time
pre_spike_train -- presynaptic spike train matrix, same length as t
Returns:
g_ex -- excitatory conductance
gbar -- matrix, ith row is peak excitatory conductance
over time for ith presynaptic neuron
P -- matrix, keeps track of presynaptic spikes
M -- vector, keeps track of postsynaptic spikes
v -- membrane potential
"""
# insert your code here
return g_ex,gbar,P,M,v
# insert your code here
```
**EXPECTED OUTPUT**
**Exercise 5:** Execute the following cell to increase the presynaptic firing rate to 20 Hz and simulate the plastic postsynaptic neuron for 200s. This will take several minutes so this is a good time to take a short break.
```
random.seed(0)
t_max = 200; dt = 1e-3
t = np.arange(0, t_max, dt)
rate = 20; n = 300;
pre_spike_train = generate_Poisson_spikes(t,rate,n)
g_ex,gbar,P,M,v = simulate_postsynaptic_neuron_plastic(t,pre_spike_train)
```
Fill in the cell below to normalize $\bar g(t)$ by its maximum value and plot the trajectories for all presynaptic neurons (to better visualize all the traces, use `linewidth=.2`). Plot a histogram of the peak weights (as a fraction of $\bar g_{max}$) at the beginning of the STDP protocol, as well as halfway through, 2/3 of the way through, 3/4 of the way through, and at end. Is the system at steady state?
```
# insert your code here
```
**EXPECTED OUTPUT**
**Exercise 6:** Run the following cell to repeat the previous exercise, but now we will introduce a subset of correlated inputs by setting the first 50 presynaptic neurons to have the same spike times.
```
random.seed(0)
t_max = 200; dt = 1e-3
t = np.arange(0, t_max, dt)
rate = 20; n = 300;
pre_spike_train = generate_Poisson_spikes(t,rate,n)
for k in range(50):
pre_spike_train[k,:] = pre_spike_train[0,:]
g_ex,gbar,P,M,v = simulate_postsynaptic_neuron_plastic(t,pre_spike_train)
```
Copy/paste the code from Exercise 5 into the cell below to produce the same plots of $\bar g_i(t)$ and the same histograms. What is happening and why is this different from the previous exercise?
```
# insert your code here
```
**EXPECTED OUTPUT**
| c92fdaa450dfe871fce3877085fc79c7207a7978 | 50,246 | ipynb | Jupyter Notebook | module1/3_spike_timing_dependent_plasticity/3_Spike_timing_dependent_plasticity.ipynb | ruyuanzhang/ccnss2018_students | 978b2414ade6116da01c19a945304f9c514fb93f | [
"CC-BY-4.0"
]
| 12 | 2018-07-01T10:51:09.000Z | 2021-11-15T22:57:17.000Z | module1/3_spike_timing_dependent_plasticity/3_Spike_timing_dependent_plasticity.ipynb | marcelomattar/ccnss2018_students | 978b2414ade6116da01c19a945304f9c514fb93f | [
"CC-BY-4.0"
]
| null | null | null | module1/3_spike_timing_dependent_plasticity/3_Spike_timing_dependent_plasticity.ipynb | marcelomattar/ccnss2018_students | 978b2414ade6116da01c19a945304f9c514fb93f | [
"CC-BY-4.0"
]
| 13 | 2018-05-15T02:54:07.000Z | 2021-11-15T22:57:19.000Z | 71.78 | 25,674 | 0.739462 | true | 3,285 | Qwen/Qwen-72B | 1. YES
2. YES | 0.863392 | 0.779993 | 0.673439 | __label__eng_Latn | 0.98399 | 0.402956 |
## Objectives:
- Student should be able to Explain why we care about linear algebra in the scope of data science
- Student should be able to Conceptualize and utilize vectors and matrices through matrix operations and properties such as: square matrix, identity matrix, transpose and inverse
- Student should be able to Show when two vectors/matrices are orthogonal and explain the intuitive implications of orthogonality
- Student should be able to Calculate (by hand for small examples, with numpy for large) and understand importance of eigenvalues, eigenvectors
# Why Linear Algebra? (ELI5 + Soapbox)
Data Science, Machine Learning, and Artificial Intelligence are all about getting computers to do things for us better, cheaper, and faster than we could do them ourselves.
How do we do that? Computers are good at doing small repetitive tasks (like arithmetic). If we tell them what small repetitive tasks to do in the right order, then sometimes all of those combined behaviors will result in something that looks like a human's behavior (or at least the decisions/output look like something a human might decide to do/create).
<center></center>
<center>[Le Comte de Belamy](https://obvious-art.com/le-comte-de-belamy.htm)</center>
The set of instructions that we give to a computer to complete certain tasks is called an **algorithm**. The better that we can organize the set of instructions, the faster that computers can do them. The method that we use to organize and store our set of instructions so that the computer can do them super fast is called a **data structure**. The practice of optimizing the organization of our data structures so that they run really fast and efficiently is called **computer science**. (This is why we will have a unit dedicated solely to computer science in a few months). Data Scientists should care how fast computers can process their sets of instructions (algorithms).
## A set of ordered instructions
Here's a simple data structure, in Python it's known as a **list**. It's one of the simplest ways that we can store things (data) and maintain their order. When giving instructions to a computer, it's important that the computer knows in what order to execute them.
```python
selfDrivingCarInstructions = [
"open door",
"sit on seat",
"put key in ignition",
"turn key to the right until it stops",
"push brake pedal",
"change gear to 'Drive'",
"release brake pedal",
"push gas pedal",
'''turn wheel to navigate streets with thousands of small rules and
exceptions to rules all while avoiding collision with other
objects/humans/cars, obeying traffic laws, not running out of fuel and
getting there in a timely manner''',
"close door"
]
# We'll have self-driving cars next week for sure. NBD
```
# Maintaining the order of our sets of ordered instruction-sets
Here's another data structure we can make by putting lists inside of lists, this is called a two-dimensional list. Sometimes it is also known as a two-dimensional array or --if you put some extra methods on it-- a dataframe. As you can see things are starting to get a little bit more complicated.
```python
holdMyData = [
[1,2,3],
[4,5,6],
[7,8,9]
]
# Disregard the quality of these bad instructions
```
## Linear Algebra - organize and execute big calculations/operations really fast
So why linear algebra? Because the mathematical principles behind **vectors** and **matrices** (lists and 2D lists) will help us understand how we can tell computers how to do an insane number of calculations in a very short amount of time.
Remember when we said that computers are really good at doing small and repetitive tasks very quickly?
## I Give You... Matrix Multiplication:
<center></center>
<center>If you mess up any of those multiplications or additions you're up a creek.</center>
## I Give You... Finding the Determinant of a Matrix: (an introductory linear algebra topic)
## 2x2 Matrix
<center></center>
<center>Just use the formula!</center>
## 3x3 Matrix
<center></center>
<center>Just calculate the determinant of 3 different 2x2 matrices and multiply them by 3 other numbers and add it all up.</center>
## 4x4 Matrix
<center></center>
<center>Just calculate 3 diferent 3x3 matrix determinants which will require the calculating of 9 different 2x2 matrix determinants, multiply them all by the right numbers and add them all up. And if you mess up any of those multiplications or additions you're up a creek.</center>
## 5x5 Matrix!
## ...
## ...
Just kidding, any linear algebra professor who assigns the hand calculation of a 5x5 matrix determinant (or larger) is a sadist. This is what computers were invented for! Why risk so much hand calculation in order to do something that computers **never** make a mistake at?
By the way, when was the last time that you worked with a dataframe that was 4 rows x 4 columns or smaller?
Quick, find the determinant of this 42837x42837 dataframe by hand!
# Common Applications of Linear Algebra in Data Science:
- Vectors: Rows, Columns, lists, arrays
- Matrices: tables, spreadsheets, dataframes
- Linear Regression: (You might remember from the intro course)
<center></center>
```python
# Linear Regression Example
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
# Read CSV
df = pd.read_csv('https://raw.githubusercontent.com/ryanleeallred/datasets/master/Ice_Cream_Sales.csv')
# Create Column of 1s
df['Ones'] = np.ones(len(df))
# Format X and Y Matrices (as NumPy arrays)
X = df[['Ones', 'Farenheit']].values
Y = df['Dollars'].values.reshape(-1, 1)
# Calculate Beta Values
beta = np.matmul(np.linalg.inv(np.matmul(np.transpose(X), X)), np.matmul(np.transpose(X), Y))
print(beta)
```
[[-596.20648399]
[ 24.68849397]]
```python
# Assign Beta Values to Variables
beta_0 = beta[0,0]
beta_1 = beta[1,0]
# Plot points with line of best fit
plt.scatter(df['Farenheit'], df['Dollars'])
axes = plt.gca()
x_vals = np.array(axes.get_xlim())
y_vals = beta_0 + beta_1 * x_vals
plt.plot(x_vals, y_vals, '-', color='b')
plt.title('Ice Cream Sales Regression Line')
plt.xlabel('Farenheit')
plt.ylabel('Dollars')
plt.show()
```
- Dimensionality Reduction Techniques: Principle Component Analysis (PCA) and Singular Value Decomposition (SVD)
Take a giant dataset and distill it down to its important parts. (typically as a pre-processing step for creating visualizations or putting into other models.)
<center></center>
- Deep Learning: Convolutional Neural Networks (Image Recognition)
"Convolving" is the process of passing a filter/kernel (small matrix) over the pixels of an image, multiplying them together, and using the result to create a new matrix. The resulting matrix will be a new image that has been modified by the filter to emphasize certain qualities of an image. This is entirely a linear algebra-based process. A convolutional neural network learns the filters that help it best identify certain aspects of images and thereby classify images more accurately.
<center></center>
```python
!pip install imageio
```
Requirement already satisfied: imageio in /usr/local/lib/python3.6/dist-packages (2.4.1)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from imageio) (1.14.6)
Requirement already satisfied: pillow in /usr/local/lib/python3.6/dist-packages (from imageio) (4.0.0)
Requirement already satisfied: olefile in /usr/local/lib/python3.6/dist-packages (from pillow->imageio) (0.46)
```python
# Convolution in action
import imageio
import matplotlib.pyplot as plt
import numpy as np
import scipy.ndimage as nd
from skimage.exposure import rescale_intensity
img = imageio.imread('https://www.dropbox.com/s/dv3vtiqy439pzag/all_the_things.png?raw=1')
plt.axis('off')
plt.imshow(img);
```
```python
# Convert I to grayscale, so it will be MxNx1 instead of MxNx4
from skimage import color
grayscale = rescale_intensity(1-color.rgb2gray(img))
print(grayscale.shape)
plt.axis('off')
plt.imshow(grayscale);
```
```python
laplacian = np.array([[0,0,1,0,0],
[0,0,2,0,0],
[1,2,-16,2,1],
[0,0,2,0,0],
[0,0,1,0,0]])
laplacian_image = nd.convolve(grayscale, laplacian)
plt.axis('off')
plt.imshow(laplacian_image);
```
```python
sobel_x = np.array([
[-1,0,1],
[-2,0,2],
[-1,0,1]
])
sobel_x_image = nd.convolve(grayscale, sobel_x)
plt.axis('off')
plt.imshow(sobel_x_image);
```
```python
sobel_y = np.array([
[1,2,1],
[0,0,0],
[-1,-2,-1]
])
sobel_y_image = nd.convolve(grayscale, sobel_y)
plt.axis('off')
plt.imshow(sobel_y_image);
```
## Are we going to learn to do Linear Algebra by hand?
Let me quote your seventh grade maths teacher:
<center></center>
Of course you're going to carry a calculator around everywhere, so mostly **NO**, we're not going to do a lot of hand calculating. We're going to try and refrain from calculating things by hand unless it is absolutely necessary in order to understand and implement the concepts.
We're not trying to re-invent the wheel.
We're learning how to **use** the wheel.
# Linear Algebra Overview/Review:
## Scalars:
A single number. Variables representing scalars are typically written in lower case.
Scalars can be whole numbers or decimals.
\begin{align}
a = 2
\qquad
b = 4.815162342
\end{align}
They can be positive, negative, 0 or any other real number.
\begin{align}
c = -6.022\mathrm{e}{+23}
\qquad
d = \pi
\end{align}
```python
import math
import matplotlib.pyplot as plt
import numpy as np
# Start with a simple vector
blue = [.5, .5]
# Then multiply it by a scalar
green = np.multiply(2, blue)
red = np.multiply(math.pi, blue)
orange = np.multiply(-0.5, blue)
# Plot the Scaled Vectors
plt.arrow(0,0, red[0], red[1],head_width=.05, head_length=0.05, color ='red')
plt.arrow(0,0, green[0], green[1],head_width=.05, head_length=0.05, color ='green')
plt.arrow(0,0, blue[0], blue[1],head_width=.05, head_length=0.05, color ='blue')
plt.arrow(0,0, orange[0], orange[1],head_width=.05, head_length=0.05, color ='orange')
plt.xlim(-1,2)
plt.ylim(-1,2)
plt.title("Scaled Vectors")
plt.show()
```
## Vectors:
### Definition
A vector of dimension *n* is an **ordered** collection of *n* elements, which are called **components** (note: the components of a vector are **not** referred to as "scalars"). Variables representing vectors are commonly written as bold-faced lowercase letters, or as italicized (non-bold) lowercase letters with an arrow (→) above them:
Written: $\vec{v}$
Examples:
\begin{align}
\vec{a} =
\begin{bmatrix}
1\\
2
\end{bmatrix}
\qquad
\vec{b} =
\begin{bmatrix}
-1\\
0\\
2
\end{bmatrix}
\qquad
\vec{c} =
\begin{bmatrix}
4.5
\end{bmatrix}
\qquad
\vec{d} =
\begin{bmatrix}
Pl\\
a\\
b\\
\frac{2}{3}
\end{bmatrix}
\end{align}
The above vectors have dimensions 2, 3, 1, and 4 respectively.
Why do the vectors below only have two components?
```python
# Vector Examples
yellow = [.5, .5]
red = [.2, .1]
blue = [.1, .3]
plt.arrow(0, 0, .5, .5, head_width=.02, head_length=0.01, color = 'y')
plt.arrow(0, 0, .2, .1, head_width=.02, head_length=0.01, color = 'r')
plt.arrow(0, 0, .1, .3, head_width=.02, head_length=0.01, color = 'b')
plt.title('Vector Examples')
plt.show()
```
In domains such as physics it is emphasized that vectors have two properties: direction and magnitude. It's rare that we talk about them in that sense in Data Science unless we're specifically in a physics context. We just note that the length of a vector (its number of components) is equal to the number of dimensions of the vector.
What happens if we add a third component to each of our vectors?
```python
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
yellow = [.5, .5, .5]
red = [.2, .1, .0]
blue = [.1, .3, .3 ]
vectors = np.array([[0, 0, 0, .5, .5, .5],
[0, 0, 0, .2, .1, .0],
[0, 0, 0, .1, .3, .3]])
X, Y, Z, U, V, W = zip(*vectors)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.quiver(X, Y, Z, U, V, W, length=1)
ax.set_xlim([0, 1])
ax.set_ylim([0, 1])
ax.set_zlim([0, 1])
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
plt.show()
```
### Norm of a Vector (Magnitude or length)
The *Norm* or *Magnitude* of a vector is nothing more than the **length** of the vector. Since a vector is essentially just a line, if you treat it as the hypotenuse of a triangle you can use the Pythagorean theorem to find the equation for the norm of a vector. We're essentially just generalizing the equation for the hypotenuse of a triangle from the Pythagorean theorem to n-dimensional space.
We denote the norm of a vector by wrapping it in double pipes (like double absolute value signs)
\begin{align}
||v|| =
\sqrt{v_{1}^2 + v_{2}^2 + \ldots + v_{n}^2}
\\
\vec{a} =
\begin{bmatrix}
3 & 7 & 2 & 4
\end{bmatrix}
\\
||a|| = \sqrt{3^2 + 7^2 + 2^2 + 4^2} \\
||a|| = \sqrt{9 + 49 + 4 + 16} \\
||a|| = \sqrt{78}
\end{align}
The Norm is the square root of the sum of the squared elements of a vector.
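If you want a quick sanity check with NumPy (which we'll start using properly at the end of this notebook), the following reproduces the hand calculation above:
```python
import numpy as np

a = np.array([3, 7, 2, 4])
print(np.linalg.norm(a))   # sqrt(78), approximately 8.832
print(np.sqrt(78))         # same value
```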
Properties of the Norm:
The norm is always positive or zero $||x|| \geq 0$
The norm is only equal to zero if all of the elements of the vector are zero.
The Triangle Inequality: $|| x + y ||\leq ||x|| + ||y||$
### Dot Product
The dot product of two vectors $\vec{a}$ and $\vec{b}$ is a scalar quantity that is equal to the sum of pair-wise products of the components of vectors a and b.
\begin{align} \vec{a} \cdot \vec{b} = (a_{1} \times b_{1}) + (a_{2} \times b_{2}) + \ldots + ( a_{n} \times b_{n}) \end{align}
Example:
\begin{align}
\vec{a} =
\begin{bmatrix}
3 & 7 & 2 & 4
\end{bmatrix}
\qquad
\vec{b} =
\begin{bmatrix}
4 & 1 & 12 & 6
\end{bmatrix}
\end{align}
The dot product of two vectors would be:
\begin{align}
a \cdot b = (3)(4) + (7)(1) + (2)(12) + (4)(6) \\
= 12 + 7 + 24 + 24 \\
= 67
\end{align}
The dot product is commutative: $a \cdot b = b \cdot a$
The dot product is distributive: $a \cdot (b + c) = a \cdot b + a \cdot c$
Two vectors must have the same number of components in order for the dot product to exist. If their lengths differ the dot product is undefined.
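As a quick check of the hand calculation above, `np.dot` gives the same answer:
```python
import numpy as np

a = np.array([3, 7, 2, 4])
b = np.array([4, 1, 12, 6])
print(np.dot(a, b))   # 67
```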
### Cross Product
The Cross Product is the vector equivalent of multiplication. The result is a third vector that is perpendicular to the first two vectors.
It is written with a regular looking multiplication sign like $a \times b$ but it is read as "a cross b"
The cross product equation is a little complicated, and gaining an intuition for it is going to take a little bit more time than we have here. I think it's the least useful of the vector operations, but I'll give you a short example anyway.
Assume that we have vectors $x$ and $y$.
\begin{align}
x = \begin{bmatrix} x_1 & x_2 & x_3 \end{bmatrix}
\qquad
y = \begin{bmatrix} y_1 & y_2 & y_3 \end{bmatrix}
\end{align}
The cross product can be found by stacking a row of the unit vectors $i$, $j$, $k$ on top of the two vectors to create a 3x3 matrix, and then expanding the determinant of that matrix as follows:
\begin{align}
x \times y = \begin{vmatrix}
i & j & k \\
x_1 & x_2 & x_3 \\
y_1 & y_2 & y_3
\end{vmatrix}
\end{align}
\begin{align} =
i\begin{vmatrix}
x_2 & x_3 \\
y_2 & y_3
\end{vmatrix}
- j\begin{vmatrix}
x_1 & x_3 \\
y_1 & y_3
\end{vmatrix}
+ k\begin{vmatrix}
x_1 & x_2 \\
y_1 & y_2
\end{vmatrix}
\end{align}
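For example, crossing the x and y unit vectors gives the z unit vector, and the result is perpendicular to both inputs:
```python
import numpy as np

x = np.array([1, 0, 0])
y = np.array([0, 1, 0])
z = np.cross(x, y)
print(z)                            # [0 0 1]
print(np.dot(z, x), np.dot(z, y))   # 0 0 -- perpendicular to both
```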
## Matrices:
A **matrix** is a rectangular grid of numbers arranged in rows and columns. Variables that represent matrices are typically written as capital letters (boldfaced as well if you want to be super formal).
\begin{align}
A =
\begin{bmatrix}
1 & 2 & 3\\
4 & 5 & 6\\
7 & 8 & 9
\end{bmatrix}
\qquad
B = \begin{bmatrix}
1 & 2 & 3\\
4 & 5 & 6
\end{bmatrix}
\end{align}
### Dimensionality
The number of rows and columns that a matrix has is called its **dimension**.
When listing the dimension of a matrix we always list rows first and then columns.
The dimension of matrix A is 3x3. (Note: This is read "Three by Three", the 'x' isn't a multiplication sign.)
What is the Dimension of Matrix B?
### Matrix Equality
In order for two Matrices to be equal the following conditions must be true:
1) They must have the same dimensions.
2) Corresponding elements must be equal.
\begin{align}
\begin{bmatrix}
1 & 4\\
2 & 5\\
3 & 6
\end{bmatrix}
\neq
\begin{bmatrix}
1 & 2 & 3\\
4 & 5 & 6
\end{bmatrix}
\end{align}
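In NumPy terms, these two matrices have different shapes, so they cannot be equal:
```python
import numpy as np

A = np.array([[1, 4], [2, 5], [3, 6]])   # 3x2
B = np.array([[1, 2, 3], [4, 5, 6]])     # 2x3
print(A.shape, B.shape)                  # (3, 2) (2, 3) -- different dimensions
print(np.array_equal(A, B))              # False
```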
### Matrix Multiplication
You can multiply any two matrices where the number of columns of the first matrix is equal to the number of rows of the second matrix.
The unused dimensions of the factor matrices tell you what the dimensions of the product matrix will be.
There is no commutative property of matrix multiplication (you can't switch the order of the matrices and always get the same result).
Matrix multiplication is best understood in terms of the dot product. Remember:
\begin{align} \vec{a} \cdot \vec{b} = (a_{1} \times b_{1}) + (a_{2} \times b_{2}) + \ldots + ( a_{n} \times b_{n}) \end{align}
To multiply two matrices together, we take the dot product of each row of the first matrix with each column of the second matrix. The position of each resulting entry corresponds to the row number and column number of the row and column vectors that were used to compute it. Let's look at an example to make this more clear.
\begin{align}
\begin{bmatrix}
1 & 2 & 3 \\
4 & 5 & 6
\end{bmatrix}
\times
\begin{bmatrix}
7 & 8 \\
9 & 10 \\
11 & 12
\end{bmatrix}
=
\begin{bmatrix}
(1)(7)+(2)(9)+(3)(11) & (1)(8)+(2)(10)+(3)(12)\\
(4)(7)+(5)(9)+(6)(11) & (4)(8)+(5)(10)+(6)(12)
\end{bmatrix}
=
\begin{bmatrix}
(7)+(18)+(33) & (8)+(20)+(36)\\
(28)+(45)+(66) & (32)+(50)+(72)
\end{bmatrix}
=
\begin{bmatrix}
58 & 64\\
139 & 154
\end{bmatrix}
\end{align}
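NumPy will happily do all of those dot products for us:
```python
import numpy as np

A = np.array([[1, 2, 3], [4, 5, 6]])        # 2x3
B = np.array([[7, 8], [9, 10], [11, 12]])   # 3x2
print(np.matmul(A, B))
# [[ 58  64]
#  [139 154]]
```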
## Transpose
A transposed matrix is one whose rows are the columns of the original and whose columns are the rows of the original.
Common notation for the transpose of a matrix is to have a capital $T$ superscript or a tick mark:
\begin{align}
B^{T}
\qquad
B^{\prime}
\end{align}
The first is read "B transpose"; the second is sometimes read as "B prime" but can also be read as "B transpose".
The transpose of any matrix can be found easily by fixing the elements on the main diagonal and flipping the placement of all other elements across that diagonal.
\begin{align}
B =
\begin{bmatrix}
1 & 2 & 3 \\
4 & 5 & 6
\end{bmatrix}
\qquad
B^{T} =
\begin{bmatrix}
1 & 4 \\
2 & 5 \\
3 & 6
\end{bmatrix}
\end{align}
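With NumPy, `.T` (or `np.transpose`) does this for us:
```python
import numpy as np

B = np.array([[1, 2, 3], [4, 5, 6]])
print(B.T)
# [[1 4]
#  [2 5]
#  [3 6]]
```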
## Square Matrix:
In a true linear algebra class after the first few weeks you would deal almost exclusively with square matrices. They have very nice properties that their lopsided sisters and brothers just don't possess.
A square matrix is any matrix that has the same number of rows as columns:
\begin{align}
A =
\begin{bmatrix}
a_{1,1}
\end{bmatrix}
\qquad
B =
\begin{bmatrix}
b_{1,1} & b_{1,2} \\
b_{2,1} & b_{2,2}
\end{bmatrix}
\qquad
C =
\begin{bmatrix}
c_{1,1} & c_{1,2} & c_{1,3} \\
c_{2,1} & c_{2,2} & c_{2,3} \\
c_{3,1} & c_{3,2} & c_{3,3}
\end{bmatrix}
\end{align}
### Special Kinds of Square Matrices
**Diagonal:** Values on the main diagonal, zeroes everywhere else.
\begin{align}
A =
\begin{bmatrix}
a_{1,1} & 0 & 0 \\
0 & a_{2,2} & 0 \\
0 & 0 & a_{3,3}
\end{bmatrix}
\end{align}
**Upper Triangular:** Values on and above the main diagonal, zeroes everywhere else.
\begin{align}
B =
\begin{bmatrix}
b_{1,1} & b_{1,2} & b_{1,3} \\
0 & b_{2,2} & b_{2,3} \\
0 & 0 & b_{3,3}
\end{bmatrix}
\end{align}
**Lower Triangular:** Values on and below the main diagonal, zeroes everywhere else.
\begin{align}
C =
\begin{bmatrix}
c_{1,1} & 0 & 0 \\
c_{2,1} & c_{2,2} & 0 \\
c_{3,1} & c_{3,2} & c_{3,3}
\end{bmatrix}
\end{align}
**Identity Matrix:** A diagonal matrix with ones on the main diagonal and zeroes everywhere else. The product of any square matrix and the identity matrix is the original square matrix: $AI = A$. Also, any matrix multiplied by its inverse will give the identity matrix as its product: $AA^{-1} = I$
\begin{align}
D =
\begin{bmatrix}
1
\end{bmatrix}
\qquad
E =
\begin{bmatrix}
1 & 0 \\
0 & 1
\end{bmatrix}
\qquad
F =
\begin{bmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{bmatrix}
\end{align}
**Symmetric:** The numbers above the main diagonal are mirrored below/across the main diagonal.
\begin{align}
G =
\begin{bmatrix}
1 & 4 & 5 \\
4 & 2 & 6 \\
5 & 6 & 3
\end{bmatrix}
\end{align}
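A quick NumPy check of two of these properties, using the symmetric matrix $G$ above:
```python
import numpy as np

G = np.array([[1, 4, 5], [4, 2, 6], [5, 6, 3]])
I = np.eye(3)
print(np.allclose(np.matmul(G, I), G))   # True -- GI == G
print(np.array_equal(G, G.T))            # True -- G is symmetric
```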
## Determinant
The determinant is a property that all square matrices possess and is denoted $det(A)$ or using pipes (absolute value symbols) $|A|$
The equation given for finding the determinant of a 2x2 matrix is as follows:
\begin{align}
A = \begin{bmatrix}
a & b \\
c & d
\end{bmatrix}
\qquad
|A| = ad-bc
\end{align}
The determinant of larger square matrices is computed recursively, by finding the determinants of the smaller matrices that make up the larger matrix.
For example:
<center></center>
The above equation uses the same alternating-sign cofactor expansion (note the negative sign in front of the $b$ term) that we used to find the cross product of two vectors.
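NumPy's `np.linalg.det` handles all of that recursion for us; here is a small example matrix:
```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
print(np.linalg.det(A))   # -2.0 (up to floating point error), i.e. (1)(4) - (2)(3)
```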
## Inverse
There are multiple methods that we could use to find the inverse of a matrix by hand. I would suggest you explore those methods --if this content isn't already overwhelming enough. The inverse is like the reciprocal of the matrix that was used to generate it. Just like $\frac{1}{8}$ is the reciprocal of 8, $A^{-1}$ acts like the reciprocal of $A$. The equation for finding the inverse of a 2x2 matrix is as follows:
\begin{align}
A = \begin{bmatrix}
a & b \\
c & d
\end{bmatrix}
\qquad
A^{-1} = \frac{1}{ad-bc}\begin{bmatrix}
d & -b\\
-c & a
\end{bmatrix}
\end{align}
### What happens if we multiply a matrix by its inverse?
The product of a matrix multiplied by its inverse is the identity matrix of the same dimensions as the original matrix. There is no concept of "matrix division" in linear algebra, but multiplying a matrix by its inverse is very similar since $8\times\frac{1}{8} = 1$.
\begin{align}
A^{-1}A = I
\end{align}
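With NumPy we can verify this behavior directly on a small example matrix:
```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
A_inv = np.linalg.inv(A)
print(np.allclose(np.matmul(A_inv, A), np.eye(2)))   # True -- the product is the identity
```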
### Not all matrices are invertible
Matrices that are not square are not invertible.
A matrix is invertible if and only if its determinant is non-zero. You'll notice that the fraction on the left side of the matrix is $\frac{1}{det(A)}$.
As you know, dividing anything by 0 leads to an undefined quotient. Therefore, if the determinant of a matrix is 0, then the entire inverse becomes undefined.
### What leads to a 0 determinant?
A square matrix that has a determinant of 0 is known as a "singular" matrix. One thing that can lead to a matrix having a determinant of 0 is if two rows or columns in the matrix are perfectly collinear. Another way of saying this is that the determinant will be zero if the rows or columns of a matrix are linearly dependent (i.e., not linearly independent).
One of the most common ways that a matrix can end up with linearly dependent rows or columns is if one column is a multiple of another column. Let's look at an example:
\begin{align}
C =\begin{bmatrix}
1 & 5 & 2 \\
2 & 7 & 4 \\
3 & 2 & 6
\end{bmatrix}
\end{align}
Look at the columns of the above matrix, column 3 is exactly double column 1. (could be any multiple or fraction) Think about if you had some measure in a dataset of distance in miles, but then you also wanted to convert its units to feet, so you create another column and multiply the mile measure by 5,280 (Thanks Imperial System). But then you forget to drop one of the columns so you end up with two columns that are linearly dependent which causes the determinant of your dataframe to be 0 and will cause certain algorithms to fail. We'll go deeper into this concept next week (this can cause problems with linear regression) so just know that matrices that have columns that are a multiple or fraction of another column will cause the determinant of that matrix to be 0.
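We can confirm that this matrix is singular with NumPy:
```python
import numpy as np

C = np.array([[1, 5, 2], [2, 7, 4], [3, 2, 6]])   # column 3 is exactly 2x column 1
print(np.linalg.det(C))   # 0 (up to floating point error) -- C is singular
```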
For more details about when a matrix is invertible google the "Invertible Matrix Theorem" but be prepared for some heavy linear algebra jargon.
## Who's ready to get familiar with NumPy???
[Helpful NumPy Linear Algebra Functions](https://docs.scipy.org/doc/numpy-1.15.1/reference/routines.linalg.html)
```python
### What should we do first? :) Want to see anything demonstrated?
```
| 1fc57bda8188b1218b725513981d50c1d33f7bd1 | 466,192 | ipynb | Jupyter Notebook | 05-Linear-Algebra/01_Linear_Algebra.ipynb | ashishpatel26/Data-Science-Tutorial-By-Lambda-School | c145f5cc0559ee8ba7260b53e011c165e842fde0 | [
"MIT"
]
| 15 | 2019-07-23T20:17:55.000Z | 2021-12-09T02:32:53.000Z | 05-Linear-Algebra/01_Linear_Algebra.ipynb | pesobreiro/data-science-journal | 82a72b4ed5ce380988fac17b0acd97254c2b5c86 | [
"MIT"
]
| null | null | null | 05-Linear-Algebra/01_Linear_Algebra.ipynb | pesobreiro/data-science-journal | 82a72b4ed5ce380988fac17b0acd97254c2b5c86 | [
"MIT"
]
| 23 | 2019-10-12T15:32:41.000Z | 2022-03-13T05:05:13.000Z | 341.282577 | 118,890 | 0.916757 | true | 7,003 | Qwen/Qwen-72B | 1. YES
2. YES | 0.843895 | 0.826712 | 0.697658 | __label__eng_Latn | 0.994843 | 0.459225 |
```python
# ref https://www.coder.work/article/5024474
from sympy.diffgeom import Manifold, Patch, CoordSystem, TensorProduct
# from sympy.abc import theta, eta, psi
import sympy as sym
x,y,z,a = sym.symbols("x y z a")
m = Manifold("M",3)
patch = Patch("P",m)
cartesian = CoordSystem("cartesian",patch)
# toroidal = CoordSystem("toroidal",patch)
toroidal = CoordSystem("toroidal", patch, ["eta", "theta", "psi"])
# coordinate functions (base scalar fields) of the toroidal chart
eta, theta, psi = toroidal.coord_functions()
from sympy import sin,cos,sinh,cosh
toroidal.connect_to(cartesian,[eta,theta,psi],
[(a*sinh(eta)*cos(psi))/(cosh(eta) - cos(theta)),
(a*sinh(eta)*sin(psi))/(cosh(eta) - cos(theta)),
(a*sin(theta))/(cosh(eta) - cos(theta))],inverse=False)
g = sym.Matrix([[a**2/(cos(theta) - cosh(eta))**2, 0, 0],
[0, a**2/(cos(theta) - cosh(eta)), 0],
[0, 0, a**2*sinh(eta)**2/(cos(theta) - cosh(eta))**2]])
diff_forms = toroidal.base_oneforms()
metric_diff_form = sum([TensorProduct(di, dj)*g[i, j] for i, di in enumerate(diff_forms) for j, dj in enumerate(diff_forms)])
from sympy.diffgeom import metric_to_Riemann_components
metric_to_Riemann_components(metric_diff_form)
```
$\displaystyle \left[\begin{matrix}\left[\begin{matrix}0 & 0 & 0\\0 & 0 & 0\\0 & 0 & 0\end{matrix}\right] & \left[\begin{matrix}0 & \frac{a^{2} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right) \left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sin{\left(\mathbf{\theta} \right)}}{4 \left(\cos{\left(\mathbf{\theta} \right)} - \cosh{\left(\mathbf{\eta} \right)}\right)^{2} \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)} + \frac{a^{2} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right) \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{4 \left(\cos{\left(\mathbf{\theta} \right)} - \cosh{\left(\mathbf{\eta} \right)}\right)^{4}} - \frac{3 \left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right)^{2}}{4 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} - \frac{\left(- 2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} + 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh{\left(\mathbf{\eta} \right)}}{2 \left(\cos{\left(\mathbf{\theta} \right)} - \cosh{\left(\mathbf{\eta} \right)}\right)^{2}} - \frac{\left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh{\left(\mathbf{\eta} \right)}}{4 \left(\cos{\left(\mathbf{\theta} \right)} - \cosh{\left(\mathbf{\eta} \right)}\right)^{2}} - \frac{- 2 \sin^{2}{\left(\mathbf{\theta} \right)} + 2 \cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)} - \frac{\left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right) \cosh{\left(\mathbf{\eta} \right)}}{2 \left(\cos{\left(\mathbf{\theta} \right)} - \cosh{\left(\mathbf{\eta} \right)}\right)^{2}} - \frac{\left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{\left(\cos{\left(\mathbf{\theta} \right)} - \cosh{\left(\mathbf{\eta} \right)}\right)^{3}} & 0\\- \frac{a^{2} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right) \left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sin{\left(\mathbf{\theta} \right)}}{4 \left(\cos{\left(\mathbf{\theta} \right)} - \cosh{\left(\mathbf{\eta} \right)}\right)^{2} \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 
\cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)} - \frac{a^{2} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right) \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{4 \left(\cos{\left(\mathbf{\theta} \right)} - \cosh{\left(\mathbf{\eta} \right)}\right)^{4}} + \frac{3 \left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right)^{2}}{4 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} + \frac{\left(- 2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} + 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh{\left(\mathbf{\eta} \right)}}{2 \left(\cos{\left(\mathbf{\theta} \right)} - \cosh{\left(\mathbf{\eta} \right)}\right)^{2}} + \frac{\left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh{\left(\mathbf{\eta} \right)}}{4 \left(\cos{\left(\mathbf{\theta} \right)} - \cosh{\left(\mathbf{\eta} \right)}\right)^{2}} + \frac{- 2 \sin^{2}{\left(\mathbf{\theta} \right)} + 2 \cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)} + \frac{\left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right) \cosh{\left(\mathbf{\eta} \right)}}{2 \left(\cos{\left(\mathbf{\theta} \right)} - \cosh{\left(\mathbf{\eta} \right)}\right)^{2}} + \frac{\left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{\left(\cos{\left(\mathbf{\theta} \right)} - \cosh{\left(\mathbf{\eta} \right)}\right)^{3}} & 0 & 0\\0 & 0 & 0\end{matrix}\right] & \left[\begin{matrix}0 & 0 & - \frac{a^{2} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right) \left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right)^{2} \sinh^{2}{\left(\mathbf{\eta} \right)}}{4 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{3}} + \frac{\left(- 2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} + 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \left(- \frac{a^{2} \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + 
\cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} - \frac{a^{2} \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}}{\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}}\right)}{a^{2}} + \frac{\left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \left(- \frac{a^{2} \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} - \frac{a^{2} \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}}{\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}}\right)}{2 a^{2}} + \frac{\left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right) \left(- \frac{a^{2} \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \left(4 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 4 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{3}} - \frac{2 a^{2} \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}}{\left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} - \frac{a^{2} \left(2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} - 2 \sinh^{2}{\left(\mathbf{\eta} \right)} - 2 \cosh^{2}{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} - \frac{a^{2} \sinh^{2}{\left(\mathbf{\eta} \right)}}{\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}} - \frac{a^{2} \cosh^{2}{\left(\mathbf{\eta} \right)}}{\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}}\right)}{a^{2}} - \frac{\left(- \frac{a^{2} \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} - \frac{a^{2} 
\sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}}{\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}}\right) \left(\frac{a^{2} \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} + \frac{a^{2} \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}}{\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}}\right) \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}}{a^{4} \sinh^{2}{\left(\mathbf{\eta} \right)}}\\0 & 0 & \frac{a^{2} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right) \left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{3}{\left(\mathbf{\eta} \right)}}{4 \left(\cos{\left(\mathbf{\theta} \right)} - \cosh{\left(\mathbf{\eta} \right)}\right)^{2} \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)} + \frac{\left(- 2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} + 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \left(- \frac{a^{2} \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} - \frac{a^{2} \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}}{\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}}\right)}{a^{2}} + \frac{\left(- \frac{a^{2} \left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}}{\left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} - \frac{a^{2} \left(4 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 4 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{3}} + \frac{a^{2} \sin{\left(\mathbf{\theta} 
\right)} \sinh^{3}{\left(\mathbf{\eta} \right)}}{\left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}}\right) \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)}{a^{2}}\\\frac{a^{2} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right) \left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right)^{2} \sinh^{2}{\left(\mathbf{\eta} \right)}}{4 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{3}} - \frac{\left(- 2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} + 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \left(- \frac{a^{2} \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} - \frac{a^{2} \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}}{\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}}\right)}{a^{2}} - \frac{\left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \left(- \frac{a^{2} \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} - \frac{a^{2} \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}}{\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}}\right)}{2 a^{2}} - \frac{\left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right) \left(- \frac{a^{2} \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \left(4 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 4 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{3}} - \frac{2 a^{2} \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} 
\right)}}{\left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} - \frac{a^{2} \left(2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} - 2 \sinh^{2}{\left(\mathbf{\eta} \right)} - 2 \cosh^{2}{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} - \frac{a^{2} \sinh^{2}{\left(\mathbf{\eta} \right)}}{\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}} - \frac{a^{2} \cosh^{2}{\left(\mathbf{\eta} \right)}}{\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}}\right)}{a^{2}} + \frac{\left(- \frac{a^{2} \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} - \frac{a^{2} \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}}{\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}}\right) \left(\frac{a^{2} \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} + \frac{a^{2} \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}}{\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}}\right) \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}}{a^{4} \sinh^{2}{\left(\mathbf{\eta} \right)}} & - \frac{a^{2} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right) \left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{3}{\left(\mathbf{\eta} \right)}}{4 \left(\cos{\left(\mathbf{\theta} \right)} - \cosh{\left(\mathbf{\eta} \right)}\right)^{2} \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)} - \frac{\left(- 2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} + 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \left(- \frac{a^{2} \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 
\left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} - \frac{a^{2} \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}}{\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}}\right)}{a^{2}} - \frac{\left(- \frac{a^{2} \left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}}{\left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} - \frac{a^{2} \left(4 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 4 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{3}} + \frac{a^{2} \sin{\left(\mathbf{\theta} \right)} \sinh^{3}{\left(\mathbf{\eta} \right)}}{\left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}}\right) \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)}{a^{2}} & 0\end{matrix}\right]\\\left[\begin{matrix}0 & \frac{a^{4} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right)^{2} \left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sin{\left(\mathbf{\theta} \right)}}{4 \left(\cos{\left(\mathbf{\theta} \right)} - \cosh{\left(\mathbf{\eta} \right)}\right)^{2} \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} + \frac{a^{4} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right)^{2} \sinh^{2}{\left(\mathbf{\eta} \right)}}{4 \left(\cos{\left(\mathbf{\theta} \right)} - \cosh{\left(\mathbf{\eta} \right)}\right)^{4}} - \frac{a^{2} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right) \left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right)^{2}}{4 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{3}} + \frac{a^{2} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right) \left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \left(4 
\sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 4 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right)}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{3}} - \frac{a^{2} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right) \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh{\left(\mathbf{\eta} \right)}}{4 \left(\cos{\left(\mathbf{\theta} \right)} - \cosh{\left(\mathbf{\eta} \right)}\right)^{2} \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)} + \frac{a^{2} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right) \left(- 2 \sin^{2}{\left(\mathbf{\theta} \right)} + 2 \cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right)}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} + \frac{a^{2} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right) \cosh{\left(\mathbf{\eta} \right)}}{2 \left(\cos{\left(\mathbf{\theta} \right)} - \cosh{\left(\mathbf{\eta} \right)}\right)^{2}} + \frac{a^{2} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{\left(\cos{\left(\mathbf{\theta} \right)} - \cosh{\left(\mathbf{\eta} \right)}\right)^{3}} - \frac{\left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sin{\left(\mathbf{\theta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} - \frac{\sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos{\left(\mathbf{\theta} \right)} - \cosh{\left(\mathbf{\eta} \right)}\right)^{2}} & 0\\- \frac{a^{4} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right)^{2} \left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sin{\left(\mathbf{\theta} \right)}}{4 \left(\cos{\left(\mathbf{\theta} \right)} - \cosh{\left(\mathbf{\eta} \right)}\right)^{2} \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} - \frac{a^{4} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right)^{2} \sinh^{2}{\left(\mathbf{\eta} \right)}}{4 \left(\cos{\left(\mathbf{\theta} \right)} - \cosh{\left(\mathbf{\eta} \right)}\right)^{4}} + \frac{a^{2} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right) \left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} 
\cosh{\left(\mathbf{\eta} \right)}\right)^{2}}{4 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{3}} - \frac{a^{2} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right) \left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \left(4 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 4 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right)}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{3}} + \frac{a^{2} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right) \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh{\left(\mathbf{\eta} \right)}}{4 \left(\cos{\left(\mathbf{\theta} \right)} - \cosh{\left(\mathbf{\eta} \right)}\right)^{2} \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)} - \frac{a^{2} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right) \left(- 2 \sin^{2}{\left(\mathbf{\theta} \right)} + 2 \cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right)}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} - \frac{a^{2} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right) \cosh{\left(\mathbf{\eta} \right)}}{2 \left(\cos{\left(\mathbf{\theta} \right)} - \cosh{\left(\mathbf{\eta} \right)}\right)^{2}} - \frac{a^{2} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{\left(\cos{\left(\mathbf{\theta} \right)} - \cosh{\left(\mathbf{\eta} \right)}\right)^{3}} + \frac{\left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sin{\left(\mathbf{\theta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} + \frac{\sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos{\left(\mathbf{\theta} \right)} - \cosh{\left(\mathbf{\eta} \right)}\right)^{2}} & 0 & 0\\0 & 0 & 0\end{matrix}\right] & \left[\begin{matrix}0 & 0 & 0\\0 & 0 & 0\\0 & 0 & 0\end{matrix}\right] & \left[\begin{matrix}0 & 0 & - \frac{a^{4} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right)^{2} \left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{3}{\left(\mathbf{\eta} \right)}}{4 \left(\cos{\left(\mathbf{\theta} \right)} - \cosh{\left(\mathbf{\eta} \right)}\right)^{2} 
\left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} - \frac{a^{2} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right) \left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \left(4 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 4 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{3}} - \frac{a^{2} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right) \left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}}{\left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} + \frac{a^{2} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right) \sin{\left(\mathbf{\theta} \right)} \sinh^{3}{\left(\mathbf{\eta} \right)}}{\left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} - \frac{\left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right) \left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \left(- \frac{a^{2} \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} - \frac{a^{2} \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}}{\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}}\right)}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)} + \frac{\left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right) \left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \left(\frac{a^{2} \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} + \frac{a^{2} 
\sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}}{\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}}\right)}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)} + \frac{\left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{3}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}}\\0 & 0 & - \frac{a^{4} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right)^{2} \left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sin{\left(\mathbf{\theta} \right)} \sinh^{2}{\left(\mathbf{\eta} \right)}}{4 \left(\cos{\left(\mathbf{\theta} \right)} - \cosh{\left(\mathbf{\eta} \right)}\right)^{2} \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} + \frac{a^{2} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right) \left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right)^{2} \sinh^{2}{\left(\mathbf{\eta} \right)}}{4 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{3}} - \frac{a^{2} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right) \left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \left(4 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 4 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{3}} - \frac{a^{2} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right) \left(- 2 \sin^{2}{\left(\mathbf{\theta} \right)} + 2 \cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} + \frac{\left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right) \left(- \frac{a^{2} \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 
\cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} - \frac{a^{2} \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}}{\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}}\right) \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right) \sinh{\left(\mathbf{\eta} \right)}}{2 \left(\cos{\left(\mathbf{\theta} \right)} - \cosh{\left(\mathbf{\eta} \right)}\right)^{2}} + \frac{\left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sin{\left(\mathbf{\theta} \right)} \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}}\\\frac{a^{4} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right)^{2} \left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{3}{\left(\mathbf{\eta} \right)}}{4 \left(\cos{\left(\mathbf{\theta} \right)} - \cosh{\left(\mathbf{\eta} \right)}\right)^{2} \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} + \frac{a^{2} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right) \left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \left(4 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 4 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{3}} + \frac{a^{2} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right) \left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}}{\left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} - \frac{a^{2} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right) \sin{\left(\mathbf{\theta} \right)} \sinh^{3}{\left(\mathbf{\eta} \right)}}{\left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} + \frac{\left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right) \left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) 
\left(- \frac{a^{2} \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} - \frac{a^{2} \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}}{\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}}\right)}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)} - \frac{\left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right) \left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \left(\frac{a^{2} \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} + \frac{a^{2} \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}}{\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}}\right)}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)} - \frac{\left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{3}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} & \frac{a^{4} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right)^{2} \left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sin{\left(\mathbf{\theta} \right)} \sinh^{2}{\left(\mathbf{\eta} \right)}}{4 \left(\cos{\left(\mathbf{\theta} \right)} - \cosh{\left(\mathbf{\eta} \right)}\right)^{2} \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} - \frac{a^{2} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right) \left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right)^{2} \sinh^{2}{\left(\mathbf{\eta} \right)}}{4 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{3}} + \frac{a^{2} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} 
\right)}}{a^{2}}\right) \left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \left(4 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 4 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{3}} + \frac{a^{2} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right) \left(- 2 \sin^{2}{\left(\mathbf{\theta} \right)} + 2 \cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} - \frac{\left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right) \left(- \frac{a^{2} \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} - \frac{a^{2} \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}}{\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}}\right) \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right) \sinh{\left(\mathbf{\eta} \right)}}{2 \left(\cos{\left(\mathbf{\theta} \right)} - \cosh{\left(\mathbf{\eta} \right)}\right)^{2}} - \frac{\left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sin{\left(\mathbf{\theta} \right)} \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} & 0\end{matrix}\right]\\\left[\begin{matrix}0 & 0 & \frac{a^{2} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right) \left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right)^{2}}{4 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{3}} + \frac{\left(- 2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} + 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \left(\frac{a^{2} \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 
\cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} + \frac{a^{2} \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}}{\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}}\right)}{a^{2} \sinh^{2}{\left(\mathbf{\eta} \right)}} - \frac{\left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \left(\frac{a^{2} \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} + \frac{a^{2} \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}}{\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}}\right)}{2 a^{2} \sinh^{2}{\left(\mathbf{\eta} \right)}} - \frac{2 \left(\frac{a^{2} \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} + \frac{a^{2} \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}}{\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}}\right) \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right) \cosh{\left(\mathbf{\eta} \right)}}{a^{2} \sinh^{3}{\left(\mathbf{\eta} \right)}} + \frac{\left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right) \left(\frac{a^{2} \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \left(4 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 4 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{3}} + \frac{2 a^{2} \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}}{\left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} + \frac{a^{2} \left(2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} - 2 \sinh^{2}{\left(\mathbf{\eta} \right)} - 2 \cosh^{2}{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} 
\right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} + \frac{a^{2} \sinh^{2}{\left(\mathbf{\eta} \right)}}{\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}} + \frac{a^{2} \cosh^{2}{\left(\mathbf{\eta} \right)}}{\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}}\right)}{a^{2} \sinh^{2}{\left(\mathbf{\eta} \right)}} + \frac{\left(\frac{a^{2} \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} + \frac{a^{2} \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}}{\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}}\right)^{2} \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}}{a^{4} \sinh^{4}{\left(\mathbf{\eta} \right)}}\\0 & 0 & - \frac{a^{2} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right) \left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh{\left(\mathbf{\eta} \right)}}{4 \left(\cos{\left(\mathbf{\theta} \right)} - \cosh{\left(\mathbf{\eta} \right)}\right)^{2} \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)} + \frac{\left(- 2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} + 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \left(\frac{a^{2} \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} + \frac{a^{2} \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}}{\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}}\right)}{a^{2} \sinh^{2}{\left(\mathbf{\eta} \right)}} + \frac{\left(\frac{a^{2} \left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}}{\left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} + \frac{a^{2} \left(4 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 4 \sin{\left(\mathbf{\theta} 
\right)} \cosh{\left(\mathbf{\eta} \right)}\right) \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{3}} - \frac{a^{2} \sin{\left(\mathbf{\theta} \right)} \sinh^{3}{\left(\mathbf{\eta} \right)}}{\left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}}\right) \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)}{a^{2} \sinh^{2}{\left(\mathbf{\eta} \right)}}\\- \frac{a^{2} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right) \left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right)^{2}}{4 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{3}} - \frac{\left(- 2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} + 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \left(\frac{a^{2} \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} + \frac{a^{2} \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}}{\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}}\right)}{a^{2} \sinh^{2}{\left(\mathbf{\eta} \right)}} + \frac{\left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \left(\frac{a^{2} \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} + \frac{a^{2} \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}}{\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}}\right)}{2 a^{2} \sinh^{2}{\left(\mathbf{\eta} \right)}} + \frac{2 \left(\frac{a^{2} \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} + \frac{a^{2} \sinh{\left(\mathbf{\eta} 
\right)} \cosh{\left(\mathbf{\eta} \right)}}{\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}}\right) \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right) \cosh{\left(\mathbf{\eta} \right)}}{a^{2} \sinh^{3}{\left(\mathbf{\eta} \right)}} - \frac{\left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right) \left(\frac{a^{2} \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \left(4 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 4 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{3}} + \frac{2 a^{2} \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}}{\left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} + \frac{a^{2} \left(2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} - 2 \sinh^{2}{\left(\mathbf{\eta} \right)} - 2 \cosh^{2}{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} + \frac{a^{2} \sinh^{2}{\left(\mathbf{\eta} \right)}}{\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}} + \frac{a^{2} \cosh^{2}{\left(\mathbf{\eta} \right)}}{\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}}\right)}{a^{2} \sinh^{2}{\left(\mathbf{\eta} \right)}} - \frac{\left(\frac{a^{2} \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} + \frac{a^{2} \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}}{\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}}\right)^{2} \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}}{a^{4} \sinh^{4}{\left(\mathbf{\eta} \right)}} & \frac{a^{2} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right) \left(2 \sin{\left(\mathbf{\theta} \right)} 
\cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh{\left(\mathbf{\eta} \right)}}{4 \left(\cos{\left(\mathbf{\theta} \right)} - \cosh{\left(\mathbf{\eta} \right)}\right)^{2} \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)} - \frac{\left(- 2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} + 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \left(\frac{a^{2} \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} + \frac{a^{2} \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}}{\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}}\right)}{a^{2} \sinh^{2}{\left(\mathbf{\eta} \right)}} - \frac{\left(\frac{a^{2} \left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}}{\left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} + \frac{a^{2} \left(4 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 4 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{3}} - \frac{a^{2} \sin{\left(\mathbf{\theta} \right)} \sinh^{3}{\left(\mathbf{\eta} \right)}}{\left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}}\right) \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)}{a^{2} \sinh^{2}{\left(\mathbf{\eta} \right)}} & 0\end{matrix}\right] & \left[\begin{matrix}0 & 0 & - \frac{a^{2} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right) \left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh{\left(\mathbf{\eta} \right)}}{4 \left(\cos{\left(\mathbf{\theta} \right)} - \cosh{\left(\mathbf{\eta} \right)}\right)^{2} \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)} + \frac{\left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} 
\right)}\right) \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right)}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} - \frac{\sin{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)}}{\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}}\\0 & 0 & - \frac{a^{2} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right) \left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sin{\left(\mathbf{\theta} \right)}}{4 \left(\cos{\left(\mathbf{\theta} \right)} - \cosh{\left(\mathbf{\eta} \right)}\right)^{2} \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)} + \frac{3 \left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right)^{2}}{4 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} + \frac{- 2 \sin^{2}{\left(\mathbf{\theta} \right)} + 2 \cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)} + \frac{\left(\frac{a^{2} \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} + \frac{a^{2} \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}}{\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}}\right) \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}}{2 a^{2} \left(\cos{\left(\mathbf{\theta} \right)} - \cosh{\left(\mathbf{\eta} \right)}\right)^{2} \sinh{\left(\mathbf{\eta} \right)}}\\\frac{a^{2} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right) \left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh{\left(\mathbf{\eta} \right)}}{4 \left(\cos{\left(\mathbf{\theta} \right)} - \cosh{\left(\mathbf{\eta} \right)}\right)^{2} \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)} - \frac{\left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} 
\right)} \cosh{\left(\mathbf{\eta} \right)}\right) \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right)}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} + \frac{\sin{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)}}{\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}} & \frac{a^{2} \left(\frac{\cos{\left(\mathbf{\theta} \right)}}{a^{2}} - \frac{\cosh{\left(\mathbf{\eta} \right)}}{a^{2}}\right) \left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sin{\left(\mathbf{\theta} \right)}}{4 \left(\cos{\left(\mathbf{\theta} \right)} - \cosh{\left(\mathbf{\eta} \right)}\right)^{2} \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)} - \frac{3 \left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right)^{2}}{4 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} - \frac{- 2 \sin^{2}{\left(\mathbf{\theta} \right)} + 2 \cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)} - \frac{\left(\frac{a^{2} \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} + \frac{a^{2} \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}}{\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}}\right) \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}}{2 a^{2} \left(\cos{\left(\mathbf{\theta} \right)} - \cosh{\left(\mathbf{\eta} \right)}\right)^{2} \sinh{\left(\mathbf{\eta} \right)}} & 0\end{matrix}\right] & \left[\begin{matrix}0 & \frac{\left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right)}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} - \frac{\sin{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)}}{\cos^{2}{\left(\mathbf{\theta} \right)} - 2 
\cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}} - \frac{\left(- 2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} + 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \left(\frac{a^{2} \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} + \frac{a^{2} \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}}{\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}}\right)}{a^{2} \sinh^{2}{\left(\mathbf{\eta} \right)}} - \frac{\left(\frac{a^{2} \left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}}{\left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} + \frac{a^{2} \left(4 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 4 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{3}} - \frac{a^{2} \sin{\left(\mathbf{\theta} \right)} \sinh^{3}{\left(\mathbf{\eta} \right)}}{\left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}}\right) \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)}{a^{2} \sinh^{2}{\left(\mathbf{\eta} \right)}} & 0\\- \frac{\left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right)}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} + \frac{\sin{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)}}{\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}} + \frac{\left(- 2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} + 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \left(\frac{a^{2} \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) 
\sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} + \frac{a^{2} \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}}{\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}}\right)}{a^{2} \sinh^{2}{\left(\mathbf{\eta} \right)}} + \frac{\left(\frac{a^{2} \left(2 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 2 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}}{\left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}} + \frac{a^{2} \left(4 \sin{\left(\mathbf{\theta} \right)} \cos{\left(\mathbf{\theta} \right)} - 4 \sin{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \left(2 \cos{\left(\mathbf{\theta} \right)} \sinh{\left(\mathbf{\eta} \right)} - 2 \sinh{\left(\mathbf{\eta} \right)} \cosh{\left(\mathbf{\eta} \right)}\right) \sinh^{2}{\left(\mathbf{\eta} \right)}}{2 \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{3}} - \frac{a^{2} \sin{\left(\mathbf{\theta} \right)} \sinh^{3}{\left(\mathbf{\eta} \right)}}{\left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)^{2}}\right) \left(\cos^{2}{\left(\mathbf{\theta} \right)} - 2 \cos{\left(\mathbf{\theta} \right)} \cosh{\left(\mathbf{\eta} \right)} + \cosh^{2}{\left(\mathbf{\eta} \right)}\right)}{a^{2} \sinh^{2}{\left(\mathbf{\eta} \right)}} & 0 & 0\\0 & 0 & 0\end{matrix}\right]\end{matrix}\right]$
| 337c272f3a50747f5a90743a185b3d50fc2392d6 | 119,180 | ipynb | Jupyter Notebook | knowledge/sympy_diffgeom_metric_to_Riemann_components.ipynb | partnernetsoftware/openlab | faa4e58486a7bc4140ad3d56545bfb736cb86696 | [
"MIT"
]
| 1 | 2020-09-26T05:27:30.000Z | 2020-09-26T05:27:30.000Z | knowledge/sympy_diffgeom_metric_to_Riemann_components.ipynb | partnernetsoftware/openlab | faa4e58486a7bc4140ad3d56545bfb736cb86696 | [
"MIT"
]
| null | null | null | knowledge/sympy_diffgeom_metric_to_Riemann_components.ipynb | partnernetsoftware/openlab | faa4e58486a7bc4140ad3d56545bfb736cb86696 | [
"MIT"
]
| null | null | null | 1,527.948718 | 90,259 | 0.539596 | true | 33,438 | Qwen/Qwen-72B | 1. YES
2. YES | 0.957278 | 0.715424 | 0.68486 | __label__yue_Hant | 0.166522 | 0.429489 |
# Proving convexity of some functions posed as exercises
For all problems we assume that $x_1, x_2 \in {\rm I\!R}$, we write $z = \alpha x_1 + (1 - \alpha)x_2$ for the convex combination used in every proof, and we assume that $\alpha \in [0,1]$.
1. Show that $f(x) = a \cdot x + b$ is convex, $x \in {\rm I\!R}$
\begin{equation}
\begin{split}
& f(z) = f(\alpha x_1 + (1 - \alpha) x_2)\\
& f(z) = a (\alpha x_1 + (1 - \alpha) x_2) + b \\
& f(z) = a \alpha x_1 + a(1-\alpha) x_2 + b \\
& f(z) = a \alpha x_1 + a(1-\alpha) x_2 + b + \alpha b - \alpha b\\
& f(z) = a \alpha x_1 + \alpha b + a(1-\alpha) x_2 + b - \alpha b\\
& f(z) = \alpha (a x_1 + b) + a(1-\alpha)x_2 + b(1 - \alpha)\\
& f(z) = \alpha ( a x_1 + b) + (1-\alpha) (a x_2 + b)\\
& f(z) = \alpha f(x_1) + (1-\alpha) f(x_2)
\end{split}
\end{equation}
2. Show that $f(x) = ax^2 + b$ is convex, $x \in {\rm I\!R}$ (assuming $a \geq 0$; for $a < 0$ the function is concave)
\begin{equation}
\begin{split}
& f(z) = f(\alpha x_1 + (1 - \alpha) x_2)\\
& f(z) = a (\alpha x_1 + (1 - \alpha)x_2)^2 + b\\
& f(z) = a \left( \alpha^2 x_1^2 + 2 \alpha (1 - \alpha) x_1 x_2 + (1 - \alpha)^2 x_2^2 \right) + b\\
& \alpha f(x_1) + (1 - \alpha) f(x_2) - f(z) = a \left( \alpha x_1^2 + (1 - \alpha) x_2^2 - \alpha^2 x_1^2 - 2 \alpha (1 - \alpha) x_1 x_2 - (1 - \alpha)^2 x_2^2 \right)\\
& \alpha f(x_1) + (1 - \alpha) f(x_2) - f(z) = a \left( \alpha (1 - \alpha) x_1^2 - 2 \alpha (1 - \alpha) x_1 x_2 + \alpha (1 - \alpha) x_2^2 \right)\\
& \alpha f(x_1) + (1 - \alpha) f(x_2) - f(z) = a \, \alpha (1 - \alpha) (x_1 - x_2)^2 \geq 0\\
& f(z) \leq \alpha f(x_1) + (1 - \alpha) f(x_2)
\end{split}
\end{equation}
3. Show that $g(x) = af(x) + b$ is convex, where $f(x)$ is a convex function and $a \geq 0$, $x \in {\rm I\!R}$
\begin{equation}
\begin{split}
& g(z) = g(\alpha x_1 + (1-\alpha)x_2) = a f(\alpha x_1 + (1-\alpha)x_2) + b\\
& g(z) \leq a \left[ \alpha f(x_1) + (1-\alpha) f(x_2) \right] + b \quad \text{(convexity of $f$ and $a \geq 0$)}\\
& g(z) \leq a \alpha f(x_1) + a (1 - \alpha) f(x_2) + \alpha b + (1 - \alpha) b\\
& g(z) \leq \alpha \left( a f(x_1) + b \right) + (1 - \alpha) \left( a f(x_2) + b \right)\\
& g(z) \leq \alpha g(x_1) + (1 - \alpha) g(x_2)
\end{split}
\end{equation}
4. Show that, if $f(x)$ is convex, then $g(x) = f(ax+b)$ is convex as well
\begin{equation}
\begin{split}
& g(z) = f(a z + b) = f(a (\alpha x_1 + (1 - \alpha)x_2) + b)\\
& g(z) = f(a\alpha x_1 + a(1 - \alpha)x_2 + \alpha b + (1 - \alpha) b)\\
& g(z) = f(\alpha [a x_1 + b] + [1 - \alpha] [a x_2 + b])\\
& g(z) \leq \alpha f(a x_1 + b) + (1 - \alpha) f(a x_2 + b) \quad \text{(convexity of $f$)}\\
& g(z) \leq \alpha g(x_1) + (1 - \alpha) g(x_2)
\end{split}
\end{equation}
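As a quick numerical sanity check (an addition, not part of the original exercise set; the sample ranges, the particular values of $a$ and $b$, and the helper name `convexity_holds` are illustrative choices), the inequality $f(\alpha x_1 + (1-\alpha)x_2) \leq \alpha f(x_1) + (1-\alpha) f(x_2)$ can be spot-checked in Python:
```python
# Minimal sketch: sample random x1, x2, alpha and check that no draw violates
# f(alpha*x1 + (1-alpha)*x2) <= alpha*f(x1) + (1-alpha)*f(x2)
import numpy as np

def convexity_holds(f, n_trials=10000, seed=0):
    rng = np.random.RandomState(seed)
    x1 = rng.uniform(-10, 10, n_trials)
    x2 = rng.uniform(-10, 10, n_trials)
    alpha = rng.uniform(0, 1, n_trials)
    z = alpha*x1 + (1 - alpha)*x2
    # Small tolerance for floating-point rounding
    return np.all(f(z) <= alpha*f(x1) + (1 - alpha)*f(x2) + 1e-9)

a, b = 2.0, -3.0  # a >= 0, as assumed in exercises 2 and 3
print(convexity_holds(lambda x: a*x + b))          # exercise 1: affine
print(convexity_holds(lambda x: a*x**2 + b))       # exercise 2: quadratic
print(convexity_holds(lambda x: np.abs(a*x + b)))  # exercise 4 with f(x) = |x|
```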
| 5531e227f146a310810ed3b16447d6ebe6147070 | 4,457 | ipynb | Jupyter Notebook | Practice/First exam/Convexity.ipynb | QuantumGorilla/Optimization | ed7dc755cb1346d208f5f33fc4814e3931c9b44d | [
"MIT"
]
| null | null | null | Practice/First exam/Convexity.ipynb | QuantumGorilla/Optimization | ed7dc755cb1346d208f5f33fc4814e3931c9b44d | [
"MIT"
]
| null | null | null | Practice/First exam/Convexity.ipynb | QuantumGorilla/Optimization | ed7dc755cb1346d208f5f33fc4814e3931c9b44d | [
"MIT"
]
| null | null | null | 4,457 | 4,457 | 0.498093 | true | 1,554 | Qwen/Qwen-72B | 1. YES
2. YES | 0.884039 | 0.808067 | 0.714363 | __label__eng_Latn | 0.324285 | 0.498037 |
```python
import matplotlib.pyplot as plt
plt.style.use('classic')
%matplotlib inline
```
# Class 3: NumPy
NumPy is a powerful Python module for scientific computing. Among other things, NumPy defines an N-dimensional array object that is especially convenient to use for plotting functions and for simulating and storing time series data. NumPy also defines many useful mathematical functions like, for example, the sine, cosine, and exponential functions, and has excellent functions for probability and statistics including random number generators and many cumulative distribution functions and probability density functions.
## Importing NumPy
The standard way to import NumPy is with the namespace abbreviated to `np`. This is for the sake of brevity.
```python
import numpy as np
```
## NumPy arrays
A NumPy `ndarray` is a homogeneous multidimensional array. Here, *homogeneous* means that all of the elements of the array have the same type. An `ndarray` is a table of numbers (like a matrix but with possibly more dimensions) indexed by a tuple of positive integers. The dimensions of NumPy arrays are called axes and the number of axes is called the rank. For this course, we will work almost exclusively with 1-dimensional arrays that are effectively vectors. Occasionally, we might run into a 2-dimensional array.
### Basics
The most straightforward way to create a NumPy array is to call the `array()` function which takes as an argument a `list`. For example:
```python
# Create a variable called a1 equal to a numpy array containing the numbers 1 through 5
a1 = np.array([1,2,3,4,5])
print(a1)
# Find the type of a1
print(type(a1))
# Find the shape of a1
print(np.shape(a1))
# Use ndim to find the rank or number of dimensions of a1
print(np.ndim(a1))
```
[1 2 3 4 5]
<class 'numpy.ndarray'>
(5,)
1
```python
# Create a variable called a2 equal to a 2-dimensional numpy array containing the numbers 1 through 4
a2 = np.array([[1,2],[3,4]])
print(a2)
# Use ndim to find the rank or number of dimensions of a2
print(np.ndim(a2))
# Find the shape of a2
print(np.shape(a2))
```
[[1 2]
[3 4]]
2
(2, 2)
```python
# Create a variable called a3 equal to an empty numpy array
a3 = np.array([])
print(a3)
# Use ndim to find the rank or number of dimensions of a3
print(np.ndim(a3))
# Find the shape of a3
print(np.shape(a3))
```
[]
1
(0,)
### Special functions for creating arrays
NumPy has several built-in functions that can assist you in creating certain types of arrays: `arange()`, `zeros()`, and `ones()`. Of these, `arange()` is probably the most useful because it allows you to create an array of numbers by specifying the initial value in the array, the maximum value in the array, and a step size between elements. `arange()` has three arguments: `start`, `stop`, and `step`:
arange([start,] stop[, step,])
The `stop` argument is required. The default for `start` is 0 and the default for `step` is 1. Note that the values in the created array will stop one increment *below* `stop`. That is, if `arange()` is called with `stop` equal to 9 and `step` equal to 0.5, then the last value in the returned array will be 8.5.
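To make the point about `stop` concrete, the following cell (an added illustration, not from the original notebook) shows an array with `step=0.5` ending one step below `stop=9`, along with the defaults `start=0` and `step=1`:
```python
# Added illustration: the last value is one step below stop
print(np.arange(7, 9, 0.5))   # expected: [7.  7.5 8.  8.5]
print(np.arange(5))           # defaults start=0, step=1 -> expected: [0 1 2 3 4]
```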
```python
# Create a variable called b that is equal to a numpy array containing the numbers 1 through 5
b = np.arange(1,6,1)
print(b)
```
[1 2 3 4 5]
```python
# Create a variable called c that is equal to a numpy array containing the numbers 0 through 10
c = np.arange(11)
print(c)
```
[ 0 1 2 3 4 5 6 7 8 9 10]
The `zeros()` and `ones()` functions take as arguments the desired shape of the array to be returned and fill that array with either zeros or ones.
```python
# Construct a 1x5 array of zeros
print(np.zeros(5))
```
[0. 0. 0. 0. 0.]
```python
# Construct a 2x2 array of ones
print(np.ones([2,2]))
```
    [[1. 1.]
     [1. 1.]]
### Math with NumPy arrays
A nice aspect of NumPy arrays is that they are optimized for mathematical operations. The following standard Python arithmetic operators `+`, `-`, `*`, `/`, and `**` operate element-wise on NumPy arrays as the following examples indicate.
```python
# Define three 1-dimensional arrays. CELL PROVIDED
A = np.array([2,4,6])
B = np.array([3,2,1])
C = np.array([-1,3,2,-4])
```
```python
# Multiply A by a constant
print(3*A)
```
[ 6 12 18]
```python
# Exponentiate A
print(A**2)
```
[ 4 16 36]
```python
# Add A and B together
print(A+B)
```
[5 6 7]
```python
# Exponentiate A with B
print(A**B)
```
[ 8 16 6]
```python
# Add A and C together
print(A+C)
```
The error in the preceding example arises because addition is element-wise and `A` and `C` don't have the same shape.
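If you do want to combine arrays of unequal length, you first need to make the shapes match. For example (a quick illustration, not part of the original exercise), slicing `C` down to its first three elements lets the element-wise addition go through:
```python
# Slice C to its first three elements so its shape matches A, then add element-wise
print(A + C[:3])
```
    [1 7 8]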
```python
# Compute the sine of the values in A
print(np.sin(A))
```
[ 0.90929743 -0.7568025 -0.2794155 ]
### Iterating through Numpy arrays
NumPy arrays are iterable objects just like lists, strings, tuples, and dictionaries which means that you can use `for` loops to iterate through the elements of them.
```python
# Use a for loop with a NumPy array to print the numbers 0 through 4
for x in np.arange(5):
print(x)
```
0
1
2
3
4
### Example: Basel problem
One of my favorite math equations is:
\begin{align}
\sum_{n=1}^{\infty} \frac{1}{n^2} & = \frac{\pi^2}{6}
\end{align}
We can use an iteration through a NumPy array to approximate the left-hand side and verify the validity of the expression.
```python
# Set N equal to the number of terms to sum
N = 1000
# Initialize a variable called summation equal to 0
summation = 0
# loop over the numbers 1 through N
for n in np.arange(1,N+1):
summation = summation + 1/n**2
# Print the approximation and the exact solution
print('approx:',summation)
print('exact: ',np.pi**2/6)
```
approx: 1.6439345666815615
exact: 1.6449340668482264
## Random numbers and statistics
NumPy has many useful routines for generating draws from probability distributions. Random number generation is useful for computing dynamic simulations of stochastic economies. NumPy also has some routines for computing basic summary statistics (min, max, mean, median, variance) for data in an array.
For more advanced applications involving probability distributions, use SciPy's `scipy.stats` module (https://docs.scipy.org/doc/scipy-0.16.1/reference/stats.html). And for estimating statistical models (e.g., OLS models) use StatsModels (https://www.statsmodels.org/stable/index.html).
```python
# Use the help function to view documentation on the np.random.uniform function
help(np.random.uniform)
```
Help on built-in function uniform:
uniform(...) method of numpy.random.mtrand.RandomState instance
uniform(low=0.0, high=1.0, size=None)
Draw samples from a uniform distribution.
Samples are uniformly distributed over the half-open interval
``[low, high)`` (includes low, but excludes high). In other words,
any value within the given interval is equally likely to be drawn
by `uniform`.
Parameters
----------
low : float or array_like of floats, optional
Lower boundary of the output interval. All values generated will be
greater than or equal to low. The default value is 0.
high : float or array_like of floats
Upper boundary of the output interval. All values generated will be
less than high. The default value is 1.0.
size : int or tuple of ints, optional
Output shape. If the given shape is, e.g., ``(m, n, k)``, then
``m * n * k`` samples are drawn. If size is ``None`` (default),
a single value is returned if ``low`` and ``high`` are both scalars.
Otherwise, ``np.broadcast(low, high).size`` samples are drawn.
Returns
-------
out : ndarray or scalar
Drawn samples from the parameterized uniform distribution.
See Also
--------
randint : Discrete uniform distribution, yielding integers.
random_integers : Discrete uniform distribution over the closed
interval ``[low, high]``.
random_sample : Floats uniformly distributed over ``[0, 1)``.
random : Alias for `random_sample`.
rand : Convenience function that accepts dimensions as input, e.g.,
``rand(2,2)`` would generate a 2-by-2 array of floats,
uniformly distributed over ``[0, 1)``.
Notes
-----
The probability density function of the uniform distribution is
.. math:: p(x) = \frac{1}{b - a}
anywhere within the interval ``[a, b)``, and zero elsewhere.
When ``high`` == ``low``, values of ``low`` will be returned.
If ``high`` < ``low``, the results are officially undefined
and may eventually raise an error, i.e. do not rely on this
function to behave when passed arguments satisfying that
inequality condition.
Examples
--------
Draw samples from the distribution:
>>> s = np.random.uniform(-1,0,1000)
All values are within the given interval:
>>> np.all(s >= -1)
True
>>> np.all(s < 0)
True
Display the histogram of the samples, along with the
probability density function:
>>> import matplotlib.pyplot as plt
>>> count, bins, ignored = plt.hist(s, 15, density=True)
>>> plt.plot(bins, np.ones_like(bins), linewidth=2, color='r')
>>> plt.show()
```python
# Print a random draw from the uniform(0,1) probability distribution.
print(np.random.uniform())
```
0.0563381686974056
In the previous example, everyone will obtain different output for `np.random.uniform()`. However, we can set the *seed* for NumPy's PRNG (pseudorandom number generator) with the function `np.random.seed()`. Setting the seed before generating a random sequence can ensure replicability of results.
```python
# Set the seed for the NumPy random number generator to 271828
np.random.seed(271828)
# Print a random draw from the uniform(0,1) probability distribution.
print(np.random.uniform())
```
0.9662022955536614
### Example: Random Draws from $\text{uniform}(-1,1)$ Distribution
Create a sample of 200 draws from the $\text{uniform}(-1,1)$ distribution. Compute some summary statistics. Plot the sample.
```python
# Set the seed for the NumPy random number generator to 314159
np.random.seed(314159)
# Create a variable called 'uniform_data' containing 200 random draw from the uniform(-1,1) probability distribution.
uniform_data = np.random.uniform(size=200,low=-1,high=1)
# Print the variable data
print(uniform_data)
```
[ 0.63584662 0.10209259 -0.16044929 -0.80261629 0.6220415 0.93471281
-0.80358661 0.60372074 0.20980423 0.16953762 -0.01382494 -0.04650081
-0.47206371 -0.38079105 0.81590802 -0.63003033 -0.62352117 -0.82321921
0.74842975 0.28481629 0.70565033 0.0661206 0.75880411 -0.41319904
-0.14432861 0.38336681 -0.94102104 -0.73408651 -0.29515564 0.3641906
0.9278488 -0.09180626 0.02434442 -0.06863382 -0.73856441 -0.58805703
-0.53966153 -0.33441377 0.3820414 -0.43905549 0.65183529 -0.66164868
-0.72066187 0.55183355 -0.52219358 0.26076713 0.89209106 -0.47310432
-0.50685854 0.66302636 0.44291651 -0.26846445 0.81962684 0.9394919
0.68700008 0.09848171 0.25145527 0.00982847 0.74560833 0.13846091
-0.6741501 -0.78951496 0.67829627 0.77302013 0.92888613 0.01447009
-0.33037939 0.45926088 0.93300648 -0.23495154 0.19028247 -0.71479702
0.71114841 0.07243895 -0.632449 -0.69894595 -0.65026494 0.9580402
-0.26946754 0.85194828 0.38758249 0.64178299 -0.6738458 0.83502118
0.63629364 0.02859558 0.68737679 -0.21023369 -0.85377648 -0.84879093
0.81242546 -0.94086188 -0.10959221 -0.01291769 -0.95679783 0.85599001
-0.17968129 0.99989443 0.23891269 -0.66466892 0.9739976 0.95522212
0.69345454 0.42483686 0.74119329 -0.25768943 -0.24289615 0.3512096
-0.01420467 -0.81386935 -0.75655242 0.67474155 0.81256821 0.09794052
-0.35859784 0.11619177 0.52145372 -0.26214142 0.35253046 0.55032724
0.18885249 0.78057581 0.83120461 -0.45686194 0.8972115 -0.96391123
0.89429982 -0.81937678 -0.52383 0.91979452 -0.62225414 0.8141226
0.50637822 -0.6921192 0.77288675 0.27131884 -0.90608287 0.76175962
0.90954684 -0.13271693 -0.81301378 0.60804403 0.92988145 0.39675914
0.37276877 -0.57952684 -0.2294779 -0.86317973 -0.16198036 0.28673408
-0.57565177 -0.94692889 -0.63431133 -0.49885546 -0.98798956 0.64392303
-0.22363103 0.35045596 0.77442238 0.9363158 -0.90516324 -0.55774438
-0.57081079 -0.13020764 -0.53775636 -0.22456364 0.24785222 0.90810479
0.29336168 -0.38997457 -0.10495809 -0.13725783 0.44436708 0.2566703
0.12444098 -0.7196481 0.65897494 0.92976743 0.31022742 0.17614419
-0.96249917 0.88456632 0.39453716 -0.75154362 -0.02893635 -0.73586317
-0.80650354 0.02100266 0.46247197 -0.88487504 -0.18950471 -0.70548356
-0.63094273 0.45872944 0.70059834 0.67150071 0.51254239 0.48583715
-0.52763227 -0.68046815]
```python
# Print the mean of the values in variable uniform_data
print('mean: ',np.mean(uniform_data))
# Print the median of the values in variable uniform_data
print('median:',np.median(uniform_data))
# Print the standard deviation of the values in variable uniform_data
print('std:   ',np.std(uniform_data))
# Print the maximum of the values in variable uniform_data
print('max: ',np.max(uniform_data))
# Print the index value of the arg max of the values in variable uniform_data
print('argmax:',np.argmax(uniform_data))
```
mean: 0.047187009866624184
median: 0.06927977272929209
    std:    0.6090136251058506
max: 0.9998944319955057
argmax: 97
```python
# Create a variable called index_max equal to the argmax of data. Print the corresponding element of uniform_data
index_max = np.argmax(uniform_data)
print('data at argmax:',uniform_data[index_max])
```
data at argmax: 0.9998944319955057
```python
# Plot random sample (matplotlib was not imported above, so import it here)
import matplotlib.pyplot as plt
plt.plot(uniform_data)
plt.title('200 uniform(-1,1) draws')
plt.grid()
```
### Example: Random Draws from $\text{normal}\left(0,\frac{1}{3}\right)$ Distribution
Create a sample of 200 draws from a normal distribution with mean 0 and variance $\frac{1}{3}$. Note that the $\text{uniform}(-1,1)$ distribution from the previous example has the same variance. See: https://en.wikipedia.org/wiki/Uniform_distribution_(continuous).
Compute some summary statistics. Plot the sample.
```python
# Use the help function to view documentation on the np.random.normal function
help(np.random.normal)
```
Help on built-in function normal:
normal(...) method of numpy.random.mtrand.RandomState instance
normal(loc=0.0, scale=1.0, size=None)
Draw random samples from a normal (Gaussian) distribution.
The probability density function of the normal distribution, first
derived by De Moivre and 200 years later by both Gauss and Laplace
independently [2]_, is often called the bell curve because of
its characteristic shape (see the example below).
The normal distributions occurs often in nature. For example, it
describes the commonly occurring distribution of samples influenced
by a large number of tiny, random disturbances, each with its own
unique distribution [2]_.
Parameters
----------
loc : float or array_like of floats
Mean ("centre") of the distribution.
scale : float or array_like of floats
Standard deviation (spread or "width") of the distribution. Must be
non-negative.
size : int or tuple of ints, optional
Output shape. If the given shape is, e.g., ``(m, n, k)``, then
``m * n * k`` samples are drawn. If size is ``None`` (default),
a single value is returned if ``loc`` and ``scale`` are both scalars.
Otherwise, ``np.broadcast(loc, scale).size`` samples are drawn.
Returns
-------
out : ndarray or scalar
Drawn samples from the parameterized normal distribution.
See Also
--------
scipy.stats.norm : probability density function, distribution or
cumulative density function, etc.
Notes
-----
The probability density for the Gaussian distribution is
.. math:: p(x) = \frac{1}{\sqrt{ 2 \pi \sigma^2 }}
e^{ - \frac{ (x - \mu)^2 } {2 \sigma^2} },
where :math:`\mu` is the mean and :math:`\sigma` the standard
deviation. The square of the standard deviation, :math:`\sigma^2`,
is called the variance.
The function has its peak at the mean, and its "spread" increases with
the standard deviation (the function reaches 0.607 times its maximum at
:math:`x + \sigma` and :math:`x - \sigma` [2]_). This implies that
`numpy.random.normal` is more likely to return samples lying close to
the mean, rather than those far away.
References
----------
.. [1] Wikipedia, "Normal distribution",
https://en.wikipedia.org/wiki/Normal_distribution
.. [2] P. R. Peebles Jr., "Central Limit Theorem" in "Probability,
Random Variables and Random Signal Principles", 4th ed., 2001,
pp. 51, 51, 125.
Examples
--------
Draw samples from the distribution:
>>> mu, sigma = 0, 0.1 # mean and standard deviation
>>> s = np.random.normal(mu, sigma, 1000)
Verify the mean and the variance:
>>> abs(mu - np.mean(s))
0.0 # may vary
>>> abs(sigma - np.std(s, ddof=1))
0.1 # may vary
Display the histogram of the samples, along with
the probability density function:
>>> import matplotlib.pyplot as plt
>>> count, bins, ignored = plt.hist(s, 30, density=True)
>>> plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) *
... np.exp( - (bins - mu)**2 / (2 * sigma**2) ),
... linewidth=2, color='r')
>>> plt.show()
Two-by-four array of samples from N(3, 6.25):
>>> np.random.normal(3, 2.5, size=(2, 4))
array([[-4.49401501, 4.00950034, -1.81814867, 7.29718677], # random
[ 0.39924804, 4.68456316, 4.99394529, 4.84057254]]) # random
```python
# Set the seed for the NumPy random number generator to 314159
np.random.seed(314159)
# Create a variable called 'normal_data' containing 200 random draw from the normal(0,1/3) probability distribution.
normal_data = np.random.normal(size=200,loc=0,scale=1/np.sqrt(3))
# Print the variable normal_data
print(normal_data)
```
[ 0.12143549 0.75631678 -0.50674044 -0.10130139 0.83074966 1.0280597
-1.92533743 -0.57241295 -0.51265814 -0.63553831 0.19357178 0.50866078
0.06320691 0.67455494 -0.21112296 0.38770897 1.02091162 -0.38434928
0.7808367 -0.63282346 -0.03008466 0.3040535 -1.76142292 0.62477673
-0.17253536 -0.21669408 -0.40996677 -0.66158548 -0.6408805 0.55765817
-0.22355639 0.22024066 0.21849396 -0.28534015 0.37852699 -0.75801105
0.39010377 -0.29821956 -0.48548378 0.80095813 0.09902537 0.69079261
0.0529736 1.35529688 0.11087969 0.5970842 0.00488096 0.3133261
0.70744534 -0.50891633 -0.05539998 0.21999659 -0.61272051 0.1631092
0.06779908 0.66559788 -0.20814988 -0.18834673 0.36935728 -0.1168261
0.53048069 0.32036534 0.03481789 0.77474907 -0.19405353 0.63447439
-0.20067492 -1.70250295 -0.64075609 0.23031732 0.27426255 0.44767446
-0.18670597 0.53702323 0.87605589 -0.60587922 -0.52375627 -0.00914125
0.0618483 0.51312738 0.35156255 -1.08501292 -0.38057956 0.75705177
0.63411274 0.40620206 0.52555331 0.12715235 -0.12791087 0.23271823
-0.36527786 0.26724985 0.17083956 0.48665855 -0.04839994 0.33169856
-0.59265491 0.38121314 -0.37510574 -0.09972254 1.05956336 -0.59856316
-0.33056808 -0.42032833 0.91195135 -0.58192938 -0.18789336 -0.82369635
-0.32696855 -0.78298257 0.27393143 0.07476506 -0.78159695 0.58796295
-1.21538751 -0.92938051 0.47173415 0.81670191 -0.63784809 0.11029619
0.57882996 1.01944278 -0.41393313 0.2173021 -0.6381914 -0.0250956
0.01392882 -0.53486761 -0.04042405 0.02112727 -0.62492897 -0.1678664
0.33839992 -0.46543986 0.13840744 0.14440495 0.46850309 0.49425552
-0.35292786 -0.27365884 -0.60058536 -0.24254545 0.68385251 0.29115578
0.50685277 -0.4930715 -0.62612325 0.09506943 1.5985902 -0.44056757
-1.07103108 -0.15234541 -0.37974221 0.02521733 -0.15139174 0.18952504
0.45473891 0.04437202 0.46953245 0.40186251 0.57882478 0.03737229
-0.51040583 0.4320896 0.13216004 -0.13552132 0.96610447 1.10346492
-0.36091273 -0.03749187 0.06744852 0.06116802 0.35073528 -0.19900517
-0.63715201 0.52139514 0.20782661 -0.018552 0.43281211 -0.55847675
-0.30212557 -0.32978655 -0.35717671 0.93606909 0.59562641 -0.4201014
0.12613396 0.208413 -0.48309414 0.05591969 -0.66945551 -0.39336486
0.0247874 -0.49181131 0.63436971 0.91645032 -0.50152753 -0.0021372
0.05158724 -0.14778507]
```python
# Print the mean of the values in variable normal_data
print('mean: ',np.mean(normal_data))
# Print the median of variable the values in normal_data
print('median:',np.median(normal_data))
# Print the standard deviation of the values in variable normal_data
print('std:   ',np.std(normal_data))
# Print the maximum of the values in variable normal_data
print('max: ',np.max(normal_data))
# Print the index value of the arg max of the values in variable normal_data
print('argmax:',np.argmax(normal_data))
```
mean: 0.020031561013250983
median: 0.040872154927040286
    std:    0.5552816540529858
max: 1.5985901984025754
argmax: 148
```python
# Create a variable called 'index_max' equal to the argmax of data. Print corresponding element of normal_data
index_max = np.argmax(normal_data)
print('data at argmax:',normal_data[index_max])
```
data at argmax: 1.5985901984025754
```python
# Plot the random sample
plt.plot(normal_data)
plt.title('200 normal(0,1/3) draws')
plt.grid()
```
```python
# Plot the uniform and normal samples together on the same axes
plt.plot(normal_data,label='normal(0,1/3)')
plt.plot(uniform_data,label='uniform(-1,1)')
plt.legend(loc='lower right')
plt.title('Two random samples')
plt.grid()
```
| 1c4428e6797d941c2041597b48c89860883e855c | 177,839 | ipynb | Jupyter Notebook | Lecture Notebooks/Econ126_Class_03.ipynb | t-hdd/econ126 | 17029937bd6c40e606d145f8d530728585c30a1d | [
"MIT"
]
| null | null | null | Lecture Notebooks/Econ126_Class_03.ipynb | t-hdd/econ126 | 17029937bd6c40e606d145f8d530728585c30a1d | [
"MIT"
]
| null | null | null | Lecture Notebooks/Econ126_Class_03.ipynb | t-hdd/econ126 | 17029937bd6c40e606d145f8d530728585c30a1d | [
"MIT"
]
| null | null | null | 168.248817 | 66,212 | 0.887111 | true | 7,303 | Qwen/Qwen-72B | 1. YES
2. YES | 0.800692 | 0.92079 | 0.737269 | __label__eng_Latn | 0.922233 | 0.551255 |
```python
# Método para resolver las energías y eigenfunciones de un sistema cuántico numéricamente por Teoría de Pertubaciones
# Modelado Molecular 2
# By: José Manuel Casillas Martín 22-oct-2017
import numpy as np
from sympy import *
from sympy.physics.qho_1d import E_n, psi_n
from sympy.physics.hydrogen import E_nl, R_nl
from sympy import init_printing; init_printing(use_latex = 'mathjax')
from scipy import integrate
from scipy.constants import hbar, m_e, m_p, e
from mpmath import spherharm
from numpy import inf, array
import numpy as np
import matplotlib.pyplot as plt
import traitlets
from IPython.display import display
from ipywidgets import Layout, Box, Text, Dropdown, Label, IntRangeSlider, IntSlider, RadioButtons
```
<h1><center>Teoría de Perturbaciones</center></h1>
Consiste en resolver un sistema perturbado (se conoce la solución del no perturbado), donde el interés es conocer la contribución de la parte perturbada $H'$ al nuevo sistema total.
$$ H = H^{0}+H'$$
La resolución adecuada del problema depende, en gran parte, de una correcta elección de $H'$.
```python
form_item_layout = Layout(display='flex',flex_flow='row',justify_content='space-between')
PType=Dropdown(options=['Particle in a one-dimensional box', 'Harmonic oscilator', 'Hydrogen atom (Helium correction)'])
Pert=Text()
Rang=IntRangeSlider(min=0, max=20, step=1, disabled=False, continuous_update=False, orientation='horizontal',\
readout=True, readout_format='d')
M=Text()
Correc=Dropdown(options=['1', '2'])
hbarra=Dropdown(options=[1, 1.0545718e-34])
form_items = [
Box([Label(value='Problem'),PType], layout=form_item_layout),
Box([Label(value='Perturbation'),Pert], layout=form_item_layout),
Box([Label(value='Correction order'),Correc], layout=form_item_layout),
Box([Label(value='n Range'),Rang], layout=form_item_layout),
Box([Label(value='Mass'),M], layout=form_item_layout),
Box([Label(value='Hbar'),hbarra], layout=form_item_layout),]
form = Box(form_items, layout=Layout(display='flex',flex_flow='column',border='solid 2px',align_items='stretch',width='40%'))
form
```
En esta caja interactiva llena los datos del problema que deseas resolver.
## Nota 1:
Es recomendable usar unidades atómicas de Hartree para eficientar los cálculos. 1 u.a. (energía)= 27.211eV.
## Nota 2:
Para la partícula en una caja unidimensional es recomendable que n sea mayor a 1.
## Nota 3:
Para la correción a la energía del átomo de Helio sólo es necesario seleccionar el problema, automáticamente se calcula la correción a primer orden y no se corrigen las funciones de onda.
```python
Problem=PType.value
form_item_layout = Layout(display='flex',flex_flow='row',justify_content='space-between')
L=Text()
W=Text()
atomic_number=RadioButtons(options=['1 (Show Hydrogen energies)','2 (Correct Helium first energy)'],disabled=False)
if Problem=='Particle in a one-dimensional box':
form_items = [Box([Label(value='Large of box'),L], layout=form_item_layout)]
if Problem=='Harmonic oscilator':
form_items = [Box([Label(value='Angular Fr'),W], layout=form_item_layout)]
if Problem=='Hydrogen atom (Helium correction)':
form_items = [Box([Label(value='Atomic number'),atomic_number], layout=form_item_layout)]
form = Box(form_items, layout=Layout(display='flex',flex_flow='column',border='solid 2px',align_items='stretch',width='40%'))
form
```
```python
# Variables que se utilizarán
# x=variable de integracion, l=largo del pozo, m=masa del electrón, w=frecuencia angular
# n=número cuántico principal, Z=Número atómico, q=número cuántico angular(l)
var('x theta phi')
var('r1 r2', real=True)
var('l m hbar w n Z', positive=True, real=True)
# Perturbación
if Pert.value!='':
H_p=sympify(Pert.value)
h_p=eval(Pert.value)
else:
H_p=0
h_p=0
# Constantes
h=hbarra.value
a0=5.2917721067e-11
if M.value!='':
mass=float(eval(M.value))
else:
mass=1
# Energías y funciones que se desea corregir
n_inf=min(Rang.value)
n_sup=max(Rang.value)
if Problem=='Particle in a one-dimensional box':
if L.value=='':
large=1
else:
large=float(eval(L.value))
omega=0
# Energías del pozo de potencial infinito
k=n*pi/l
En=hbar**2*k**2/(2*m)
# Funciones de onda del pozo de potencial infinito
Psin=sqrt(2/l)*sin(n*pi*x/l)
# Límites del pozo definido de 0 a l para sympy
li_sympy=0
ls_sympy=l
# Mismo limites para scipy
li_scipy=0
ls_scipy=large
if Problem=='Harmonic oscilator':
large=0
if W.value=='':
omega=1
else:
omega=float(eval(W.value))
# Energías del oscilador armónico cuántico
En=E_n(n,w)
# Funciones de onda del oscilador armónico cuántico
Psin=psi_n(n,x,m,w)
# Límites del pozo definido de -oo a oo para sympy
li_sympy=-oo
ls_sympy=oo
# Límites del pozo definido de -oo a oo para scipy
li_scipy=-inf
ls_scipy=inf
if Problem=='Hydrogen atom (Helium correction)':
if atomic_number.value=='1 (Show Hydrogen energies)':
z=1
if atomic_number.value=='2 (Correct Helium first energy)':
z=2
large=0
omega=0
# Energías del átomo hidrogenoide
En=z*E_nl(n,z)
# Funciones de onda del átomo de hidrógeno
# Número cuántico l=0
q=0 # La variable l ya esta siendo utilizada para el largo de la caja por ello se sustituyo por q
Psin=(R_nl(n,q,r1,z)*R_nl(n,q,r2,z))
# Límites del átomo de hidrógeno de 0 a oo para sympy
li_sympy=0
ls_sympy=oo
# Límites del átomo de hidrógeno de 0 a oo para scipy
li_scipy=0
ls_scipy=inf
```
Para sistemas no degenerados, la corrección a la energía a primer orden se calcula como
$$ E_{n}^{(1)} = \int\psi_{n}^{(0)*} H' \psi_{n}^{(0)}d\tau$$
** Tarea 1 : Programar esta ecuación si conoces $H^{0}$ y sus soluciones. **
```python
def correcion_1st_order_Energy(E_n,Psi_n,H_p,li,ls):
E1_n=Integral(Psi_n*(H_p)*Psi_n,(x,li,ls)).doit()
return(E_n+E1_n)
```
```python
# Correción de la energía a primer orden
E=[]
Eev=[]
Ec1=[]
if Problem=='Particle in a one-dimensional box' or Problem=='Harmonic oscilator':
for i in range(n_inf,n_sup+1):
E.append(En.subs({n:i}))
Eev.append(E[i-n_inf].subs({m:mass, l:large, hbar:h}).evalf())
Ec1.append(correcion_1st_order_Energy(En.subs({n:i}),Psin.subs({n:i}),H_p,li_sympy,ls_sympy))
if Problem=='Hydrogen atom (Helium correction)':
for i in range(n_inf,n_sup+1):
E.append(En.subs({n:i}))
Eev.append(E[i-n_inf])
if z==2:
integral_1=Integral(Integral((16*z**6*r1*r2**2*exp(-2*z*(r1+r2))),(r2,0,r1)),(r1,0,oo)).doit()
integral_2=Integral(Integral((16*z**6*r1**2*r2*exp(-2*z*(r1+r2))),(r2,r1,oo)),(r1,0,oo)).doit()
integral_total=(integral_1+integral_2)
Ec1.append(E[0]+integral_total)
```
Y la corrección a la función de onda, también a primer orden, se obtiene como:
$$ \psi_{n}^{(1)} = \sum_{m\neq n} \frac{\langle\psi_{m}^{(0)} | H' | \psi_{n}^{(0)} \rangle}{E_{n}^{(0)} - E_{m}^{(0)}} \psi_{m}^{(0)}$$
**Tarea 2: Programar esta ecuación si conoces $H^{0}$ y sus soluciones. **
```python
# Correción de las funciones a primer orden
if Pert.value!='':
if Problem=='Particle in a one-dimensional box' or Problem=='Harmonic oscilator':
Psi_c=[]
integrals=np.zeros((n_sup+1,n_sup+1))
for i in range(n_inf,n_sup+1):
a=0
for j in range(n_inf,n_sup+1):
if i!=j:
integ= lambda x: eval(str(Psin.subs({n:j})*(h_p)*Psin.subs({n:i}))).subs({m:mass,l:large,w:omega,hbar:h})
integrals[i,j]=integrate.quad(integ,li_scipy,ls_scipy)[0]
cte=integrals[i,j]/(En.subs({n:i,m:mass,l:large})-En.subs({n:j,m:mass,l:large})).evalf()
a=a+cte*Psin.subs({n:j})
Psi_c.append(Psin.subs({n:i})+a)
```
**Tarea 3: Investigue las soluciones a segundo orden y también programe las soluciones. **
Y la corrección a la energía a segundo orden, se obtiene como:
$$ E_{n}^{(2)} = \sum_{m\neq n} \frac{|\langle\psi_{m}^{(0)} | H' | \psi_{n}^{(0)} \rangle|^{2}}{E_{n}^{(0)} - E_{m}^{(0)}} $$
```python
# Correción a la energía a segundo orden
if Pert.value!='':
if Problem=='Particle in a one-dimensional box' or Problem=='Harmonic oscilator':
if Correc.value=='2':
Ec2=[]
for i in range(n_inf,n_sup+1):
a=0
for j in range(n_inf,n_sup+1):
if i!=j:
cte=((integrals[i,j])**2)/(En.subs({n:i,m:mass,l:large,hbar:h})-En.subs({n:j,m:mass,l:large,hbar:h})).evalf()
a=a+cte
Ec2.append(Ec1[i-n_inf]+a)
```
**A continuación se muestran algunos de los resultados al problema resuelto**
Las energías sin perturbación son:
```python
E
```
$$\left [ \frac{\pi^{2} \hbar^{2}}{2 l^{2} m}, \quad \frac{2 \pi^{2} \hbar^{2}}{l^{2} m}, \quad \frac{9 \pi^{2} \hbar^{2}}{2 l^{2} m}\right ]$$
La correción a primer orden de las energías son:
```python
Ec1
```
$$\left [ \frac{\pi^{2} \hbar^{2}}{2 l^{2} m} + \frac{1}{l} \left(\frac{l l^{2}}{4} - \frac{l l^{2}}{2} - \frac{l l}{2} - \frac{l^{3}}{2 \pi^{2}} + \frac{l^{3}}{3} + \frac{l^{2}}{2}\right), \quad \frac{2 \pi^{2} \hbar^{2}}{l^{2} m} - \frac{1}{l} \left(\frac{l l^{2}}{8 \pi^{2}} - \frac{l^{2}}{8 \pi^{2}}\right) + \frac{1}{l} \left(\frac{l l^{2}}{4} - \frac{l l^{2}}{2} + \frac{l l^{2}}{8 \pi^{2}} - \frac{l l}{2} - \frac{l^{3}}{8 \pi^{2}} + \frac{l^{3}}{3} - \frac{l^{2}}{8 \pi^{2}} + \frac{l^{2}}{2}\right), \quad \frac{9 \pi^{2} \hbar^{2}}{2 l^{2} m} + \frac{1}{l} \left(\frac{l l^{2}}{4} - \frac{l l^{2}}{2} - \frac{l l}{2} - \frac{l^{3}}{18 \pi^{2}} + \frac{l^{3}}{3} + \frac{l^{2}}{2}\right)\right ]$$
Si seleccionaste en los parámetros iniciales una correción a segundo orden entonces...
Las correciones a la energía a segundo orden son:
```python
Ec2
```
$$\left [ \frac{\pi^{2} \hbar^{2}}{2 l^{2} m} - 0.00222818412053003 + \frac{1}{l} \left(\frac{l l^{2}}{4} - \frac{l l^{2}}{2} - \frac{l l}{2} - \frac{l^{3}}{2 \pi^{2}} + \frac{l^{3}}{3} + \frac{l^{2}}{2}\right), \quad \frac{2 \pi^{2} \hbar^{2}}{l^{2} m} + 0.000657835441671339 - \frac{1}{l} \left(\frac{l l^{2}}{8 \pi^{2}} - \frac{l^{2}}{8 \pi^{2}}\right) + \frac{1}{l} \left(\frac{l l^{2}}{4} - \frac{l l^{2}}{2} + \frac{l l^{2}}{8 \pi^{2}} - \frac{l l}{2} - \frac{l^{3}}{8 \pi^{2}} + \frac{l^{3}}{3} - \frac{l^{2}}{8 \pi^{2}} + \frac{l^{2}}{2}\right), \quad \frac{9 \pi^{2} \hbar^{2}}{2 l^{2} m} + 0.00157034867885869 + \frac{1}{l} \left(\frac{l l^{2}}{4} - \frac{l l^{2}}{2} - \frac{l l}{2} - \frac{l^{3}}{18 \pi^{2}} + \frac{l^{3}}{3} + \frac{l^{2}}{2}\right)\right ]$$
Ahora vamos con la función de onda $(\psi)$
```python
form_item_layout = Layout(
display='flex',
flex_flow='row',
justify_content='space-between')
Graph=IntSlider(min=n_inf, max=n_sup, step=1, disabled=False, continuous_update=False, orientation='horizontal',\
readout=True, readout_format='d')
form_items = [
Box([Label(value='What function do you want to see?'),
Graph], layout=form_item_layout)]
form = Box(form_items, layout=Layout(
display='flex',
flex_flow='column',
border='solid 2px',
align_items='stretch',
width='40%'))
form
```
La función de onda original es:
```python
Psin.subs({n:Graph.value})
```
$$\frac{\sqrt{2}}{\sqrt{l}} \sin{\left (\frac{3 \pi}{l} x \right )}$$
La correción a primer orden a la función de onda (utilizando todas las funciones en el rango seleccionado) es:
```python
Psi_c[Graph.value-n_inf]
```
$$\frac{\sqrt{2}}{\sqrt{l}} \sin{\left (\frac{3 \pi}{l} x \right )} + \frac{0.000962435836376656 \sqrt{2}}{\hbar^{2} \sqrt{l}} \sin{\left (\frac{\pi x}{l} \right )} - \frac{0.00788427437159757 \sqrt{2}}{\hbar^{2} \sqrt{l}} \sin{\left (\frac{2 \pi}{l} x \right )}$$
Vamos a graficarlas para verlas mejor...
La función de onda original es:
```python
if Problem=='Particle in a one-dimensional box':
plot(eval(str(Psin)).subs({n:Graph.value,m:mass,l:large,w:omega,hbar:h}),xlim=(li_scipy,ls_scipy),\
title='$\psi_{%d}$'%Graph.value)
if Problem=='Harmonic oscilator':
plot(eval(str(Psin)).subs({n:Graph.value,m:mass,l:large,w:omega,hbar:h}),xlim=(-10*h/(mass*omega),10*h/(mass*omega)),\
title='$\psi_{%d}$'%Graph.value)
if Problem=='Hydrogen atom (Helium correction)':
print('Densidad de probabilidad para un electrón')
plot(eval(str((4*pi*x**2*R_nl(Graph.value,q,x,z)**2))),xlim=(0,10),ylim=(0,20/Graph.value), title='$\psi_{%ds}$'%Graph.value)
print('Tome en cuenta que debido a la dificultad para seleccionar los límites de la gráfica se muestran bien los primeros\n\
3 estados. A partir de ahí visualizar la gráfica se complica.')
```
La corrección a la función de onda es:
```python
if Problem=='Particle in a one-dimensional box':
if Pert.value!='':
plot(eval(str(Psi_c[Graph.value-n_inf])).subs({n:Graph.value,m:mass,l:large,w:omega,hbar:h}),\
xlim=(li_scipy,ls_scipy),title='$\psi_{%d}$'%Graph.value)
if Pert.value=='':
print('No se ingreso ninguna perturbación')
if Problem=='Harmonic oscilator':
if Pert.value!='':
plot(eval(str(Psi_c[Graph.value-n_inf])).subs({n:Graph.value,m:mass,l:large,w:omega,hbar:h}),\
xlim=(-10*h/(mass*omega),10*h/(mass*omega)),title='$\psi_{%d}$' %Graph.value)
if Pert.value=='':
print('No se ingreso ninguna perturbación')
if Problem=='Hydrogen atom (Helium correction)':
    print('Este programa no corrige las funciones de un átomo hidrogenoide')
```
**Tarea 4. Resolver el átomo de helio aplicando los programas anteriores.**
Para resolver el átomo de helio se utilizaron los conceptos, que sirvieron como base para las primeras tareas. Sin embargo, en el Apéndice 1 viene con mayor detalles las consideraciones tomadas para resolver el problema.
## Apéndice 1
Para el cálculo a las correciones del átomo de Helio se tomó en cuenta lo siguiente...
La función de onda del átomo de Helio puede ser representada como:
$$ \psi_{nlm} = \psi(r1)_{nlm} \psi(r2)_{nlm}$$
Donde, para el estado fundamental:
$$ \psi(r_{1}.r_{2})_{100} = \frac{Z^{3}}{\pi a_{0}^{3}} e^{\frac{-Z}{a_{0}}(r_{1}+r_{2})}$$
Y la perturbación sería el término de repulsión entre los dos electrones, es decir:
$$ H'= \frac{e^{2}}{r_{12}}=\frac{e^{2}}{|r_{1}-r_{2}|}$$
Finalmente la correción a primer orden de la energía sería:
$$ E^{1}= \langle\psi_{n}^{(0)} | H' | \psi_{n}^{(0)} \rangle =\frac{Z^{6}e^{2}}{\pi^{2} a_{0}^{6}} \int_{0}^{2\pi}\int_{0}^{2\pi}\int_{0}^{\pi}\int_{0}^{\pi}\int_{0}^{\infty}\int_{0}^{\infty} \frac{e^{\frac{-2Z}{a_{0}}(r_{1}+r_{2})}}{r_{12}} r_{1}^{2}r_{2}^{2}sen{\theta_{1}}sen{\theta_{2}} dr_{2} dr_{1} d\theta_{2} d\theta_{1} d\phi_{2} d\phi_{1}$$
Se utiliza una expansión del termino de repulsión con los armónicos esféricos y se integra la parte angular. Una vez hecho eso, la integral queda expresada de la siguiente manera:
$$ E^{1}= \frac{16Z^{6}e^{2}}{a_{0}^{6}} \left[\int_{0}^{\infty} r_{1}^{2} e^{\frac{-2Z}{a_{0}}r_{1}} \left(\int_{0}^{r_{1}} \frac{r_{2}^{2}}{r_{1}} e^{\frac{-2Z}{a_{0}}r_{2}} dr_{2}+\int_{r_{1}}^{\infty}r_{2} e^{\frac{-2Z}{a_{0}}r_{2}}dr_{2}\right) dr_{1} \right]$$
**Tarea 5: Método variacional-perturbativo. **
Este método nos permite estimar de forma precisa $E^{(2)}$ y correcciones perturbativas de la energía de órdenes más elevados para el estado fundamental del sistema, sin evaluar sumas infinitas. Ver ecuación 9.38 del libro.
```python
```
**Tarea 6. Revisar sección 9.7. **
Inicialmente a mano, y en segunda instancia favor de intentar programar esta sección del problema, i.e. la integral de Coulomb y la integral de intercambio.
```python
```
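A modo de esbozo (no forma parte del cuaderno original; los valores esperados $J=17Z/81$ y $K=16Z/729$ hartree provienen de la literatura y conviene verificarlos), las integrales de Coulomb y de intercambio entre los orbitales 1s y 2s pueden plantearse con `sympy` siguiendo el mismo patrón de integración radial usado arriba para la corrección del Helio:
```python
# Esbozo: integrales de Coulomb (J) y de intercambio (K) para la configuración 1s2s del Helio (Z=2).
# Se usa la expansión radial de 1/r12 conservando sólo el término l=0, igual que en la corrección anterior.
z_He = 2
R10_1 = R_nl(1, 0, r1, z_He)
R10_2 = R_nl(1, 0, r2, z_He)
R20_1 = R_nl(2, 0, r1, z_He)
R20_2 = R_nl(2, 0, r2, z_He)
# Integral de Coulomb: J = <1s(1)2s(2)|1/r12|1s(1)2s(2)>
J_in  = Integral(Integral(R10_1**2*R20_2**2*r1**2*r2**2/r1,(r2,0,r1)),(r1,0,oo)).doit()
J_out = Integral(Integral(R10_1**2*R20_2**2*r1**2*r2**2/r2,(r2,r1,oo)),(r1,0,oo)).doit()
J_coulomb = simplify(J_in + J_out)        # valor esperado de la literatura: 17*Z/81 hartree
# Integral de intercambio: K = <1s(1)2s(2)|1/r12|2s(1)1s(2)>
K_in  = Integral(Integral(R10_1*R20_1*R10_2*R20_2*r1**2*r2**2/r1,(r2,0,r1)),(r1,0,oo)).doit()
K_out = Integral(Integral(R10_1*R20_1*R10_2*R20_2*r1**2*r2**2/r2,(r2,r1,oo)),(r1,0,oo)).doit()
K_intercambio = simplify(K_in + K_out)    # valor esperado de la literatura: 16*Z/729 hartree
J_coulomb, K_intercambio
```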
| 90206a92c90ca5ad1befd95e6a177373e1d27dd7 | 80,351 | ipynb | Jupyter Notebook | Perturbaciones/Chema/Ejemplo_perturbaciones_(particula en una caja).ipynb | lazarusA/Density-functional-theory | c74fd44a66f857de570dc50471b24391e3fa901f | [
"MIT"
]
| null | null | null | Perturbaciones/Chema/Ejemplo_perturbaciones_(particula en una caja).ipynb | lazarusA/Density-functional-theory | c74fd44a66f857de570dc50471b24391e3fa901f | [
"MIT"
]
| null | null | null | Perturbaciones/Chema/Ejemplo_perturbaciones_(particula en una caja).ipynb | lazarusA/Density-functional-theory | c74fd44a66f857de570dc50471b24391e3fa901f | [
"MIT"
]
| null | null | null | 86.772138 | 24,548 | 0.757713 | true | 5,431 | Qwen/Qwen-72B | 1. YES
2. YES | 0.73412 | 0.752013 | 0.552067 | __label__spa_Latn | 0.529465 | 0.120967 |
```python
from scipy.stats import gaussian_kde
from scipy.interpolate import interp1d
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import rc
rc('font', **{'family': 'serif', 'serif': ['Computer Modern']})
rc('text', usetex=True)
```
# Building the joint prior
In this repository there exists code to compute the conditional priors $p(\chi_\mathrm{eff}|q)$ and $p(\chi_p|q)$ (functions `chi_effective_prior_from_isotropic_spins` and `chi_p_prior_from_isotropic_spins`, respectively) on $\chi_\mathrm{eff}$ and $\chi_p$ corresponding to uniform and isotropic component spin priors. Each of these priors has been marginalized over all other spin degrees of freedom.
In some circumstances, though, we might want the *joint* prior $p(\chi_\mathrm{eff},\chi_p|q)$ acting on the two effective spin parameters. Although we were able to derive closed-form expressions for $p(\chi_\mathrm{eff}|q)$ and $p(\chi_q|q)$, I personally lack the will-power and/or attention span to derive an analytic expression for $p(\chi_\mathrm{eff},\chi_p|q)$. Instead, let's build a function to do this numerically.
First, note that the joint prior on $\chi_\mathrm{eff}$ and $\chi_p$ is *weird*. Demonstrate this by drawing random component spins, computing the corresponding effective spins, and plotting the resulting density.
```python
def chi_p(a1,a2,cost1,cost2,q):
sint1 = np.sqrt(1.-cost1**2)
sint2 = np.sqrt(1.-cost2**2)
return np.maximum(a1*sint1,((3.+4.*q)/(4.+3.*q))*q*a2*sint2)
def chi_eff(a1,a2,cost1,cost2,q):
return (a1*cost1 + q*a2*cost2)/(1.+q)
# Choose some fixed mass ratio
q = 0.5
# Draw random component spins and compute effective parameters
ndraws = 30000
random_a1s = np.random.random(ndraws)
random_a2s = np.random.random(ndraws)
random_cost1s = 2.*np.random.random(ndraws)-1.
random_cost2s = 2.*np.random.random(ndraws)-1.
# Plot!
random_chi_effs = chi_eff(random_a1s,random_a2s,random_cost1s,random_cost2s,q)
random_chi_ps = chi_p(random_a1s,random_a2s,random_cost1s,random_cost2s,q)
fig,ax = plt.subplots()
ax.hexbin(random_chi_effs,random_chi_ps,cmap='Blues',gridsize=30)
ax.set_xlabel('$\chi_\mathrm{eff}$',fontsize=14)
ax.set_ylabel('$\chi_p$',fontsize=14)
plt.show()
```
There are a few visible features we need to worry about.
1. First, the prior distribution comes to a sharp point at $\chi_\mathrm{eff} = \chi_p = 0$; this is related to the fact that the marginal $p(\chi_\mathrm{eff}|q)$ is quite sharply peaked about the origin (see `Demo.ipynb`)
2. The concentration about $\chi_\mathrm{eff} = 0$ also implies that vanishingly few of our prior draws occur in the distant wings of the joint prior, at very negative or very positive $\chi_\mathrm{eff}$.
3. In the vertical direction, we can see the same sharp drop and extended plateau as seen in the marginal $\chi_p$ prior in `Demo.ipynb`
Naively, we could just draw a bunch of prior samples and form a KDE over this space. The first two features listed above, though, make this extremely difficult. The extreme narrowness of $p(\chi_\mathrm{eff},\chi_p|q)$ near the origin means we must use an extremely small KDE bandwidth to accurately capture this behavior, but such a small bandwidth will accentuate sampling fluctuations elsewhere. Meanwhile, the fact that very few samples occur at very positive or very negative $\chi_\mathrm{eff}$ means that we will need to perform a vast number of draws (like, many millions) if we wish to accurately estimate the prior on posterior samples falling in these areas.
Recall that this prior remains *conditional* on $q$, and so we can't just build a single KDE (in which case we might tolerate having to perform a vast number of draws and slow KDE evaluation), but will need to build a new estimator every time we consider a different mass ratio.
Instead, let's leverage our knowledge of the marginal prior $p(\chi_\mathrm{eff}|q)$ and factor the joint prior as
\begin{equation}
p(\chi_\mathrm{eff},\chi_p|q) = p(\chi_p|\chi_\mathrm{eff},q) p(\chi_\mathrm{eff}|q),
\end{equation}
so that we only have to worry about numerically constructing the one-dimensional distribution $p(\chi_p|\chi_\mathrm{eff},q)$.
Given $\chi_\mathrm{eff}$ and $q$, we will repeatedly draw $\{a_1,a_2,\cos t_1,\cos t_2\}$ consistent with $\chi_\mathrm{eff}$, and then construct the resulting distribution over $\chi_p$. In particular, we will regard
\begin{equation}
\cos t_1 = \frac{(1+q)\chi_\mathrm{eff} - q a_2 \cos t_2}{a_1}
\end{equation}
as a function of the $\chi_\mathrm{eff}$ and the three other component spin parameters. In making this choice, though, we are *really* drawing from a slice through
\begin{equation}
\frac{dP}{d a_1 da_2 d\chi_\mathrm{eff} d\cos t_2 } = \frac{dP}{d a_1 da_2 d \cos t_1 d\cos t_2} \frac{\partial \cos t_1}{\partial \chi_\mathrm{eff}}.
\end{equation}
Thus, in order to have properly sampled from the underlying uniform and isotropic distribution $dP/d a_1 da_2 d \cos t_1 d\cos t_2$, we will need to remember to re-weight each draw by the Jacobian factor $\partial \cos t_1/\partial \chi_\mathrm{eff} = (1+q)/a_1$ (equivalently, divide out $\partial \chi_\mathrm{eff}/\partial \cos t_1 = a_1/(1+q)$); this is the `jacobian_weights` array constructed below.
Let's try this in the following cell:
```python
# Fix some value for chi_eff and q
# Feel free to change these!
aMax = 1.
Xeff = 0.2
q = 0.5
# Draw random spin magnitudes.
# Note that, given a fixed chi_eff, a1 can be no larger than (1+q)*chi_eff,
# and a2 can be no larger than (1+q)*chi_eff/q
ndraws = 100000
a1 = np.random.random(ndraws)*aMax
a2 = np.random.random(ndraws)*aMax
# Draw random tilts for spin 2
cost2 = 2.*np.random.random(ndraws)-1.
# Finally, given our conditional value for chi_eff, we can solve for cost1
# Note, though, that we still must require that the implied value of cost1 be *physical*
cost1 = (Xeff*(1.+q) - q*a2*cost2)/a1
# While any cost1 values remain unphysical, redraw a1, a2, and cost2, and recompute
# Repeat as necessary
while np.any(cost1<-1) or np.any(cost1>1):
to_replace = np.where((cost1<-1) | (cost1>1))[0]
a1[to_replace] = np.random.random(to_replace.size)*aMax
a2[to_replace] = np.random.random(to_replace.size)*aMax
cost2[to_replace] = 2.*np.random.random(to_replace.size)-1.
cost1 = (Xeff*(1.+q) - q*a2*cost2)/a1
Xp_draws = chi_p(a1,a2,cost1,cost2,q)
jacobian_weights = (1.+q)/a1
```
For comparison, let's also take a brute-force approach, drawing truly random component spins and saving those whose $\chi_\mathrm{eff}$ are "close to" the conditioned $\chi_\mathrm{eff}$ value specified above. This can take a while, depending on the values of $q$ and $\chi_\mathrm{eff}$ we've chosen...
```python
test_a1s = np.array([])
test_a2s = np.array([])
test_cost1s = np.array([])
test_cost2s = np.array([])
while test_a1s.size<30000:
test_a1 = np.random.random()*aMax
test_a2 = np.random.random()*aMax
test_cost1 = 2.*np.random.random()-1.
test_cost2 = 2.*np.random.random()-1.
test_xeff = chi_eff(test_a1,test_a2,test_cost1,test_cost2,q)
if np.abs(test_xeff-Xeff)<0.02:
test_a1s = np.append(test_a1s,test_a1)
test_a2s = np.append(test_a2s,test_a2)
test_cost1s = np.append(test_cost1s,test_cost1)
test_cost2s = np.append(test_cost2s,test_cost2)
```
Let's plot both approaches below. For completeness, also plot what happens if we *forget* the Jacobian factors, which gives a clear mismatch relative to the brute force draws.
```python
fig,ax = plt.subplots()
ax.hist(Xp_draws,density=True,bins=30,weights=jacobian_weights,label='Our approach')
ax.hist(Xp_draws,density=True,bins=30,histtype='step',ls='--',color='black',label='Our approach (w/out Jacobians)')
ax.hist(chi_p(test_a1s,test_a2s,test_cost1s,test_cost2s,q),density=True,histtype='step',bins=30,color='black',
label='Brute force')
plt.legend()
ax.set_xlabel(r'$\chi_p$',fontsize=14)
ax.set_ylabel(r'$p(\chi_p|\chi_\mathrm{eff},q)$',fontsize=14)
plt.show()
```
We could stop here, KDE our (appropriately weighted) draws, and evaluate the KDE at a $\chi_p$ of interest. We want to be a bit more careful with the end points, though. If we KDE directly, some of our probability will leak out past our boundaries at $\chi_p = 0$ and $\chi_p = 1$.
```python
demo_kde = gaussian_kde(Xp_draws,weights=jacobian_weights)
fig,ax = plt.subplots()
ax.hist(Xp_draws,density=True,bins=30,weights=jacobian_weights)
ax.plot(np.linspace(-0.1,1.1,50),demo_kde(np.linspace(-0.1,1.1,50)),color='black',label='KDE')
plt.legend()
ax.set_xlabel(r'$\chi_p$',fontsize=14)
ax.set_ylabel(r'$p(\chi_p|\chi_\mathrm{eff},q)$',fontsize=14)
plt.show()
```
Even if we truncate to the interval $0 \leq \chi_p \leq 1$, we will still generically end up in a situation where our prior does not go to zero at $\chi_p = 0$ and $\chi_p = 1$:
```python
# Integrate across (0,1) to obtain appropriate normalization
truncated_grid = np.linspace(0,1,100)
norm_constant = np.trapz(demo_kde(truncated_grid),truncated_grid)
fig,ax = plt.subplots()
ax.hist(Xp_draws,density=True,bins=30,weights=jacobian_weights)
ax.plot(truncated_grid,demo_kde(truncated_grid)/norm_constant,color='black',label='KDE')
plt.legend()
ax.set_xlabel(r'$\chi_p$',fontsize=14)
ax.set_ylabel(r'$p(\chi_p|\chi_\mathrm{eff},q)$',fontsize=14)
plt.show()
```
Instead, we will take a two-step approach. First, use a KDE to evaluate $p(\chi_p|\chi_\mathrm{eff},q)$ across a grid of points well inside the boundaries at $0$ and $\mathrm{Max}(\chi_p)$. Then manually specify the endpoints, with $p(\chi_p|\chi_\mathrm{eff},q) = 0$.
Note that the maximum value of $\chi_p$ given some $\chi_\mathrm{eff}$ is
\begin{equation}
\begin{aligned}
\mathrm{Max}(\chi_p) &= \mathrm{Max}\left[\mathrm{max}\left( s_{1p}, \frac{3+4q}{4+3q} q s_{2p}\right)\right] \\
&= \mathrm{Max}(s_{1p}),
\end{aligned}
\end{equation}
defining $s_p = a \sin t$ as the in-plane spin component. If we define $s_z = a \cos t$, then
\begin{equation}
\begin{aligned}
\mathrm{Max}(\chi_p)
&= \mathrm{Max}\sqrt{a^2_\mathrm{max}-s_{1z}^2} \\
&= \sqrt{a^2_\mathrm{max}-\mathrm{Min}(s_{1z}^2)} \\
&= \sqrt{a^2_\mathrm{max}-\mathrm{Min}\left[\left((1+q)\chi_\mathrm{eff} - q s_{2z}\right)^2\right]}
\end{aligned}
\end{equation}
where the minimum is taken over possible $s_{2z}$. If $(1+q)\chi_\mathrm{eff} \leq a_\mathrm{max} q$, then there is always some $s_{2z}$ available such that the bracketed term is zero, giving $\mathrm{Max}(\chi_p) = a_\mathrm{max}$. If, on the other hand, $(1+q)\chi_\mathrm{eff} > a_\mathrm{max} q$ then the bracketed term will necessarily always be non-zero, with its smallest value occurring at $s_{2z} = a_\mathrm{max}$. In this case, $\mathrm{Max}(\chi_p) = \sqrt{a^2_\mathrm{max}-\left((1+q)\chi_\mathrm{eff} - a_\mathrm{max} q\right)^2}$.
```python
# Compute maximum chi_p
if (1.+q)*np.abs(Xeff)/q<aMax:
max_Xp = aMax
else:
max_Xp = np.sqrt(aMax**2 - ((1.+q)*np.abs(Xeff)-q)**2.)
# Set up a grid slightly inside (0,max chi_p) and evaluate KDE
reference_grid = np.linspace(0.05*max_Xp,0.95*max_Xp,30)
reference_vals = demo_kde(reference_grid)
# Manually prepend/append zeros at the boundaries
reference_grid = np.concatenate([[0],reference_grid,[max_Xp]])
reference_vals = np.concatenate([[0],reference_vals,[0]])
norm_constant = np.trapz(reference_vals,reference_grid)
# Interpolate!
prior_vals = [np.interp(Xp,reference_grid,reference_vals) for Xp in truncated_grid]
fig,ax = plt.subplots()
ax.hist(Xp_draws,density=True,bins=30,weights=jacobian_weights)
ax.plot(truncated_grid,prior_vals/norm_constant,color='black',label='Our interpolant')
plt.legend()
ax.set_xlabel(r'$\chi_p$',fontsize=14)
ax.set_ylabel(r'$p(\chi_p|\chi_\mathrm{eff},q)$',fontsize=14)
plt.show()
```
This procedure is implemented in the function `chi_p_prior_given_chi_eff_q` appearing in `priors.py`. For completeness, let's compare the output of this function against the result we got in this notebook.
```python
from priors import *
ndraws=100000
priors_from_function = [chi_p_prior_given_chi_eff_q(q,aMax,Xeff,xp,ndraws=ndraws,bw_method=1.*ndraws**(-1./5.)) for xp in reference_grid]
```
```python
fig,ax = plt.subplots()
ax.plot(reference_grid,priors_from_function,label='From priors.py')
ax.plot(reference_grid,reference_vals)
ax.set_xlabel(r'$\chi_p$',fontsize=14)
ax.set_ylabel(r'$p(\chi_p|\chi_\mathrm{eff},q)$',fontsize=14)
plt.legend()
plt.show()
```
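For completeness, the factorization above can be used to assemble the full joint prior by multiplying this conditional by the marginal $\chi_\mathrm{eff}$ prior from `priors.py`. The sketch below *assumes* `chi_effective_prior_from_isotropic_spins(q, aMax, xeffs)` is the calling signature of that function (check its definition in `priors.py` before relying on this):
```python
# Sketch only: p(chi_eff, chi_p | q) = p(chi_p | chi_eff, q) * p(chi_eff | q) at the chi_eff used above.
# NOTE: the call signature of chi_effective_prior_from_isotropic_spins is assumed here -- verify against priors.py.
p_xeff = chi_effective_prior_from_isotropic_spins(q,aMax,np.array([Xeff]))
p_xp_given_xeff = np.array([chi_p_prior_given_chi_eff_q(q,aMax,Xeff,xp,ndraws=10000,bw_method=1.*10000**(-1./5.)) for xp in reference_grid])
joint_prior_vals = p_xp_given_xeff*p_xeff
fig,ax = plt.subplots()
ax.plot(reference_grid,joint_prior_vals)
ax.set_xlabel(r'$\chi_p$',fontsize=14)
ax.set_ylabel(r'$p(\chi_\mathrm{eff},\chi_p|q)$',fontsize=14)
plt.show()
```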
```python
```
| 428fa365ec959f2e3bb484ef14f98d3898709a87 | 145,240 | ipynb | Jupyter Notebook | Joint-ChiEff-ChiP-Prior.ipynb | tcallister/effective-spin-priors | cd5813890de043b2dc59bfaaf9f5eb7d57882641 | [
"MIT"
]
| 4 | 2021-04-08T05:21:14.000Z | 2021-11-08T07:05:24.000Z | Joint-ChiEff-ChiP-Prior.ipynb | tcallister/effective-spin-priors | cd5813890de043b2dc59bfaaf9f5eb7d57882641 | [
"MIT"
]
| 2 | 2021-05-20T00:47:02.000Z | 2021-06-02T15:26:27.000Z | Joint-ChiEff-ChiP-Prior.ipynb | tcallister/effective-spin-priors | cd5813890de043b2dc59bfaaf9f5eb7d57882641 | [
"MIT"
]
| 2 | 2021-04-21T01:13:35.000Z | 2021-05-03T01:10:05.000Z | 317.811816 | 41,940 | 0.924029 | true | 3,720 | Qwen/Qwen-72B | 1. YES
2. YES | 0.808067 | 0.782662 | 0.632444 | __label__eng_Latn | 0.93818 | 0.30771 |
## Topics covered in this notebook:
1. What is K-Nearest Neighbors(kNN) mean?
2. Implementation.
3. How to choose K?
4. Common Issues & Fix.
5. Where kNN can fail?
6. References.
## 1. K - Nearest Neighbors:
1. The idea is to make predictions using the closest known data points.
2. Look at the image below:
    1. There are 2 categories: Star & Triangle.
    2. Consider a new test data point -> Green square. What is this point classified as?
        1. K = 3 -> 3-nearest neighbor -> Pick star.
        2. K = 5 -> 5-nearest neighbor -> Pick triangle.
3. We classify by calculating the Euclidean distance between the K nearest points & their corresponding classes.
3. Forms complex decision boundaries; adapts to data density.
4. These type of models are called non-parametric models.
5. These type of classifiers are also called as lazy classifiers.
1. train(X,Y) doesn't do anything. Just stores X & Y.
2. predict(X') does all the work by looking through stored X & Y.
<br>
6. Few assumptions:
1. Output varies smoothly with input.
2. Data occupies sub-space of high-dimensional input space.
## 2. Implementation:
1. Idea is simple, but the implementation can be tricky.
2. Keeping track of an arbitrary number of distances is not so easy.
3. First -> need to look through all of the training data -> O(N).
4. Then need to look through the closest distances you have stored so far -> O(K).
\begin{align}
\lVert x^{a} \,-\, x^{b}\rVert_2 \, &= \sqrt {\sum_{j=1}^d (x_j^{a} \,-\, x_j^{b})^2} \\
\end{align}
5. Total O(NK).
6. Searching through a sorted list would be O(log K), a little better.
7. Even better: Ball Tree, K-D Tree etc.
8. Once we have the k-nearest neighbors, we need to turn them into votes, which means we need to store the class as well.
    1. {dist1: class1, dist2: class2, ...}
9. Count up the votes for each class:
    1. {class1: num_class1, class2: num_class2, ...}
10. Pick the class that has the highest votes.
    1. What if there is a tie?
        1. Use whatever argmax(votes) outputs.
        2. Pick one at random.
        3. Weight by distance to neighbors.
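To make the recipe above concrete, here is a minimal NumPy sketch of the brute-force classifier (an illustrative implementation written for this summary, not taken from the references; a full `argsort` is used instead of tracking only the k smallest distances):
```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_query, k=3):
    # 1. Euclidean distance from the query point to every training point -> O(N)
    dists = np.sqrt(np.sum((X_train - x_query)**2, axis=1))
    # 2. Indices of the k closest training points (argsort is O(N log N), simpler than tracking k minima)
    nearest = np.argsort(dists)[:k]
    # 3. Turn the neighbors into votes {class: count} and return the majority class
    votes = Counter(y_train[nearest])
    return votes.most_common(1)[0][0]

# Tiny usage example with made-up 2-D points
X_train = np.array([[0., 0.], [0., 1.], [1., 0.], [5., 5.], [5., 6.], [6., 5.]])
y_train = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(X_train, y_train, np.array([0.5, 0.5]), k=3))  # -> 0
print(knn_predict(X_train, y_train, np.array([5.5, 5.5]), k=3))  # -> 1
```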
## 3. How to choose K?:
1. No easy answer.
2. K is hyperparameter.
3. Use cross-validation.
4. Larger k may lead to better performance.
5. But if we set k too large we may look at samples that are not neighbors.
6. Rule of thumb: k < sqrt(n), where n is number of training examples.
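One practical way to choose k is to score a range of candidates with cross-validation and keep the best; the sketch below uses scikit-learn (not part of these notes) purely as an illustration:
```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
candidate_ks = range(1, int(np.sqrt(len(y))) + 1)  # rule of thumb: k < sqrt(n)
# Mean 5-fold cross-validation accuracy for each candidate k
scores = [cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y, cv=5).mean() for k in candidate_ks]
best_k = candidate_ks[int(np.argmax(scores))]
print(best_k, max(scores))
```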
## 4. Common Issues & Fix:
1. If some attribute have larger ranges, they are treated as more important:
1. Normalize scale:
1. Linear transformatio to be between [0,1].
2. Scale to have 0 mean and 1 variance.
3. Caution: Sometimes scale matters.
2. Irrelevant attributes can add noise to distance measure.
1. Remove attributes.
2. Adapt weights using regularization techniques (using othere types of classifier).
3. Computation:O(NK)
1. Use subset of dimensions.
2. Pre-sort training examples into fast data structures(e.g. kd-trees) - Need to read about it.
3. Compute only approximate distance(e.g. LSH).
4. Remove redundant data(e.g., condensing).
4. High Dimensional Data: 'Curse of dimensionality'
1. Required amount of data increases exponentially with dimension.
2. Computation cost also increases.
## 5. Where kNN can fail?
1. Grid of alternating dots.
    1. If you choose K=3, there will always be a 2/3 vote from the wrong class.
        1. Can fix by choosing K = 1.
        2. Weighting each point by distance.
## 6. References:
1. An Introduction to Statistical Learning Textbook by Gareth James, Daniela Witten, Trevor Hastie and Robert Tibshirani.
2. University of Michigan EECS 445 - Machine Learning Course (https://github.com/eecs445-f16/umich-eecs445-f16).<br>
3. University of Toronto CSC 411 - Intro. to Machine Learning (http://www.cs.toronto.edu/~urtasun/courses/CSC411_Fall16/CSC411_Fall16.html).<br>
4. Stanford CS109 - Intro. to proabability for computer scientists (https://web.stanford.edu/class/archive/cs/cs109/cs109.1166/). <br>
5. Few online courses on Udemy, Coursera etc.
| a0280591296d1c4499535f026ddad23e42c74665 | 5,689 | ipynb | Jupyter Notebook | 3.K-Nearest Neighbors/0.Theory/KNN.ipynb | ananth-repos/machine-learning | a510dcf81fab9137c33f568e73d65262667b3973 | [
"MIT"
]
| null | null | null | 3.K-Nearest Neighbors/0.Theory/KNN.ipynb | ananth-repos/machine-learning | a510dcf81fab9137c33f568e73d65262667b3973 | [
"MIT"
]
| null | null | null | 3.K-Nearest Neighbors/0.Theory/KNN.ipynb | ananth-repos/machine-learning | a510dcf81fab9137c33f568e73d65262667b3973 | [
"MIT"
]
| null | null | null | 43.098485 | 153 | 0.595008 | true | 1,124 | Qwen/Qwen-72B | 1. YES
2. YES | 0.819893 | 0.771843 | 0.632829 | __label__eng_Latn | 0.986451 | 0.308605 |
<figure>
<IMG SRC="gfx/Logo_norsk_pos.png" WIDTH=100 ALIGN="right">
</figure>
# Diatomic Molecules and Spectroscopy
*Roberto Di Remigio*, *Luca Frediani*
Spectroscopy probes the electronic structure of atoms and molecules by measuring their interaction with light.
Different portions of the light spectrum can be used to explore different aspects of the electronic structure of a molecule. In this notebook, we will look closely at rotatonial and vibrational spectroscopy of diatomic molecules.
Some properties: http://hyperphysics.phy-astr.gsu.edu/hbase/Tables/diatomic.html
## Rotational Spectroscopy
Rotational spectroscopy uses _microwave radiation_ (300 MHz to 300 GHz) to investigate the rotational levels of molecule, _i.e._ the distribution of the nuclear masses in the molecule.
In a rotational spectroscopy experiment, transitions between the closely space rotational levels are observed and their energy differences can provide information on the molecular geometry and the masses of the nuclei.
For a diatomic molecule, the rotational terms (level energies expressed in wavenumbers) are:
\begin{equation}
F(J) = BJ(J+1)
\end{equation}
with intensity
\begin{equation}
|\mu_{J+1, J}|^2 = \frac{J+1}{2J+1}\mu_0^2
\end{equation}
$B$ and $\mu_0$ are the rotational constant and the dipole moment, respectively:
\begin{equation}
B = \frac{\hbar}{4\pi c I}; \quad I = \mu R_\mathrm{e}^2
\end{equation}
Thus a rotational transition will be detected only if the molecule has a nonzero dipole moment in the ground state. Moreover, the selection rule:
\begin{equation}
\Delta J = \pm 1
\end{equation}
applies.
## Vibrational Spectroscopy
Vibrational spectroscopy uses _infrared radiation_ (430 THz to 300 GHz) to investigate the relative motions of nuclei in molecules, _i.e._ molecular vibrations.
In the harmonic model of diatomic molecules, the wavenumbers of the vibrational levels are obtained as:
\begin{equation}
G(v) = (v+\frac{1}{2})\tilde{\nu}, \quad \tilde{\nu} = \frac{1}{2\pi c}\sqrt{\frac{k}{\mu}}
\end{equation}
The intensity of a vibrational transition can be calculated as:
\begin{equation}
I_{vw} = \left\langle \psi_v|\mu|\psi_w\right\rangle
\end{equation}
The first-order term in the Taylor expansion with respect to the nuclear displacements determines the selection rules for vibrational spectroscopy in the harmonic approximation.
Vibrational motion is never exactly harmonic. In the harmonic model we cannot, in fact, observe bond dissociation which is clearly in contrast with experiment.
## Computing the Rovibrational Spectrum
All vibrational levels have a _stack_ of rotational levels associated. Thus, when measuring vibrational spectra in _gas phase_ one can observe transitions between these stacks of rotational levels associated with different vibrational levels. The energy of such transitions is in general expressed as:
\begin{equation}
S(v, J) = G(v) + F_v(J)
\end{equation}
where $G(v)$ can contain anharmonic terms and $F_v(J)$ centrifugal distortion terms, _i.e._ depending on the vibrational quantum number $v$.
In the simplest case, these nonlinear terms can be neglected, obtaining the simplified formula:
\begin{equation}
S(v, J) = (v+\frac{1}{2})\tilde{\nu} + BJ(J+1)
\end{equation}
where the $\tilde{\nu}$ and $B$ are the vibrational and rotational constants, respectively.
Due to the small energy separation between rotational levels, the rovibrational spectrum will be temperature dependent. Each state, with energy $S(v, J)$, will be represented with a _weight_ according to the Boltzmann distribution:
\begin{equation}
w(v, J, T) = \frac{\mathrm{deg}(J)\exp(-\frac{S(v, J)}{k_\mathrm{B}T})}{\sum_{v,J}\mathrm{deg}(J)\exp(-\frac{S(v, J)}{k_\mathrm{B}T})}
\end{equation}
$\mathrm{deg}(J)$ is the degeneracy of the rotational state with quantum number $J$: states with higher degeneracy will be more represented in the spectrum.
The rovibrational spectrum captures the _rotational transitions_ that occur when the _vibrational transition_ $v+1\leftarrow v$ occurs. It consists of three branches:
- The **P branch** with all transitions where $\Delta J = -1$.
- The **Q branch** with the $\Delta J = 0$ transition. This can only occur with open-shell molecules.
- The **R branch** with all transitions where $\Delta J = +1$.
### Tasks
0. Derive a general formula for the energy difference (in wavenumbers) of transitions in the P, Q and R branches of the spectrum
1. Set up general script(s) to calculate and plot the rovibrational spectrum of a diatomic molecule. More in detail, the script(s) should:
    - Calculate the vibrational and rotational constants, given the masses, bond length and bond stiffness constant.
    - Calculate the energies of the rotational levels for the P and R branches. **We assume the Q branch to be forbidden**.
    - Calculate the _relative intensities_ of each transition in the P and R branches in terms of the Boltzmann distribution weight.
- Plot the spectrum as wavenumber _vs_ relative intensity.
2. Compute the spectrum for the HCl molecule. An experimentally measured spectrum is reported below.
<figure>
<IMG SRC="gfx/rovib-HCl.png">
</figure>
```python
```
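As a starting point for Tasks 1 and 2, here is a minimal sketch within the rigid-rotor/harmonic approximation. The HCl parameters used ($k \approx 516\,\mathrm{N/m}$, $R_\mathrm{e} \approx 127.5\,\mathrm{pm}$, masses of $^{1}$H and $^{35}$Cl) are assumed literature-style values and should be checked against the table linked above; the line positions follow from $\tilde{\nu}_P(J) = \tilde{\nu} - 2BJ$ and $\tilde{\nu}_R(J) = \tilde{\nu} + 2B(J+1)$.
```python
# Minimal sketch (not a full solution): rigid-rotor / harmonic-oscillator rovibrational spectrum of HCl.
# Assumed parameters -- please verify: k ~ 516 N/m, Re ~ 127.5 pm, masses of 1H and 35Cl.
import numpy as np
import matplotlib.pyplot as plt
from scipy.constants import hbar, c, k as k_B, atomic_mass
m_H, m_Cl = 1.008*atomic_mass, 34.969*atomic_mass
mu = m_H*m_Cl/(m_H + m_Cl)                       # reduced mass (kg)
k_force = 516.0                                  # bond stiffness (N/m), assumed value
Re = 127.5e-12                                   # equilibrium bond length (m), assumed value
c_cm = 100.0*c                                   # speed of light in cm/s
nu_tilde = np.sqrt(k_force/mu)/(2.0*np.pi*c_cm)  # vibrational constant (cm^-1), roughly 2990
B = hbar/(4.0*np.pi*c_cm*mu*Re**2)               # rotational constant (cm^-1), roughly 10.6
T = 300.0
J = np.arange(0, 21)
E_J = B*J*(J + 1)                                # rotational terms (cm^-1) in the v = 0 stack
w = (2*J + 1)*np.exp(-2.0*np.pi*hbar*c_cm*E_J/(k_B*T))  # Boltzmann weights (h*c*E_J in joules)
w /= w.sum()
P_lines = nu_tilde - 2.0*B*J[1:]                 # P branch: J -> J-1 (J >= 1)
R_lines = nu_tilde + 2.0*B*(J + 1)               # R branch: J -> J+1
fig, ax = plt.subplots()
ax.vlines(P_lines, 0, w[1:], color='C0', label='P branch')
ax.vlines(R_lines, 0, w, color='C1', label='R branch')
ax.set_xlabel(r'$\tilde{\nu}$ / cm$^{-1}$')
ax.set_ylabel('relative intensity')
ax.legend()
plt.show()
```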
| 04135c389126cfdcdd38cd729767b3a5b927408d | 6,911 | ipynb | Jupyter Notebook | 09_diatomics-spectroscopy.ipynb | ilfreddy/seminars | c7e13874b41cc906a45b672e5b85c57d6880473e | [
"MIT"
]
| 4 | 2017-02-04T01:34:33.000Z | 2021-06-12T12:27:37.000Z | 09_diatomics-spectroscopy.ipynb | ilfreddy/seminars | c7e13874b41cc906a45b672e5b85c57d6880473e | [
"MIT"
]
| 3 | 2020-03-30T11:00:35.000Z | 2020-05-12T05:42:24.000Z | 09_diatomics-spectroscopy.ipynb | ilfreddy/seminars | c7e13874b41cc906a45b672e5b85c57d6880473e | [
"MIT"
]
| 7 | 2016-04-26T20:42:43.000Z | 2022-02-06T11:12:57.000Z | 47.335616 | 310 | 0.653306 | true | 1,318 | Qwen/Qwen-72B | 1. YES
2. YES | 0.795658 | 0.682574 | 0.543095 | __label__eng_Latn | 0.991998 | 0.100122 |
<a href="https://colab.research.google.com/github/aschelin/SimulacoesAGFE/blob/main/SC_EDO_sistemasequacoes.ipynb" target="_parent"></a>
# Sistemas de EDOs
Considere o sistema abaixo:
\begin{equation}
\begin{aligned}
\dot{x} &= f(t,x(t),y(t)) \\
\dot{y} &= g(t,x(t),y(t))
\end{aligned}
\end{equation}
com $x(t=0)=x_0$ e $y(t=0)=y_0$.
Como podemos resolver esse sistema de EDOs numericamente? A seguir, vamos mostrar alguns exemplos.
```python
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('seaborn-poster')
```
## Exemplo 1
Resolva o seguinte sistema de equações diferenciais ordinárias com valores iniciais:
\begin{equation}
\begin{aligned}
\dot{x} &= -y(t) \\
\dot{y} &= x(t)
\end{aligned}
\end{equation}
com $x(t=0)=1$ e $y(t=0)=0$.
Use o método de Euler para achar a sua solução aproximada.
```python
f = lambda t,x,y: -y
g = lambda t,x,y: x
```
```python
# Parametros
h = 1e-4
tfim = 10
nt = int(tfim/h)
x0 = 1
y0 = 0
s0 = [x0,y0]
```
```python
def Euler2D(f,g,h,s0,tmax=10):
nt = int(tmax/h)
x = np.zeros(nt)
y = np.zeros(nt)
tempo = np.linspace(0,tmax,nt)
x[0] = s0[0]
y[0] = s0[1]
for k in np.arange(1,nt):
x[k] = x[k-1] + h*f(tempo[k-1],x[k-1],y[k-1])
y[k] = y[k-1] + h*g(tempo[k-1],x[k-1],y[k-1])
return tempo,x,y
```
```python
tempo, x_num, y_num = Euler2D(f,g,h,s0,tfim)
```
```python
plt.figure(figsize = (12, 8))
plt.plot(tempo, y_num, 'bo--', label='Numérica com h={}'.format(h))
plt.title('Solução Aproximada de um Sistema de EDO')
plt.xlabel('x')
plt.ylabel('y')
plt.grid()
plt.legend(loc='lower left')
plt.show()
```
```python
plt.figure(figsize = (12, 8))
plt.plot(tempo, x_num, 'bo--', label='x')
plt.plot(tempo, y_num, 'go--', label='y')
plt.title('Solução Aproximada de um Sistema de EDO')
plt.xlabel('t')
plt.ylabel('y')
plt.grid()
plt.legend(loc='lower left')
plt.show()
```
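For this system the exact solution is known ($x(t)=\cos t$, $y(t)=\sin t$ for these initial conditions), so the numerical error can be checked directly; a small sketch using the arrays computed above:
```python
# Exact solution of Example 1 for x(0)=1, y(0)=0
x_exact = np.cos(tempo)
y_exact = np.sin(tempo)

plt.figure(figsize=(12, 8))
plt.plot(tempo, np.abs(x_num - x_exact), label='|x error|')
plt.plot(tempo, np.abs(y_num - y_exact), label='|y error|')
plt.xlabel('t')
plt.ylabel('absolute error')
plt.legend(loc='best')
plt.grid()
plt.show()
```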
To generalize, let's create a *derivadas* (derivatives) function that returns the right-hand sides in vector form:
```python
def derivadas(t,X):
F = np.zeros(len(X))
F[0] = -X[1] # xdot = -y
F[1] = X[0] # ydot = x
return F
```
```python
def Euler_geral(derivadas,h,s0,tmax=10):
nt = int(tmax/h)
X = np.zeros([nt,len(s0)])
tempo = np.linspace(0,tmax,nt)
X[0,:] = s0
for k in np.arange(1,nt):
F = derivadas(tempo[k-1],X[k-1,:])
X[k,:] = X[k-1,:] + h*F[:]
return tempo,X
```
```python
tempo,X = Euler_geral(derivadas,h,[1,0],tmax=1)
```
```python
plt.figure(figsize = (12, 8))
plt.plot(X[:,0],X[:,1], 'bo--', label='Numérica com h={}'.format(h))
plt.title('Solução Aproximada de um Sistema de EDO')
plt.xlabel('x')
plt.ylabel('y')
plt.grid()
plt.legend(loc='lower left')
plt.show()
```
## Example 2
Solve the following system of ordinary differential equations with initial values:
\begin{equation}
\begin{aligned}
\dot{x} &= -y(t) \\
\dot{y} &= x(t)
\end{aligned}
\end{equation}
with $x(t=0)=1$ and $y(t=0)=0$.
Use the **modified** Euler method (Heun's method) to find its approximate solution.
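The modified Euler method is a predictor–corrector scheme; its update equations, which the implementation in `Euler_mod_geral` below follows, are:
\begin{equation}
\begin{aligned}
\tilde{x}_{k+1} &= x_k + h\,f(t_k, x_k, y_k), \qquad
\tilde{y}_{k+1} = y_k + h\,g(t_k, x_k, y_k) \\
x_{k+1} &= x_k + \frac{h}{2}\left[f(t_k, x_k, y_k) + f(t_{k+1}, \tilde{x}_{k+1}, \tilde{y}_{k+1})\right] \\
y_{k+1} &= y_k + \frac{h}{2}\left[g(t_k, x_k, y_k) + g(t_{k+1}, \tilde{x}_{k+1}, \tilde{y}_{k+1})\right]
\end{aligned}
\end{equation}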
```python
def Euler_mod_geral(derivadas,h,s0,tmax=10):
nt = int(tmax/h)
X = np.zeros([nt,2])
tempo = np.linspace(0,tmax,nt)
X[0,:] = s0
for k in np.arange(1,nt):
F = derivadas(tempo[k-1],X[k-1,:])
kx1 = F[0]
ky1 = F[1]
x_euler = X[k-1,0] + h*kx1
y_euler = X[k-1,1] + h*ky1
Fnew = derivadas(tempo[k],[x_euler,y_euler])
kx2 = Fnew[0]
ky2 = Fnew[1]
X[k,0] = X[k-1,0] + h*(kx1+kx2)/2
X[k,1] = X[k-1,1] + h*(ky1+ky2)/2
return tempo,X
```
```python
# Parameters
h = .1
tfim = 10
nt = int(tfim/h)
x0 = 1
y0 = 0
s0 = [x0,y0]
# Run the integrator
tempo, X = Euler_mod_geral(derivadas,h,s0,tmax=10)
```
```python
plt.figure(figsize = (12, 8))
plt.plot(X[:,0],X[:,1], 'bo--', label='Numérica com h={}'.format(h))
plt.title('Solução Aproximada de um Sistema de EDO')
plt.xlabel('x')
plt.ylabel('y')
plt.grid()
plt.legend(loc='lower left')
plt.show()
```
## Task
Solve the following system of ordinary differential equations with initial values:
\begin{equation}
\begin{aligned}
\dot{x} &= -x(t)-y(t) \\
\dot{y} &= x(t)-y(t)^3
\end{aligned}
\end{equation}
with $x(t=0)=1$ and $y(t=0)=0$.
Use the **modified** Euler method to find its approximate solution.
```python
def derivadas(t,X):
F = np.zeros(len(X))
F[0] = -X[0]-X[1]
F[1] = X[0]-X[1]**3
return F
```
```python
# Parameters
h = 1e-2
tfim = 10
nt = int(tfim/h)
x0 = 1
y0 = 0
s0 = [x0,y0]
# Run the integrator
tempo, X = Euler_mod_geral(derivadas,h,s0,tmax=tfim)
```
```python
plt.figure(figsize = (12, 8))
plt.plot(X[:,0],X[:,1], 'bo--', label='Numérica com h={}'.format(h))
plt.title('Solução Aproximada de um Sistema de EDO')
plt.xlabel('x')
plt.ylabel('y')
plt.grid()
plt.legend(loc='lower left')
plt.show()
```
# SIR Model
The following system of ODEs is an epidemiological model of the spread of a disease (such as COVID-19). Let $S(t)$ be the number of individuals *susceptible* to infection, $I(t)$ the number of infected people, and $R(t)$ the number of recovered individuals.
\begin{equation}
\begin{aligned}
S^{\prime } &= -\beta S I \\
I^{\prime } &= \beta S I - \gamma I \\
R^{\prime } &= \gamma I
\end{aligned}
\end{equation}
where the parameter $\beta$ corresponds to the contamination (transmission) rate and $\gamma$ to the probability of an individual recovering in a time interval.
```python
def derivadas_SIR(t,X,params=[0,0]):
F = np.zeros(len(X))
beta = params[0]
gamma = params[1]
F[0] = -beta*X[0]*X[1]
F[1] = beta*X[0]*X[1]-gamma*X[1]
F[2] = gamma*X[1]
return F
```
```python
def Euler_geral_D(derivadas,params,h,s0,tmax=10):
nt = int(tmax/h)
X = np.zeros([nt,len(s0)])
tempo = np.linspace(0,tmax,nt)
X[0,:] = s0
for k in np.arange(1,nt):
F = derivadas(tempo[k],X[k-1,:],params)
X[k,:] = X[k-1,:] + h*F[:]
return tempo,X
```
```python
beta = 10/(40*8*24)
gamma = 3/(15*24)
params = [beta,gamma]
s0 = [50,1,0]
h=1e-2
tmax = 800
tempo,X = Euler_geral_D(derivadas_SIR,params,h,s0,tmax)
```
```python
plt.figure(figsize = (12, 8))
plt.plot(tempo,X[:,0], 'bo--', label='S')
plt.plot(tempo,X[:,1], 'ro--', label='I')
plt.plot(tempo,X[:,2], 'go--', label='R')
plt.title('Modelo SIR')
plt.xlabel('x')
plt.ylabel('y')
plt.grid()
plt.legend(loc='lower left')
plt.show()
```
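As a cross-check of the hand-written Euler integrator, the same SIR system can be integrated with SciPy's `solve_ivp` (a sketch, assuming SciPy is available in the environment):
```python
from scipy.integrate import solve_ivp

# Wrap derivadas_SIR to match solve_ivp's fun(t, y) signature
sol = solve_ivp(lambda t, X: derivadas_SIR(t, X, params), [0, tmax], s0,
                t_eval=np.linspace(0, tmax, 1000))

plt.figure(figsize=(12, 8))
plt.plot(sol.t, sol.y[0], label='S (solve_ivp)')
plt.plot(sol.t, sol.y[1], label='I (solve_ivp)')
plt.plot(sol.t, sol.y[2], label='R (solve_ivp)')
plt.xlabel('t')
plt.legend(loc='best')
plt.grid()
plt.show()
```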
```python
```
| 8a5b180b2ca77098413969ba0e3c490725c04828 | 258,911 | ipynb | Jupyter Notebook | SC_EDO_sistemasequacoes.ipynb | aschelin/SimulacoesAGFE | 5294771ff8bf85a1129611bd3406780ef64ac75a | [
"MIT"
]
| null | null | null | SC_EDO_sistemasequacoes.ipynb | aschelin/SimulacoesAGFE | 5294771ff8bf85a1129611bd3406780ef64ac75a | [
"MIT"
]
| null | null | null | SC_EDO_sistemasequacoes.ipynb | aschelin/SimulacoesAGFE | 5294771ff8bf85a1129611bd3406780ef64ac75a | [
"MIT"
]
| null | null | null | 396.49464 | 52,574 | 0.927346 | true | 2,437 | Qwen/Qwen-72B | 1. YES
2. YES | 0.884039 | 0.749087 | 0.662223 | __label__por_Latn | 0.534338 | 0.376896 |
<h1><center>MLHEP 2019</center></h1>
<h2><center>Seminar: Unsupervised Learning</center></h2>
# About
The goal of this seminar is to cover the main domains of unsupervised learning and to demonstrate algorithms implemented in the [scikit-learn](https://scikit-learn.org) library.
Topics:
- Clustering
- Data Scaling
- Principal Component Analysis (PCA) (Optionally)
- Anomalies Detection (Optionally)
```python
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
# Part 1: Clustering
## Data Preparation
```python
from sklearn import datasets
n_samples = 1500
random_state = 170
X, y = datasets.make_blobs(centers=3, n_samples=n_samples, random_state=random_state, center_box=(-10, 10))
# To play with
# X, y = datasets.make_circles(n_samples=n_samples, factor=.5, noise=.05)
# X, y = datasets.make_moons(n_samples=n_samples, noise=.05)
# X = np.random.rand(n_samples, 2)
```
```python
X[:5]
```
```python
y[:5]
```
```python
def plot_clusters(X, y):
# Create an figure with a custom size
# plt.figure(figsize=(6, 4))
if y is not None:
for cluster_label in np.unique(y):
# Plot all objects with y == i (class 0)
plt.scatter(X[y == cluster_label, 0], # selects all objects with y == i and the 1st column of X
X[y == cluster_label, 1], # selects all objects with y == i and the 2nd column of X
label=str(cluster_label)) # label for the plot legend
else:
plt.scatter(X[:, 0], X[:, 1], label='samples')
plt.xlabel('X1', size=12) # set up X-axis label
plt.ylabel('X2', size=12) # set up Y-axis label
plt.xticks(size=12)
plt.yticks(size=12)
plt.legend(loc='best', fontsize=12) # create the plot legend and set up it position
plt.grid(b=1) # create grid on the plot
plt.show() # display the plot
```
```python
plot_clusters(X, y=None)
```
## Clustering: K-Means
Suppose we have $N$ samples and $K$ clusters. Each cluster is described by its center (centroid) with coordinates $\mu_{j}$. The centroids are estimated by minimizing **within-cluster distance criterion**:
$$
L = \sum_{i=1}^{N} \min_{\mu_{k}} \rho(x_{i}, \mu_{k}) \to \min_{\mu_{1}, ..., \mu_{K}}
$$
$$
\rho(x_{i}, \mu_{k}) = || x_{i} - \mu_{k} ||^{2}
$$
where $x_{i}$ are the coordinates of a sample and $\rho(x_{i}, \mu_{k})$ is the distance between the $i$-th sample and the $k$-th cluster's centroid.
**K-Means algorithm:**
<center></center>
<center></center>
```python
from sklearn.utils import resample
class MyKmeans(object):
def __init__(self, n_clusters=2, max_iter=10, n_init=10):
"""
K-Means clustering algorithms implementation.
Parameters:
-----------
n_clusters: int
Number of clusters.
max_iters: int
Number of iterations of the centroids search.
n_init: int
Number of different initializations of the centroids.
"""
self.n_clusters = n_clusters
self.max_iter = max_iter
self.n_init = n_init
def _predict_for_centers(self, cluster_centers, X):
"""
Predict cluster labels based on their centroids.
Parameters:
-----------
cluster_centers: numpy.array
Array of the cluster centers.
X: numpy.array
Samples coordinates.
Returns:
--------
labels: numpy.array
Predicted cluster labels. Example: labels = [0, 0, 1, 1, 0, 2, ...].
"""
object_distances2 = []
for one_cluster_center in cluster_centers:
dist2 = ((X - one_cluster_center)**2).sum(axis=1)
object_distances2.append(dist2)
object_distances2 = np.array(object_distances2)
labels = np.argmin(object_distances2, axis=0)
return labels
def _calculate_cluster_centers(self, X, y):
"""
Estimate cluster centers based on samples in these clusters.
Parameters:
-----------
X: numpy.array
Samples coordinates.
y: numpy.array
Cluster labels of the samples.
Returns:
--------
cluster_centers: numpy.array
Estimated cluster centers.
"""
cluster_centers = []
cluster_labels = np.unique(y)
for one_cluster_label in cluster_labels:
one_cluster_center = X[y == one_cluster_label].mean(axis=0)
cluster_centers.append(one_cluster_center)
return np.array(cluster_centers)
def _calculate_cluster_metric(self, cluster_centers, X):
"""
Calculate within-cluster distance criterion.
Parameters:
-----------
cluster_centers: numpy.array
Array of the cluster centers.
X: numpy.array
Samples coordinates.
Returns:
--------
criterion: float
The criterion value.
"""
object_distances2 = []
for one_cluster_center in cluster_centers:
dist2 = ((X - one_cluster_center)**2).sum(axis=1)
object_distances2.append(dist2)
object_distances2 = np.array(object_distances2)
min_dists2 = np.min(object_distances2, axis=0)
criterion = min_dists2.mean()
return criterion
def _fit_one_init(self, X):
"""
Run k-Means algorithm for randomly init cluster centers.
Parameters:
-----------
X: numpy.array
Samples coordinates.
Returns:
--------
cluster_centers: numpy.array
Estimated cluster centers.
metric: float
Within-cluster distance criterion criterion value.
"""
# Init cluster centers
cluster_centers = resample(X, n_samples=self.n_clusters, random_state=None, replace=False)
# Search for cluster centers
for i in range(self.max_iter):
labels = self._predict_for_centers(cluster_centers, X)
cluster_centers = self._calculate_cluster_centers(X, labels)
# Calculate within-cluster distance criterion
metric = self._calculate_cluster_metric(cluster_centers, X)
return cluster_centers, metric
def fit(self, X):
"""
Run k-Means algorithm.
Parameters:
-----------
X: numpy.array
Samples coordinates.
"""
self.best_cluster_centers = None
self.best_metric = np.inf
for i in range(self.n_init):
# Run K-Means algorithms for randomly init cluster centers
cluster_centers, metric = self._fit_one_init(X)
# Save the best clusters
if metric < self.best_metric:
self.best_metric = metric
self.best_cluster_centers = cluster_centers
def predict(self, X):
"""
Predict cluster labels.
Parameters:
-----------
X: numpy.array
Samples coordinates.
Returns:
--------
y: numpy.array
Predicted cluster labels. Example: labels = [0, 0, 1, 1, 0, 2, ...].
"""
y = self._predict_for_centers(self.best_cluster_centers, X)
return y
```
```python
clusterer = MyKmeans(n_clusters=3, max_iter=20, n_init=10)
clusterer.fit(X)
y_pred = clusterer.predict(X)
```
```python
y_pred[:10]
```
```python
plot_clusters(X, y_pred)
```
## Metrics
**Silhouette Score:**
$$
s = \frac{b - a}{max(a, b)}
$$
- **a**: The mean distance between a sample and all other points in the same class.
- **b**: The mean distance between a sample and all other points in the next nearest cluster.
**Adjusted Rand Index (ARI):**
$$
ARI = \frac{RI - Expected\_RI}{max(RI) - Expected\_RI}
$$
$$
RI = \frac{a + b}{a + b + c + d}
$$
- a, the number of pairs of elements in S that are in the same subset in X and in the same subset in Y
- b, the number of pairs of elements in S that are in different subsets in X and in different subsets in Y
- c, the number of pairs of elements in S that are in the same subset in X and in different subsets in Y
- d, the number of pairs of elements in S that are in different subsets in X and in the same subset in Y
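A small illustration of the difference between the two metrics (a toy sketch): ARI compares the predicted partition with the true labels and is invariant to permutations of the label names, while the silhouette score is computed from the data and the predicted labels only.
```python
from sklearn import metrics

# Toy example: the predicted labels recover the true partition up to a
# relabeling, plus one misassigned point
y_true = [0, 0, 0, 1, 1, 1]
y_pred = [1, 1, 1, 0, 0, 1]
print('ARI:', metrics.adjusted_rand_score(y_true, y_pred))

# Silhouette needs the data itself (a hypothetical 1D toy dataset here), not y_true
X_toy = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
print('Silhouette:', metrics.silhouette_score(X_toy, y_pred))
```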
```python
from sklearn import metrics
silhouette_score_values = []
adjusted_rand_score_values = []
within_cluster_dist_values = []
n_clusters = np.arange(2, 21)
for n in n_clusters:
clusterer = MyKmeans(n_clusters=n, max_iter=10, n_init=10)
clusterer.fit(X)
y_pred = clusterer.predict(X)
score1 = metrics.silhouette_score(X, y_pred)
silhouette_score_values.append(score1)
score2 = metrics.adjusted_rand_score(y, y_pred)
adjusted_rand_score_values.append(score2)
score3 = clusterer.best_metric
within_cluster_dist_values.append(score3)
```
```python
plt.figure(figsize=(9, 6))
plt.plot(n_clusters, silhouette_score_values, linewidth=3, label='Silhouette score')
plt.plot(n_clusters, adjusted_rand_score_values, linewidth=3, label='Adjusted rand score')
plt.xlabel('Number of clusters', size=16)
plt.ylabel('Score', size=16)
plt.xticks(n_clusters, size=16)
plt.yticks(size=16)
plt.legend(loc='best', fontsize=16)
plt.grid(b=1)
plt.show()
plt.figure(figsize=(9, 6))
plt.plot(n_clusters, within_cluster_dist_values, linewidth=3, label='Within-cluster distance')
plt.xlabel('Number of clusters', size=16)
plt.ylabel('Score', size=16)
plt.xticks(n_clusters, size=16)
plt.yticks(size=16)
plt.legend(loc='best', fontsize=16)
plt.grid(b=1)
plt.show()
```
## Other Clustering Algorithms
Short overview of other clustering algorithms you can find in `scikit-learn` library [here](https://scikit-learn.org/stable/modules/clustering.html):
<center></center>
```python
from sklearn import datasets
n_samples = 1500
random_state = 170
X, y = datasets.make_blobs(centers=3, n_samples=n_samples, random_state=random_state, center_box=(-10, 10))
# To play with
# X, y = datasets.make_circles(n_samples=n_samples, factor=.5, noise=.05)
# X, y = datasets.make_moons(n_samples=n_samples, noise=.05)
# X = np.random.rand(n_samples, 2)
plot_clusters(X, None)
```
```python
# Import clustering algorithms
from sklearn import cluster
```
```python
# MiniBatchKMeans
# Run clusterer
clusterer = cluster.MiniBatchKMeans(n_clusters=3, batch_size=100)
clusterer.fit(X)
y_pred = clusterer.predict(X)
# Plot clustering results
plot_clusters(X, y_pred)
```
**DBSCAN:**
<center></center>
```python
# DBSCAN
# Run clusterer
clusterer = cluster.DBSCAN(eps=0.5, min_samples=5)
y_pred = clusterer.fit_predict(X)
# Plot clustering results
plot_clusters(X, y_pred)
```
**Agglomerative Clustering:**
<center></center>
[image link](https://quantdare.com/hierarchical-clustering/)
```python
# AgglomerativeClustering
# Run clusterer
clusterer = cluster.AgglomerativeClustering(n_clusters=3)
y_pred = clusterer.fit_predict(X)
# Plot clustering results
plot_clusters(X, y_pred)
```
## Tasks:
- Rerun cells above for other datasets. Explain the clustering results.
- Try different numbers of clusters and other options. How can you explain what you see?
---
---
---
# Part 2: Scaling
## Data Preparation
Multiply one of the sample features by a large number.
```python
X_scaled = X.copy()
X_scaled[:, 1] *= 100
```
```python
plot_clusters(X_scaled, None)
```
## Clustering without scaling
All clustering algorithms are based on distances between objects $\rho(x_{i}, x_{j})$. For example, in the 2D case:
$$
\rho(x_{i}, x_{j}) = \sqrt{ (x_{1i} - x_{1j})^{2} + (x_{2i} - x_{2j})^{2} }
$$
where $x_{1i}$ is the 1st input feature, $x_{2i}$ is the 2nd one.
Suppose that the features have different scales:
$$
\frac{x_{2i}}{x_{1i}} = 1000
$$
Then
$$
\rho(x_{i}, x_{j}) = \sqrt{ (x_{1i} - x_{1j})^{2} + (x_{2i} - x_{2j})^{2} } \approx\sqrt{ (x_{2i} - x_{2j})^{2} }
$$
So, the 1st feature will not be taken into account by clustering algorithms.
```python
from sklearn import cluster
# Run clustering algorithm
clusterer = cluster.KMeans(n_clusters=3, n_init=10)
clusterer.fit(X_scaled)
y_pred = clusterer.predict(X_scaled)
# Show clustering results
plot_clusters(X_scaled, y_pred)
```
## Clustering with Standard Scaler
Standard Scaler transforms a feature $x$ to a new feature $x_{new}$ with zero mean and unit variance in the following way:
$$
x_{new} = \frac{ x - \mu }{ \sigma }
$$
where
$$
\mu = \frac{1}{N} \sum_{i=1}^{N}x_{i}
$$
$$
\sigma = \sqrt{ \frac{1}{N-1} \sum_{i=1}^{N} (x_{i} - \mu)^{2} }
$$
This transforms all input features to the same scale.
```python
from sklearn.preprocessing import StandardScaler
ss = StandardScaler()
ss.fit(X_scaled)
X_scaled_ss = ss.transform(X_scaled)
```
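What `StandardScaler` does can be reproduced directly with NumPy (a quick sketch; note that `StandardScaler` uses the biased standard deviation, which is also NumPy's default):
```python
# Manual standardization: subtract column means, divide by column standard deviations
X_manual = (X_scaled - X_scaled.mean(axis=0)) / X_scaled.std(axis=0)
print(np.allclose(X_manual, X_scaled_ss))  # expected: True
```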
```python
from sklearn import cluster
# Run clustering algorithm
clusterer = cluster.KMeans(n_clusters=3, n_init=10)
clusterer.fit(X_scaled_ss)
y_pred = clusterer.predict(X_scaled_ss)
# Show clustering results
plot_clusters(X_scaled, y_pred)
```
---
---
---
# Part 3: Principal Component Analysis (PCA)
## Data Preparation
```python
from sklearn import datasets
# Generate 2D Gaussian distribution
n_samples = 2000
X, y = datasets.make_blobs(n_samples=n_samples, random_state=42, centers=[[0, 0]])
# Apply coordiantes transformation
transformation = [[0.6, 0.4],
[0.4, 0.6]]
X_aniso = np.dot(X, transformation)
```
```python
plot_clusters(X_aniso, None)
```
## Principal Component Analysis (PCA)
Find directions along which our datapoints have the greatest variance:
<center></center>
These directions are principal components. Principal components $a_{1},a_{2},...a_{D}\in\mathbb{R}^{D}$ are orthonormal:
$$
\langle a_{i},a_{j}\rangle=\begin{cases}
1, & i=j\\
0 & i\ne j
\end{cases}
$$
## PCA algorithm (detailed):
### Step 1:
Calculate the variance along a principal component $a$, assuming that $X$ is centered:
$$
\begin{align} \sigma^2_a & = \frac{1}{n}\sum\limits_{i=1}^n(a^\top x_i - \mu)^2 \\
& = \frac{1}{n}\sum\limits_{i=1}^n(a^\top x_i - 0)^2 \\
& = \frac{1}{n}\sum\limits_{i=1}^n a^\top( x_i x_i^\top) a \\
& = a^\top \left(\frac{1}{n}\sum\limits_{i=1}^n x_i x_i^\top \right) a \\
& = a^\top X^\top X a \\
\end{align}
$$
### Step 2:
Find $a_1$ that maximizes the variance:
$$
\begin{equation}
\begin{cases}
a_1^\top X^\top X a_1 \rightarrow \max_{a_1} \\
a_1^\top a_1 = 1
\end{cases}
\end{equation}
$$
Lagrangian of optimization problem:
$$ \mathcal{L}(a_1, \nu) = a_1^\top X^\top X a_1 - \nu (a_1^\top a_1 - 1) \rightarrow max_{a_1, \nu}$$
Derivative w.r.t. $a_1$:
$$ \frac{\partial\mathcal{L}}{\partial a_1} = 2X^\top X a_1 - 2\nu a_1 = 0 $$
$$X^\top X a_1 = \nu a_1$$
---
#### Note:
So $a_1$ is selected from a set of eigenvectors of $X^\top X$. But which one?
$$ a_1^\top X^\top X a_1 = \nu a_1^\top a_1 = \nu \rightarrow \max$$
That means:
* $\nu$ should be the greatest eigenvalue of matrix $X^\top X$, which is $\lambda_1$
* $a_1$ is eigenvector, correspondent to $\lambda_1$
---
### Step 3:
Similarly for $a_{2}$:
$$
\begin{equation}
\begin{cases}
a_2^\top X^\top X a_2 \rightarrow \max_{a_2} \\
a_2^\top a_2 = 1 \\
a_2^\top a_1 = 0
\end{cases}
\end{equation}
$$
...
## PCA algorithm (short)
1. Center (and scale) dataset
2. Calculate covariance matrix $C=X^\top X$
3. Find first $k$ eigenvalues and eigenvectors
$$A =
\left[
\begin{array}{cccc}
\mid & \mid & & \mid\\
a_{1} & a_{2} & \ldots & a_{k} \\
\mid & \mid & & \mid
\end{array}
\right]
$$
4. Perform projection:
$$ Z = XA $$
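These four steps translate almost line by line into NumPy; a minimal sketch based on the eigendecomposition of the covariance matrix (the `sklearn` `PCA` used below relies on the SVD instead, but spans the same subspace, up to the sign of each component):
```python
def pca_numpy(X, k):
    """Project X onto its first k principal components (minimal sketch)."""
    Xc = X - X.mean(axis=0)              # 1. center the data
    C = Xc.T @ Xc / len(Xc)              # 2. covariance matrix
    eigval, eigvec = np.linalg.eigh(C)   # 3. eigenvalues/eigenvectors (ascending order)
    order = np.argsort(eigval)[::-1]     #    sort in descending order
    A = eigvec[:, order[:k]]             #    first k eigenvectors as columns
    return Xc @ A                        # 4. projection Z = X A

Z = pca_numpy(X_aniso, k=2)
```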
```python
# Import PCA
from sklearn.decomposition import PCA
# Fit PCA
pca = PCA(n_components=2)
pca.fit(X_aniso)
# Apply PCA
X_aniso_pca = pca.transform(X_aniso)
```
```python
X_aniso_pca
```
```python
pca_1 = PCA(n_components=1)
pca_1.fit(X_aniso)
X_aniso_pca_1 = pca_1.transform(X_aniso)
pca_2 = PCA(n_components=2)
pca_2.fit(X_aniso)
X_aniso_pca_2 = pca_2.transform(X_aniso)
```
```python
plt.figure(figsize=(15, 5))
# Plot original X with eigenvectors
plt.subplot(1, 3, 1)
plt.scatter(X_aniso[:, 0], X_aniso[:, 1], color='b')
for vector in pca_2.components_:
plt.arrow(0, 0, vector[0], vector[1], head_width=0.2, head_length=0.2, fc='k', ec='k')
plt.title("Original X", size=14)
plt.xticks(size=14)
plt.yticks(size=14)
#plt.grid(b=1)
plt.legend(loc='best')
# Plot for PCA with n_components=2
plt.subplot(1, 3, 2)
plt.scatter(X_aniso_pca_2[:, 0], X_aniso_pca_2[:, 1], color='b')
for vector in pca_2.transform(pca_2.components_):
plt.arrow(0, 0, vector[0], vector[1], head_width=0.2, head_length=0.2, fc='k', ec='k', linewidth=2)
plt.title("PCA with n_component = 2", size=14)
plt.xticks(size=14)
plt.yticks(size=14)
plt.ylim(-4.5, 4.5)
#plt.grid(b=1)
# Plot for PCA with n_components=1
plt.subplot(1, 3, 3)
plt.scatter(X_aniso_pca_1[:, 0], [0]*len(X_aniso_pca_1), color='b')
for vector in pca_1.transform(pca_1.components_):
if vector[0] <= 0.5: continue
plt.arrow(0, 0, vector[0], 0, head_width=0.2, head_length=0.2, fc='k', ec='k', linewidth=2)
plt.title("PCA with n_component = 1", size=14)
plt.xticks(size=14)
plt.yticks(size=14)
plt.xlim(-4.5, 4.5)
plt.ylim(-4.5, 4.5)
#plt.grid(b=1)
plt.tight_layout()
plt.show()
```
## Real Data Example
## Gender Recognition by Voice
This database was created to identify a voice as male or female, based upon acoustic properties of the voice and speech. The dataset consists of 3,168 recorded voice samples, collected from male and female speakers. The voice samples are pre-processed by acoustic analysis in R using the seewave and tuneR packages, with an analyzed frequency range of 0hz-280hz (human vocal range).
The following acoustic properties of each voice are measured and included within the CSV:
* meanfreq: mean frequency (in kHz)
* sd: standard deviation of frequency
* median: median frequency (in kHz)
* Q25: first quantile (in kHz)
* Q75: third quantile (in kHz)
* IQR: interquantile range (in kHz)
* skew: skewness (see note in specprop description)
* kurt: kurtosis (see note in specprop description)
* sp.ent: spectral entropy
* sfm: spectral flatness
* mode: mode frequency
* centroid: frequency centroid (see specprop)
* peakf: peak frequency (frequency with highest energy)
* meanfun: average of fundamental frequency measured across acoustic signal
* minfun: minimum fundamental frequency measured across acoustic signal
* maxfun: maximum fundamental frequency measured across acoustic signal
* meandom: average of dominant frequency measured across acoustic signal
* mindom: minimum of dominant frequency measured across acoustic signal
* maxdom: maximum of dominant frequency measured across acoustic signal
* dfrange: range of dominant frequency measured across acoustic signal
* modindx: modulation index. Calculated as the accumulated absolute difference between adjacent measurements of fundamental frequencies divided by the frequency range
* label: male or female
```python
! wget https://raw.githubusercontent.com/yandexdataschool/mlhep2019/master/notebooks/day-2/Clustering/data/voice.csv
```
```python
# Read data sample
data = pd.read_csv("voice.csv")
print("DataFrame shape: ", data.shape)
data.head()
```
```python
# Get feature names
feature_names = data.columns.drop(['label'])
print("Feature names: ", feature_names)
```
```python
# Prepare X and y
X = data[feature_names].values
y = 1. * (data['label'].values == 'male')
```
## Train / Test Split + Standardization
```python
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
# Split data into train and test samples
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=42)
# Standardization
ss = StandardScaler()
ss.fit(X_train)
X_train = ss.transform(X_train)
X_test = ss.transform(X_test)
```
## Train Classifier
```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
# You can play with other classifiers
clf = LogisticRegression(solver='lbfgs')
clf.fit(X_train, y_train)
```
```python
y_test_predict = clf.predict(X_test)
y_test_proba = clf.predict_proba(X_test)[:, 1]
```
```python
from sklearn.metrics import accuracy_score, roc_auc_score
accuracy = accuracy_score(y_test, y_test_predict)
auc = roc_auc_score(y_test, y_test_proba)
print("Accuracy: ", accuracy)
print("ROC AUC: ", auc)
```
## Apply PCA
```python
pca_accuracies = []
pca_aucs = []
pca_components = np.arange(1, 21)
for n_components in pca_components:
# For each n_components run PCA
pca = PCA(n_components=n_components)
pca.fit(X_train)
X_train_pca = pca.transform(X_train)
X_test_pca = pca.transform(X_test)
# Fit a classifier
clf = LogisticRegression(solver='lbfgs')
clf.fit(X_train_pca, y_train)
# Make predictions
y_test_predict = clf.predict(X_test_pca)
y_test_proba = clf.predict_proba(X_test_pca)[:, 1]
# Calculate quality metrics
accuracy = accuracy_score(y_test, y_test_predict)
pca_accuracies.append(accuracy)
auc = roc_auc_score(y_test, y_test_proba)
pca_aucs.append(auc)
```
```python
plt.figure(figsize=(9, 6))
plt.plot(pca_components, pca_accuracies, label='Accuracy', color='b', linewidth=3)
plt.plot(pca_components, pca_aucs, label='ROC AUC', color='r', linewidth=3)
plt.xticks(pca_components, size=14)
plt.xlabel("N components of PCA", size=14)
plt.yticks(size=14)
plt.ylabel("Metric values", size=14)
plt.legend(loc='best', fontsize=14)
plt.grid(b=1)
plt.show()
```
## Explained variance
Explained variance for $a_i$ can be calculated as the following ratio:
$$
\frac{\lambda_{i}}{\sum_{d=1}^{D}\lambda_{d}}
$$
where $\lambda_{i}$ is an eigenvalue.
```python
# Fit PCA
pca = PCA(n_components=20)
pca.fit(X_train)
# Take all eigenvalues (sorted)
eigenvalues = pca.explained_variance_
```
```python
eigenvalues
```
```python
pca_components = np.arange(1, 21)
# Calculate explained variance
explained_variance = eigenvalues / eigenvalues.sum()
# Calculate cumulative explained variance
cumsum_explained_variance = np.cumsum(explained_variance)
```
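The same ratio is exposed directly by the fitted `PCA` object as `explained_variance_ratio_`; a quick sanity check (the two agree here because all 20 components were kept):
```python
print(np.allclose(explained_variance, pca.explained_variance_ratio_))  # expected: True
```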
```python
plt.figure(figsize=(18, 6))
plt.subplot(1, 2, 1)
plt.plot(pca_components, explained_variance, color='b', linewidth=3)
plt.xticks(pca_components, size=14)
plt.xlabel("N components of PCA", size=14)
plt.yticks(size=14)
plt.ylabel("Explained Variance", size=14)
plt.title("PCA Explained Variance", size=14)
plt.grid(b=1)
plt.subplot(1, 2, 2)
plt.plot(pca_components, cumsum_explained_variance, color='b', linewidth=3)
plt.xticks(pca_components, size=14)
plt.xlabel("N components of PCA", size=14)
plt.yticks(size=14)
plt.ylabel("Explained Variance", size=14)
plt.title("PCA Cumulative Explained Variance", size=14)
plt.grid(b=1)
plt.show()
```
----
----
----
# Part 4: Anomalies Detection
`Scikit-learn` has several anomalies detection algorithms. Their description and examples are provided on [this page](https://scikit-learn.org/stable/modules/outlier_detection.html):
<center></center>
## Data Preparation
```python
from sklearn import datasets
# Define key constants
n_samples = 500
outliers_fraction = 0.15
n_outliers = int(outliers_fraction * n_samples)
# Generate sample
X = 4 * (datasets.make_moons(n_samples=n_samples, noise=.05, random_state=0)[0] - np.array([0.5, 0.25]))
# Add outliers
X = np.concatenate([X, np.random.RandomState(42).uniform(low=-6, high=6, size=(n_outliers, 2))], axis=0)
```
```python
def plot_anomalies(X, ano):
if ano is not None:
try:
y = ano.predict(X)
except:
y = ano.fit_predict(X)
else:
y = None
# Create an figure with a custom size
# plt.figure(figsize=(9, 6))
if y is not None:
for cluster_label in np.unique(y):
if cluster_label == -1:
suff = " (Anomaly)"
else:
suff = " (Normal)"
# Plot all objects with y == i (class 0)
plt.scatter(X[y == cluster_label, 0], # selects all objects with y == i and the 1st column of X
X[y == cluster_label, 1], # selects all objects with y == i and the 2nd column of X
label=str(cluster_label)+suff) # label for the plot legend
else:
plt.scatter(X[:, 0], X[:, 1], label='samples')
# Plot decision doundary
if ano is not None:
xx, yy = np.meshgrid(np.linspace(-7, 7, 150), np.linspace(-7, 7, 150))
try:
Z = ano.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.contour(xx, yy, Z, levels=[0], linewidths=2, colors='black')
except:
pass
plt.xlabel('X1', size=12) # set up X-axis label
plt.ylabel('X2', size=12) # set up Y-axis label
plt.xticks(size=12)
plt.yticks(size=12)
plt.legend(loc='best', fontsize=12) # create the plot legend and set up it position
plt.grid(b=1) # create grid on the plot
plt.show() # display the plot
```
```python
plot_anomalies(X, ano=None)
```
## Run Anomalies Detection
```python
# IsolationForest
from sklearn.ensemble import IsolationForest
# Run anomalies detection algorithm
ano = IsolationForest(behaviour='new', contamination=outliers_fraction, random_state=42)
ano.fit(X)
# Detect anomalies
y_pred = ano.predict(X)
# Plot detected anomalies
plot_anomalies(X, ano)
```
```python
# EllipticEnvelope (Robust covariance)
from sklearn.covariance import EllipticEnvelope
ano = EllipticEnvelope(contamination=outliers_fraction)
ano.fit(X)
# Detect anomalies
y_pred = ano.predict(X)
# Plot detected anomalies
plot_anomalies(X, ano)
```
```python
# DBSCAN
# Run anomalies detection algorithm
ano = cluster.DBSCAN(eps=0.5, min_samples=5)
ano.fit(X)
# Detect anomalies
y_pred = ano.fit_predict(X)
# Plot detected anomalies
plot_anomalies(X, ano)
```
Task:
* Change `contamination` parameter. How can you explain the results?
```python
```
| 6af33e865d3905e223044bf9f1ab018e52b4487f | 43,211 | ipynb | Jupyter Notebook | notebooks/day-2/Clustering/Clustering.ipynb | Meshreki/mlhep2019 | 7934173666267ee21faa88d939e26cafe8c5323e | [
"MIT"
]
| null | null | null | notebooks/day-2/Clustering/Clustering.ipynb | Meshreki/mlhep2019 | 7934173666267ee21faa88d939e26cafe8c5323e | [
"MIT"
]
| null | null | null | notebooks/day-2/Clustering/Clustering.ipynb | Meshreki/mlhep2019 | 7934173666267ee21faa88d939e26cafe8c5323e | [
"MIT"
]
| null | null | null | 27.986399 | 391 | 0.514522 | true | 7,167 | Qwen/Qwen-72B | 1. YES
2. YES | 0.870597 | 0.831143 | 0.723591 | __label__eng_Latn | 0.71935 | 0.519476 |
# Lagrangian mechanics
> Marcos Duarte
> [Laboratory of Biomechanics and Motor Control](http://pesquisa.ufabc.edu.br/bmclab)
> Federal University of ABC, Brazil
<center><div style="background-color:#f2f2f2;border:1px solid black;width:72%;padding:5px 10px 5px 10px;text-align:left;">
<i>"The theoretical development of the laws of motion of bodies is a problem of such interest and importance, that it has engaged the attention of all the most eminent mathematicians, since the invention of dynamics as a mathematical science by <b>Galileo</b>, and especially since the wonderful extension which was given to that science by <b>Newton</b>. Among the successors of those illustrious men, <b>Lagrange</b> has perhaps done more than any other analyst, to give extent and harmony to such deductive researches, by showing that the most varied consequences respecting the motions of systems of bodies may be derived from one radical formula; the beauty of the method so suiting the dignity of the results, as to make of his great work a kind of scientific poem."</i> <b>Hamilton</b> (1834)
</div></center>
<h1>Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Introduction" data-toc-modified-id="Introduction-1"><span class="toc-item-num">1 </span>Introduction</a></span></li><li><span><a href="#Generalized-coordinates" data-toc-modified-id="Generalized-coordinates-2"><span class="toc-item-num">2 </span>Generalized coordinates</a></span></li><li><span><a href="#Euler–Lagrange-equations" data-toc-modified-id="Euler–Lagrange-equations-3"><span class="toc-item-num">3 </span>Euler–Lagrange equations</a></span><ul class="toc-item"><li><span><a href="#Steps-to-deduce-the-Euler-Lagrange-equations" data-toc-modified-id="Steps-to-deduce-the-Euler-Lagrange-equations-3.1"><span class="toc-item-num">3.1 </span>Steps to deduce the Euler-Lagrange equations</a></span></li><li><span><a href="#Example:-Particle-moving-under-the-influence-of-a-conservative-force" data-toc-modified-id="Example:-Particle-moving-under-the-influence-of-a-conservative-force-3.2"><span class="toc-item-num">3.2 </span>Example: Particle moving under the influence of a conservative force</a></span></li><li><span><a href="#Example:-Ideal-mass-spring-system" data-toc-modified-id="Example:-Ideal-mass-spring-system-3.3"><span class="toc-item-num">3.3 </span>Example: Ideal mass-spring system</a></span></li><li><span><a href="#Example:-Simple-pendulum-under-the-influence-of-gravity" data-toc-modified-id="Example:-Simple-pendulum-under-the-influence-of-gravity-3.4"><span class="toc-item-num">3.4 </span>Example: Simple pendulum under the influence of gravity</a></span><ul class="toc-item"><li><span><a href="#Numerical-solution-of-the-equation-of-motion-for-the-simple-pendulum" data-toc-modified-id="Numerical-solution-of-the-equation-of-motion-for-the-simple-pendulum-3.4.1"><span class="toc-item-num">3.4.1 </span>Numerical solution of the equation of motion for the simple pendulum</a></span></li></ul></li><li><span><a href="#Python-code-to-automate-the-calculation-of-the-Euler–Lagrange-equation" data-toc-modified-id="Python-code-to-automate-the-calculation-of-the-Euler–Lagrange-equation-3.5"><span class="toc-item-num">3.5 </span>Python code to automate the calculation of the Euler–Lagrange equation</a></span></li><li><span><a href="#Example:-Double-pendulum-under-the-influence-of-gravity" data-toc-modified-id="Example:-Double-pendulum-under-the-influence-of-gravity-3.6"><span class="toc-item-num">3.6 </span>Example: Double pendulum under the influence of gravity</a></span><ul class="toc-item"><li><span><a href="#Numerical-solution-of-the-equation-of-motion-for-the-double-pendulum" data-toc-modified-id="Numerical-solution-of-the-equation-of-motion-for-the-double-pendulum-3.6.1"><span class="toc-item-num">3.6.1 </span>Numerical solution of the equation of motion for the double pendulum</a></span></li></ul></li><li><span><a href="#Example:-Double-compound-pendulum-under-the-influence-of-gravity" data-toc-modified-id="Example:-Double-compound-pendulum-under-the-influence-of-gravity-3.7"><span class="toc-item-num">3.7 </span>Example: Double compound pendulum under the influence of gravity</a></span></li><li><span><a href="#Example:-Double-compound-pendulum-in-joint-space" data-toc-modified-id="Example:-Double-compound-pendulum-in-joint-space-3.8"><span class="toc-item-num">3.8 </span>Example: Double compound pendulum in joint space</a></span></li><li><span><a href="#Example:-Mass-attached-to-a-spring-on-a-horizontal-plane" data-toc-modified-id="Example:-Mass-attached-to-a-spring-on-a-horizontal-plane-3.9"><span class="toc-item-num">3.9 
</span>Example: Mass attached to a spring on a horizontal plane</a></span></li></ul></li><li><span><a href="#Generalized-forces" data-toc-modified-id="Generalized-forces-4"><span class="toc-item-num">4 </span>Generalized forces</a></span><ul class="toc-item"><li><span><a href="#Example:-Simple-pendulum-on-moving-cart" data-toc-modified-id="Example:-Simple-pendulum-on-moving-cart-4.1"><span class="toc-item-num">4.1 </span>Example: Simple pendulum on moving cart</a></span></li><li><span><a href="#Example:-Two-masses-and-two-springs-under-the-influence-of-gravity" data-toc-modified-id="Example:-Two-masses-and-two-springs-under-the-influence-of-gravity-4.2"><span class="toc-item-num">4.2 </span>Example: Two masses and two springs under the influence of gravity</a></span></li><li><span><a href="#Example:-Mass-spring-damper-system-with-gravity" data-toc-modified-id="Example:-Mass-spring-damper-system-with-gravity-4.3"><span class="toc-item-num">4.3 </span>Example: Mass-spring-damper system with gravity</a></span><ul class="toc-item"><li><span><a href="#Numerical-solution-of-the-equation-of-motion-for-mass-spring-damper-system" data-toc-modified-id="Numerical-solution-of-the-equation-of-motion-for-mass-spring-damper-system-4.3.1"><span class="toc-item-num">4.3.1 </span>Numerical solution of the equation of motion for mass-spring-damper system</a></span></li></ul></li></ul></li><li><span><a href="#Forces-of-constraint" data-toc-modified-id="Forces-of-constraint-5"><span class="toc-item-num">5 </span>Forces of constraint</a></span><ul class="toc-item"><li><span><a href="#Example:-Force-of-constraint-in-a-simple-pendulum-under-the-influence-of-gravity" data-toc-modified-id="Example:-Force-of-constraint-in-a-simple-pendulum-under-the-influence-of-gravity-5.1"><span class="toc-item-num">5.1 </span>Example: Force of constraint in a simple pendulum under the influence of gravity</a></span></li></ul></li><li><span><a href="#Lagrangian-formalism-applied-to-non-mechanical-systems" data-toc-modified-id="Lagrangian-formalism-applied-to-non-mechanical-systems-6"><span class="toc-item-num">6 </span>Lagrangian formalism applied to non-mechanical systems</a></span><ul class="toc-item"><li><span><a href="#Example:-Lagrangian-formalism-for-RLC-eletrical-circuits" data-toc-modified-id="Example:-Lagrangian-formalism-for-RLC-eletrical-circuits-6.1"><span class="toc-item-num">6.1 </span>Example: Lagrangian formalism for RLC eletrical circuits</a></span></li></ul></li><li><span><a href="#Considerations-on-the-Lagrangian-mechanics" data-toc-modified-id="Considerations-on-the-Lagrangian-mechanics-7"><span class="toc-item-num">7 </span>Considerations on the Lagrangian mechanics</a></span></li><li><span><a href="#Further-reading" data-toc-modified-id="Further-reading-8"><span class="toc-item-num">8 </span>Further reading</a></span></li><li><span><a href="#Video-lectures-on-the-internet" data-toc-modified-id="Video-lectures-on-the-internet-9"><span class="toc-item-num">9 </span>Video lectures on the internet</a></span></li><li><span><a href="#Problems" data-toc-modified-id="Problems-10"><span class="toc-item-num">10 </span>Problems</a></span></li><li><span><a href="#References" data-toc-modified-id="References-11"><span class="toc-item-num">11 </span>References</a></span></li></ul></div>
```python
# import necessary libraries and configure environment
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_context('notebook', font_scale=1.2, rc={"lines.linewidth": 2})
# import Sympy functions
import sympy as sym
from sympy import Symbol, symbols, cos, sin, Matrix, simplify, Eq, latex, expand
from sympy.solvers.solveset import nonlinsolve
from sympy.physics.mechanics import dynamicsymbols, mlatex, init_vprinting
init_vprinting()
from IPython.display import display, Math
```
## Introduction
We know that some problems in dynamics can be solved using the principle of conservation of mechanical energy: the total mechanical energy of a system (the sum of potential and kinetic energies) is constant when only conservative forces are present in the system. Such an approach is one kind of energy method, see for example, pages 495-512 in Ruina and Pratap (2019).
Lagrangian mechanics (after [Joseph-Louis Lagrange](https://en.wikipedia.org/wiki/Joseph-Louis_Lagrange)) can be seen as another kind of energy method, but a much more general one, to the extent that it is an alternative to Newtonian mechanics.
The Lagrangian mechanics is a formulation of classical mechanics where the equations of motion are obtained from the kinetic and potential energy of the system (scalar quantities) represented in generalized coordinates instead of using Newton's laws of motion to deduce the equations of motion from the forces on the system (vector quantities) represented in Cartesian coordinates.
## Generalized coordinates
The direct application of Newton's laws to mechanical systems results in a set of equations of motion in terms of Cartesian coordinates of each of the particles that make up the system. In many cases, this is not the most convenient coordinate system to solve the problem or describe the movement of the system. For example, for a serial chain of rigid links, such as a member of the human body or from a robot manipulator, it may be simpler to describe the positions of each link by the angles between links.
Coordinate systems such as angles of a chain of links are referred to as [generalized coordinates](https://en.wikipedia.org/wiki/Generalized_coordinates). Generalized coordinates uniquely specify the positions of the particles in a system. Although there may be several sets of generalized coordinates to describe a system, usually a judicious choice of generalized coordinates provides the minimum number of independent coordinates that define the configuration of a system (which is the number of <a href="https://en.wikipedia.org/wiki/Degrees_of_freedom_(mechanics)">degrees of freedom</a> of the system), making the problem simpler to solve. In this case, when the number of generalized coordinates equals the number of degrees of freedom, the system is referred to as a holonomic system. In a non-holonomic system, the number of generalized coordinates necessary to describe the system depends on the path taken by the system.
Being a little more technical, according to [Wikipedia](https://en.wikipedia.org/wiki/Configuration_space_(physics)):
"In classical mechanics, the parameters that define the configuration of a system are called generalized coordinates, and the vector space defined by these coordinates is called the configuration space of the physical system. It is often the case that these parameters satisfy mathematical constraints, such that the set of actual configurations of the system is a manifold in the space of generalized coordinates. This manifold is called the configuration manifold of the system."
In problems where it is desired to use generalized coordinates, one can write Newton's equations of motion in terms of Cartesian coordinates and then transform them into generalized coordinates. However, it would be desirable and convenient to have a general method that would directly establish the equations of motion in terms of a set of convenient generalized coordinates. In addition, general methods for writing, and perhaps solving, the equations of motion in terms of any coordinate system would also be desirable. The [Lagrangian mechanics](https://en.wikipedia.org/wiki/Lagrangian_mechanics) is such a method.
## Euler–Lagrange equations
See [this notebook](http://nbviewer.jupyter.org/github/BMClab/bmc/blob/master/notebooks/lagrangian_mechanics_generalized.ipynb) for a deduction of the Lagrange's equation in generalized coordinates.
Consider a system whose configuration (positions) can be described by a set of $N$ generalized coordinates $q_i\,(i=1,\dotsc,N)$.
Let's define the Lagrange or Lagrangian function $\mathcal{L}$ as the difference between the total kinetic energy $T$ and the total potential energy $V$ of the system in terms of the generalized coordinates as:
<p>
<span class="notranslate">
\begin{equation}
\mathcal{L}(t,q,\dot{q}) = T(\dot{q}_1(t),\dotsc,\dot{q}_N(t)) - V(q_1(t),\dotsc,q_N(t))
\label{}
\end{equation}
</span>
where the total potential energy is only due to conservative forces, that is, forces in which the total work done to move the system between two points is independent of the path taken.
The Euler–Lagrange equations (or Lagrange's equations of the second kind) of the system are (omitting the functions' dependencies for the sake of clarity):
<p>
<span class="notranslate">
\begin{equation}
\frac{\mathrm d }{\mathrm d t}\left( {\frac{\partial \mathcal{L}}{\partial \dot{q}_i }}
\right)-\frac{\partial \mathcal{L}}{\partial q_i } = Q_{NCi} \quad i=1,\dotsc,N
\label{}
\end{equation}
</span>
where $Q_{NCi}$ are the generalized forces due to non-conservative forces acting on the system, i.e., any forces that cannot be expressed in terms of a potential.
Once all the derivatives of the Lagrangian function are calculated and substituted in the equations above, the result is the equation of motion (EOM) for each generalized coordinate. There will be $N$ equations for a system with $N$ generalized coordinates.
### Steps to deduce the Euler-Lagrange equations
1. Model the problem. Define the number of degrees of freedom. Carefully select the corresponding generalized coordinates to describe the system;
2. Calculate the total kinetic and total potential energies of the system. Calculate the Lagrangian;
3. Calculate the generalized forces for each generalized coordinate;
4. For each generalized coordinate, calculate the three derivatives present on the left side of the Euler-Lagrange equation;
5. For each generalized coordinate, substitute the result of these three derivatives in the left side and the corresponding generalized forces in the right side of the Euler-Lagrange equation.
The EOM's, one for each generalized coordinate, are the result of the last step.
### Example: Particle moving under the influence of a conservative force
Let's deduce the EOM of a particle with mass $m$ moving in the three-dimensional space under the influence of a [conservative force](https://en.wikipedia.org/wiki/Conservative_force).
The model is the particle moving in 3D space and there is no generalized force (non-conservative force); the particle has three degrees of freedom and we need three generalized coordinates, which can be $(x, y, z)$, where $y$ is vertical, in a Cartesian frame of reference.
The Lagrangian $(\mathcal{L} = T - V)$ of the particle is:
<p>
<span class="notranslate">
\begin{equation}
\mathcal{L} = \frac{1}{2}m(\dot x^2(t) + \dot y^2(t) + \dot z^2(t)) - V(x(t),y(t),z(t))
\label{}
\end{equation}
</span>
The equations of motion for the particle are found by applying the Euler–Lagrange equation for each coordinate.
For the $x$ coordinate:
<p>
<span class="notranslate">
\begin{equation}
\frac{\mathrm d }{\mathrm d t}\left( {\frac{\partial \mathcal{L}}{\partial \dot{x}}}
\right) - \frac{\partial \mathcal{L}}{\partial x } = 0
\label{}
\end{equation}
</span>
And the derivatives are:
<p>
<span class="notranslate">
\begin{equation} \begin{array}{rcl}
&\dfrac{\partial \mathcal{L}}{\partial x} &=& -\dfrac{\partial V}{\partial x} \\
&\dfrac{\partial \mathcal{L}}{\partial \dot{x}} &=& m\dot{x} \\
&\dfrac{\mathrm d }{\mathrm d t}\left( {\dfrac{\partial \mathcal{L}}{\partial \dot{x}}} \right) &=& m\ddot{x}
\end{array}
\label{}
\end{equation}
</span>
Finally, the EOM is:
<p>
<span class="notranslate">
\begin{equation}\begin{array}{l}
m\ddot{x} + \dfrac{\partial V}{\partial x} = 0 \quad \rightarrow \\
m\ddot{x} = -\dfrac{\partial V}{\partial x}
\end{array}
\label{}
\end{equation}
</span>
and the same procedure applies to the $y$ and $z$ coordinates.
The equation above is Newton's second law of motion.
For instance, if the conservative force is due to the gravitational field near Earth's surface $(V=mgy$, with $y$ the vertical direction$)$, the Euler–Lagrange equations (the EOM's) are:
<p>
<span class="notranslate">
\begin{equation} \begin{array}{rcl}
m\ddot{x} &=& -\dfrac{\partial (0)}{\partial x} &=& 0 \\
m\ddot{y} &=& -\dfrac{\partial (mgy)}{\partial y} &=& -mg \\
m\ddot{z} &=& -\dfrac{\partial (0)}{\partial z} &=& 0
\end{array}
\label{}
\end{equation}
</span>
### Example: Ideal mass-spring system
<figure></figure>
Consider a system with a mass $m$ attached to an ideal spring (massless, length $\ell_0$, and spring constant $k$) moving in the horizontal direction $x$. A force is momentarily applied to the mass and then the system is left unperturbed.
Let's deduce the EOM of this system.
The system can be modeled as a particle attached to a spring moving in the direction $x$, the only generalized coordinate needed (with the origin of the Cartesian reference frame at the wall where the spring is attached), and there is no generalized force.
The Lagrangian $(\mathcal{L} = T - V)$ of the system is:
<p>
<span class="notranslate">
\begin{equation}
\mathcal{L} = \frac{1}{2}m\dot x^2 - \frac{1}{2}k(x-\ell_0)^2
\label{}
\end{equation}
</span>
And the derivatives are:
<p>
<span class="notranslate">
\begin{equation} \begin{array}{rcl}
&\dfrac{\partial \mathcal{L}}{\partial x} &=& -k(x-\ell_0) \\
&\dfrac{\partial \mathcal{L}}{\partial \dot{x}} &=& m\dot{x} \\
&\dfrac{\mathrm d }{\mathrm d t}\left( {\dfrac{\partial \mathcal{L}}{\partial \dot{x}}} \right) &=& m\ddot{x}
\end{array}
\end{equation}
</span>
Finally, the Euler–Lagrange equation (the EOM) is:
<p>
<span class="notranslate">
\begin{equation}
m\ddot{x} + k(x-\ell_0) = 0
\label{}
\end{equation}
</span>
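This result can be checked with Sympy; a small sketch (using the same conventions as the code later in this notebook, with $\ell_0$ written as `ell_0`):
```python
import sympy as sym
from sympy.physics.mechanics import dynamicsymbols

t = sym.Symbol('t')
m, k, ell0 = sym.symbols('m, k, ell_0', positive=True)
x = dynamicsymbols('x')

L = m*x.diff(t)**2/2 - k*(x - ell0)**2/2        # Lagrangian T - V
EOM = L.diff(x.diff(t)).diff(t) - L.diff(x)     # Euler-Lagrange left-hand side
sym.simplify(EOM)                               # k*(x - ell_0) + m*x''
```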
### Example: Simple pendulum under the influence of gravity
<figure></figure>
Consider a pendulum with a massless rod of length $d$ and a mass $m$ at the extremity swinging in a plane forming the angle $\theta$ with the vertical.
Let's deduce the EOM of this system.
The model is a particle oscillating as a pendulum under a constant gravitational force $-mg$.
Although the pendulum moves in a plane, it has only one degree of freedom, which can be described by the angle $\theta$, the generalized coordinate. Let's adopt the origin of the reference frame at the point of the pendulum suspension.
The kinetic energy of the system is:
<p>
<span class="notranslate">
\begin{equation}
T = \frac{1}{2}mv^2 = \frac{1}{2}m(\dot{x}^2+\dot{y}^2)
\end{equation}
</span>
where $\dot{x}$ and $\dot{y}$ are:
<p>
<span class="notranslate">
\begin{equation} \begin{array}{l}
x = d\sin(\theta) \\
y = -d\cos(\theta) \\
\dot{x} = d\cos(\theta)\dot{\theta} \\
\dot{y} = d\sin(\theta)\dot{\theta}
\end{array} \end{equation}
</span>
Consequently, the kinetic energy is:
<p>
<span class="notranslate">
\begin{equation}
T = \frac{1}{2}m\left((d\cos(\theta)\dot{\theta})^2 + (d\sin(\theta)\dot{\theta})^2\right) = \frac{1}{2}md^2\dot{\theta}^2
\end{equation}
</span>
And the potential energy of the system is:
<p>
<span class="notranslate">
\begin{equation}
V = -mgy = -mgd\cos\theta
\end{equation}
</span>
The Lagrangian function is:
<p>
<span class="notranslate">
\begin{equation}
\mathcal{L} = \frac{1}{2}md^2\dot\theta^2 + mgd\cos\theta
\end{equation}
</span>
And the derivatives are:
<p>
<span class="notranslate">
\begin{equation} \begin{array}{rcl}
&\dfrac{\partial \mathcal{L}}{\partial \theta} &=& -mgd\sin\theta \\
&\dfrac{\partial \mathcal{L}}{\partial \dot{\theta}} &=& md^2\dot{\theta} \\
&\dfrac{\mathrm d }{\mathrm d t}\left( {\dfrac{\partial \mathcal{L}}{\partial \dot{\theta}}} \right) &=& md^2\ddot{\theta}
\end{array} \end{equation}
</span>
Finally, the Euler–Lagrange equation (the EOM) is:
<p>
<span class="notranslate">
\begin{equation}
md^2\ddot\theta + mgd\sin\theta = 0
\end{equation}
</span>
Note that although the generalized coordinate of the system is $\theta$, we had to employ Cartesian coordinates at the beginning to derive expressions for the kinetic and potential energies. For kinetic energy, we could have used its equivalent definition for circular motion $(T=I\dot{\theta}^2/2=md^2\dot{\theta}^2/2)$, but for the potential energy there is no other way since the gravitational force acts in the vertical direction.
In cases like this, a fundamental aspect is to express the Cartesian coordinates in terms of the generalized coordinates.
#### Numerical solution of the equation of motion for the simple pendulum
A classical approach to solve analytically the EOM for the simple pendulum is to consider the motion for small angles where $\sin\theta \approx \theta$ and the differential equation is linearized to $d\ddot\theta + g\theta = 0$. This equation has an analytical solution of the type $\theta(t) = A \sin(\omega t + \phi)$, where $\omega = \sqrt{g/d}$ and $A$ and $\phi$ are constants related to the initial position and velocity.
For didactic purposes, let's solve numerically the differential equation for the pendulum using [Euler’s method](https://nbviewer.jupyter.org/github/demotu/BMC/blob/master/notebooks/OrdinaryDifferentialEquation.ipynb#Euler-method).
Remember that we have to:
1. Transform the second-order ODE into two coupled first-order ODEs,
2. Approximate the derivative of each variable by its discrete first order difference
3. Write an equation to calculate the variable in a recursive way, updating its value with an equation based on the first order difference.
We will also implement different variations of the Euler method: Forward (standard), Semi-implicit, and Semi-implicit variation (same results as Semi-implicit).
Implementing these steps in Python:
```python
def euler_method(T=10, y0=[0, 0], h=.01, method=2):
"""
First-order numerical procedure for solving ODE given initial condition.
Parameters:
T: total period (in s) of the numerical integration
y0: initial state [position, velocity]
h: step for the numerical integration
method: Euler method implementation, one of the following:
1: 'forward' (standard)
2: 'semi-implicit' (a.k.a., symplectic, Euler–Cromer)
3: 'semi-implicit variation' (same results as 'semi-implicit')
Two coupled first-order ODEs:
dydt = v
dvdt = a # calculate the expression for acceleration at each step
Two equations to update the values of the variables based on first-order difference:
y[i+1] = y[i] + h*v[i]
v[i+1] = v[i] + h*dvdt[i]
Returns arrays time, [position, velocity]
"""
N = int(np.ceil(T/h))
y = np.zeros((2, N))
y[:, 0] = y0
t = np.linspace(0, T, N, endpoint=False)
for i in range(N-1):
if method == 1: # forward (standard) Euler method
y[0, i+1] = y[0, i] + h*y[1, i]
y[1, i+1] = y[1, i] + h*dvdt(t[i], y[:, i])
elif method == 2: # semi-implicit Euler (Euler–Cromer) method
y[1, i+1] = y[1, i] + h*dvdt(t[i], y[:, i])
y[0, i+1] = y[0, i] + h*y[1, i+1]
elif method == 3: # variant of semi-implicit (equal results)
y[0, i+1] = y[0, i] + h*y[1, i]
y[1, i+1] = y[1, i] + h*dvdt(t[i], [y[0, i+1], y[1, i]])
else:
raise ValueError('Valid options for method are 1, 2, 3.')
return t, y
def dvdt(t, y):
"""
Returns dvdt at `t` given state `y`.
"""
d = 0.5 # length of the pendulum in m
g = 10 # acceleration of gravity in m/s2
return -g/d*np.sin(y[0])
def plot(t, y, labels):
"""
Plot data given in t, y, v with labels [title, ylabel@left, ylabel@right]
"""
fig, ax1 = plt.subplots(1, 1, figsize=(10, 4))
ax1.set_title(labels[0])
ax1.plot(t, y[0, :], 'b', label=' ')
ax1.set_xlabel('Time (s)')
ax1.set_ylabel(u'\u2014 ' + labels[1], color='b')
ax1.tick_params('y', colors='b')
ax2 = ax1.twinx()
ax2.plot(t, y[1, :], 'r-.', label=' ')
ax2.set_ylabel(u'\u2014 \u2027 ' + labels[2], color='r')
ax2.tick_params('y', colors='r')
plt.tight_layout()
plt.show()
```
```python
T, y0, h = 10, [45*np.pi/180, 0], .01
t, theta = euler_method(T, y0, h, method=2)
labels = ['Trajectory of simple pendulum under gravity',
'Angular position ($^o$)', 'Angular velocity ($^o/s$)']
plot(t, np.rad2deg(theta), labels)
```
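As a rough check, the numerical solution can be compared with the small-angle analytical solution $\theta(t)=\theta_0\cos(\omega t)$, $\omega=\sqrt{g/d}$ (a sketch; for $\theta_0=45^o$ the small-angle approximation is only qualitative, so a growing phase mismatch is expected):
```python
d, g = 0.5, 10                        # same values used inside dvdt
omega = np.sqrt(g/d)
theta_small = y0[0]*np.cos(omega*t)   # small-angle solution for zero initial velocity

plt.figure(figsize=(10, 4))
plt.plot(t, np.rad2deg(theta[0, :]), 'b', label='numerical (semi-implicit Euler)')
plt.plot(t, np.rad2deg(theta_small), 'g--', label='small-angle analytical')
plt.xlabel('Time (s)')
plt.ylabel('Angular position ($^o$)')
plt.legend()
plt.show()
```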
### Python code to automate the calculation of the Euler–Lagrange equation
The three derivatives in the Euler–Lagrange equations are first-order derivatives and, behind the scenes, we are using LaTeX to write the equations. Both tasks are boring and error-prone.
Let's write a function using the Sympy library to automate the calculation of the derivative terms in the Euler–Lagrange equations and display them nicely.
```python
# helping function
def printeq(lhs, rhs=None):
"""Rich display of Sympy expression as lhs = rhs."""
if rhs is None:
display(Math(r'{}'.format(lhs)))
else:
display(Math(r'{} = '.format(lhs) + mlatex(simplify(rhs, ratio=1.7))))
def lagrange_terms(L, q, show=True):
"""Calculate terms of Euler-Lagrange equations given the Lagrangian and q's.
"""
if not isinstance(q, list):
q = [q]
Lterms = []
if show:
s = '' if len(q) == 1 else 's'
printeq(r"\text{Terms of the Euler-Lagrange equation%s:}"%(s))
for qi in q:
dLdqi = simplify(L.diff(qi))
Lterms.append(dLdqi)
dLdqdi = simplify(L.diff(qi.diff(t)))
Lterms.append(dLdqdi)
dtdLdqdi = simplify(dLdqdi.diff(t))
Lterms.append(dtdLdqdi)
if show:
printeq(r'\text{For generalized coordinate}\;%s:'%latex(qi.func))
printeq(r'\quad\dfrac{\partial\mathcal{L}}{\partial %s}'%latex(qi.func), dLdqi)
printeq(r'\quad\dfrac{\partial\mathcal{L}}{\partial\dot{%s}}'%latex(qi.func), dLdqdi)
printeq(r'\quad\dfrac{\mathrm d}{\mathrm{dt}}\left({\dfrac{'+
r'\partial\mathcal{L}}{\partial\dot{%s}}}\right)'%latex(qi.func), dtdLdqdi)
return Lterms
def lagrange_eq(Lterms, Qnc=None):
"""Display Euler-Lagrange equation given the Lterms."""
s = '' if len(Lterms) == 3 else 's'
if Qnc is None:
Qnc = int(len(Lterms)/3) * [0]
printeq(r"\text{Euler-Lagrange equation%s (EOM):}"%(s))
for i in range(int(len(Lterms)/3)):
#display(Eq(simplify(Lterms[3*i+2]-Lterms[3*i]), Qnc[i], evaluate=False))
printeq(r'\quad ' + mlatex(simplify(Lterms[3*i+2]-Lterms[3*i])), Qnc[i])
def lagrange_eq_solve(Lterms, q, Qnc=None):
"""Display Euler-Lagrange equation given the Lterms."""
if not isinstance(q, list):
q = [q]
if Qnc is None:
Qnc = int(len(Lterms)/3) * [0]
system = [simplify(Lterms[3*i+2]-Lterms[3*i]-Qnc[i]) for i in range(len(q))]
qdds = [qi.diff(t, 2) for qi in q]
sol = nonlinsolve(system, qdds)
s = '' if len(Lterms) == 3 else 's'
printeq(r"\text{Euler-Lagrange equation%s (EOM):}"%(s))
if len(sol.args):
for i in range(int(len(Lterms)/3)):
display(Eq(qdds[i], simplify(sol.args[0][i]), evaluate=False))
else:
display(sol)
return sol
```
Let's recalculate the EOM of the simple pendulum using Sympy and the code for automation.
```python
# define variables
t = sym.Symbol('t')
m, d, g = sym.symbols('m, d, g', positive=True)
θ = dynamicsymbols('theta') # \theta<TAB>
```
Position and velocity of the simple pendulum under the influence of gravity:
```python
x, y = d*sin(θ), -d*cos(θ)
xd, yd = x.diff(t), y.diff(t)
printeq('x', x)
printeq('y', y)
printeq(r'\dot{x}', xd)
printeq(r'\dot{y}', yd)
```
$\displaystyle x = d \operatorname{sin}\left(\theta\right)$
$\displaystyle y = - d \operatorname{cos}\left(\theta\right)$
$\displaystyle \dot{x} = d \operatorname{cos}\left(\theta\right) \dot{\theta}$
$\displaystyle \dot{y} = d \operatorname{sin}\left(\theta\right) \dot{\theta}$
Kinetic and potential energies of the simple pendulum under the influence of gravity and the corresponding Lagrangian function:
```python
T = m*(xd**2 + yd**2)/2
V = m*g*y
printeq('T', T)
printeq('V', V)
L = T - V
printeq(r'\mathcal{L}', L)
```
$\displaystyle T = \frac{d^{2} m \dot{\theta}^{2}}{2}$
$\displaystyle V = - d g m \operatorname{cos}\left(\theta\right)$
$\displaystyle \mathcal{L} = \frac{d m \left(d \dot{\theta}^{2} + 2 g \operatorname{cos}\left(\theta\right)\right)}{2}$
And the automated part for the derivatives:
```python
Lterms = lagrange_terms(L, θ)
```
$\displaystyle \text{Terms of the Euler-Lagrange equation:}$
$\displaystyle \text{For generalized coordinate}\;\theta:$
$\displaystyle \quad\dfrac{\partial\mathcal{L}}{\partial \theta} = - d g m \operatorname{sin}\left(\theta\right)$
$\displaystyle \quad\dfrac{\partial\mathcal{L}}{\partial\dot{\theta}} = d^{2} m \dot{\theta}$
$\displaystyle \quad\dfrac{\mathrm d}{\mathrm{dt}}\left({\dfrac{\partial\mathcal{L}}{\partial\dot{\theta}}}\right) = d^{2} m \ddot{\theta}$
Finally, the EOM is:
```python
lagrange_eq(Lterms)
```
$\displaystyle \text{Euler-Lagrange equation (EOM):}$
$\displaystyle \quad d m \left(d \ddot{\theta} + g \operatorname{sin}\left(\theta\right)\right) = 0$
And rearranging:
```python
sol = lagrange_eq_solve(Lterms, q=θ, Qnc=None)
```
Same result as before.
### Example: Double pendulum under the influence of gravity
<figure></figure>
Consider a double pendulum (one pendulum attached to another) with massless rods of length $d_1$ and $d_2$ and masses $m_1$ and $m_2$ at the extremities of each rod swinging in a plane forming the angles $\theta_1$ and $\theta_2$ with vertical.
The system has two particles with two degrees of freedom; two adequate generalized coordinates to describe the system's configuration are the angles in relation to the vertical ($\theta_1, \theta_2$). Let's adopt the origin of the reference frame at the point of the upper pendulum suspension.
Let's use Sympy to solve this problem.
```python
# define variables
t = Symbol('t')
d1, d2, m1, m2, g = symbols('d1, d2, m1, m2, g', positive=True)
θ1, θ2 = dynamicsymbols('theta1, theta2')
```
The positions and velocities of masses $m_1$ and $m_2$ are:
```python
x1 = d1*sin(θ1)
y1 = -d1*cos(θ1)
x2 = d1*sin(θ1) + d2*sin(θ2)
y2 = -d1*cos(θ1) - d2*cos(θ2)
x1d, y1d = x1.diff(t), y1.diff(t)
x2d, y2d = x2.diff(t), y2.diff(t)
printeq(r'x_1', x1)
printeq(r'y_1', y1)
printeq(r'x_2', x2)
printeq(r'y_2', y2)
printeq(r'\dot{x}_1', x1d)
printeq(r'\dot{y}_1', y1d)
printeq(r'\dot{x}_2', x2d)
printeq(r'\dot{y}_2', y2d)
```
$\displaystyle x_1 = d_{1} \operatorname{sin}\left(\theta_{1}\right)$
$\displaystyle y_1 = - d_{1} \operatorname{cos}\left(\theta_{1}\right)$
$\displaystyle x_2 = d_{1} \operatorname{sin}\left(\theta_{1}\right) + d_{2} \operatorname{sin}\left(\theta_{2}\right)$
$\displaystyle y_2 = - d_{1} \operatorname{cos}\left(\theta_{1}\right) - d_{2} \operatorname{cos}\left(\theta_{2}\right)$
$\displaystyle \dot{x}_1 = d_{1} \operatorname{cos}\left(\theta_{1}\right) \dot{\theta}_{1}$
$\displaystyle \dot{y}_1 = d_{1} \operatorname{sin}\left(\theta_{1}\right) \dot{\theta}_{1}$
$\displaystyle \dot{x}_2 = d_{1} \operatorname{cos}\left(\theta_{1}\right) \dot{\theta}_{1} + d_{2} \operatorname{cos}\left(\theta_{2}\right) \dot{\theta}_{2}$
$\displaystyle \dot{y}_2 = d_{1} \operatorname{sin}\left(\theta_{1}\right) \dot{\theta}_{1} + d_{2} \operatorname{sin}\left(\theta_{2}\right) \dot{\theta}_{2}$
The kinetic and potential energies of the system are:
```python
T = m1*(x1d**2 + y1d**2)/2 + m2*(x2d**2 + y2d**2)/2
V = m1*g*y1 + m2*g*y2
printeq(r'T', T)
printeq(r'V', V)
```
$\displaystyle T = \frac{d_{1}^{2} m_{1} \dot{\theta}_{1}^{2}}{2} + \frac{m_{2} \left(d_{1}^{2} \dot{\theta}_{1}^{2} + 2 d_{1} d_{2} \operatorname{cos}\left(\theta_{1} - \theta_{2}\right) \dot{\theta}_{1} \dot{\theta}_{2} + d_{2}^{2} \dot{\theta}_{2}^{2}\right)}{2}$
$\displaystyle V = - g \left(d_{1} m_{1} \operatorname{cos}\left(\theta_{1}\right) + d_{1} m_{2} \operatorname{cos}\left(\theta_{1}\right) + d_{2} m_{2} \operatorname{cos}\left(\theta_{2}\right)\right)$
The Lagrangian function is:
```python
L = T - V
printeq(r'\mathcal{L}', L)
```
$\displaystyle \mathcal{L} = \frac{d_{1}^{2} m_{1} \dot{\theta}_{1}^{2}}{2} + d_{1} g m_{1} \operatorname{cos}\left(\theta_{1}\right) + g m_{2} \left(d_{1} \operatorname{cos}\left(\theta_{1}\right) + d_{2} \operatorname{cos}\left(\theta_{2}\right)\right) + \frac{m_{2} \left(d_{1}^{2} \dot{\theta}_{1}^{2} + 2 d_{1} d_{2} \operatorname{cos}\left(\theta_{1} - \theta_{2}\right) \dot{\theta}_{1} \dot{\theta}_{2} + d_{2}^{2} \dot{\theta}_{2}^{2}\right)}{2}$
And the derivatives are:
```python
Lterms = lagrange_terms(L, [θ1, θ2])
```
$\displaystyle \text{Terms of the Euler-Lagrange equations:}$
$\displaystyle \text{For generalized coordinate}\;\theta_{1}:$
$\displaystyle \quad\dfrac{\partial\mathcal{L}}{\partial \theta_{1}} = - d_{1} \left(d_{2} m_{2} \operatorname{sin}\left(\theta_{1} - \theta_{2}\right) \dot{\theta}_{1} \dot{\theta}_{2} + g m_{1} \operatorname{sin}\left(\theta_{1}\right) + g m_{2} \operatorname{sin}\left(\theta_{1}\right)\right)$
$\displaystyle \quad\dfrac{\partial\mathcal{L}}{\partial\dot{\theta_{1}}} = d_{1} \left(d_{1} m_{1} \dot{\theta}_{1} + m_{2} \left(d_{1} \dot{\theta}_{1} + d_{2} \operatorname{cos}\left(\theta_{1} - \theta_{2}\right) \dot{\theta}_{2}\right)\right)$
$\displaystyle \quad\dfrac{\mathrm d}{\mathrm{dt}}\left({\dfrac{\partial\mathcal{L}}{\partial\dot{\theta_{1}}}}\right) = d_{1} \left(d_{1} m_{1} \ddot{\theta}_{1} + m_{2} \left(d_{1} \ddot{\theta}_{1} - d_{2} \left(\dot{\theta}_{1} - \dot{\theta}_{2}\right) \operatorname{sin}\left(\theta_{1} - \theta_{2}\right) \dot{\theta}_{2} + d_{2} \operatorname{cos}\left(\theta_{1} - \theta_{2}\right) \ddot{\theta}_{2}\right)\right)$
$\displaystyle \text{For generalized coordinate}\;\theta_{2}:$
$\displaystyle \quad\dfrac{\partial\mathcal{L}}{\partial \theta_{2}} = d_{2} m_{2} \left(d_{1} \operatorname{sin}\left(\theta_{1} - \theta_{2}\right) \dot{\theta}_{1} \dot{\theta}_{2} - g \operatorname{sin}\left(\theta_{2}\right)\right)$
$\displaystyle \quad\dfrac{\partial\mathcal{L}}{\partial\dot{\theta_{2}}} = d_{2} m_{2} \left(d_{1} \operatorname{cos}\left(\theta_{1} - \theta_{2}\right) \dot{\theta}_{1} + d_{2} \dot{\theta}_{2}\right)$
$\displaystyle \quad\dfrac{\mathrm d}{\mathrm{dt}}\left({\dfrac{\partial\mathcal{L}}{\partial\dot{\theta_{2}}}}\right) = d_{2} m_{2} \left(- d_{1} \left(\dot{\theta}_{1} - \dot{\theta}_{2}\right) \operatorname{sin}\left(\theta_{1} - \theta_{2}\right) \dot{\theta}_{1} + d_{1} \operatorname{cos}\left(\theta_{1} - \theta_{2}\right) \ddot{\theta}_{1} + d_{2} \ddot{\theta}_{2}\right)$
Finally, the EOM are:
```python
lagrange_eq(Lterms)
```
$\displaystyle \text{Euler-Lagrange equations (EOM):}$
$\displaystyle \quad d_{1} \left(d_{1} m_{1} \ddot{\theta}_{1} + d_{1} m_{2} \ddot{\theta}_{1} + d_{2} m_{2} \operatorname{sin}\left(\theta_{1} - \theta_{2}\right) \dot{\theta}_{2}^{2} + d_{2} m_{2} \operatorname{cos}\left(\theta_{1} - \theta_{2}\right) \ddot{\theta}_{2} + g m_{1} \operatorname{sin}\left(\theta_{1}\right) + g m_{2} \operatorname{sin}\left(\theta_{1}\right)\right) = 0$
$\displaystyle \quad d_{2} m_{2} \left(- d_{1} \operatorname{sin}\left(\theta_{1} - \theta_{2}\right) \dot{\theta}_{1}^{2} + d_{1} \operatorname{cos}\left(\theta_{1} - \theta_{2}\right) \ddot{\theta}_{1} + d_{2} \ddot{\theta}_{2} + g \operatorname{sin}\left(\theta_{2}\right)\right) = 0$
The EOMs form a system of two coupled equations: $\theta_1$ and $\theta_2$ appear in both equations.
The motion of a double pendulum is very interesting; most of the time it presents chaotic behavior.
#### Numerical solution of the equation of motion for the double pendulum
The analytical solution is infeasible to deduce. For the numerical solution, we first have to rearrange the equations to find separate expressions for $\ddot{\theta}_1$ and $\ddot{\theta}_2$ (i.e., solve the system of equations algebraically for the angular accelerations).
Using Sympy, here are the two expressions:
```python
sol = lagrange_eq_solve(Lterms, q=[θ1, θ2], Qnc=None)
```
In order to solve the ODEs for the double pendulum numerically, we have to transform each second-order equation above into two first-order ODEs. We should avoid the Euler method because of its non-negligible integration error in this case; more accurate methods such as [Runge-Kutta](https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods) should be employed. See such a solution in [https://www.myphysicslab.com/pendulum/double-pendulum-en.html](https://www.myphysicslab.com/pendulum/double-pendulum-en.html).
We can use Sympy to transform the symbolic equations into Numpy functions that can be used for the numerical solution. Here is the code for that:
```python
θ1dd_fun = sym.lambdify((g, m1, d1, θ1, θ1.diff(t), m2, d2, θ2, θ2.diff(t)), sol.args[0][0], 'numpy')
θ2dd_fun = sym.lambdify((g, m1, d1, θ1, θ1.diff(t), m2, d2, θ2, θ2.diff(t)), sol.args[0][1], 'numpy')
```
The reader is invited to write the code for the numerical simulation.
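For reference, a minimal sketch of such a simulation is shown below; it assumes the functions `θ1dd_fun` and `θ2dd_fun` created above and uses example (hypothetical) parameter values:

```python
from scipy.integrate import solve_ivp

def dydt(t, y, g, m1, d1, m2, d2):
    """Derivatives of the state y = [θ1, dθ1/dt, θ2, dθ2/dt]."""
    θ1_, ω1_, θ2_, ω2_ = y
    return [ω1_,
            θ1dd_fun(g, m1, d1, θ1_, ω1_, m2, d2, θ2_, ω2_),
            ω2_,
            θ2dd_fun(g, m1, d1, θ1_, ω1_, m2, d2, θ2_, ω2_)]

pars = (10, 1, 0.5, 1, 0.5)          # g, m1, d1, m2, d2 (example values)
s0 = [np.pi/2, 0, np.pi/2, 0]        # initial state [θ1, dθ1/dt, θ2, dθ2/dt]
ts = np.linspace(0, 10, 1001)
out = solve_ivp(dydt, (ts[0], ts[-1]), s0, t_eval=ts, args=pars, rtol=1e-8)
```

The two angles are then available in `out.y[0]` and `out.y[2]` for plotting or further analysis.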
### Example: Double compound pendulum under the influence of gravity
<figure></figure>
Consider the double compound pendulum (or physical pendulum) shown on the right with length $d$ and mass $m$ of each rod swinging in a plane forming the angles $\theta_1$ and $\theta_2$ with vertical and $g=10 m/s^2$.
The system has two degrees of freedom and we need two generalized coordinates ($\theta_1, \theta_2$) to describe the system's configuration.
Let's use the Lagrangian mechanics to derive the equations of motion for each pendulum.
To calculate the potential and kinetic energy of the system, we will need to calculate the position and velocity of each pendulum. Now each pendulum is a rod with distributed mass and we will have to calculate the moment of rotational inertia of the rod. In this case, the kinetic energy of each pendulum will be given as the kinetic energy due to rotation of the pendulum plus the kinetic energy due to the speed of the center of mass of the pendulum, such that the total kinetic energy of the system is:
\begin{equation}\begin{array}{rcl}
T = \overbrace{\underbrace{\,\frac{1}{2}I_{cm}\dot\theta_1^2\,}_{\text{rotation}} + \underbrace{\frac{1}{2}m(\dot x_{1,cm}^2 + \dot y_{1,cm}^2)}_{\text{translation}}}^{\text{pendulum 1}} + \overbrace{\underbrace{\,\frac{1}{2}I_{cm}\dot\theta_2^2\,}_{\text{rotation}} + \underbrace{\frac{1}{2}m(\dot x_{2,cm}^2 + \dot y_{2,cm}^2)}_{\text{translation}}}^{\text{pendulum 2}}
\end{array}\end{equation}
And the potential energy of the system is:
\begin{equation}\begin{array}{rcl}
V = mg\big(y_{1,cm} + y_{2,cm}\big)
\end{array}\end{equation}
Let's use Sympy once again.
The position and velocity of the center of mass of the rods $1$ and $2$ are:
```python
d, m, g = symbols('d, m, g', positive=True)
θ1, θ2 = dynamicsymbols('theta1, theta2')
I = m*d*d/12 # rotational inertia of a rod
x1 = d*sin(θ1)/2
y1 = -d*cos(θ1)/2
x2 = d*sin(θ1) + d*sin(θ2)/2
y2 = -d*cos(θ1) - d*cos(θ2)/2
x1d, y1d = x1.diff(t), y1.diff(t)
x2d, y2d = x2.diff(t), y2.diff(t)
printeq(r'x_1', x1); printeq(r'y_1', y1)
printeq(r'x_2', x2); printeq(r'y_2', y2)
printeq(r'\dot{x}_1', x1d); printeq(r'\dot{y}_1', y1d)
printeq(r'\dot{x}_2', x2d); printeq(r'\dot{y}_2', y2d)
```
$\displaystyle x_1 = \frac{d \operatorname{sin}\left(\theta_{1}\right)}{2}$
$\displaystyle y_1 = - \frac{d \operatorname{cos}\left(\theta_{1}\right)}{2}$
$\displaystyle x_2 = \frac{d \left(2 \operatorname{sin}\left(\theta_{1}\right) + \operatorname{sin}\left(\theta_{2}\right)\right)}{2}$
$\displaystyle y_2 = - \frac{d \left(2 \operatorname{cos}\left(\theta_{1}\right) + \operatorname{cos}\left(\theta_{2}\right)\right)}{2}$
$\displaystyle \dot{x}_1 = \frac{d \operatorname{cos}\left(\theta_{1}\right) \dot{\theta}_{1}}{2}$
$\displaystyle \dot{y}_1 = \frac{d \operatorname{sin}\left(\theta_{1}\right) \dot{\theta}_{1}}{2}$
$\displaystyle \dot{x}_2 = \frac{d \left(2 \operatorname{cos}\left(\theta_{1}\right) \dot{\theta}_{1} + \operatorname{cos}\left(\theta_{2}\right) \dot{\theta}_{2}\right)}{2}$
$\displaystyle \dot{y}_2 = \frac{d \left(2 \operatorname{sin}\left(\theta_{1}\right) \dot{\theta}_{1} + \operatorname{sin}\left(\theta_{2}\right) \dot{\theta}_{2}\right)}{2}$
The kinetic and potential energies of the system are:
```python
T = I/2*(θ1.diff(t))**2 + m/2*(x1d**2+y1d**2) + I/2*(θ2.diff(t))**2 + m/2*(x2d**2+y2d**2)
V = m*g*y1 + m*g*y2
printeq('T', T)
printeq('V', V)
```
$\displaystyle T = \frac{d^{2} m \left(3 \operatorname{cos}\left(\theta_{1} - \theta_{2}\right) \dot{\theta}_{1} \dot{\theta}_{2} + 4 \dot{\theta}_{1}^{2} + \dot{\theta}_{2}^{2}\right)}{6}$
$\displaystyle V = - \frac{d g m \left(3 \operatorname{cos}\left(\theta_{1}\right) + \operatorname{cos}\left(\theta_{2}\right)\right)}{2}$
The Lagrangian function is:
```python
L = T - V
printeq(r'\mathcal{L}', L)
```
$\displaystyle \mathcal{L} = \frac{d m \left(3 d \operatorname{cos}\left(\theta_{1} - \theta_{2}\right) \dot{\theta}_{1} \dot{\theta}_{2} + 4 d \dot{\theta}_{1}^{2} + d \dot{\theta}_{2}^{2} + 9 g \operatorname{cos}\left(\theta_{1}\right) + 3 g \operatorname{cos}\left(\theta_{2}\right)\right)}{6}$
And the derivatives are:
```python
Lterms = lagrange_terms(L, [θ1, θ2])
```
$\displaystyle \text{Terms of the Euler-Lagrange equations:}$
$\displaystyle \text{For generalized coordinate}\;\theta_{1}:$
$\displaystyle \quad\dfrac{\partial\mathcal{L}}{\partial \theta_{1}} = - \frac{d m \left(d \operatorname{sin}\left(\theta_{1} - \theta_{2}\right) \dot{\theta}_{1} \dot{\theta}_{2} + 3 g \operatorname{sin}\left(\theta_{1}\right)\right)}{2}$
$\displaystyle \quad\dfrac{\partial\mathcal{L}}{\partial\dot{\theta_{1}}} = \frac{d^{2} m \left(3 \operatorname{cos}\left(\theta_{1} - \theta_{2}\right) \dot{\theta}_{2} + 8 \dot{\theta}_{1}\right)}{6}$
$\displaystyle \quad\dfrac{\mathrm d}{\mathrm{dt}}\left({\dfrac{\partial\mathcal{L}}{\partial\dot{\theta_{1}}}}\right) = \frac{d^{2} m \left(- 3 \left(\dot{\theta}_{1} - \dot{\theta}_{2}\right) \operatorname{sin}\left(\theta_{1} - \theta_{2}\right) \dot{\theta}_{2} + 3 \operatorname{cos}\left(\theta_{1} - \theta_{2}\right) \ddot{\theta}_{2} + 8 \ddot{\theta}_{1}\right)}{6}$
$\displaystyle \text{For generalized coordinate}\;\theta_{2}:$
$\displaystyle \quad\dfrac{\partial\mathcal{L}}{\partial \theta_{2}} = \frac{d m \left(d \operatorname{sin}\left(\theta_{1} - \theta_{2}\right) \dot{\theta}_{1} \dot{\theta}_{2} - g \operatorname{sin}\left(\theta_{2}\right)\right)}{2}$
$\displaystyle \quad\dfrac{\partial\mathcal{L}}{\partial\dot{\theta_{2}}} = \frac{d^{2} m \left(3 \operatorname{cos}\left(\theta_{1} - \theta_{2}\right) \dot{\theta}_{1} + 2 \dot{\theta}_{2}\right)}{6}$
$\displaystyle \quad\dfrac{\mathrm d}{\mathrm{dt}}\left({\dfrac{\partial\mathcal{L}}{\partial\dot{\theta_{2}}}}\right) = \frac{d^{2} m \left(- 3 \left(\dot{\theta}_{1} - \dot{\theta}_{2}\right) \operatorname{sin}\left(\theta_{1} - \theta_{2}\right) \dot{\theta}_{1} + 3 \operatorname{cos}\left(\theta_{1} - \theta_{2}\right) \ddot{\theta}_{1} + 2 \ddot{\theta}_{2}\right)}{6}$
Finally, the EOM are:
```python
lagrange_eq(Lterms)
```
$\displaystyle \text{Euler-Lagrange equations (EOM):}$
$\displaystyle \quad \frac{d m \left(3 d \operatorname{sin}\left(\theta_{1} - \theta_{2}\right) \dot{\theta}_{2}^{2} + 3 d \operatorname{cos}\left(\theta_{1} - \theta_{2}\right) \ddot{\theta}_{2} + 8 d \ddot{\theta}_{1} + 9 g \operatorname{sin}\left(\theta_{1}\right)\right)}{6} = 0$
$\displaystyle \quad \frac{d m \left(- 3 d \operatorname{sin}\left(\theta_{1} - \theta_{2}\right) \dot{\theta}_{1}^{2} + 3 d \operatorname{cos}\left(\theta_{1} - \theta_{2}\right) \ddot{\theta}_{1} + 2 d \ddot{\theta}_{2} + 3 g \operatorname{sin}\left(\theta_{2}\right)\right)}{6} = 0$
And rearranging:
```python
sol = lagrange_eq_solve(Lterms, q=[θ1, θ2], Qnc=None);
```
### Example: Double compound pendulum in joint space
Let's recalculate the former example but employing generalized coordinates in the joint space: $\alpha_1=\theta_1$ and $\alpha_2=\theta_2-\theta_1$.
```python
d, m, g = symbols('d, m, g', positive=True)
α1, α2 = dynamicsymbols('alpha1, alpha2')
I = m*d*d/12 # rotational inertia of a rod
x1 = d*sin(α1)/2
y1 = -d*cos(α1)/2
x2 = d*sin(α1) + d*sin(α1+α2)/2
y2 = -d*cos(α1) - d*cos(α1+α2)/2
x1d, y1d = x1.diff(t), y1.diff(t)
x2d, y2d = x2.diff(t), y2.diff(t)
printeq(r'x_1', x1); printeq(r'y_1', y1)
printeq(r'x_2', x2); printeq(r'y_2', y2)
printeq(r'\dot{x}_1', x1d); printeq(r'\dot{y}_1', y1d)
printeq(r'\dot{x}_2', x2d); printeq(r'\dot{y}_2', y2d)
```
$\displaystyle x_1 = \frac{d \operatorname{sin}\left(\alpha_{1}\right)}{2}$
$\displaystyle y_1 = - \frac{d \operatorname{cos}\left(\alpha_{1}\right)}{2}$
$\displaystyle x_2 = \frac{d \left(\operatorname{sin}\left(\alpha_{1} + \alpha_{2}\right) + 2 \operatorname{sin}\left(\alpha_{1}\right)\right)}{2}$
$\displaystyle y_2 = - \frac{d \left(\operatorname{cos}\left(\alpha_{1} + \alpha_{2}\right) + 2 \operatorname{cos}\left(\alpha_{1}\right)\right)}{2}$
$\displaystyle \dot{x}_1 = \frac{d \operatorname{cos}\left(\alpha_{1}\right) \dot{\alpha}_{1}}{2}$
$\displaystyle \dot{y}_1 = \frac{d \operatorname{sin}\left(\alpha_{1}\right) \dot{\alpha}_{1}}{2}$
$\displaystyle \dot{x}_2 = \frac{d \left(\left(\dot{\alpha}_{1} + \dot{\alpha}_{2}\right) \operatorname{cos}\left(\alpha_{1} + \alpha_{2}\right) + 2 \operatorname{cos}\left(\alpha_{1}\right) \dot{\alpha}_{1}\right)}{2}$
$\displaystyle \dot{y}_2 = \frac{d \left(\left(\dot{\alpha}_{1} + \dot{\alpha}_{2}\right) \operatorname{sin}\left(\alpha_{1} + \alpha_{2}\right) + 2 \operatorname{sin}\left(\alpha_{1}\right) \dot{\alpha}_{1}\right)}{2}$
```python
T = I/2*(α1.diff(t))**2 + m/2*(x1d**2+y1d**2) + I/2*(α1.diff(t)+α2.diff(t))**2 + m/2*(x2d**2+y2d**2)
V = m*g*y1 + m*g*y2
L = T - V
printeq('T', T)
printeq('V', V)
printeq(r'\mathcal{L}', L)
```
$\displaystyle T = \frac{d^{2} m \left(3 \operatorname{cos}\left(\alpha_{2}\right) \dot{\alpha}_{1}^{2} + 3 \operatorname{cos}\left(\alpha_{2}\right) \dot{\alpha}_{1} \dot{\alpha}_{2} + 5 \dot{\alpha}_{1}^{2} + 2 \dot{\alpha}_{1} \dot{\alpha}_{2} + \dot{\alpha}_{2}^{2}\right)}{6}$
$\displaystyle V = - \frac{d g m \left(\operatorname{cos}\left(\alpha_{1} + \alpha_{2}\right) + 3 \operatorname{cos}\left(\alpha_{1}\right)\right)}{2}$
$\displaystyle \mathcal{L} = \frac{d m \left(3 d \operatorname{cos}\left(\alpha_{2}\right) \dot{\alpha}_{1}^{2} + 3 d \operatorname{cos}\left(\alpha_{2}\right) \dot{\alpha}_{1} \dot{\alpha}_{2} + 5 d \dot{\alpha}_{1}^{2} + 2 d \dot{\alpha}_{1} \dot{\alpha}_{2} + d \dot{\alpha}_{2}^{2} + 3 g \operatorname{cos}\left(\alpha_{1} + \alpha_{2}\right) + 9 g \operatorname{cos}\left(\alpha_{1}\right)\right)}{6}$
```python
Lterms = lagrange_terms(L, [α1, α2])
```
$\displaystyle \text{Terms of the Euler-Lagrange equations:}$
$\displaystyle \text{For generalized coordinate}\;\alpha_{1}:$
$\displaystyle \quad\dfrac{\partial\mathcal{L}}{\partial \alpha_{1}} = - \frac{d g m \left(\operatorname{sin}\left(\alpha_{1} + \alpha_{2}\right) + 3 \operatorname{sin}\left(\alpha_{1}\right)\right)}{2}$
$\displaystyle \quad\dfrac{\partial\mathcal{L}}{\partial\dot{\alpha_{1}}} = \frac{d^{2} m \left(6 \operatorname{cos}\left(\alpha_{2}\right) \dot{\alpha}_{1} + 3 \operatorname{cos}\left(\alpha_{2}\right) \dot{\alpha}_{2} + 10 \dot{\alpha}_{1} + 2 \dot{\alpha}_{2}\right)}{6}$
$\displaystyle \quad\dfrac{\mathrm d}{\mathrm{dt}}\left({\dfrac{\partial\mathcal{L}}{\partial\dot{\alpha_{1}}}}\right) = \frac{d^{2} m \left(- 6 \operatorname{sin}\left(\alpha_{2}\right) \dot{\alpha}_{1} \dot{\alpha}_{2} - 3 \operatorname{sin}\left(\alpha_{2}\right) \dot{\alpha}_{2}^{2} + 6 \operatorname{cos}\left(\alpha_{2}\right) \ddot{\alpha}_{1} + 3 \operatorname{cos}\left(\alpha_{2}\right) \ddot{\alpha}_{2} + 10 \ddot{\alpha}_{1} + 2 \ddot{\alpha}_{2}\right)}{6}$
$\displaystyle \text{For generalized coordinate}\;\alpha_{2}:$
$\displaystyle \quad\dfrac{\partial\mathcal{L}}{\partial \alpha_{2}} = - \frac{d m \left(d \left(\dot{\alpha}_{1} + \dot{\alpha}_{2}\right) \operatorname{sin}\left(\alpha_{2}\right) \dot{\alpha}_{1} + g \operatorname{sin}\left(\alpha_{1} + \alpha_{2}\right)\right)}{2}$
$\displaystyle \quad\dfrac{\partial\mathcal{L}}{\partial\dot{\alpha_{2}}} = \frac{d^{2} m \left(3 \operatorname{cos}\left(\alpha_{2}\right) \dot{\alpha}_{1} + 2 \dot{\alpha}_{1} + 2 \dot{\alpha}_{2}\right)}{6}$
$\displaystyle \quad\dfrac{\mathrm d}{\mathrm{dt}}\left({\dfrac{\partial\mathcal{L}}{\partial\dot{\alpha_{2}}}}\right) = \frac{d^{2} m \left(- 3 \operatorname{sin}\left(\alpha_{2}\right) \dot{\alpha}_{1} \dot{\alpha}_{2} + 3 \operatorname{cos}\left(\alpha_{2}\right) \ddot{\alpha}_{1} + 2 \ddot{\alpha}_{1} + 2 \ddot{\alpha}_{2}\right)}{6}$
```python
lagrange_eq(Lterms)
```
$\displaystyle \text{Euler-Lagrange equations (EOM):}$
$\displaystyle \quad \frac{d m \left(d \left(- 6 \operatorname{sin}\left(\alpha_{2}\right) \dot{\alpha}_{1} \dot{\alpha}_{2} - 3 \operatorname{sin}\left(\alpha_{2}\right) \dot{\alpha}_{2}^{2} + 6 \operatorname{cos}\left(\alpha_{2}\right) \ddot{\alpha}_{1} + 3 \operatorname{cos}\left(\alpha_{2}\right) \ddot{\alpha}_{2} + 10 \ddot{\alpha}_{1} + 2 \ddot{\alpha}_{2}\right) + 3 g \left(\operatorname{sin}\left(\alpha_{1} + \alpha_{2}\right) + 3 \operatorname{sin}\left(\alpha_{1}\right)\right)\right)}{6} = 0$
$\displaystyle \quad \frac{d m \left(3 d \operatorname{sin}\left(\alpha_{2}\right) \dot{\alpha}_{1}^{2} + 3 d \operatorname{cos}\left(\alpha_{2}\right) \ddot{\alpha}_{1} + 2 d \ddot{\alpha}_{1} + 2 d \ddot{\alpha}_{2} + 3 g \operatorname{sin}\left(\alpha_{1} + \alpha_{2}\right)\right)}{6} = 0$
```python
sol = lagrange_eq_solve(Lterms, q=[α1, α2], Qnc=None)
```
**Forces on the pendulum**
We can see that, besides the terms proportional to gravity $g$, there are three types of forces in the equations. Two of these we already saw in the solution of the double pendulum employing generalized coordinates in the segment space: forces proportional to the angular velocity squared $\dot{\theta}_i^2$ (now proportional to $\dot{\alpha}_i^2$) and forces proportional to the angular acceleration $\ddot{\theta}_i$ (now proportional to $\ddot{\alpha}_i$). These are the centripetal and tangential forces.
A new type of force appeared explicitly in the equations when we employed generalized coordinates in the joint space: forces proportional to the product of the two angular velocities in joint space, $\dot{\alpha}_1\dot{\alpha}_2$. These are the Coriolis forces.
### Example: Mass attached to a spring on a horizontal plane
<figure></figure>
Let's solve the exercise 13.1.7 of Ruina and Pratap (2019):
"Two ice skaters whirl around one another. They are connected by a linear elastic cord whose center is stationary in space. We wish to consider the motion of one of the skaters by modeling her as a mass m held by a cord that exerts k Newtons for each meter it is extended from the central position.
a) Draw a free-body diagram showing the forces that act on the mass when it is at an arbitrary position.
b) Write the differential equations that describe the motion."
Let's solve item b using Lagrangian mechanics.
To calculate the potential and kinetic energy of the system, we will need to calculate the position and velocity of the mass. It's convenient to use as generalized coordinates, the radial position $r$ and the angle $\theta$.
Using Sympy, declaring our parameters and coordinates:
```python
t = Symbol('t')
m, k = symbols('m, k', positive=True)
r, θ = dynamicsymbols('r, theta')
```
The position and velocity of the skater are:
```python
x, y = r*cos(θ), r*sin(θ)
xd, yd = x.diff(t), y.diff(t)
printeq(r'x', x)
printeq(r'y', y)
printeq(r'\dot{x}', xd)
printeq(r'\dot{y}', yd)
```
$\displaystyle x = r \operatorname{cos}\left(\theta\right)$
$\displaystyle y = r \operatorname{sin}\left(\theta\right)$
$\displaystyle \dot{x} = - r \operatorname{sin}\left(\theta\right) \dot{\theta} + \operatorname{cos}\left(\theta\right) \dot{r}$
$\displaystyle \dot{y} = r \operatorname{cos}\left(\theta\right) \dot{\theta} + \operatorname{sin}\left(\theta\right) \dot{r}$
So, the kinetic and potential energies of the skater are:
```python
T = m*(xd**2 + yd**2)/2
V = (k*r**2)/2
display(Math('T=' + mlatex(T)))
display(Math('V=' + mlatex(V)))
printeq('T', T)
printeq('V', V)
```
$\displaystyle T=\frac{m \left(\left(- r \operatorname{sin}\left(\theta\right) \dot{\theta} + \operatorname{cos}\left(\theta\right) \dot{r}\right)^{2} + \left(r \operatorname{cos}\left(\theta\right) \dot{\theta} + \operatorname{sin}\left(\theta\right) \dot{r}\right)^{2}\right)}{2}$
$\displaystyle V=\frac{k r^{2}}{2}$
$\displaystyle T = \frac{m \left(r^{2} \dot{\theta}^{2} + \dot{r}^{2}\right)}{2}$
$\displaystyle V = \frac{k r^{2}}{2}$
Here we considered the resting (equilibrium) length of the spring to be zero.
The Lagrangian function is:
```python
L = T - V
printeq(r'\mathcal{L}', L)
```
$\displaystyle \mathcal{L} = - \frac{k r^{2}}{2} + \frac{m \left(r^{2} \dot{\theta}^{2} + \dot{r}^{2}\right)}{2}$
And the derivatives are:
```python
Lterms = lagrange_terms(L, [r, θ])
```
$\displaystyle \text{Terms of the Euler-Lagrange equations:}$
$\displaystyle \text{For generalized coordinate}\;r:$
$\displaystyle \quad\dfrac{\partial\mathcal{L}}{\partial r} = \left(- k + m \dot{\theta}^{2}\right) r$
$\displaystyle \quad\dfrac{\partial\mathcal{L}}{\partial\dot{r}} = m \dot{r}$
$\displaystyle \quad\dfrac{\mathrm d}{\mathrm{dt}}\left({\dfrac{\partial\mathcal{L}}{\partial\dot{r}}}\right) = m \ddot{r}$
$\displaystyle \text{For generalized coordinate}\;\theta:$
$\displaystyle \quad\dfrac{\partial\mathcal{L}}{\partial \theta} = 0$
$\displaystyle \quad\dfrac{\partial\mathcal{L}}{\partial\dot{\theta}} = m r^{2} \dot{\theta}$
$\displaystyle \quad\dfrac{\mathrm d}{\mathrm{dt}}\left({\dfrac{\partial\mathcal{L}}{\partial\dot{\theta}}}\right) = m \left(r \ddot{\theta} + 2 \dot{r} \dot{\theta}\right) r$
Finally, the EOM are:
```python
lagrange_eq(Lterms)
```
$\displaystyle \text{Euler-Lagrange equations (EOM):}$
$\displaystyle \quad m \ddot{r} + \left(k - m \dot{\theta}^{2}\right) r = 0$
$\displaystyle \quad m \left(r \ddot{\theta} + 2 \dot{r} \dot{\theta}\right) r = 0$
Ruina and Pratap's book gives as solution the equation $2r\dot{r}\dot{\theta} + r^3\ddot{\theta}=0$, but using dimensional analysis we can check that the book's solution is not correct.
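Indeed, checking the dimensions of each term of the book's equation:

\begin{equation}\begin{array}{rcl}
[2r\dot{r}\dot{\theta}] &=& \mathrm{m}\cdot(\mathrm{m/s})\cdot(1/\mathrm{s}) = \mathrm{m^2/s^2} \\
[r^3\ddot{\theta}] &=& \mathrm{m^3}\cdot(1/\mathrm{s^2}) = \mathrm{m^3/s^2}
\end{array}\end{equation}

The two terms have different dimensions and cannot be added. In the second EOM derived above, by contrast, both terms of $m r\left(r\ddot{\theta} + 2\dot{r}\dot{\theta}\right)$ have dimensions of $\mathrm{kg\,m^2/s^2}$.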
## Generalized forces
How non-conservative forces are treated in the Lagrangian Mechanics is different than in Newtonian mechanics.
Newtonian mechanics consider the forces (and moment of forces) acting on each body (via FBD) and write down the equations of motion for each body/coordinate.
In Lagrangian Mechanics, we consider the forces (and moment of forces) acting on each generalized coordinate. For such, the effects of the non-conservative forces have to be calculated in the direction of each generalized coordinate, these will be the generalized forces.
A robust approach to determine the generalized forces on each generalized coordinate is to compute the work done by the forces to produce a small variation of the system on the direction of the generalized coordinate.
<figure></figure>
For example, consider a pendulum with a massless rod of length $d$ and a mass $m$ at the extremity swinging in a plane forming the angle $\theta$ with the vertical.
An external force acts on the tip of the pendulum at the horizontal direction.
The pendulum cord is inextensible and the tip of the pendulum can only move along the arc of a circumference with radius $d$.
The work done by this force to produce a small variation $\delta \theta$ is:
<p>
<span class="notranslate">
\begin{equation}\begin{array}{l}
\delta W_{NC} = \vec{F} \cdot \delta \vec{r} \\
\delta W_{NC} = F d \cos(\theta) \delta \theta
\end{array}
\label{}
\end{equation}
</span>
We now reexpress the work as the product of the corresponding generalized force $Q_{NC}$ and the generalized coordinate:
<p>
<span class="notranslate">
\begin{equation}
\delta W_{NC} = Q_{NC} \delta \theta
\label{}
\end{equation}
</span>
And comparing the last two equations, the generalized force (in fact, a moment of force) is:
<p>
<span class="notranslate">
\begin{equation}
Q_{NC} = F d \cos(\theta)
\label{}
\end{equation}
</span>
Note that the work done was, by definition, expressed in Cartesian coordinates as the scalar product between the vectors $\vec F$ and $\delta \vec{r}$; after the scalar product was evaluated, we ended up with the work done expressed in terms of the generalized coordinate. This is somewhat similar to the calculation of kinetic and potential energy: these quantities are typically expressed first in terms of Cartesian coordinates, which in turn are expressed in terms of the generalized coordinates, so we end up with only generalized coordinates.
Also note that we employ the term generalized force to refer to a non-conservative force or moment of force expressed in the generalized coordinate.
If the force had components on both directions, we would calculate the work computing the scalar product between the variation in displacement and the force, as usual. For example, consider a force $\vec{F}=2\hat{i}+7\hat{j}$, the work done is:
<p>
<span class="notranslate">
\begin{equation}\begin{array}{l}
\delta W_{NC} = \vec{F} \cdot \delta \vec{r} \\
\delta W_{NC} = [2\hat{i}+7\hat{j}] \cdot [d\cos(\theta) \delta \theta \hat{i} + d\sin(\theta) \delta \theta \hat{j}] \\
\delta W_{NC} = d[2\cos(\theta) + 7\sin(\theta)] \delta \theta
\end{array}
\label{}
\end{equation}
</span>
Finally, the generalized force (a moment of force) is:
<p>
<span class="notranslate">
\begin{equation}
Q_{NC} = d[2\cos(\theta) + 7\sin(\theta)]
\label{}
\end{equation}
</span>
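If desired, this projection can also be automated with Sympy by taking the scalar product of the force with the partial derivative of the position vector w.r.t. the generalized coordinate ($Q_{NC} = \vec{F}\cdot\partial\vec{r}/\partial\theta$). A minimal sketch, assuming the symbols `d`, `t` and the dynamic symbol `θ` defined in earlier cells:

```python
from sympy import Matrix

Fvec = Matrix([2, 7])                    # force 2i + 7j from the example above
r_tip = Matrix([d*sin(θ), -d*cos(θ)])    # position of the pendulum tip
Q = simplify(Fvec.dot(r_tip.diff(θ)))    # generalized force: F · ∂r/∂θ
printeq(r'Q_{NC}', Q)
```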
For a system with $N$ generalized coordinates and $n$ non-conservative forces, to determine the generalized force at each generalized coordinate, we would compute the work as the sum of the works done by each force at each small variation:
<p>
<span class="notranslate">
\begin{equation}
\delta W_{NC} = \sum\limits_{j=1}^n F_{j} \cdot \delta x_j(\delta q_1, \dotsc, \delta q_N ) = \sum\limits_{i=1}^N Q_{i} \delta q_i
\label{}
\end{equation}
</span>
For simpler problems, in which we can separately analyze each non-conservative force acting on each generalized coordinate, the work done by each force on a given generalized coordinate can be calculated by making all other coordinates immovable ('frozen') and then summing the generalized forces.
The next examples will help to understand how to calculate the generalized force.
### Example: Simple pendulum on moving cart
<figure></figure>
Consider a simple pendulum with massless rod of length $d$ and mass $m$ at the extremity of the rod forming an angle $\theta$ with the vertical direction under the action of gravity. The pendulum swings freely from a cart with mass $M$ that moves at the horizontal direction pushed by a force $F_x$.
Let's use the Lagrangian mechanics to derive the EOM for the system.
We will model the cart as a particle moving along the axis $x$, i.e., $y=0$. The system has two degrees of freedom and because of the constraints introduced by the constant length of the rod and the motion the cart can perform, good generalized coordinates to describe the configuration of the system are $x$ and $\theta$.
**Determination of the generalized force**
The force $F_x$ acts along the same direction as the generalized coordinate $x$; this means $F_x$ contributes entirely to the work done in the direction of $x$. At this generalized coordinate, the generalized force due to $F_x$ is $F_x$.
At the generalized coordinate $\theta$, if we 'freeze' the generalized coordinate $x$ and let $F_x$ act on the system, the cart (the point of application of $F_x$) does not move when only $\theta$ varies, so no work is done. At this generalized coordinate, the generalized force due to $F_x$ is $0$.
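Equivalently, in terms of virtual work: the force is applied at the cart, whose position is $\vec{r}_c = x\,\hat{i}$, so $\delta\vec{r}_c = \delta x\,\hat{i}$ and

\begin{equation}
\delta W_{NC} = F_x\hat{i} \cdot \delta x\,\hat{i} = F_x \delta x \quad\Rightarrow\quad Q_{x} = F_x, \qquad Q_{\theta} = 0
\end{equation}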
Let's now use Sympy to determine the EOM.
Let's first define the symbols and generalized coordinates:
```python
t = Symbol('t')
M, m, d = symbols('M, m, d', positive=True)
x, y, θ, Fx = dynamicsymbols('x, y, theta, F_x')
```
The positions of the cart (c) and of the pendulum tip (p) are:
```python
xc, yc = x, y*0
xcd, ycd = xc.diff(t), yc.diff(t)
xp, yp = x + d*sin(θ), -d*cos(θ)
xpd, ypd = xp.diff(t), yp.diff(t)
printeq(r'x_c', xc)
printeq(r'y_c', yc)
printeq(r'x_p', xp)
printeq(r'y_p', yp)
```
$\displaystyle x_c = x$
$\displaystyle y_c = 0$
$\displaystyle x_p = d \operatorname{sin}\left(\theta\right) + x$
$\displaystyle y_p = - d \operatorname{cos}\left(\theta\right)$
The velocities of the cart and of the pendulum are:
```python
printeq(r'\dot{x}_c', xcd)
printeq(r'\dot{y}_c', ycd)
printeq(r'\dot{x}_p', xpd)
printeq(r'\dot{y}_p', ypd)
```
$\displaystyle \dot{x}_c = \dot{x}$
$\displaystyle \dot{y}_c = 0$
$\displaystyle \dot{x}_p = d \operatorname{cos}\left(\theta\right) \dot{\theta} + \dot{x}$
$\displaystyle \dot{y}_p = d \operatorname{sin}\left(\theta\right) \dot{\theta}$
The total kinetic and total potential energies and the Lagrangian of the system are:
```python
T = M*(xcd**2 + ycd**2)/2 + m*(xpd**2 + ypd**2)/2
V = M*g*yc + m*g*yp
printeq('T', T)
printeq('V', V)
L = T - V
printeq(r'\mathcal{L}', L)
```
$\displaystyle T = \frac{M \dot{x}^{2}}{2} + \frac{m \left(d^{2} \dot{\theta}^{2} + 2 d \operatorname{cos}\left(\theta\right) \dot{\theta} \dot{x} + \dot{x}^{2}\right)}{2}$
$\displaystyle V = - d g m \operatorname{cos}\left(\theta\right)$
$\displaystyle \mathcal{L} = \frac{M \dot{x}^{2}}{2} + d g m \operatorname{cos}\left(\theta\right) + \frac{m \left(d^{2} \dot{\theta}^{2} + 2 d \operatorname{cos}\left(\theta\right) \dot{\theta} \dot{x} + \dot{x}^{2}\right)}{2}$
And the derivatives are:
```python
Lterms = lagrange_terms(L, [x, θ])
```
$\displaystyle \text{Terms of the Euler-Lagrange equations:}$
$\displaystyle \text{For generalized coordinate}\;x:$
$\displaystyle \quad\dfrac{\partial\mathcal{L}}{\partial x} = 0$
$\displaystyle \quad\dfrac{\partial\mathcal{L}}{\partial\dot{x}} = M \dot{x} + m \left(d \operatorname{cos}\left(\theta\right) \dot{\theta} + \dot{x}\right)$
$\displaystyle \quad\dfrac{\mathrm d}{\mathrm{dt}}\left({\dfrac{\partial\mathcal{L}}{\partial\dot{x}}}\right) = M \ddot{x} + m \left(- d \operatorname{sin}\left(\theta\right) \dot{\theta}^{2} + d \operatorname{cos}\left(\theta\right) \ddot{\theta} + \ddot{x}\right)$
$\displaystyle \text{For generalized coordinate}\;\theta:$
$\displaystyle \quad\dfrac{\partial\mathcal{L}}{\partial \theta} = - d m \left(g + \dot{\theta} \dot{x}\right) \operatorname{sin}\left(\theta\right)$
$\displaystyle \quad\dfrac{\partial\mathcal{L}}{\partial\dot{\theta}} = d m \left(d \dot{\theta} + \operatorname{cos}\left(\theta\right) \dot{x}\right)$
$\displaystyle \quad\dfrac{\mathrm d}{\mathrm{dt}}\left({\dfrac{\partial\mathcal{L}}{\partial\dot{\theta}}}\right) = d m \left(d \ddot{\theta} - \operatorname{sin}\left(\theta\right) \dot{\theta} \dot{x} + \operatorname{cos}\left(\theta\right) \ddot{x}\right)$
Finally, the EOM are:
```python
lagrange_eq(Lterms, [Fx, 0])
```
$\displaystyle \text{Euler-Lagrange equations (EOM):}$
$\displaystyle \quad M \ddot{x} + m \left(- d \operatorname{sin}\left(\theta\right) \dot{\theta}^{2} + d \operatorname{cos}\left(\theta\right) \ddot{\theta} + \ddot{x}\right) = F_{x}$
$\displaystyle \quad d m \left(d \ddot{\theta} + g \operatorname{sin}\left(\theta\right) + \operatorname{cos}\left(\theta\right) \ddot{x}\right) = 0$
```python
sol = lagrange_eq_solve(Lterms, q=[x, θ], Qnc=[Fx, 0])
```
Note that although the force $F_x$ acts solely on the cart, the acceleration of the pendulum $\ddot{\theta}$ is also dependent on $F_x$, as expected.
[Click here for solutions to this problem using the Newtonian and Lagrangian approaches and how this system of two coupled second-order differential equations can be rearranged for its numerical solution](http://www.emomi.com/download/neumann/pendulum_cart.html).
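For a numerical solution, a minimal sketch is to convert the two symbolic accelerations into numerical functions (this assumes the solution `sol` obtained in the cell above):

```python
args_ = (M, m, d, g, x, x.diff(t), θ, θ.diff(t), Fx)
xdd_fun = sym.lambdify(args_, sol.args[0][0], 'numpy')  # d²x/dt² as a function of the state and Fx
θdd_fun = sym.lambdify(args_, sol.args[0][1], 'numpy')  # d²θ/dt²
```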
### Example: Two masses and two springs under the influence of gravity
<figure></figure>
Consider a system composed by two masses $m_1,\, m_2$ and two ideal springs (massless, lengths $\ell_1,\, \ell_2$, and spring constants $k_1,\,k_2$) attached in series under gravity and a force $F$ acting directly on $m_2$.
We can model this system as composed by two particles with two degrees of freedom and we need two generalized coordinates to describe the system's configuration; two obvious options are:
- ${y_1, y_2}$, positions of masses $m_1,\, m_2$ w.r.t. ceiling (origin).
- ${z_1, z_2}$, position of mass $m_1$ w.r.t. ceiling and position of mass $m_2$ w.r.t. mass $m_1$.
The set ${y_1, y_2}$ is expressed in an inertial reference frame, while the second set is not.
Let's find the EOM's using both sets of generalized coordinates and compare them.
**Generalized forces**
Using ${y_1, y_2}$, force $F$ acts on mass $m_2$ at the same direction of the generalized coordinate $y_2$. At this coordinate, the generalized force of $F$ is $F$. At the generalized coordinate $y_1$, if we 'freeze' the generalized coordinate $y_2$ and let $F$ act on the system, there is no movement at the generalized coordinate $y_1$, so no work is done. At this generalized coordinate, the generalized force due to $F$ is $0$.
Using ${z_1, z_2}$, force $F$ acts on mass $m_2$ in the same direction as the generalized coordinate $z_2$. At this coordinate, the generalized force due to $F$ is $F$. At the generalized coordinate $z_1$, if we 'freeze' the generalized coordinate $z_2$ and let $F$ act on the system, a variation of $z_1$ displaces mass $m_2$ (where $F$ is applied) by the same amount, so the work done is $F\delta z_1$. At this generalized coordinate, the generalized force due to $F$ is also $F$.
Sympy is our friend once again:
```python
t = Symbol('t')
m1, m2, ℓ01, ℓ02, g, k1, k2 = symbols('m1, m2, ell01, ell02, g, k1, k2', positive=True) # \ell<TAB>
y1, y2, F = dynamicsymbols('y1, y2, F')
```
The total kinetic and total potential energies of the system are:
```python
y1d, y2d = y1.diff(t), y2.diff(t)
T = (m1*y1d**2)/2 + (m2*y2d**2)/2
V = (k1*(y1-ℓ01)**2)/2 + (k2*((y2-y1)-ℓ02)**2)/2 - m1*g*y1 - m2*g*y2
printeq(r'T', T)
printeq(r'V', V)
```
$\displaystyle T = \frac{m_{1} \dot{y}_{1}^{2}}{2} + \frac{m_{2} \dot{y}_{2}^{2}}{2}$
$\displaystyle V = - g m_{1} y_{1} - g m_{2} y_{2} + \frac{k_{1} \left(\ell_{01} - y_{1}\right)^{2}}{2} + \frac{k_{2} \left(\ell_{02} + y_{1} - y_{2}\right)^{2}}{2}$
For the sake of clarity, let's consider the resting lengths of the springs to be zero:
```python
V = V.subs([(ℓ01, 0), (ℓ02, 0)])
printeq(r'V', V)
```
$\displaystyle V = - g m_{1} y_{1} - g m_{2} y_{2} + \frac{k_{1} y^{2}_{1}}{2} + \frac{k_{2} \left(y_{1} - y_{2}\right)^{2}}{2}$
The Lagrangian function is:
```python
L = T - V
printeq(r'\mathcal{L}', L)
```
$\displaystyle \mathcal{L} = g m_{1} y_{1} + g m_{2} y_{2} - \frac{k_{1} y^{2}_{1}}{2} - \frac{k_{2} \left(y_{1} - y_{2}\right)^{2}}{2} + \frac{m_{1} \dot{y}_{1}^{2}}{2} + \frac{m_{2} \dot{y}_{2}^{2}}{2}$
And the derivatives are:
```python
Lterms = lagrange_terms(L, [y1, y2])
```
$\displaystyle \text{Terms of the Euler-Lagrange equations:}$
$\displaystyle \text{For generalized coordinate}\;\operatorname{y_{1}}:$
$\displaystyle \quad\dfrac{\partial\mathcal{L}}{\partial \operatorname{y_{1}}} = g m_{1} - k_{1} y_{1} - k_{2} \left(y_{1} - y_{2}\right)$
$\displaystyle \quad\dfrac{\partial\mathcal{L}}{\partial\dot{\operatorname{y_{1}}}} = m_{1} \dot{y}_{1}$
$\displaystyle \quad\dfrac{\mathrm d}{\mathrm{dt}}\left({\dfrac{\partial\mathcal{L}}{\partial\dot{\operatorname{y_{1}}}}}\right) = m_{1} \ddot{y}_{1}$
$\displaystyle \text{For generalized coordinate}\;\operatorname{y_{2}}:$
$\displaystyle \quad\dfrac{\partial\mathcal{L}}{\partial \operatorname{y_{2}}} = g m_{2} + k_{2} \left(y_{1} - y_{2}\right)$
$\displaystyle \quad\dfrac{\partial\mathcal{L}}{\partial\dot{\operatorname{y_{2}}}} = m_{2} \dot{y}_{2}$
$\displaystyle \quad\dfrac{\mathrm d}{\mathrm{dt}}\left({\dfrac{\partial\mathcal{L}}{\partial\dot{\operatorname{y_{2}}}}}\right) = m_{2} \ddot{y}_{2}$
Finally, the EOM are:
```python
lagrange_eq(Lterms, [0, F])
```
$\displaystyle \text{Euler-Lagrange equations (EOM):}$
$\displaystyle \quad - g m_{1} + k_{1} y_{1} + k_{2} \left(y_{1} - y_{2}\right) + m_{1} \ddot{y}_{1} = 0$
$\displaystyle \quad - g m_{2} - k_{2} \left(y_{1} - y_{2}\right) + m_{2} \ddot{y}_{2} = F$
```python
lagrange_eq_solve(Lterms, [y1, y2], [0, F]);
```
**Same problem, but with the other set of coordinates**
Using ${z_1, z_2}$ as the position of mass $m_1$ w.r.t. the ceiling and the position of mass $m_2$ w.r.t. the mass $m_1$, the solution is:
```python
z1, z2 = dynamicsymbols('z1, z2')
z1d, z2d = z1.diff(t), z2.diff(t)
T = (m1*z1d**2)/2 + (m2*(z1d + z2d)**2)/2
V = (k1*(z1-ℓ01)**2)/2 + (k2*(z2-ℓ02)**2)/2 - m1*g*z1 - m2*g*(z1+z2)
printeq('T', T)
printeq('V', V)
```
$\displaystyle T = \frac{m_{1} \dot{z}_{1}^{2}}{2} + \frac{m_{2} \left(\dot{z}_{1} + \dot{z}_{2}\right)^{2}}{2}$
$\displaystyle V = - g m_{1} z_{1} - g m_{2} \left(z_{1} + z_{2}\right) + \frac{k_{1} \left(\ell_{01} - z_{1}\right)^{2}}{2} + \frac{k_{2} \left(\ell_{02} - z_{2}\right)^{2}}{2}$
For the sake of clarity, let's consider the resting lengths of the springs to be zero:
```python
V = V.subs([(ℓ01, 0), (ℓ02, 0)])
printeq(r'V', V)
```
$\displaystyle V = - g m_{1} z_{1} - g m_{2} \left(z_{1} + z_{2}\right) + \frac{k_{1} z^{2}_{1}}{2} + \frac{k_{2} z^{2}_{2}}{2}$
```python
L = T - V
printeq(r'\mathcal{L}', L)
```
$\displaystyle \mathcal{L} = g m_{1} z_{1} + g m_{2} \left(z_{1} + z_{2}\right) - \frac{k_{1} z^{2}_{1}}{2} - \frac{k_{2} z^{2}_{2}}{2} + \frac{m_{1} \dot{z}_{1}^{2}}{2} + \frac{m_{2} \left(\dot{z}_{1} + \dot{z}_{2}\right)^{2}}{2}$
```python
Lterms = lagrange_terms(L, [z1, z2])
lagrange_eq(Lterms, [F, F])
```
$\displaystyle \text{Terms of the Euler-Lagrange equations:}$
$\displaystyle \text{For generalized coordinate}\;\operatorname{z_{1}}:$
$\displaystyle \quad\dfrac{\partial\mathcal{L}}{\partial \operatorname{z_{1}}} = g m_{1} + g m_{2} - k_{1} z_{1}$
$\displaystyle \quad\dfrac{\partial\mathcal{L}}{\partial\dot{\operatorname{z_{1}}}} = m_{1} \dot{z}_{1} + m_{2} \left(\dot{z}_{1} + \dot{z}_{2}\right)$
$\displaystyle \quad\dfrac{\mathrm d}{\mathrm{dt}}\left({\dfrac{\partial\mathcal{L}}{\partial\dot{\operatorname{z_{1}}}}}\right) = m_{1} \ddot{z}_{1} + m_{2} \left(\ddot{z}_{1} + \ddot{z}_{2}\right)$
$\displaystyle \text{For generalized coordinate}\;\operatorname{z_{2}}:$
$\displaystyle \quad\dfrac{\partial\mathcal{L}}{\partial \operatorname{z_{2}}} = g m_{2} - k_{2} z_{2}$
$\displaystyle \quad\dfrac{\partial\mathcal{L}}{\partial\dot{\operatorname{z_{2}}}} = m_{2} \left(\dot{z}_{1} + \dot{z}_{2}\right)$
$\displaystyle \quad\dfrac{\mathrm d}{\mathrm{dt}}\left({\dfrac{\partial\mathcal{L}}{\partial\dot{\operatorname{z_{2}}}}}\right) = m_{2} \left(\ddot{z}_{1} + \ddot{z}_{2}\right)$
$\displaystyle \text{Euler-Lagrange equations (EOM):}$
$\displaystyle \quad - g m_{1} - g m_{2} + k_{1} z_{1} + m_{1} \ddot{z}_{1} + m_{2} \left(\ddot{z}_{1} + \ddot{z}_{2}\right) = F$
$\displaystyle \quad - g m_{2} + k_{2} z_{2} + m_{2} \left(\ddot{z}_{1} + \ddot{z}_{2}\right) = F$
```python
lagrange_eq_solve(Lterms, [z1, z2], [F, F]);
```
The solutions using the two sets of coordinates seem different; the reader is invited to verify that in fact they are the same (remember that $y_1 = z_1,\, y_2 = z_1+z_2,\, \ddot{y}_2 = \ddot{z}_1+\ddot{z}_2$).
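This verification can also be done with Sympy by substituting $z_1 = y_1$, $z_2 = y_2 - y_1$ (and the corresponding derivatives) into the equations in $z$; the second equation should reproduce the second equation in $y$, and the first one the sum of the two equations in $y$. A minimal sketch, assuming the variables and `Lterms` from the cells above:

```python
eq1z = Lterms[2] - Lterms[0] - F   # first EOM in z, written in the form expr = 0
eq2z = Lterms[5] - Lterms[3] - F   # second EOM in z
# substitute z1 -> y1 and z2 -> y2 - y1 (higher derivatives first)
subs_zy = [(z1.diff(t, 2), y1.diff(t, 2)), (z2.diff(t, 2), y2.diff(t, 2) - y1.diff(t, 2)),
           (z1.diff(t), y1.diff(t)), (z2.diff(t), y2.diff(t) - y1.diff(t)),
           (z1, y1), (z2, y2 - y1)]
display(Eq(simplify(eq1z.subs(subs_zy)), 0))
display(Eq(simplify(eq2z.subs(subs_zy)), 0))
```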
### Example: Mass-spring-damper system with gravity
<figure></figure>
Consider a mass-spring-damper system under the action of the gravitational force and an external force acting on the mass.
The massless spring has a stiffness coefficient $k$ and length at rest $\ell_0$.
The massless damper has a damping coefficient $b$.
The gravitational force acts downwards and it is negative (see figure).
The system has one degree of freedom and we need only one generalized coordinate ($y$) to describe the system's configuration.
There are two non-conservative forces acting in the direction of the generalized coordinate: the external force $F$ and the force of the damper. By calculating the work done by each of these forces, the total generalized force is: $Q_{NC} = F_0 \cos(\omega t) - b\dot y$.
Let's use the Lagrangian mechanics to derive the equations of motion for the system.
The kinetic energy of the system is:
\begin{equation}
T = \frac{1}{2} m \dot y^2
\end{equation}
The potential energy of the system is:
\begin{equation}
V = \frac{1}{2} k (y-\ell_0)^2 + m g y
\end{equation}
The Lagrangian function is:
\begin{equation}
\mathcal{L} = \frac{1}{2} m \dot y^2 - \frac{1}{2} k (y-\ell_0)^2 - m g y
\end{equation}
The derivatives of the Lagrangian w.r.t. $y$ and $t$ are:
\begin{equation}\begin{array}{rcl}
\dfrac{\partial \mathcal{L}}{\partial y} &=& -k(y-\ell_0) - mg \\
\dfrac{\partial \mathcal{L}}{\partial \dot{y}} &=& m \dot{y} \\
\dfrac{\mathrm d }{\mathrm d t}\left( {\dfrac{\partial \mathcal{L}}{\partial \dot{y}}} \right) &=& m\ddot{y}
\end{array}\end{equation}
Substituting all these terms in the Euler-Lagrange equation results in:
\begin{equation}
m\ddot{y} + b\dot{y} + k(y-\ell_0) + mg = F_0 \cos(\omega t)
\end{equation}
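The same EOM can be obtained with the Sympy helper functions defined earlier, passing the damping and external forces as the generalized force. A minimal sketch (the symbol names below are assumptions; `m`, `g`, `t` and the helper functions are reused from earlier cells):

```python
b, k, F0, ω, ℓ0 = symbols('b, k, F_0, omega, ell_0', positive=True)
y = dynamicsymbols('y')
T = m*y.diff(t)**2/2                 # kinetic energy
V = k*(y - ℓ0)**2/2 + m*g*y          # elastic plus gravitational potential energy
L = T - V
Lterms = lagrange_terms(L, y)
lagrange_eq(Lterms, [F0*cos(ω*t) - b*y.diff(t)])  # Qnc = external force plus damping
```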
#### Numerical solution of the equation of motion for mass-spring-damper system
Let's solve numerically the differential equation for the mass-spring-damper system with gravity using the function for the Euler's method we implemented before. We just have to write a new function for calculating the derivative of velocity:
```python
def dvdt(t, y):
"""
Returns dvdt at `t` given state `y`.
"""
m = 1 # mass, kg
k = 100 # spring coefficient, N/m
    l0 = 1.0 # spring resting length, m
    b = 1.0 # damping coefficient, N/(m/s)
F0 = 2.0 # external force amplitude, N
f = 1 # frequency, Hz
g = 10 # acceleration of gravity, m/s2
F = F0*np.cos(2*np.pi*f*t) # external force, N
return (F - b*y[1] - k*(y[0]-l0) - m*g)/m
T, y0, h = 10, [1.1, 0], .01
t, y = euler_method(T, y0, h, method=2)
labels = ['Trajectory of mass-spring-damper system (Euler method)',
'Position (m)', 'Velocity (m/s)']
plot(t, y, labels)
```
Here is the solution to this problem using an explicit Runge-Kutta method (via SciPy's `solve_ivp`), which has smaller integration errors than the Euler method:
```python
from scipy.integrate import solve_ivp
def dvdt2(t, y):
"""
Returns dvdt at `t` given state `y`.
"""
m = 1 # mass, kg
k = 100 # spring coefficient, N/m
    l0 = 1.0 # spring resting length, m
    b = 1.0 # damping coefficient, N/(m/s)
F0 = 2.0 # external force amplitude, N
f = 1 # frequency, Hz
g = 10 # acceleration of gravity, m/s2
F = F0*np.cos(2*np.pi*f*t) # external force, N
return y[1], (F - b*y[1] - k*(y[0]-l0) - m*g)/m
T = 10.0 # s
freq = 100 # Hz
y02 = [1.1, 0.0] # [y0, v0]
t = np.linspace(0, T, int(T*freq), endpoint=False)
s = solve_ivp(fun=dvdt2, t_span=(t[0], t[-1]), y0=y02, method='RK45', t_eval=t)
labels = ['Trajectory of mass-spring-damper system (Runge-Kutta method)',
'Position (m)', 'Velocity (m/s)']
plot(s.t, s.y, labels)
```
## Forces of constraint
The fact the Lagrangian formalism uses generalized coordinates means that in a system with constraints we typically have fewer coordinates (in turn, fewer equations of motion) and we don't need to worry about forces of constraint that we would have to consider in the Newtonian formalism.
However, when we do want to determine a force of constraint, the Lagrangian formalism is in fact at a disadvantage! Let's now see one way of determining a force of constraint using the Lagrangian formalism. The trick is to postpone applying the constraint; this increases the number of generalized coordinates but allows the determination of the force of constraint.
Let's exemplify this approach determining the tension at the rod in the simple pendulum under the influence of gravity we saw earlier.
### Example: Force of constraint in a simple pendulum under the influence of gravity
<figure></figure>
Consider a pendulum with a massless rod of length $d$ and a mass $m$ at the extremity swinging in a plane forming the angle $\theta$ with vertical and $g=10 m/s^2$.
Although the pendulum moves at the plane, it only has one degree of freedom, which can be described by the angle $\theta$, the generalized coordinate. But because we want to determine the force of constraint tension at the rod, let's also consider for now the variable $r$ for the 'varying' length of the rod (instead of the constant $d$).
In this case, the kinetic energy of the system will be:
<p>
<span class="notranslate">
\begin{equation}
T = \frac{1}{2}mr^2\dot\theta^2 + \frac{1}{2}m\dot r^2
\end{equation}
</span>
And for the potential energy we will also have to consider the constraining potential, $V_r(r(t))$:
<p>
<span class="notranslate">
\begin{equation}
V = -mgr\cos\theta + V_r(r(t))
\end{equation}
</span>
The Lagrangian function is:
<p>
<span class="notranslate">
\begin{equation}
\mathcal{L} = \frac{1}{2}m(\dot r^2(t) + r^2(t)\,\dot\theta^2(t)) + mgr(t)\cos\theta(t) - V_r(r(t))
\end{equation}
</span>
The derivatives w.r.t. $\theta$ are:
<p>
<span class="notranslate">
\begin{equation} \begin{array}{rcl}
&\dfrac{\partial \mathcal{L}}{\partial \theta} &=& -mgr\sin\theta \\
&\dfrac{\partial \mathcal{L}}{\partial \dot{\theta}} &=& mr^2\dot{\theta} \\
&\dfrac{\mathrm d }{\mathrm d t}\left( {\dfrac{\partial \mathcal{L}}{\partial \dot{\theta}}} \right) &=& 2mr\dot{r}\dot{\theta} + mr^2\ddot{\theta}
\end{array} \end{equation}
</span>
The derivatives w.r.t. $r$ are:
<p>
<span class="notranslate">
\begin{equation} \begin{array}{rcl}
&\dfrac{\partial \mathcal{L}}{\partial r} &=& mr \dot\theta^2 + mg\cos\theta - \dfrac{\mathrm d V_r}{\mathrm d r} \\
&\dfrac{\partial \mathcal{L}}{\partial \dot{r}} &=& m\dot r \\
&\dfrac{\mathrm d }{\mathrm d t}\left( {\dfrac{\partial \mathcal{L}}{\partial \dot{r}}} \right) &=& m\ddot{r}
\end{array} \end{equation}
</span>
The Euler-Lagrange's equations (the equations of motion) are:
<p>
<span class="notranslate">
\begin{equation} \begin{array}{rcl}
&2mr\dot{r}\dot{\theta} + mr^2\ddot{\theta} + mgr\sin\theta &=& 0 \\
&m\ddot{r} - mr \dot\theta^2 - mg\cos\theta + \dfrac{\mathrm d V_r}{\mathrm d r} &=& 0 \\
\end{array} \end{equation}
</span>
Now, we will apply the constraint condition, $r(t)=d$. This means that $\dot{r}=\ddot{r}=0$.
With this constraint applied, the first Euler-Lagrange equation is the equation for the simple pendulum:
<p>
<span class="notranslate">
\begin{equation}
md^2\ddot{\theta} + mgd\sin\theta = 0
\end{equation}
</span>
The second equation yields:
<p>
<span class="notranslate">
\begin{equation}
-\dfrac{\mathrm d V_r}{\mathrm d r}\bigg{\rvert}_{r=d} = - md \dot\theta^2 - mg\cos\theta
\end{equation}
</span>
But the tension force, $F_T$, is by definition equal to minus the derivative (the negative gradient) of the constraining potential, so:
<p>
<span class="notranslate">
\begin{equation}
F_T = - md \dot\theta^2 - mg\cos\theta
\end{equation}
</span>
As expected, the tension at the rod is proportional to the centripetal and the gravitational forces.
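This result can be checked with the Sympy helper functions defined earlier by treating the rod tension as the generalized force acting along the radial coordinate. A minimal sketch, assuming the symbols `m`, `d`, `g`, `t` and the helper functions from previous cells:

```python
r, θ = dynamicsymbols('r, theta')
T = m*(r.diff(t)**2 + r**2*θ.diff(t)**2)/2
V = -m*g*r*cos(θ)
L = T - V
Lterms = lagrange_terms(L, [θ, r], show=False)
# the generalized force on r that satisfies the EOM is the rod tension;
# apply the constraint r = d (hence dr/dt = d²r/dt² = 0):
FT = (Lterms[5] - Lterms[3]).subs([(r.diff(t, 2), 0), (r.diff(t), 0), (r, d)])
printeq(r'F_T', FT)
```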
## Lagrangian formalism applied to non-mechanical systems
### Example: Lagrangian formalism for RLC electrical circuits
<figure></figure>
It's possible to solve an RLC (Resistance-Inductance-Capacitance) electrical circuit using the Lagrangian formalism as an analogy with a mass-spring-damper system.
In such analogy, the electrical charge is equivalent to position, current to velocity, inductance to mass, inverse of the capacitance to spring constant, resistance to damper constant (a dissipative element), and a generator would be analog to an external force actuating on the system. See the [Wikipedia](https://en.wikipedia.org/wiki/Mechanical%E2%80%93electrical_analogies) and [this paper](https://arxiv.org/pdf/1711.10245.pdf) for more details on this analogy.
Let's see how to deduce the equivalent of the equation of motion for an RLC series circuit (the Kirchhoff Voltage Law (KVL) equation).
<figure></figure>
For a series RLC circuit, consider the following notation:
$q$: charge
$\dot{q}$: current admitted through the circuit
$R$: effective resistance of the combined load, source, and components
$C$: capacitance of the capacitor component
$L$: inductance of the inductor component
$u$: voltage source powering the circuit
$P$: power dissipated by the resistance
So, the equivalents of kinetic and potential energies are:
$T = \frac{1}{2}L\dot{q}^2$
$V = \frac{1}{2C}q^2$
With a dissipative element:
$P = \frac{1}{2}R\dot{q}^2$
And the Lagrangian function is:
$\mathcal{L} = \frac{1}{2}L\dot{q}^2 - \frac{1}{2C}q^2$
Calculating the derivatives and substituting them in the Euler-Lagrange equation (with the dissipative term entering through the derivative of the dissipation function w.r.t. the current, $\partial P/\partial \dot{q}$, and the voltage source $u(t)$ acting as the generalized force), we will have:
<p>
<span class="notranslate">
\begin{equation}
L \ddot{q} + R\dot{q} + \frac{q}{C} = u(t)
\end{equation}
</span>
Replacing $\dot{q}$ by $i$ and considering $v_c = q/C$ for a capacitor, we have the familiar KVL equation:
<p>
<span class="notranslate">
\begin{equation}
L \frac{\mathrm d i}{\mathrm d t} + v_c + Ri = u(t)
\end{equation}
</span>
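A minimal Sympy sketch of this derivation, using the Euler-Lagrange equation with the Rayleigh dissipation term (the symbol names below are assumptions):

```python
q, u = dynamicsymbols('q, u')
L_ind, R, C = symbols('L, R, C', positive=True)
Lag = L_ind*q.diff(t)**2/2 - q**2/(2*C)   # Lagrangian of the L and C elements
P = R*q.diff(t)**2/2                      # Rayleigh dissipation function
EOM = Lag.diff(q.diff(t)).diff(t) - Lag.diff(q) + P.diff(q.diff(t))
printeq(mlatex(EOM), u)
```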
## Considerations on the Lagrangian mechanics
Lagrangian mechanics does not constitute a new theory in classical mechanics; the results using Lagrangian or Newtonian mechanics must be the same for any mechanical system; only the method used to obtain the results is different.
We are accustomed to thinking of mechanical systems in terms of vector quantities such as force, velocity, angular momentum, torque, etc., but in the Lagrangian formalism the equations of motion are obtained entirely in terms of the kinetic and potential energies (scalar quantities) in the configuration space. Another important aspect of the force vs. energy comparison is that in situations where it is not possible to make explicit all the forces acting on the body, it is still possible to obtain expressions for the kinetic and potential energies.
In fact, the concept of force does not enter into Lagrangian mechanics. This is an important property of the method. Since energy is a scalar quantity, the Lagrangian function for a system is invariant for coordinate transformations. Therefore, it is possible to move from a certain configuration space (in which the equations of motion can be somewhat complicated) to a space that can be chosen to allow maximum simplification of the problem.
## Further reading
- [The Principle of Least Action in The Feynman Lectures on Physics](https://www.feynmanlectures.caltech.edu/II_19.html)
- Vandiver JK (MIT OpenCourseWare) [An Introduction to Lagrangian Mechanics](https://ocw.mit.edu/courses/mechanical-engineering/2-003sc-engineering-dynamics-fall-2011/lagrange-equations/MIT2_003SCF11_Lagrange.pdf)
## Video lectures on the internet
- iLectureOnline: [Lectures in Lagrangian Mechanics](http://www.ilectureonline.com/lectures/subject/PHYSICS/34/245)
- MIT OpenCourseWare: [Introduction to Lagrange With Examples](https://youtu.be/zhk9xLjrmi4)
## Problems
1. Derive the Euler-Lagrange equation (the equation of motion) for a mass-spring system where the spring is attached to the ceiling and the mass in hanging in the vertical.
<figure></figure>
2. Derive the Euler-Lagrange equation for an inverted pendulum in the vertical.
<figure></figure>
3. Derive the Euler-Lagrange equation for the following system:
<figure></figure>
4. Derive the Euler-Lagrange equation for a spring pendulum, a simple pendulum where a mass $m$ is attached to a massless spring with spring constant $k$ and length at rest $d_0$.
<figure></figure>
5. Derive the Euler-Lagrange equation for the system shown below.
<figure></figure>
6. Derive the Euler-Lagrange equation for the following Atwood machine (consider that $m_1 > m_2$, i.e., the pulley will rotate counter-clockwise, and that moving down is in the positive direction):
<figure></figure>
7. Write computer programs (in Python!) to solve numerically the equations of motion from the problems above.
| 4eeacc1509556b9c357e661aec6ef1c6eb3da47a | 396,428 | ipynb | Jupyter Notebook | notebooks/lagrangian_mechanics.ipynb | e-moncao-lima/BMC | 98c3abbf89e630d64b695b535b0be4ddc8b2724b | [
"CC-BY-4.0"
]
| 1 | 2021-03-15T20:07:52.000Z | 2021-03-15T20:07:52.000Z | notebooks/lagrangian_mechanics.ipynb | e-moncao-lima/BMC | 98c3abbf89e630d64b695b535b0be4ddc8b2724b | [
"CC-BY-4.0"
]
| null | null | null | notebooks/lagrangian_mechanics.ipynb | e-moncao-lima/BMC | 98c3abbf89e630d64b695b535b0be4ddc8b2724b | [
"CC-BY-4.0"
]
| 1 | 2018-10-13T17:35:16.000Z | 2018-10-13T17:35:16.000Z | 84.256748 | 59,516 | 0.781766 | true | 27,339 | Qwen/Qwen-72B | 1. YES
2. YES | 0.754915 | 0.810479 | 0.611843 | __label__eng_Latn | 0.844907 | 0.259846 |
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Objectives" data-toc-modified-id="Objectives-1"><span class="toc-item-num">1 </span>Objectives</a></span></li><li><span><a href="#What-&-Why-of-Linear-Algebra" data-toc-modified-id="What-&-Why-of-Linear-Algebra-2"><span class="toc-item-num">2 </span>What & Why of Linear Algebra</a></span></li><li><span><a href="#Scalars,-Vectors,-and-Tensors!-Oh-My!" data-toc-modified-id="Scalars,-Vectors,-and-Tensors!-Oh-My!-3"><span class="toc-item-num">3 </span>Scalars, Vectors, and Tensors! Oh My!</a></span><ul class="toc-item"><li><span><a href="#Scalars" data-toc-modified-id="Scalars-3.1"><span class="toc-item-num">3.1 </span>Scalars</a></span></li><li><span><a href="#Vectors" data-toc-modified-id="Vectors-3.2"><span class="toc-item-num">3.2 </span>Vectors</a></span><ul class="toc-item"><li><span><a href="#Code-for-Vectors" data-toc-modified-id="Code-for-Vectors-3.2.1"><span class="toc-item-num">3.2.1 </span>Code for Vectors</a></span></li><li><span><a href="#Math-with-Vectors" data-toc-modified-id="Math-with-Vectors-3.2.2"><span class="toc-item-num">3.2.2 </span>Math with Vectors</a></span><ul class="toc-item"><li><span><a href="#Vector-Addition" data-toc-modified-id="Vector-Addition-3.2.2.1"><span class="toc-item-num">3.2.2.1 </span>Vector Addition</a></span></li><li><span><a href="#Vector-Multiplication" data-toc-modified-id="Vector-Multiplication-3.2.2.2"><span class="toc-item-num">3.2.2.2 </span>Vector Multiplication</a></span></li></ul></li></ul></li><li><span><a href="#Matrices-and-Tensors" data-toc-modified-id="Matrices-and-Tensors-3.3"><span class="toc-item-num">3.3 </span>Matrices and Tensors</a></span><ul class="toc-item"><li><span><a href="#Code-for-Matrices-and-Tensors" data-toc-modified-id="Code-for-Matrices-and-Tensors-3.3.1"><span class="toc-item-num">3.3.1 </span>Code for Matrices and Tensors</a></span></li><li><span><a href="#Math-with-Tensors" data-toc-modified-id="Math-with-Tensors-3.3.2"><span class="toc-item-num">3.3.2 </span>Math with Tensors</a></span><ul class="toc-item"><li><span><a href="#Addition" data-toc-modified-id="Addition-3.3.2.1"><span class="toc-item-num">3.3.2.1 </span>Addition</a></span></li><li><span><a href="#Dot-Product" data-toc-modified-id="Dot-Product-3.3.2.2"><span class="toc-item-num">3.3.2.2 </span>Dot-Product</a></span></li></ul></li></ul></li></ul></li><li><span><a href="#More-with-Matrices" data-toc-modified-id="More-with-Matrices-4"><span class="toc-item-num">4 </span>More with Matrices</a></span><ul class="toc-item"><li><span><a href="#Identity-Matrices" data-toc-modified-id="Identity-Matrices-4.1"><span class="toc-item-num">4.1 </span>Identity Matrices</a></span><ul class="toc-item"><li><span><a href="#Identity-Matrices-in-NumPy" data-toc-modified-id="Identity-Matrices-in-NumPy-4.1.1"><span class="toc-item-num">4.1.1 </span>Identity Matrices in NumPy</a></span></li></ul></li><li><span><a href="#Inverse-Matrices" data-toc-modified-id="Inverse-Matrices-4.2"><span class="toc-item-num">4.2 </span>Inverse Matrices</a></span></li></ul></li><li><span><a href="#Solving-a-System-of-Linear-Equations" data-toc-modified-id="Solving-a-System-of-Linear-Equations-5"><span class="toc-item-num">5 </span>Solving a System of Linear Equations</a></span><ul class="toc-item"><li><span><a href="#Representing-the-System-with-Matrices" data-toc-modified-id="Representing-the-System-with-Matrices-5.1"><span class="toc-item-num">5.1 </span>Representing the System with Matrices</a></span><ul 
class="toc-item"><li><span><a href="#Coding-It-with-NumPy" data-toc-modified-id="Coding-It-with-NumPy-5.1.1"><span class="toc-item-num">5.1.1 </span>Coding It with NumPy</a></span></li><li><span><a href="#Solve-It-Faster-with-NumPy's-linalg.solve()" data-toc-modified-id="Solve-It-Faster-with-NumPy's-linalg.solve()-5.1.2"><span class="toc-item-num">5.1.2 </span>Solve It Faster with NumPy's <code>linalg.solve()</code></a></span></li></ul></li></ul></li><li><span><a href="#Solving-for-the-Line-of-Best-Fit:-Linear-Regression" data-toc-modified-id="Solving-for-the-Line-of-Best-Fit:-Linear-Regression-6"><span class="toc-item-num">6 </span>Solving for the Line of Best Fit: Linear Regression</a></span><ul class="toc-item"><li><span><a href="#Linear-Algebra-Solves-the-Best-Fit-Line-Problem" data-toc-modified-id="Linear-Algebra-Solves-the-Best-Fit-Line-Problem-6.1"><span class="toc-item-num">6.1 </span>Linear Algebra Solves the Best-Fit Line Problem</a></span></li></ul></li></ul></div>
```python
import numpy as np
from sklearn.linear_model import LinearRegression
```
# Objectives
- Use `numpy` to construct and manipulate scalars, vectors, and matrices
- Use `numpy` to construct and manipulate identity matrices and inverse matrices
- Use `numpy` to solve systems of equations
- Describe the matrix manipulations required to solve the best-fit line problem
# What & Why of Linear Algebra
Matrices are a fundamental aspect of data science models and problems, including image processing, deep learning, NLP, and PCA. You will encounter matrices *many* times in your career as a data scientist. Matrices are a fundamental tool in **linear algebra**.
-----------
- Study of "vector spaces"; relationship of **linear** relationships
- Uses vectors, matrices, and tensors
- Mapping & dimensionality (PCA)
- Used in lots of ML applications
We'll try to put abstract ideas into the formalism of linear algebra, such as:
- data values
- images/pixels
- language (NLP)
# Scalars, Vectors, and Tensors! Oh My!
> You can think of these values as being built up to higher dimensions
## Scalars
A _scalar_ is simply a single value. Any real number can be the value of a scalar.
```python
# Scalar
s = np.arange(1)
display(s)
```
## Vectors
A _vector_ can be specified with just _two_ parameters: magnitude and direction. To remind us of this direction, a vector is typically denoted with an arrow above the variable representing it: $\vec{v}$ .
In a Cartesian coordinate system, a vector $\vec{v}$ will often be specified by the components defined by the coordinate system.
In this way a vector can be embedded in a higher-dimensional space, which allows us to speak of vectors of any dimension (even though, as directed line segments, all vectors are, strictly speaking, one-dimensional).
> <a href="https://commons.wikimedia.org/wiki/File:Vector-length.png">Svjo</a>, <a href="https://creativecommons.org/licenses/by-sa/4.0">CC BY-SA 4.0</a>, via Wikimedia Commons
For more on vectors, see this helpful [video](https://www.youtube.com/watch?v=fNk_zzaMoSs&list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab).
### Code for Vectors
> Note we can think of vectors as a one-dimensional object residing in multidimensional space.
```python
# Vector
v = np.arange(4)
display(v)
print('Shape:', np.shape(v))
```
```python
# Vector
v = np.array([1,0,2,1,0,1,0,1,1])
display(v)
print('Shape:', np.shape(v))
```
```python
# Other ways to define vector
x = np.linspace(-np.pi, np.pi, 10)
display(x)
print('Shape:', np.shape(x))
```
These vectors above are usually called _row vectors_.
We can also specify _column vectors_.
```python
# Column vector from scratch
col_vector = np.array([[1],[0],[2],[1],[0],[1],[0],[1],[1]])
display(col_vector)
print('Shape:', np.shape(col_vector))
```
```python
# Reformatting or "reshaping" a row vector to a column vector
display(v)
print('Shape:', np.shape(v))
print('')
print('='*64)
col_vector = v.reshape(-1,1) # This is a common way for row -> column
display(col_vector)
print('Shape:', np.shape(col_vector))
```
### Math with Vectors
#### Vector Addition
Vector addition is simple: Just add the corresponding components together:
$[8, 14] + [7, 6] = [15, 20]$
Base Python is not particularly good for non-scalar arithmetic. Make a general practice of turning to `numpy` for mathematical operations.
```python
# Let's try this again, but this time we'll use NumPy arrays:
vec_1 = np.array([8, 14])
vec_2 = np.array([7, 6])
vec_1 + vec_2
```
#### Vector Multiplication
In fact there are multiple ways of understanding the notion of vector multiplication. All are potentially useful, but the one that will be of most use to us is the *dot-product*, which is defined as follows:
$$
\begin{equation}
\begin{bmatrix}
a \\
b
\end{bmatrix}
.
\begin{bmatrix}
c \\
d
\end{bmatrix}
=
ac + bd
\end{equation}
$$
The dot-product is the sum of the pairwise products of the vectors' entries.
```python
# Let's check out the different attributes and methods available to
# a NumPy array.
# vec_1.
# There are many options. Notice that one of these options is 'dot'.
# This is our dot-product! So let's use the .dot() method to calculate
# the dot-product of our two vectors:
vec_1.dot(vec_2)
```
```python
# We can also use '@'
vec_1 @ vec_2
```
## Matrices and Tensors
For higher dimensions we can use **matrices** to express ourselves. Suppose we had a two-variable system:
$$
\begin{align}
a_{1,1}x_1 + a_{1,2}x_2 = c_1 \\
a_{2,1}x_1 + a_{2,2}x_2 = c_2
\end{align}
$$
Using matrices, we can write this as:
$$
\begin{equation}
\begin{bmatrix}
a_{1,1} & a_{1,2} \\
a_{2,1} & a_{2,2}
\end{bmatrix}
\begin{bmatrix}
x_1 \\
x_2
\end{bmatrix} =
\begin{bmatrix}
c_1 \\
c_2
\end{bmatrix}
\end{equation}
$$
or
$A\vec{x} = \vec{c}$,
where:
- $\vec{x}$ is the _vector_ $(x_1, x_2)$;
- $\vec{c}$ is the _vector_ $(c_1, c_2)$; and
- $A$ is the _matrix_ of coefficients that describe our system:
$\begin{equation} A =
\begin{bmatrix}
a_{1,1} & a_{1,2} \\
a_{2,1} & a_{2,2}
\end{bmatrix}
\end{equation}$
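As a tiny illustration of this notation (with made-up coefficients), the system $2x_1 + x_2 = 4$, $x_1 + 3x_2 = 7$ has the solution $x_1 = 1$, $x_2 = 2$, and the matrix form reproduces the right-hand side:
```python
import numpy as np

# Made-up example: 2*x1 + 1*x2 = 4 and 1*x1 + 3*x2 = 7
A = np.array([[2, 1],
              [1, 3]])
x = np.array([1, 2])   # a solution of the system
c = A @ x              # recovers the right-hand side
print(c)               # [4 7]
```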
------
Technically, all of what we've talked about are **tensors**, just different ranks:
- scalar == 0th rank tensor
- vector == 1st rank tensor
+ vectors are made up from a "list" of scalars
- matrix == 2nd rank tensor
+ matrices are made up from a "list" of vectors
- 3D matrix == 3rd rank tensor
+ 3D matrices are made up from a "list" of (2D) matrices
- and so on...
> However, when people say "tensor" they are usually referring to the higher-rank tensors (2nd and above).
### Code for Matrices and Tensors
```python
# Matrix
M = np.arange(4 * 2).reshape((4, 2))
display(M)
```
```python
# 3D Tensor
T_3d = np.arange(4 * 2 * 3).reshape((4, 2, 3))
display(T_3d)
```
### Math with Tensors
#### Addition
The addition of matrices is straightforward: Just add corresponding elements. In order to add two matrices $A$ and $B$, they must have the same number of rows and columns:
$\begin{bmatrix}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{bmatrix}
+
\begin{bmatrix}
b_{11} & b_{12} \\
b_{21} & b_{22}
\end{bmatrix}
=
\begin{bmatrix}
a_{11} + b_{11} & a_{12} + b_{12} \\
a_{21} + b_{21} & a_{22} + b_{22}
\end{bmatrix}
$
```python
A = np.array([[1, 2], [3, 4]])
B = np.array([[4, 3], [2, 1]])
display(A)
display(B)
```
```python
A + B
```
#### Dot-Product
> Just as there were different notions of "multiplication" for vectors, so too there are different notions of multiplication for matrices.
Very often when people talk about multiplying matrices they'll mean the dot-product:
$$
\begin{equation}
\begin{bmatrix}
a_{1,1} & a_{1,2} \\
a_{2,1} & a_{2,2}
\end{bmatrix}
\cdot
\begin{bmatrix}
b_{1,1} & b_{1,2} \\
b_{2,1} & b_{2,2}
\end{bmatrix}
=
\begin{bmatrix}
a_{1,1}\times b_{1,1} + a_{1,2}\times b_{2,1} & a_{1,1}\times b_{1,2} + a_{1,2}\times b_{2,2} \\
a_{2,1}\times b_{1,1} + a_{2,2}\times b_{2,1} & a_{2,1}\times b_{1,2} + a_{2,2}\times b_{2,2}
\end{bmatrix}
\end{equation}
$$
Take the entries in each *row* of the left matrix and multiply them, respectively, by the entries in each *column* of the right matrix, and then add them up. This is the product we calculated above with our two vectors!
We can multiply in NumPy in the same way of doing a dot product with vectors.
```python
A = np.array([[1,2,3],[4,5,6]])
B = np.array([[11,22],[33,44],[55,66]])
# Different ways to do the same dot product
AB = np.dot(A,B)
AB = A.dot(B)
AB = A @ B
```
Observe also that in order to be able to perform the dot product on two matrices A and B, **the number of columns of A must equal the number of rows of B**.
Also, **the number of rows of the product matrix will equal the number of rows of A, and the number of columns of the product matrix will equal the number of columns of B**.
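A quick way to see these shape rules in action (the shapes below are arbitrary examples) is to check the `.shape` attribute of a few products:
```python
import numpy as np

A = np.ones((2, 3))      # 2 rows, 3 columns
B = np.ones((3, 4))      # 3 rows, 4 columns

print((A @ B).shape)     # (2, 4): rows of A by columns of B

try:
    B @ A                # inner dimensions (4 and 2) don't match
except ValueError as err:
    print('Incompatible shapes:', err)
```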
> **A note about vectors and matrices**
>
> Matrix dot-multiplication is NOT commutative! In general, $AB \neq BA$.
>
> Strictly speaking, this is true for vectors as well. Above, we multiplied the *row*-vector $(a, b)$ by the *column*-vector $(c, d)$. A row-vector is simply a matrix with only one row; a column-vector is simply a matrix with only one column.
>
> What would be the result of multiplying the column-vector $(c, d)$ on the left by the row-vector $(a, b)$ on the right?
>
> $$\begin{equation}
\begin{bmatrix}
c \\
d
\end{bmatrix}
\space
\begin{bmatrix}
a & b
\end{bmatrix}
=
\begin{bmatrix}
ca & cb \\
da & db
\end{bmatrix}
\end{equation}$$
##### Exercise
Illustrate this difference between $\begin{bmatrix} a & b \end{bmatrix} \begin{bmatrix} c \\ d \end{bmatrix}$ and $\begin{bmatrix} c \\ d \end{bmatrix} \begin{bmatrix} a & b \end{bmatrix}$ in Python for $a=2, b=3, c=4$, and $d=5$.
```python
# Your code here
```
<details>
<summary>Solution</summary>
```python
ab = np.array([[2, 3]])
cd = np.array([[4],[5]])
display(ab @ cd)
display(cd @ ab)
```
</details>
# More with Matrices
In order to solve an equation like $A\vec{x} = \vec{c}$ for $\vec{x}$, we can't very well divide $\vec{c}$ by $A$! But there is a notion of matrix _inversion_ that is relevant here, which is analogous to multiplicative inversion. If we have an equation like $2x = 10$, we can simply multiply both sides by the multiplicative inverse of the coefficient of $x$, viz. $2^{-1}$. And here the point, of course, is that $2^{-1} \times 2 = 1$.
We'll see there is an equivalent _inverse_ for matrices. But first, we should discuss the identity matrix.
## Identity Matrices
> The **identity matrix** $I$ is the matrix containing 1's along the main diagonal (upper-left to lower-right) and 0's everywhere else:
>
> $$\begin{align}
I_3 &= \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \\
\\
I_5 &= \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1 \\
\end{bmatrix}
\end{align}$$
The identity matrix is a special matrix since any (compatible) matrix $A$ multiplied by it is the matrix itself: $AI = A$ and $IA = A$
### Identity Matrices in NumPy
```python
I5 = np.eye(5)
print(I5)
```
```python
A = 42*np.ones(25).reshape(5,5)
A
```
```python
print(I5 @ A)
print()
print(A @ I5)
print()
is_equal = (I5 @ A) == (A @ I5)
print('Both are the same:')
print(is_equal)
```
## Inverse Matrices
In the higher-dimensional case, what we can do is to left-multiply both sides by the _inverse matrix_ of A, denoted $A^{-1}$, and here the point is that the dot-product $A^{-1}A = I$, where $I$ is the identity matrix.
Thus when we have a matrix equation $A\vec{x} = \vec{c}$, we can calculate the solution by multiplying both sides by $A^{-1}$:
$$\begin{align}
A\vec{x} &= \vec{c}\\
A^{-1}A\vec{x} &= A^{-1}\vec{c}\\
I \vec{x} &= A^{-1}\vec{c}\\
\vec{x} &= A^{-1}\vec{c}\\
\end{align}$$
```python
# Your code here
A = np.array([
[ 1,-2, 3],
[ 2,-5,10],
[ 0, 0, 1]
])
```
```python
np.linalg.inv(A)
```
##### Exercise
You can produce the inverse of a matrix in `numpy` by calling `np.linalg.inv()`.
Using `numpy` arrays, find the inverse $A^{-1}$ of the matrix below:
$$A = \begin{bmatrix} 1 & -2 & 3 \\ 2 & -5 & 10 \\ 0 & 0 & 1 \end{bmatrix}$$
> Confirm this is the inverse by multiplying the matrices
```python
# Your code here
```
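One possible solution, mirroring the pattern of the earlier exercise (the matrix here is the same one used in the cells above):
<details>
<summary>Solution</summary>
```python
A = np.array([
    [1, -2, 3],
    [2, -5, 10],
    [0, 0, 1]
])
A_inv = np.linalg.inv(A)
display(A_inv)

# Confirm the result: both products should give the identity matrix
display(A @ A_inv)
display(A_inv @ A)
```
</details>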
# Solving a System of Linear Equations
Solving a system of equations can take a lot of work
$$ \begin{align}
x - 2y + 3z &= 9 \\
2x - 5y + 10z &= 4 \\
6z &= 0
\end{align}$$
## Representing the System with Matrices
But we can make it easier by writing it in matrix form
$$
\begin{pmatrix}
1 & -2 & 3 \\
2 & -5 & 10 \\
0 & 0 & 6
\end{pmatrix}
\cdot
\begin{pmatrix}
x \\
y \\
z
\end{pmatrix}
=
\begin{pmatrix}
9 \\
4 \\
0
\end{pmatrix}
$$
We can think of this in the abstract:
$$ A \cdot X = B $$
$$ A^{-1} \cdot A \cdot X = A^{-1} \cdot B $$
$$ I \cdot X = A^{-1} \cdot B $$
$$ X = A^{-1} \cdot B $$
### Coding It with NumPy
Let's try solving the system with matrices:
$$ \begin{align}
x - 2y + 3z &= 9 \\
2x - 5y + 10z &= 4 \\
6z &= 0
\end{align}$$
First define the system's matrices:
$$
\begin{pmatrix}
1 & -2 & 3 \\
2 & -5 & 10 \\
0 & 0 & 6
\end{pmatrix}
\cdot
\begin{pmatrix}
x \\
y \\
z
\end{pmatrix}
=
\begin{pmatrix}
9 \\
4 \\
0
\end{pmatrix}
$$
to
$$ A \cdot \vec{X} = B $$
```python
A = np.array([
[1, -2, 3],
[2, -5, 10],
[0, 0, 6]
])
```
```python
B = np.array([9,4,0]).reshape(3,1)
```
```python
print('A:')
print(A)
print()
print('B:')
print(B)
```
Find the inverse
```python
A_inv = np.linalg.inv(A)
print(A_inv)
```
Getting the solution
```python
solution = A_inv @ B
print(solution)
```
### Solve It Faster with NumPy's `linalg.solve()`
NumPy's ```linalg``` module has a ```.solve()``` method that you can use to solve a system of linear equations!
In particular, it will solve for the vector $\vec{x}$ in the equation $A\vec{x} = b$. You should know that, "under the hood", the ```.solve()``` method does NOT compute the inverse matrix $A^{-1}$. This is largely because of the enormous expense of directly computing a matrix inverse, which takes $\mathcal{O}(n^3)$ time.
Check out [this discussion](https://stackoverflow.com/questions/31256252/why-does-numpy-linalg-solve-offer-more-precise-matrix-inversions-than-numpy-li) on stackoverflow for more on the differences between using `.solve()` and `.inv()`.
And check out the documentation for ```.solve()``` [here](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.solve.html).
```python
# Let's use the .solve() method to solve this system of equations
A = np.array([
[1, -2, 3],
[2, -5, 10],
[0, 0, 6]
])
B = np.array([9,4,0]).reshape(3,1)
```
```python
np.linalg.solve(A, B)
```
Again, we could just solve our matrix equation by calculating the inverse of our matrix $A$ and then multiplying by $B$:
```python
np.linalg.inv(A).dot(B)
```
But the time difference is striking:
```python
%timeit np.linalg.inv(A).dot(B)
```
```python
%timeit np.linalg.solve(A, B)
```
Even for a (tiny!) 3x3 matrix, the cost of computing the inverse directly is evident.
# Solving for the Line of Best Fit: Linear Regression
Consider a typical dataset and the associated multiple linear regression problem. We have many observations (rows), each of which consists of a set of values both for the predictors (columns, i.e. the independent variables) and for the target (the dependent variable).
We can think of the values of the independent variables as our matrix $A$ of coefficients and of the values of the dependent variable as our output vector $\vec{c}$.
The task here is, in effect, to solve for $\vec{\beta}$, where we have that $A\vec{\beta} = \vec{c}$, except in general we'll have more rows than columns. This is why we won't in general be computing matrix inverses. (They're computationally expensive, anyway.) This is also why we have a problem requiring not a direct solution but rather an optimization--in our case, a best-fit line.
Using $z$ for our independent variables and $y$ for our dependent variable, we have:
\begin{equation}
\beta_1\begin{bmatrix}
z_{1,1} \\
. \\
. \\
. \\
z_{m,1}
\end{bmatrix} +
... + \beta_n\begin{bmatrix}
z_{1,n} \\
. \\
. \\
. \\
z_{m,n}
\end{bmatrix} \approx \begin{bmatrix}
y_1 \\
. \\
. \\
. \\
y_m
\end{bmatrix}
\end{equation}
## Linear Algebra Solves the Best-Fit Line Problem
If we have a matrix of predictors $X$ and a target column $y$, we can express $\hat{\beta}$, the vector of best-fit coefficients, as follows:
$\large\hat{\beta} = (X^TX)^{-1}X^Ty$.
$(X^TX)^{-1}X^T$ is sometimes called the *pseudo-inverse* of $X$. We'll have more to say about this in a future lesson when we talk about the singular value decomposition.
Let's see this in action:
```python
preds = np.array(list(zip(np.random.normal(size=10),
np.array(np.random.normal(size=10, loc=2)))))
target = np.array(np.random.exponential(size=10))
```
```python
preds
```
```python
np.linalg.inv(preds.T.dot(preds)).dot(preds.T).dot(target)
```
```python
LinearRegression(fit_intercept=False).fit(preds, target).coef_
```
| 0cbd083d9ce774dfda6a95d67d544598e6d27cf6 | 40,086 | ipynb | Jupyter Notebook | Phase_3/ds-linear_algebra-main/linear_algebra.ipynb | VaneezaAhmad/ds-east-042621-lectures | 334f98bb4bd4f8020055e95994764b1587a809c0 | [
"MIT"
]
| null | null | null | Phase_3/ds-linear_algebra-main/linear_algebra.ipynb | VaneezaAhmad/ds-east-042621-lectures | 334f98bb4bd4f8020055e95994764b1587a809c0 | [
"MIT"
]
| null | null | null | Phase_3/ds-linear_algebra-main/linear_algebra.ipynb | VaneezaAhmad/ds-east-042621-lectures | 334f98bb4bd4f8020055e95994764b1587a809c0 | [
"MIT"
]
| 20 | 2021-04-27T19:27:58.000Z | 2021-06-16T15:08:50.000Z | 25.679693 | 4,966 | 0.523674 | true | 6,742 | Qwen/Qwen-72B | 1. YES
2. YES | 0.79053 | 0.865224 | 0.683986 | __label__eng_Latn | 0.903538 | 0.42746 |
# Scale bijectors and LinearOperator
This reading is an introduction to scale bijectors, as well as the `LinearOperator` class, which can be used with them.
```python
!pip install tensorflow=='2.2.0'
```
Collecting tensorflow==2.2.0
Downloading tensorflow-2.2.0-cp37-cp37m-manylinux2010_x86_64.whl (516.2 MB)
[K |████████████████████████████████| 516.2 MB 3.8 kB/s
[?25hRequirement already satisfied: astunparse==1.6.3 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.2.0) (1.6.3)
Collecting tensorflow-estimator<2.3.0,>=2.2.0
Downloading tensorflow_estimator-2.2.0-py2.py3-none-any.whl (454 kB)
[K |████████████████████████████████| 454 kB 50.5 MB/s
[?25hRequirement already satisfied: protobuf>=3.8.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.2.0) (3.17.3)
Requirement already satisfied: wheel>=0.26 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.2.0) (0.37.1)
Requirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.2.0) (1.1.0)
Requirement already satisfied: six>=1.12.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.2.0) (1.15.0)
Requirement already satisfied: numpy<2.0,>=1.16.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.2.0) (1.19.5)
Requirement already satisfied: google-pasta>=0.1.8 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.2.0) (0.2.0)
Requirement already satisfied: opt-einsum>=2.3.2 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.2.0) (3.3.0)
Collecting tensorboard<2.3.0,>=2.2.0
Downloading tensorboard-2.2.2-py3-none-any.whl (3.0 MB)
[K |████████████████████████████████| 3.0 MB 43.2 MB/s
[?25hRequirement already satisfied: wrapt>=1.11.1 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.2.0) (1.13.3)
Requirement already satisfied: grpcio>=1.8.6 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.2.0) (1.43.0)
Collecting h5py<2.11.0,>=2.10.0
Downloading h5py-2.10.0-cp37-cp37m-manylinux1_x86_64.whl (2.9 MB)
[K |████████████████████████████████| 2.9 MB 25.9 MB/s
[?25hRequirement already satisfied: keras-preprocessing>=1.1.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.2.0) (1.1.2)
Requirement already satisfied: scipy==1.4.1 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.2.0) (1.4.1)
Requirement already satisfied: absl-py>=0.7.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.2.0) (1.0.0)
Collecting gast==0.3.3
Downloading gast-0.3.3-py2.py3-none-any.whl (9.7 kB)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow==2.2.0) (1.8.1)
Requirement already satisfied: requests<3,>=2.21.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow==2.2.0) (2.23.0)
Requirement already satisfied: google-auth<2,>=1.6.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow==2.2.0) (1.35.0)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.7/dist-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow==2.2.0) (0.4.6)
Requirement already satisfied: setuptools>=41.0.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow==2.2.0) (57.4.0)
Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.7/dist-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow==2.2.0) (1.0.1)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.7/dist-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow==2.2.0) (3.3.6)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard<2.3.0,>=2.2.0->tensorflow==2.2.0) (4.2.4)
Requirement already satisfied: rsa<5,>=3.1.4 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard<2.3.0,>=2.2.0->tensorflow==2.2.0) (4.8)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard<2.3.0,>=2.2.0->tensorflow==2.2.0) (0.2.8)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.7/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.3.0,>=2.2.0->tensorflow==2.2.0) (1.3.1)
Requirement already satisfied: importlib-metadata>=4.4 in /usr/local/lib/python3.7/dist-packages (from markdown>=2.6.8->tensorboard<2.3.0,>=2.2.0->tensorflow==2.2.0) (4.10.1)
Requirement already satisfied: typing-extensions>=3.6.4 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata>=4.4->markdown>=2.6.8->tensorboard<2.3.0,>=2.2.0->tensorflow==2.2.0) (3.10.0.2)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata>=4.4->markdown>=2.6.8->tensorboard<2.3.0,>=2.2.0->tensorflow==2.2.0) (3.7.0)
Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.7/dist-packages (from pyasn1-modules>=0.2.1->google-auth<2,>=1.6.3->tensorboard<2.3.0,>=2.2.0->tensorflow==2.2.0) (0.4.8)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard<2.3.0,>=2.2.0->tensorflow==2.2.0) (3.0.4)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard<2.3.0,>=2.2.0->tensorflow==2.2.0) (1.24.3)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard<2.3.0,>=2.2.0->tensorflow==2.2.0) (2.10)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard<2.3.0,>=2.2.0->tensorflow==2.2.0) (2021.10.8)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.3.0,>=2.2.0->tensorflow==2.2.0) (3.2.0)
Installing collected packages: tensorflow-estimator, tensorboard, h5py, gast, tensorflow
Attempting uninstall: tensorflow-estimator
Found existing installation: tensorflow-estimator 2.7.0
Uninstalling tensorflow-estimator-2.7.0:
Successfully uninstalled tensorflow-estimator-2.7.0
Attempting uninstall: tensorboard
Found existing installation: tensorboard 2.7.0
Uninstalling tensorboard-2.7.0:
Successfully uninstalled tensorboard-2.7.0
Attempting uninstall: h5py
Found existing installation: h5py 3.1.0
Uninstalling h5py-3.1.0:
Successfully uninstalled h5py-3.1.0
Attempting uninstall: gast
Found existing installation: gast 0.4.0
Uninstalling gast-0.4.0:
Successfully uninstalled gast-0.4.0
Attempting uninstall: tensorflow
Found existing installation: tensorflow 2.7.0
Uninstalling tensorflow-2.7.0:
Successfully uninstalled tensorflow-2.7.0
Successfully installed gast-0.3.3 h5py-2.10.0 tensorboard-2.2.2 tensorflow-2.2.0 tensorflow-estimator-2.2.0
```python
!pip install tensorflow_probability=='0.10.0'
```
Collecting tensorflow_probability==0.10.0
Downloading tensorflow_probability-0.10.0-py2.py3-none-any.whl (3.5 MB)
[K |████████████████████████████████| 3.5 MB 5.2 MB/s
[?25hRequirement already satisfied: gast>=0.3.2 in /usr/local/lib/python3.7/dist-packages (from tensorflow_probability==0.10.0) (0.3.3)
Requirement already satisfied: numpy>=1.13.3 in /usr/local/lib/python3.7/dist-packages (from tensorflow_probability==0.10.0) (1.19.5)
Requirement already satisfied: cloudpickle>=1.2.2 in /usr/local/lib/python3.7/dist-packages (from tensorflow_probability==0.10.0) (1.3.0)
Requirement already satisfied: six>=1.10.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow_probability==0.10.0) (1.15.0)
Requirement already satisfied: decorator in /usr/local/lib/python3.7/dist-packages (from tensorflow_probability==0.10.0) (4.4.2)
Installing collected packages: tensorflow-probability
Attempting uninstall: tensorflow-probability
Found existing installation: tensorflow-probability 0.15.0
Uninstalling tensorflow-probability-0.15.0:
Successfully uninstalled tensorflow-probability-0.15.0
Successfully installed tensorflow-probability-0.10.0
```python
import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions
tfb = tfp.bijectors
print("TF version:", tf.__version__)
print("TFP version:", tfp.__version__)
from IPython.display import Image
```
TF version: 2.2.0
TFP version: 0.10.0
## Introduction
You have now seen how bijectors can be used to transform tensors and tensor spaces. Until now, you've only seen this in the scalar case, where the bijector acts on a single value. When the tensors you fed into the bijectors had multiple components, the bijector acted on each component individually by applying batch operations to scalar values. For probability distributions, this corresponds to a scalar event space.
However, bijectors can also act on higher-dimensional space. You've seen, for example, the multivariate normal distribution, for which samples are tensors with more than one component. You'll need higher-dimensional bijectors to work with such distributions. In this reading, you'll see how bijectors can be used to generalise scale transformations to higher dimensions. You'll also see the `LinearOperator` class, which you can use to construct highly general scale bijectors. In this reading, you'll walk through the code, and we'll use figure examples to demonstrate these transformations.
This reading contains many images, as this allows you to visualise how a space is transformed. For this reason, the examples are limited to two dimensions, since these allow easy plots. However, these ideas generalise naturally to higher dimensions. Let's start by creating a point that is randomly distributed across the unit square $[0, 1] \times [0, 1]$:
```python
# Create the base distribution and a single sample
uniform = tfd.Uniform(low=[0.0, 0.0], high=[1.0, 1.0], name='uniform2d')
x = uniform.sample()
x
```
<tf.Tensor: shape=(2,), dtype=float32, numpy=array([0.4135183, 0.5548992], dtype=float32)>
We will be applying linear transformations to this data. To get a feel for how these transformations work, we show ten example sample points, and plot them, as well as the domain of the underlying distribution:
```python
# Run this cell to download and view a figure to show example data points
!wget -q -O x.png --no-check-certificate "https://docs.google.com/uc?export=download&id=1DLqzh7xcjM7BS3C_QmgeF1xET2sXgMG0"
Image("x.png", width=500)
```
Each of the ten points is hence represented by a two-dimensional vector. Let $\mathbf{x} = [x_1, x_2]^T$ be one of these points. Then scale bijectors are linear transformations of $\mathbf{x}$, which can be represented by a $2 \times 2$ matrix $B$. The forward bijection to $\mathbf{y} = [y_1, y_2]^T$ is
$$
\mathbf{y}
=
\begin{bmatrix}
y_1 \\ y_2
\end{bmatrix}
= B \mathbf{x}
= \begin{bmatrix}
b_{11} & b_{12} \\
b_{21} & b_{22} \\
\end{bmatrix}
\begin{bmatrix}
x_1 \\ x_2
\end{bmatrix}
$$
This is important to remember: any two-dimensional scale bijector can be represented by a $2 \times 2$ matrix. For this reason, we'll sometimes use the term "matrix" to refer to the bijector itself. You'll be seeing how these points and domain are transformed under different bijectors in two dimensions.
## The `ScaleMatvec` bijectors
### The `ScaleMatvecDiag` bijector
We'll start with a simple scale bijector created using the `ScaleMatvecDiag` class:
```python
# Create the ScaleMatvecDiag bijector
bijector = tfb.ScaleMatvecDiag(scale_diag=[1.5, -0.5])
```
which creates a bijector represented by the diagonal matrix
$$ B =
\begin{bmatrix}
1.5 & 0 \\
0 & -0.5 \\
\end{bmatrix}.
$$
We can apply this to the data using `y = bijector(x)` for each of the ten points. This transforms the data as follows:
```python
# Run this cell to download and view a figure to illustrate the transformation
!wget -q -O diag.png --no-check-certificate "https://docs.google.com/uc?export=download&id=1sgfZ_Qzdd2v7CErP2zIk04p6R6hUW7RR"
Image("diag.png", width=500)
```
You can see what happened here: the first coordinate is multiplied by 1.5 while the second is multiplied by -0.5, flipping it through the horizontal axis.
```python
# Apply the bijector to the sample point
y = bijector(x)
y
```
<tf.Tensor: shape=(2,), dtype=float32, numpy=array([ 0.62027746, -0.2774496 ], dtype=float32)>
### The `ScaleMatvecTriL` bijector
In the previous example, the bijector matrix was diagonal, which essentially performs an independent scale operation on each of the two dimensions. The domain under the bijection remains rectangular. However, not all scale transformations have to be like this. With a non-diagonal matrix, the domain will transform to a quadrilateral. One way to do this is by using the `tfb.ScaleMatvecTriL` class, which implements a bijection based on a lower-triangular matrix. For example, to implement the lower-triangular matrix
$$ B =
\begin{bmatrix}
-1 & 0 \\
-1 & -1 \\
\end{bmatrix}
$$
you can use the `tfb.ScaleMatvecTriL` bijector as follows:
```python
# Create the ScaleMatvecTriL bijector
bijector = tfb.ScaleMatvecTriL(scale_tril=[[-1., 0.],
[-1., -1.]])
```
```python
# Apply the bijector to the sample x
y = bijector(x)
y
```
<tf.Tensor: shape=(2,), dtype=float32, numpy=array([-0.4135183, -0.9684175], dtype=float32)>
A graphical overview of this change is:
```python
# Run this cell to download and view a figure to illustrate the transformation
!wget -q -O lower_triangular.png --no-check-certificate "https://docs.google.com/uc?export=download&id=1eMYwPzMVpmt1FYscplu7RRn1S4gmFo5B"
Image("lower_triangular.png", width=500)
```
## Inverse and composition
Scale transformations always map the point $[0, 0]$ to itself and are only one particular class of bijectors. As you saw before, you can create more complicated bijections by composing one with another. This works just like you would expect. For example, you can compose a scale transformation with a shift to the left (by one unit) as follows:
```python
# Create a scale and shift bijector
scale_bijector = tfb.ScaleMatvecTriL(scale_tril=[[-1., 0.],
[-1., -1.]])
shift_bijector = tfb.Shift([-1., 0.])
bijector = shift_bijector(scale_bijector)
```
```python
# Apply the bijector to the sample x
y = bijector(x)
y
```
<tf.Tensor: shape=(2,), dtype=float32, numpy=array([-1.4135183, -0.9684175], dtype=float32)>
which has the expected result:
```python
# Run this cell to download and view a figure to illustrate the transformation
!wget -q -O scale_and_shift.png --no-check-certificate "https://docs.google.com/uc?export=download&id=1iucwJlG2ropvJOkRfBMgEpuFNpYa_JH6"
Image("scale_and_shift.png", width=500)
```
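Feeding one bijector into another, as above, is one way to compose bijectors; the same composition can also be written explicitly with `tfb.Chain`. A minimal sketch, reusing `shift_bijector`, `scale_bijector` and `x` from the cells above (note that `Chain` applies the *last* bijector in the list first, so the scale acts before the shift, matching the result above):
```python
# Equivalent composition using tfb.Chain: scale first, then shift
chained = tfb.Chain([shift_bijector, scale_bijector])
chained(x)
```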
Furthermore, bijectors are always invertible (with just a few special cases, see e.g. [`Absolute Value`](https://www.tensorflow.org/probability/api_docs/python/tfp/bijectors/AbsoluteValue)), and these scale transformations are no exception. For example, running
```python
# Apply the inverse transformation to the image of x
bijector = tfb.ScaleMatvecTriL(scale_tril=[[-1., 0.],
[-1., -1.]])
y = bijector.inverse(bijector(x))
```
recovers `x`:
```python
# Run this cell to download and view a figure to illustrate the transformation
!wget -q -O inverse.png --no-check-certificate "https://docs.google.com/uc?export=download&id=1CHCkSfz6EnOYiZaw6vGZ_s6BzyP1NK1X"
Image("inverse.png", width=500)
```
so that the original and transformed data is the same.
```python
# Check that all y and x values are the same
tf.reduce_all(y == x)
```
<tf.Tensor: shape=(), dtype=bool, numpy=True>
## The `LinearOperator` class and `ScaleMatvecLinearOperator` bijector
The examples you just saw used the `ScaleMatvecDiag` and `ScaleMatvecTriL` bijectors, whose transformations can be represented by diagonal and lower-triangular matrices respectively. These are convenient since it's easy to check whether such matrices are invertible (a requirement for a bijector). However, this comes at a cost of generality: there are acceptable bijectors whose matrices are not diagonal or lower-triangular. To construct these more general bijectors, you can use the `ScaleMatvecLinearOperator` class, which operates on instances of `tf.linalg.LinearOperator`.
The `LinearOperator` is a class that allows the creation and manipulation of linear operators in TensorFlow. It's rare to call the class directly, but its subclasses represent many of the common linear operators. It's programmed in a way to have computational advantages when working with big linear operators, although we won't discuss these here. What matters now is that we can use these linear operators to define bijectors using the `ScaleMatvecLinearOperator` class. Let's see how this works.
### The `LinearOperatorDiag` class
First, let's use this framework to recreate our first bijector, represented by the diagonal matrix
$$ B =
\begin{bmatrix}
1.5 & 0 \\
0 & -0.5 \\
\end{bmatrix}.
$$
You can do this using the `ScaleMatvecLinearOperator` as follows. First, we'll create the linear operator that represents the scale transformation using
```python
scale = tf.linalg.LinearOperatorDiag(diag=[1.5, -0.5])
```
where `LinearOperatorDiag` is one of the subclasses of `LinearOperator`. As the name suggests, it implements a diagonal matrix. We then use this to create the bijector using the `tfb.ScaleMatvecLinearOperator`:
```python
# Create the ScaleMatvecLinearOperator bijector
bijector = tfb.ScaleMatvecLinearOperator(scale)
```
This bijector is the same as the first one above:
```python
# Run this cell to download and view a figure to illustrate the transformation
!wget -q -O linear_operator_diag.png --no-check-certificate "https://docs.google.com/uc?export=download&id=1KaCJl28Thp6NjxspG3pq251vDJrmDd97"
Image("linear_operator_diag.png", width=500)
```
```python
# Apply the bijector to the sample x
y = bijector(x)
y
```
<tf.Tensor: shape=(2,), dtype=float32, numpy=array([ 0.62027746, -0.2774496 ], dtype=float32)>
### The `LinearOperatorFullMatrix` class
We can also use this framework to create a bijector represented by a custom matrix. Suppose we have the matrix
$$ B =
\begin{bmatrix}
0.5 & 1.5 \\
1.5 & 0.5 \\
\end{bmatrix}
$$
which is neither diagonal nor lower-triangular. We can implement a bijector for it using the `ScaleMatvecLinearOperator` class by using another subclass of `LinearOperator`, namely the `LinearOperatorFullMatrix`, as follows:
```python
# Create a ScaleMatvecLinearOperator bijector
B = [[0.5, 1.5],
[1.5, 0.5]]
scale = tf.linalg.LinearOperatorFullMatrix(matrix=B)
bijector = tfb.ScaleMatvecLinearOperator(scale)
```
which leads to the following transformation:
```python
# Run this cell to download and view a figure to illustrate the transformation
!wget -q -O linear_operator_full.png --no-check-certificate "https://docs.google.com/uc?export=download&id=1Zk5lp7-VTwmX5r0yPAqVGGzWIgYTjJIJ"
Image("linear_operator_full.png", width=500)
```
```python
# Apply the bijector to the sample x
y = bijector(x)
y
```
<tf.Tensor: shape=(2,), dtype=float32, numpy=array([1.039108 , 0.8977271], dtype=float32)>
### Batch operations and broadcasting
As you've seen before, it's important to be very careful with shapes in TensorFlow Probability. That's because there are three possible components to a shape: the event shape (dimensionality of the random variable), sample shape (dimensionality of the samples drawn) and batch shape (multiple distributions can be considered in one object). This subtlety is especially important for bijectors, but can be harnessed to make powerful, and very computationally efficient, transformations of spaces. Let's examine this a little bit in this section.
In the previous examples, we applied a bijector to a two-dimensional data point $\mathbf{x}$ to create a two-dimensional data point $\mathbf{y}$. This was done using $\mathbf{y} = B \mathbf{x}$ where $B$ is the $2 \times 2$ matrix that represents the scale bijector. This is simply matrix multiplication. To implement this, we created a tensor `x` with `x.shape == [2]` and a bijector using a matrix of shape `B.shape == [2, 2]`. This generalises straightforwardly to higher dimensions: if $\mathbf{x}$ is $n$-dimensional, the bijection matrix must be of shape $n \times n$ for some $n>0$. In this case, $\mathbf{y}$ is $n$-dimensional.
But what if you wanted to apply the same bijection to ten $\mathbf{x}$ values at once? You can then arrange all these samples into a single tensor `x` with `x.shape == [10, 2]` and create a bijector as usual, with a matrix of shape `B.shape == [2, 2]`.
```python
# Create 10 samples from the uniform distribution
x = uniform.sample(10)
x
```
<tf.Tensor: shape=(10, 2), dtype=float32, numpy=
array([[0.29726505, 0.4838661 ],
[0.07308066, 0.7001848 ],
[0.05950093, 0.7649404 ],
[0.06935227, 0.10923171],
[0.33912015, 0.45880222],
[0.51004636, 0.72990084],
[0.67883885, 0.5524392 ],
[0.6320689 , 0.9856808 ],
[0.9687066 , 0.80901885],
[0.87916744, 0.99455404]], dtype=float32)>
```python
# Recreate the diagonal matrix transformation with LinearOperatorDiag
scale = tf.linalg.LinearOperatorDiag(diag=[1.5, -0.5])
scale.to_dense()
```
<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
array([[ 1.5, 0. ],
[ 0. , -0.5]], dtype=float32)>
```python
# Create the ScaleMatvecLinearOperator bijector
bijector = tfb.ScaleMatvecLinearOperator(scale)
```
```python
# Apply the bijector to the 10 samples
y = bijector(x)
y
```
<tf.Tensor: shape=(10, 2), dtype=float32, numpy=
array([[ 0.44589758, -0.24193305],
[ 0.10962099, -0.3500924 ],
[ 0.0892514 , -0.3824702 ],
[ 0.1040284 , -0.05461586],
[ 0.5086802 , -0.22940111],
[ 0.76506954, -0.36495042],
[ 1.0182583 , -0.2762196 ],
[ 0.9481033 , -0.4928404 ],
[ 1.4530599 , -0.40450943],
[ 1.3187511 , -0.49727702]], dtype=float32)>
This gives us the same plot we had before:
```python
# Run this cell to download and view a figure to illustrate the transformation
!wget -q -O diag.png --no-check-certificate "https://docs.google.com/uc?export=download&id=1sgfZ_Qzdd2v7CErP2zIk04p6R6hUW7RR"
Image("diag.png", width=500)
```
For matrix multiplication to work, we need `B.shape[-1] == x.shape[-1]`, and the output tensor has last dimension `y.shape[-1] == B.shape[-2]`. For invertibility, we also need the matrix `B` to be square. Any dimensions except for the last one on `x` become sample/batch dimensions: the operation is broadcast across these dimensions as we are used to. It's probably easiest to understand through a table of values, where `s`, `b`, `m`, and `n` are positive integers and `m != n`:
| `B.shape` | `x.shape` | `y.shape` |
| ----- | ----- | ----- |
| `(2, 2)` | `(2)` | `(2)` |
| `(n, n)` | `(m)` | `ERROR` |
| `(n, n)` | `(n)` | `(n)` |
| `(n, n)` | `(s, n)` | `(s, n)` |
| `(b, n, n)` | `(n)` | `(b, n)` |
| `(b, n, n)` | `(b, n)` | `(b, n)` |
| `(b, n, n)` | `(s, 1, n)` | `(s, b, n)` |
These rules and the ability to broadcast make batch operations easy.
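As a quick sanity check of a couple of rows in this table (the shapes below are arbitrary examples, using the `tf` and `tfb` imports from the top of this notebook), the bijector here is built from a batch of three $2 \times 2$ diagonal matrices and applied to inputs with and without an extra sample dimension:
```python
# Check two rows of the table with b = 3 batch matrices of size n = 2
batch_scale = tf.linalg.LinearOperatorDiag(diag=tf.ones((3, 2)))  # B.shape == (3, 2, 2)
batch_bijector = tfb.ScaleMatvecLinearOperator(batch_scale)

print(batch_bijector(tf.ones((2,))).shape)       # (n)       -> (b, n)    == (3, 2)
print(batch_bijector(tf.ones((5, 1, 2))).shape)  # (s, 1, n) -> (s, b, n) == (5, 3, 2)
```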
We can also easily apply multiple bijectors. Suppose we want to apply both these bijectors:
$$
\begin{align}
B_1 =
\begin{bmatrix}
1 & 0 \\
0 & -1 \\
\end{bmatrix}
& \qquad
B_2 =
\begin{bmatrix}
-1 & 0 \\
0 & 1 \\
\end{bmatrix}.
\end{align}
$$
We can do this using the batched bijector
```python
# Create a batched ScaleMatvecLinearOperator bijector
diag = tf.stack((tf.constant([1, -1.]),
tf.constant([-1, 1.]))) # (2, 2)
scale = tf.linalg.LinearOperatorDiag(diag=diag) # (2, 2, 2)
bijector = tfb.ScaleMatvecLinearOperator(scale=scale)
```
and we can broadcast the samples across both bijectors in the batch, as well as broadcasting the bijectors across all samples. For this, we need to include a batch dimension in the samples Tensor.
```python
# Add a singleton batch dimension to x
x = tf.expand_dims(x, axis=1)
x.shape
```
TensorShape([10, 1, 2])
```python
# Apply the batched bijector to x
y = bijector(x)
y.shape # (S, B, E) shape semantics
```
TensorShape([10, 2, 2])
which gives two batches of forward values for each sample:
```python
# Run this cell to download and view a figure to illustrate the transformation
!wget -q -O linear_operator_batch.png --no-check-certificate "https://docs.google.com/uc?export=download&id=1obgl3sOIYsH_ijxxkhgBu4miBxq23fny"
Image("linear_operator_batch.png", width=500)
```
## Conclusion
In this reading, you saw how to construct scale bijectors in two dimensions using the various `ScaleMatvec` classes. You also had a quick introduction to the general `LinearOperators` class and some of its subclasses. Finally, you saw how batching makes large computations clean and efficient. Be careful to keep track of the tensor shapes, as broadcasting and the difference between batch shapes and event shapes makes errors easy. Finally, note that these bijectors are still amenable to composition (via `Chain` or simply feeding one into another) and inversion, which retains the same syntax you're used to. Enjoy using this powerful tool!
### Further reading and resources
* `ScaleMatvec` bijectors:
  * https://www.tensorflow.org/probability/api_docs/python/tfp/bijectors/ScaleMatvecDiag
  * https://www.tensorflow.org/probability/api_docs/python/tfp/bijectors/ScaleMatvecLinearOperator
  * https://www.tensorflow.org/probability/api_docs/python/tfp/bijectors/ScaleMatvecLU
  * https://www.tensorflow.org/probability/api_docs/python/tfp/bijectors/ScaleMatvecTriL
* `LinearOperator` class (see also subclasses)
* https://www.tensorflow.org/api_docs/python/tf/linalg/LinearOperator
| 7d2fdafb4c28c42030e200e7ec309674cff4a22c | 219,937 | ipynb | Jupyter Notebook | Week3/Scale bijectors and LinearOperator.ipynb | stevensmiley1989/Prob_TF2_Examples | fa022e58a44563d09792070be5d015d0798ca00d | [
"MIT"
]
| null | null | null | Week3/Scale bijectors and LinearOperator.ipynb | stevensmiley1989/Prob_TF2_Examples | fa022e58a44563d09792070be5d015d0798ca00d | [
"MIT"
]
| null | null | null | Week3/Scale bijectors and LinearOperator.ipynb | stevensmiley1989/Prob_TF2_Examples | fa022e58a44563d09792070be5d015d0798ca00d | [
"MIT"
]
| null | null | null | 219,937 | 219,937 | 0.914057 | true | 8,052 | Qwen/Qwen-72B | 1. YES
2. YES | 0.743168 | 0.731059 | 0.543299 | __label__eng_Latn | 0.939124 | 0.100596 |
# Transformations, Eigenvectors, and Eigenvalues
Matrices and vectors are used together to manipulate spatial dimensions. This has a lot of applications, including the mathematical generation of 3D computer graphics, geometric modeling, and the training and optimization of machine learning algorithms. We're not going to cover the subject exhaustively here; but we'll focus on a few key concepts that are useful to know when you plan to work with machine learning.
## Linear Transformations
You can manipulate a vector by multiplying it with a matrix. The matrix acts as a function that operates on an input vector to produce a vector output. Specifically, matrix multiplications of vectors are *linear transformations* that transform the input vector into the output vector.
For example, consider this matrix ***A*** and vector ***v***:
$$ A = \begin{bmatrix}2 & 3\\5 & 2\end{bmatrix} \;\;\;\; \vec{v} = \begin{bmatrix}1\\2\end{bmatrix}$$
We can define a transformation ***T*** like this:
$$ T(\vec{v}) = A\vec{v} $$
To perform this transformation, we simply calculate the dot product by applying the *RC* rule; multiplying each row of the matrix by the single column of the vector:
$$\begin{bmatrix}2 & 3\\5 & 2\end{bmatrix} \cdot \begin{bmatrix}1\\2\end{bmatrix} = \begin{bmatrix}8\\9\end{bmatrix}$$
Here's the calculation in Python:
```python
import numpy as np
v = np.array([1,2])
A = np.array([[2,3],
[5,2]])
t = A@v
print (t)
```
In this case, both the input vector and the output vector have 2 components - in other words, the transformation takes a 2-dimensional vector and produces a new 2-dimensional vector; which we can indicate like this:
$$ T: \rm I\!R^{2} \to \rm I\!R^{2} $$
Note that the output vector may have a different number of dimensions from the input vector; so the matrix function might transform the vector from one space to another - or in notation, ${\rm I\!R}$<sup>n</sup> -> ${\rm I\!R}$<sup>m</sup>.
For example, let's redefine matrix ***A***, while retaining our original definition of vector ***v***:
$$ A = \begin{bmatrix}2 & 3\\5 & 2\\1 & 1\end{bmatrix} \;\;\;\; \vec{v} = \begin{bmatrix}1\\2\end{bmatrix}$$
Now if we once again define ***T*** like this:
$$ T(\vec{v}) = A\vec{v} $$
We apply the transformation like this:
$$\begin{bmatrix}2 & 3\\5 & 2\\1 & 1\end{bmatrix} \cdot \begin{bmatrix}1\\2\end{bmatrix} = \begin{bmatrix}8\\9\\3\end{bmatrix}$$
So now, our transformation transforms the vector from 2-dimensional space to 3-dimensional space:
$$ T: \rm I\!R^{2} \to \rm I\!R^{3} $$
Here it is in Python:
```python
import numpy as np
v = np.array([1,2])
A = np.array([[2,3],
[5,2],
[1,1]])
t = A@v
print (t)
```
```python
import numpy as np
v = np.array([1,2])
A = np.array([[1,2],
[2,1]])
t = A@v
print (t)
```
## Transformations of Magnitude and Amplitude
When you multiply a vector by a matrix, you transform it in at least one of the following two ways:
* Scale the length (*magnitude*) of the vector to make it longer or shorter
* Change the direction (*amplitude*) of the vector
For example consider the following matrix and vector:
$$ A = \begin{bmatrix}2 & 0\\0 & 2\end{bmatrix} \;\;\;\; \vec{v} = \begin{bmatrix}1\\0\end{bmatrix}$$
As before, we transform the vector ***v*** by multiplying it with the matrix ***A***:
\begin{equation}\begin{bmatrix}2 & 0\\0 & 2\end{bmatrix} \cdot \begin{bmatrix}1\\0\end{bmatrix} = \begin{bmatrix}2\\0\end{bmatrix}\end{equation}
In this case, the resulting vector has changed in length (*magnitude*), but has not changed its direction (*amplitude*).
Let's visualize that in Python:
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
v = np.array([1,0])
A = np.array([[2,0],
[0,2]])
t = A@v
print (t)
# Plot v and t
vecs = np.array([t,v])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, vecs[:,0], vecs[:,1], color=['blue', 'orange'], scale=10)
plt.show()
```
The original vector ***v*** is shown in orange, and the transformed vector ***t*** is shown in blue - note that ***t*** has the same direction (*amplitude*) as ***v*** but a greater length (*magnitude*).
Now let's use a different matrix to transform the vector ***v***:
\begin{equation}\begin{bmatrix}0 & -1\\1 & 0\end{bmatrix} \cdot \begin{bmatrix}1\\0\end{bmatrix} = \begin{bmatrix}0\\1\end{bmatrix}\end{equation}
This time, the resulting vector has been changed to a different amplitude, but has the same magnitude.
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
v = np.array([1,0])
A = np.array([[0,-1],
[1,0]])
t = A@v
print (t)
# Plot v and t
vecs = np.array([v,t])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, vecs[:,0], vecs[:,1], color=['orange', 'blue'], scale=10)
plt.show()
```
Now let's change the matrix one more time:
\begin{equation}\begin{bmatrix}2 & 1\\1 & 2\end{bmatrix} \cdot \begin{bmatrix}1\\0\end{bmatrix} = \begin{bmatrix}2\\1\end{bmatrix}\end{equation}
Now our resulting vector has been transformed to a new amplitude *and* magnitude - the transformation has affected both direction and scale.
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
v = np.array([1,0])
A = np.array([[2,1],
[1,2]])
t = A@v
print (t)
# Plot v and t
vecs = np.array([v,t])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, vecs[:,0], vecs[:,1], color=['orange', 'blue'], scale=10)
plt.show()
```
### Affine Transformations
An affine transformation multiplies a vector by a matrix and adds an offset vector, sometimes referred to as *bias*; like this:
$$T(\vec{v}) = A\vec{v} + \vec{b}$$
For example:
\begin{equation}\begin{bmatrix}5 & 2\\3 & 1\end{bmatrix} \cdot \begin{bmatrix}1\\1\end{bmatrix} + \begin{bmatrix}-2\\-6\end{bmatrix} = \begin{bmatrix}5\\-2\end{bmatrix}\end{equation}
This kind of transformation is actually the basis of linear regression, which is a core foundation for machine learning. The matrix defines the *features*, the first vector is the *coefficients*, and the bias vector is the *intercept*.
Here's an example of an affine transformation in Python:
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
v = np.array([1,1])
A = np.array([[5,2],
[3,1]])
b = np.array([-2,-6])
t = A@v + b
print (t)
# Plot v and t
vecs = np.array([v,t])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, vecs[:,0], vecs[:,1], color=['orange', 'blue'], scale=15)
plt.show()
```
## Eigenvectors and Eigenvalues
So we can see that when you transform a vector using a matrix, we change its direction, length, or both. When the transformation only affects scale (in other words, the output vector has a different magnitude but the same amplitude as the input vector), the matrix multiplication for the transformation is equivalent to some scalar multiplication of the vector.
For example, earlier we examined the following transformation that dot-multiplies a vector by a matrix:
$$\begin{bmatrix}2 & 0\\0 & 2\end{bmatrix} \cdot \begin{bmatrix}1\\0\end{bmatrix} = \begin{bmatrix}2\\0\end{bmatrix}$$
You can achieve the same result by multiplying the vector by the scalar value ***2***:
$$2 \times \begin{bmatrix}1\\0\end{bmatrix} = \begin{bmatrix}2\\0\end{bmatrix}$$
The following Python code performs both of these calculations and shows the results, which are identical.
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
v = np.array([1,0])
A = np.array([[2,0],
[0,2]])
t1 = A@v
print (t1)
t2 = 2*v
print (t2)
fig = plt.figure()
a=fig.add_subplot(1,1,1)
# Plot v and t1
vecs = np.array([t1,v])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, vecs[:,0], vecs[:,1], color=['blue', 'orange'], scale=10)
plt.show()
a=fig.add_subplot(1,2,1)
# Plot v and t2
vecs = np.array([t2,v])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, vecs[:,0], vecs[:,1], color=['blue', 'orange'], scale=10)
plt.show()
```
In cases like these, where a matrix transformation is the equivalent of a scalar-vector multiplication, the scalar-vector pairs that correspond to the matrix are known respectively as eigenvalues and eigenvectors.
$$ T(\vec{v}) = \lambda\vec{v}$$
Where the vector ***v*** is an eigenvector and the value ***λ*** is an eigenvalue for transformation ***T***.
When the transformation ***T*** is represented as a matrix multiplication, as in this case where the transformation is represented by matrix ***A***:
$$ T(\vec{v}) = A\vec{v} = \lambda\vec{v}$$
Then ***v*** is an eigenvector and ***λ*** is an eigenvalue of ***A***.
A matrix can have multiple eigenvector-eigenvalue pairs, and you can calculate them manually. However, it's generally easier to use a tool or programming language. For example, in Python you can use the ***linalg.eig*** function, which returns an array of eigenvalues and a matrix of the corresponding eigenvectors for the specified matrix.
Here's an example that returns the eigenvalue and eigenvector pairs for the following matrix:
$$A=\begin{bmatrix}2 & 0\\0 & 3\end{bmatrix}$$
```python
import numpy as np
A = np.array([[2,0],
[0,3]])
eVals, eVecs = np.linalg.eig(A)
print(eVals)
print(eVecs)
```
So there are two eigenvalue-eigenvector pairs for this matrix, as shown here:
$$ \lambda_{1} = 2, \vec{v_{1}} = \begin{bmatrix}1 \\ 0\end{bmatrix} \;\;\;\;\;\; \lambda_{2} = 3, \vec{v_{2}} = \begin{bmatrix}0 \\ 1\end{bmatrix} $$
Let's verify that multiplying each eigenvalue-eigenvector pair corresponds to the dot-product of the eigenvector and the matrix. Here's the first pair:
$$ 2 \times \begin{bmatrix}1 \\ 0\end{bmatrix} = \begin{bmatrix}2 \\ 0\end{bmatrix} \;\;\;and\;\;\; \begin{bmatrix}2 & 0\\0 & 3\end{bmatrix} \cdot \begin{bmatrix}1 \\ 0\end{bmatrix} = \begin{bmatrix}2 \\ 0\end{bmatrix} $$
So far so good. Now let's check the second pair:
$$ 3 \times \begin{bmatrix}0 \\ 1\end{bmatrix} = \begin{bmatrix}0 \\ 3\end{bmatrix} \;\;\;and\;\;\; \begin{bmatrix}2 & 0\\0 & 3\end{bmatrix} \cdot \begin{bmatrix}0 \\ 1\end{bmatrix} = \begin{bmatrix}0 \\ 3\end{bmatrix} $$
So our eigenvalue-eigenvector scalar multiplications do indeed correspond to our matrix-eigenvector dot-product transformations.
Here's the equivalent code in Python, using the ***eVals*** and ***eVecs*** variables you generated in the previous code cell:
```python
vec1 = eVecs[:,0]
lam1 = eVals[0]
print('Matrix A:')
print(A)
print('-------')
print('lam1: ' + str(lam1))
print ('v1: ' + str(vec1))
print ('Av1: ' + str(A@vec1))
print ('lam1 x v1: ' + str(lam1*vec1))
print('-------')
vec2 = eVecs[:,1]
lam2 = eVals[1]
print('lam2: ' + str(lam2))
print ('v2: ' + str(vec2))
print ('Av2: ' + str(A@vec2))
print ('lam2 x v2: ' + str(lam2*vec2))
```
You can use the following code to visualize these transformations:
```python
t1 = lam1*vec1
print (t1)
t2 = lam2*vec2
print (t2)
fig = plt.figure()
a=fig.add_subplot(1,1,1)
# Plot v and t1
vecs = np.array([t1,vec1])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, vecs[:,0], vecs[:,1], color=['blue', 'orange'], scale=10)
plt.show()
a=fig.add_subplot(1,2,1)
# Plot v and t2
vecs = np.array([t2,vec2])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, vecs[:,0], vecs[:,1], color=['blue', 'orange'], scale=10)
plt.show()
```
Similarly, earlier we examined the following matrix transformation:
$$\begin{bmatrix}2 & 0\\0 & 2\end{bmatrix} \cdot \begin{bmatrix}1\\0\end{bmatrix} = \begin{bmatrix}2\\0\end{bmatrix}$$
And we saw that you can achieve the same result by multiplying the vector by the scalar value ***2***:
$$2 \times \begin{bmatrix}1\\0\end{bmatrix} = \begin{bmatrix}2\\0\end{bmatrix}$$
This works because the scalar value 2 and the vector (1,0) are an eigenvalue-eigenvector pair for this matrix.
Let's use Python to determine the eigenvalue-eigenvector pairs for this matrix:
```python
import numpy as np
A = np.array([[2,0],
[0,2]])
eVals, eVecs = np.linalg.eig(A)
print(eVals)
print(eVecs)
```
So once again, there are two eigenvalue-eigenvector pairs for this matrix, as shown here:
$$ \lambda_{1} = 2, \vec{v_{1}} = \begin{bmatrix}1 \\ 0\end{bmatrix} \;\;\;\;\;\; \lambda_{2} = 2, \vec{v_{2}} = \begin{bmatrix}0 \\ 1\end{bmatrix} $$
Let's verify that multiplying each eigenvalue-eigenvector pair corresponds to the dot-product of the eigenvector and the matrix. Here's the first pair:
$$ 2 \times \begin{bmatrix}1 \\ 0\end{bmatrix} = \begin{bmatrix}2 \\ 0\end{bmatrix} \;\;\;and\;\;\; \begin{bmatrix}2 & 0\\0 & 2\end{bmatrix} \cdot \begin{bmatrix}1 \\ 0\end{bmatrix} = \begin{bmatrix}2 \\ 0\end{bmatrix} $$
Well, we already knew that. Now let's check the second pair:
$$ 2 \times \begin{bmatrix}0 \\ 1\end{bmatrix} = \begin{bmatrix}0 \\ 2\end{bmatrix} \;\;\;and\;\;\; \begin{bmatrix}2 & 0\\0 & 2\end{bmatrix} \cdot \begin{bmatrix}0 \\ 1\end{bmatrix} = \begin{bmatrix}0 \\ 2\end{bmatrix} $$
Now let's use Python to verify and plot these transformations:
```python
vec1 = eVecs[:,0]
lam1 = eVals[0]
print('Matrix A:')
print(A)
print('-------')
print('lam1: ' + str(lam1))
print ('v1: ' + str(vec1))
print ('Av1: ' + str(A@vec1))
print ('lam1 x v1: ' + str(lam1*vec1))
print('-------')
vec2 = eVecs[:,1]
lam2 = eVals[1]
print('lam2: ' + str(lam2))
print ('v2: ' + str(vec2))
print ('Av2: ' + str(A@vec2))
print ('lam2 x v2: ' + str(lam2*vec2))
# Plot the resulting vectors
t1 = lam1*vec1
t2 = lam2*vec2
fig = plt.figure()
a=fig.add_subplot(1,1,1)
# Plot v and t1
vecs = np.array([t1,vec1])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, vecs[:,0], vecs[:,1], color=['blue', 'orange'], scale=10)
plt.show()
a=fig.add_subplot(1,2,1)
# Plot v and t2
vecs = np.array([t2,vec2])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, vecs[:,0], vecs[:,1], color=['blue', 'orange'], scale=10)
plt.show()
```
Let's take a look at one more, slightly more complex example. Here's our matrix:
$$\begin{bmatrix}2 & 1\\1 & 2\end{bmatrix}$$
Let's get the eigenvalue and eigenvector pairs:
```python
import numpy as np
A = np.array([[2,1],
[1,2]])
eVals, eVecs = np.linalg.eig(A)
print(eVals)
print(eVecs)
```
This time the eigenvalue-eigenvector pairs are:
$$ \lambda_{1} = 3, \vec{v_{1}} = \begin{bmatrix}0.70710678 \\ 0.70710678\end{bmatrix} \;\;\;\;\;\; \lambda_{2} = 1, \vec{v_{2}} = \begin{bmatrix}-0.70710678 \\ 0.70710678\end{bmatrix} $$
So let's check the first pair:
$$ 3 \times \begin{bmatrix}0.70710678 \\ 0.70710678\end{bmatrix} = \begin{bmatrix}2.12132034 \\ 2.12132034\end{bmatrix} \;\;\;and\;\;\; \begin{bmatrix}2 & 1\\1 & 2\end{bmatrix} \cdot \begin{bmatrix}0.70710678 \\ 0.70710678\end{bmatrix} = \begin{bmatrix}2.12132034 \\ 2.12132034\end{bmatrix} $$
Now let's check the second pair:
$$ 1 \times \begin{bmatrix}-0.70710678 \\ 0.70710678\end{bmatrix} = \begin{bmatrix}-0.70710678\\0.70710678\end{bmatrix} \;\;\;and\;\;\; \begin{bmatrix}2 & 1\\1 & 2\end{bmatrix} \cdot \begin{bmatrix}-0.70710678 \\ 0.70710678\end{bmatrix} = \begin{bmatrix}-0.70710678\\0.70710678\end{bmatrix} $$
With more complex examples like this, it's generally easier to do it with Python:
```python
vec1 = eVecs[:,0]
lam1 = eVals[0]
print('Matrix A:')
print(A)
print('-------')
print('lam1: ' + str(lam1))
print ('v1: ' + str(vec1))
print ('Av1: ' + str(A@vec1))
print ('lam1 x v1: ' + str(lam1*vec1))
print('-------')
vec2 = eVecs[:,1]
lam2 = eVals[1]
print('lam2: ' + str(lam2))
print ('v2: ' + str(vec2))
print ('Av2: ' + str(A@vec2))
print ('lam2 x v2: ' + str(lam2*vec2))
# Plot the results
t1 = lam1*vec1
t2 = lam2*vec2
fig = plt.figure()
a=fig.add_subplot(1,1,1)
# Plot v and t1
vecs = np.array([t1,vec1])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, vecs[:,0], vecs[:,1], color=['blue', 'orange'], scale=10)
plt.show()
a=fig.add_subplot(1,2,1)
# Plot v and t2
vecs = np.array([t2,vec2])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, vecs[:,0], vecs[:,1], color=['blue', 'orange'], scale=10)
plt.show()
```
## Eigendecomposition
So we've learned a little about eigenvalues and eigenvectors; but you may be wondering what use they are. Well, one use for them is to help decompose transformation matrices.
Recall that previously we found that a matrix transformation of a vector changes its magnitude, amplitude, or both. Without getting too technical about it, we need to remember that vectors can exist in any spatial orientation, or *basis*; and the same transformation can be applied in different *bases*.
We can decompose a matrix using the following formula:
$$A = Q \Lambda Q^{-1}$$
Where ***A*** is a transformation that can be applied to a vector in its current base, ***Q*** is a matrix of eigenvectors that defines a change of basis, and ***Λ*** is a matrix with eigenvalues on the diagonal that defines the same linear transformation as ***A*** in the base defined by ***Q***.
Let's look at these in some more detail. Consider this matrix:
$$A=\begin{bmatrix}3 & 2\\1 & 0\end{bmatrix}$$
***Q*** is a matrix in which each column is an eigenvector of ***A***; which as we've seen previously, we can calculate using Python:
```python
import numpy as np
A = np.array([[3,2],
[1,0]])
l, Q = np.linalg.eig(A)
print(Q)
```
So for matrix ***A***, ***Q*** is the following matrix:
$$Q=\begin{bmatrix}0.96276969 & -0.48963374\\0.27032301 & 0.87192821\end{bmatrix}$$
***Λ*** is a matrix that contains the eigenvalues for ***A*** on the diagonal, with zeros in all other elements; so for a 2x2 matrix, Λ will look like this:
$$\Lambda=\begin{bmatrix}\lambda_{1} & 0\\0 & \lambda_{2}\end{bmatrix}$$
In our Python code, we've already used the ***linalg.eig*** function to return the array of eigenvalues for ***A*** into the variable ***l***, so now we just need to format that as a matrix:
```python
L = np.diag(l)
print (L)
```
So ***Λ*** is the following matrix:
$$\Lambda=\begin{bmatrix}3.56155281 & 0\\0 & -0.56155281\end{bmatrix}$$
Now we just need to find ***Q<sup>-1</sup>***, which is the inverse of ***Q***:
```python
Qinv = np.linalg.inv(Q)
print(Qinv)
```
The inverse of ***Q*** then, is:
$$Q^{-1}=\begin{bmatrix}0.89720673 & 0.50382896\\-0.27816009 & 0.99068183\end{bmatrix}$$
So what does that mean? Well, it means that we can decompose the transformation of *any* vector multiplied by matrix ***A*** into the separate operations ***QΛQ<sup>-1</sup>***:
$$A\vec{v} = Q \Lambda Q^{-1}\vec{v}$$
To prove this, let's take vector ***v***:
$$\vec{v} = \begin{bmatrix}1\\3\end{bmatrix} $$
Our matrix transformation using ***A*** is:
$$\begin{bmatrix}3 & 2\\1 & 0\end{bmatrix} \cdot \begin{bmatrix}1\\3\end{bmatrix} $$
So let's show the results of that using Python:
```python
v = np.array([1,3])
t = A@v
print(t)
# Plot v and t
vecs = np.array([v,t])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, vecs[:,0], vecs[:,1], color=['orange', 'b'], scale=20)
plt.show()
```
And now, let's do the same thing using the ***QΛQ<sup>-1</sup>*** sequence of operations:
```python
import math
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
t = (Q@(L@(Qinv)))@v
# Plot v and t
vecs = np.array([v,t])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, vecs[:,0], vecs[:,1], color=['orange', 'b'], scale=20)
plt.show()
```
So ***A*** and ***QΛQ<sup>-1</sup>*** are equivalent.
If we view the intermediary stages of the decomposed transformation, you can see the transformation using ***A*** in the original base for ***v*** (orange to blue) and the transformation using ***Λ*** in the change of basis described by ***Q*** (red to magenta):
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
t1 = Qinv@v
t2 = L@t1
t3 = Q@t2
# Plot the transformations
vecs = np.array([v,t1, t2, t3])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, vecs[:,0], vecs[:,1], color=['orange', 'red', 'magenta', 'blue'], scale=20)
plt.show()
```
So from this visualization, it should be apparent that the transformation ***Av*** can be performed by changing the basis for ***v*** using ***Q*** (from orange to red in the above plot) applying the equivalent linear transformation in that base using ***Λ*** (red to magenta), and switching back to the original base using ***Q<sup>-1</sup>*** (magenta to blue).
## Rank of a Matrix
The **rank** of a square matrix is the number of non-zero eigenvalues of the matrix. A **full rank** matrix has the same number of non-zero eigenvalues as the dimension of the matrix. A **rank-deficient** matrix has fewer non-zero eigenvalues than dimensions. A rank-deficient matrix is singular, so its inverse does not exist (this is why in a previous notebook we noted that some matrices have no inverse).
Consider the following matrix ***A***:
$$A=\begin{bmatrix}1 & 2\\4 & 3\end{bmatrix}$$
Let's find its eigenvalues (***Λ***):
```python
import numpy as np
A = np.array([[1,2],
[4,3]])
l, Q = np.linalg.eig(A)
L = np.diag(l)
print(L)
```
$$\Lambda=\begin{bmatrix}-1 & 0\\0 & 5\end{bmatrix}$$
This matrix has full rank: the dimension of the matrix is 2, and there are two non-zero eigenvalues.
Now consider this matrix:
$$B=\begin{bmatrix}3 & -3 & 6\\2 & -2 & 4\\1 & -1 & 2\end{bmatrix}$$
Note that the second and third columns are just scalar multiples of the first column.
Let's examine its eigenvalues:
```python
B = np.array([[3,-3,6],
[2,-2,4],
[1,-1,2]])
lb, Qb = np.linalg.eig(B)
Lb = np.diag(lb)
print(Lb)
```
$$\Lambda=\begin{bmatrix}3 & 0& 0\\0 & -6\times10^{-17} & 0\\0 & 0 & 3.6\times10^{-16}\end{bmatrix}$$
Note that the matrix has only 1 non-zero eigenvalue. The other two eigenvalues are so extremely small as to be effectively zero. This is an example of a rank-deficient matrix; and as such, it has no inverse.
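As a quick check (a supplementary sketch, not part of the original lesson code), NumPy can confirm the rank directly with ***np.linalg.matrix_rank***, and attempting to invert the matrix raises an error:
```python
import numpy as np

B = np.array([[3, -3, 6],
              [2, -2, 4],
              [1, -1, 2]])

# matrix_rank counts the numerically significant singular values
print(np.linalg.matrix_rank(B))   # 1, while the matrix is 3x3 -> rank-deficient

try:
    np.linalg.inv(B)
except np.linalg.LinAlgError as err:
    # inverting a singular (rank-deficient) matrix is not possible
    print('No inverse:', err)
```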
## Inverse of a Square Full Rank Matrix
You can calculate the inverse of a square full rank matrix by using the following formula:
$$A^{-1} = Q \Lambda^{-1} Q^{-1}$$
Let's apply this to matrix ***A***:
$$A=\begin{bmatrix}1 & 2\\4 & 3\end{bmatrix}$$
Let's find the matrices for ***Q***, ***Λ<sup>-1</sup>***, and ***Q<sup>-1</sup>***:
```python
import numpy as np
A = np.array([[1,2],
[4,3]])
l, Q = np.linalg.eig(A)
L = np.diag(l)
print(Q)
Linv = np.linalg.inv(L)
Qinv = np.linalg.inv(Q)
print(Linv)
print(Qinv)
```
So:
$$A^{-1}=\begin{bmatrix}-0.70710678 & -0.4472136\\0.70710678 & -0.89442719\end{bmatrix}\cdot\begin{bmatrix}-1 & -0\\0 & 0.2\end{bmatrix}\cdot\begin{bmatrix}-0.94280904 & 0.47140452\\-0.74535599 & -0.74535599\end{bmatrix}$$
Let's calculate that in Python:
```python
Ainv = (Q@(Linv@(Qinv)))
print(Ainv)
```
That gives us the result:
$$A^{-1}=\begin{bmatrix}-0.6 & 0.4\\0.8 & -0.2\end{bmatrix}$$
We can apply the ***np.linalg.inv*** function directly to ***A*** to verify this:
```python
print(np.linalg.inv(A))
```
# Simple and Multiple Linear Regression
In this chapter, we use mathematical expressions to explain how two representative, fundamental machine learning methods work: **simple linear regression** and **multiple linear regression**.
In the next chapter, we also show how to implement the expressions introduced here in Python. Through these two chapters you can experience and understand how the mathematics and the programming connect.
There are two reasons for introducing simple and multiple regression before deep learning, the main subject of this tutorial.
The first is that the mathematics of simple and multiple regression forms the basis of the mathematics of deep learning, including neural networks.
The second is that the simple regression algorithm deepens your understanding of differentiation, and the multiple regression algorithm deepens your understanding of linear algebra.
Machine learning methods are broadly divided into **supervised learning**, **unsupervised learning**, and **reinforcement learning**; simple regression belongs to supervised learning.
Most of the methods covered in this tutorial are supervised learning methods.
Within supervised learning, the typical problem settings fall into two categories.
They are **regression**, which predicts a real value such as $10$ or $0.1$ from the given input variables, and **classification**, which predicts a category such as "red wine" or "white wine".
Simple regression is a method for performing regression, and it predicts one output variable from one input variable.
In contrast, multiple regression predicts one output variable from several input variables.
Since both methods are supervised learning, the input variable $x$ and the target variable $t$ must be prepared as pairs for training.
An algorithm for regression analysis works through the following three steps, in order:
- Step 1: Choose a model
- Step 2: Choose an objective function
- Step 3: Find the optimal parameters
## Simple Regression Analysis
### Problem setup (simple regression)
In simple regression, we predict one output variable from one input variable.
As a familiar example, let us consider the problem of predicting the rent $y$ of a room from its floor area $x$.
### Step 1: Choose a model (simple regression)
First of all, we decide how to formalize the relationship between the input variable $x$ and the output variable $y$.
This formalization is called a **model**, or a **mathematical model**.
Let us think concretely about the model used in simple regression.
For example, suppose we collect three data points, each a pair of rent and floor area, and plot them with "rent" on the $y$ axis and "floor area" on the $x$ axis, obtaining a picture like the one below.
In that case, we would expect a relationship in which the rent rises as the room gets larger.
It also looks as if the relationship between the two variables could be represented by a straight line.
So, using the equation of a straight line characterized by two parameters $w$ and $b$,
$$
y = wx + b
$$
we represent the relationship between floor area and rent.
Here, $w$ is named after **weight** and $b$ after **bias**.
In simple regression, we use the line $y = wx + b$ as the model in this way.
We then adjust the two parameters $w$ and $b$ so that the line fits the data well.
When using a model characterized by parameters, the goal is to find the optimal parameters so that the model fits the given **dataset**.
Here, we use as the dataset a collection of data consisting of pairs of floor area $x$ and rent $t$.
When there are $N$ data points in total and the $n$-th data point is written $(x_n, t_n)$, the dataset can be written as
$$
\begin{align}
\mathcal{D}
&= \{(x_1, t_1), (x_2, t_2), \dots, (x_N, t_N)\} \\
&= \{(x_n, t_n)\}_{n=1}^{N}
\end{align}
$$
as shown above ([note 1](#note1)).
Using this dataset, we train a model that, given a new $x$, predicts the corresponding $t$.
### Preprocessing
Before moving on to the next step, we introduce one kind of data **preprocessing**.
**Centering** the data means translating all of the data so that its mean becomes 0.
The figure below shows an example of translating a data collection $(x_n, y_n) \ (n=1,\dots,11)$ so that its mean becomes $(0, 0)$.
One advantage of centering as preprocessing is that it reduces the number of parameters that must be adjusted. After centering, the intercept no longer needs to be considered, so the equation of the line representing the relationship in the data can be written simply as $y_c = wx_c$.
Writing the means of the input and target variables in the dataset as $\bar{x}$ and $\bar{t}$, the centered input and target variables are
$$
\begin{aligned}
x_{c} &= x - \bar{x} \\
t_{c} &= t - \bar{t}
\end{aligned}
$$
as shown above.
From here on, to keep the notation simple, we omit the subscript $_c$ and treat the data under the assumption that centering has already been performed.
The model we want to fit to these data is, likewise,
$$
y = wx
$$
where we again omit the subscript $_c$ in the explanation that follows.
### Step 2: Choose an objective function (simple regression)
Our goal here is to model the relationship between floor area and rent with the equation of a line.
To do this, we use a number of previously collected data points to determine the model parameters so that the difference between the rent the model predicts from the floor area (the predicted value) and the actual rent for that floor area (the target value) becomes small.
Here we use as the objective function the **sum-of-squares error** between predictions and targets, which was already introduced in [this chapter](https://tutorials.chainer.org/ja/src/03_Basic_Math_for_Machine_Learning_ja.html#note2).
The sum of squared errors is $0$ if and only if the prediction $y$ matches the target $t$ exactly ($t = y$).
The squared error between the model's prediction $y_n$ for the $n$-th data point's floor area $x_n$ and the corresponding target $t_n$ is
$$
(t_{n} - y_{n})^{2}
$$
Summing this over all the data gives the following sum of squared errors:
$$
\begin{aligned}
L
&= (t_1 - y_1)^2
+ (t_2 - y_2)^2
+ \cdots
+ (t_N - y_N)^2 \\
&= \sum_{n=1}^N (t_n - y_n )^2 \\
\end{aligned}
$$
Since the model we use here is
$$
y_{n} = wx_{n}
$$
the objective function can also be written as
$$
L = \sum_{n=1}^N (t_n - wx_n)^2
$$
which is the form we will minimize below.
### Step 3: Find the optimal parameters (simple regression)
We now look for the parameter that minimizes this objective function.
The objective function is a sum of squared differences, so it is a convex (upward-opening) quadratic function that only takes values greater than or equal to $0$.
(In general, in most cases the model cannot represent all of the data perfectly even with the optimal parameters, so the value of the objective function does not reach $0$.)
Knowledge of differentiation is useful when looking for the point where the objective function takes its minimum value.
Differentiation gives the slope of the tangent line of the function under consideration. For a convex function, the minimum (or maximum) of the function is attained at the point where this tangent slope is 0.
In this case the objective function is a quadratic function of $w$, so, as shown in the figure below, its value is smallest when the slope of the tangent with respect to the weight $w$ is $0$.
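Although the full implementation appears in the next chapter, the following minimal sketch illustrates this shape. The toy values below are made up (and already centered) purely for the plot; they are not the example data used later in this chapter.
```python
import numpy as np
import matplotlib.pyplot as plt

# made-up, already-centered toy data (for illustration only)
x = np.array([-2.0, 0.0, 2.0])
t = np.array([-4.1, 0.2, 3.9])

# evaluate L(w) = sum_n (t_n - w x_n)^2 over a range of candidate w
w_candidates = np.linspace(-1.0, 5.0, 200)
L = [np.sum((t - w * x) ** 2) for w in w_candidates]

plt.plot(w_candidates, L)
plt.xlabel('w')
plt.ylabel('L(w)')
plt.show()
```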
Let us now actually differentiate the objective function $L$ defined above with respect to the parameter $w$.
Basic calculations and properties related to differentiation were introduced in [this chapter](https://tutorials.chainer.org/ja/src/04_Basics_of_Differential_ja.html).
$$
\frac{\partial}{\partial w} L
= \frac{\partial}{\partial w} \sum_{n=1}^N (t_n - wx_n)^2
$$
Here, by the **linearity** of differentiation, the derivative of a sum is the sum of the derivatives, so this can be rewritten as
$$
\frac{\partial}{\partial w} L
= \sum_{n=1}^N \frac{\partial}{\partial w} (t_n - wx_n)^2
$$
Next, focusing on each term inside the sum ($\sum$), we see that
$$
\frac{\partial}{\partial w} (t_n - wx_n)^2
$$
is a **composite function** built from $t_n - wx_n$ and the function $(\cdot)^2$.
So, setting $u_n = t_n - wx_n$ and $f(u_n) = u_n^2$ and computing, we obtain
$$
\begin{aligned}
\frac{\partial}{\partial w}(t_n - wx_n)^2
&= \frac{\partial}{\partial w} f(u_n) \\
&= \frac{\partial u_n}{\partial w}\frac{\partial f(u_n)}{\partial u_n} \\
&= -x_n (2 u_n) \\
&= -2x_n(t_n - wx_n)
\end{aligned}
$$
as the derivative of each term.
Substituting this back into the expression for $\partial L / \partial w$ gives
$$
\begin{aligned}
\frac{\partial}{\partial w}
L
&= \sum_{n=1}^N \frac{\partial}{\partial w} (t_n - wx_n)^2 \\
&= -\sum_{n=1}^N 2x_n(t_n - wx_n)
\end{aligned}
$$
as the derivative of the objective function.
The value of $w$ at which this derivative becomes $0$ is the parameter that minimizes the objective function.
So we set $\frac{\partial}{\partial w} L = 0$ and solve for $w$:
$$
\begin{aligned}
\frac{\partial}{\partial w} L &= 0 \\
-2 \sum_{n=1}^N x_n (t_n - wx_n) &= 0 \\
-2 \sum_{n=1}^N x_n t_n + 2 \sum_{n=1}^N wx^2_n &= 0 \\
-2 \sum_{n=1}^N x_n t_n + 2 w \sum_{n=1}^N x^2_n &= 0 \\
w \sum_{n=1}^N x^2_n &= \sum_{n=1}^N x_n t_n \\
\end{aligned}
$$
Therefore,
$$
w = \frac{\sum_{n=1}^N x_n t_n}{\sum_{n=1}^N x^2_n}
$$
is obtained. We call this the optimal $w$.
Note that this value is determined solely from the given dataset $\mathcal{D} = \{x_n, t_n\}_{n=1}^{N}$.
### Numerical example
Let us compute the parameter $w$ using the numbers given in the example problem.
First, in order to center the data, we compute the means in advance:
$$
\begin{aligned}
\bar{x} &= \frac{1}{3} (20 + 40 + 60) = 40 \\
\bar{t} &= \frac{1}{3} (60000 + 115000 + 155000) = 110000
\end{aligned}
$$
Centering all the variables using these means gives
$$
\begin{aligned}
x_{1} &= 20 - 40 = -20 \\
x_{2} &= 40 - 40 = 0 \\
x_{3} &= 60 - 40 = 20 \\
t_{1} &= 60000 - 110000 = -50000 \\
t_{2} &= 115000 - 110000 = 5000 \\
t_{3} &= 155000 - 110000 = 45000
\end{aligned}
$$
Using these centered values to compute the optimal parameter $w$, we get
$$
\begin{aligned}
w
&= \frac{\sum_{n=1}^N x_n t_n}{\sum_{n=1}^N x_n^2} \\
&= \frac{x_1 t_1 + x_2 t_2 + x_3 t_3}{x_1^2 + x_2^2 + x_3^2} \\
&= \frac{-20 \times (-50000) + 0 \times 5000 + 20 \times 45000}{(-20)^2 + 0^2 + 20^2} \\
&= 2375
\end{aligned}
$$
and thus $w = 2375$ is obtained.
Therefore, we can see that the rent increases by $2375$ yen for each additional $1$ m$^{2}$ of floor area.
The figure below plots the line $y = 2375 x$ determined by this $w$, together with the three points used as training data.
The $y$ values of points on this line are the predictions, for the corresponding $x$ values, of the model trained here.
Note that the $x$ axis takes negative values here; this is because the data have been centered.
Now let us use the trained model to predict the rent for a new sample $x_{q}$.
For example, let us perform inference to compute the predicted rent for a room with a floor area of 50 m$^2$:
$$
\begin{aligned}
y_c &= wx_c \\
y_q - \bar{t} &= w(x_q - \bar{x}) \\
\Rightarrow y_q &= w(x_q - \bar{x}) + \bar{t} \\
&= 2375 (50 - 40) + 110000 \\
&= 133750
\end{aligned}
$$
In this way, we were able to predict that the rent for a room with a floor area of 50 m$^{2}$ is 133,750 yen.
As shown above, the centering applied when building the model must be undone when performing inference.
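As a small preview of the next chapter's implementation, here is a minimal NumPy sketch that reproduces the hand calculation above (centering, the optimal $w$, and the prediction for a 50 m$^2$ room); the variable names are our own choice:
```python
import numpy as np

x = np.array([20.0, 40.0, 60.0])              # floor area [m^2]
t = np.array([60000.0, 115000.0, 155000.0])   # rent [yen]

# centering
x_c = x - x.mean()
t_c = t - t.mean()

# optimal w = sum(x_c * t_c) / sum(x_c ** 2)
w = np.sum(x_c * t_c) / np.sum(x_c ** 2)
print(w)      # 2375.0

# prediction for a new 50 m^2 room: undo the centering
x_q = 50.0
y_q = w * (x_q - x.mean()) + t.mean()
print(y_q)    # 133750.0
```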
## Multiple Regression Analysis
### Problem setup (multiple regression)
We explain multiple regression using the same rent-prediction problem as in the simple regression case.
Unlike the simple regression case, we now consider not only the "floor area" but also additional input variables such as the "distance to the station" and the "crime rate".
Writing the floor area as $x_{1}$, the distance to the station as $x_{2}$, ..., and the crime rate as $x_{M}$, let us consider handling $M$ input variables.
### Step 1: Choose a model (multiple regression)
In simple regression, we used the equation of a line,
$$
y = wx + b
$$
as the model. In multiple regression as well, we define a model of a similar form,
$$
y = w_{1}x_{1} + w_{2}x_{2} + \cdots + w_{M}x_{M} + b
$$
which closely resembles the simple regression model.
In the simple regression case we considered a two-dimensional plane and looked for the line that best fits the data lying on that plane; this time we will look for the line that best fits data lying in an $M$-dimensional space.
Written with the summation symbol, the multiple regression model is
$$
y = \sum_{m=1}^{M} w_{m} x_{m} + b
$$
as shown above.
Let us now reconsider how to handle the bias $b$.
In simple regression, we applied centering as preprocessing and omitted the bias $b$, which allowed a concise formulation.
In multiple regression, there are $M$ weights $w_{1}, w_{2}, \dots, w_{M}$ and 1 bias $b$, for a total of $M + 1$ parameters. We would like to formulate these parameters neatly.
To do so, this time we set $x_0 = 1$ and $w_0 = b$, so that
$$
\begin{aligned}
y
&= w_{1}x_{1} + w_{2}x_{2} + \cdots + w_{M}x_{M} + b \\
&= w_{1}x_{1} + w_{2}x_{2} + \cdots + w_{M}x_{M} + w_{0}x_{0} \\
&= w_{0}x_{0} + w_{1}x_{1} + \cdots + w_{M}x_{M} \\
&= \sum_{m=0}^M w_{m} x_{m}
\end{aligned}
$$
that is, $b$ is absorbed into the sum and the model can be written concisely.
(Note that the lower limit of the $\sum$ symbol is now $m=0$ rather than $m=1$.)
From here on we use what we learned in linear algebra to tidy up the expression.
Rewriting the expression above using the inner product of vectors gives
$$
\begin{aligned}
y
&= w_{0}x_{0} + w_{1}x_{1} + \cdots + w_{M}x_{M} \\
&=
\begin{bmatrix}
w_{0} & w_{1} & \cdots & w_{M}
\end{bmatrix}
\begin{bmatrix}
x_{0} \\
x_{1} \\
\vdots \\
x_{M}
\end{bmatrix} \\
&= {\bf w}^{\rm T}{\bf x}
\end{aligned}
$$
which is a simple, compact form.
As mentioned above, this model has $M + 1$ parameters, represented by the $(M + 1)$-dimensional vector ${\bf w}$.
In multiple regression, we find the optimal value of every element of this ${\bf w}$.
### Step 2: Choose an objective function (multiple regression)
Compared with the simple regression example, the number of input variables has increased, but the target value is still the rent.
So we use the same objective function as in simple regression,
$$
L = (t_1 - y_1)^2 + (t_2 - y_2)^2 + \cdots + (t_N - y_N)^2
$$
as our objective function.
Rewritten using the inner product of vectors, this objective function becomes
$$
\begin{aligned}
L
&= (t_1 - y_1)^2 + (t_2 - y_2)^2 + \cdots + (t_N - y_N)^2 \\
&= \begin{bmatrix}
t_1 - y_1 & t_2 - y_2 & \cdots & t_N - y_N
\end{bmatrix}
\begin{bmatrix}
t_1 - y_1 \\
t_2 - y_2 \\
\vdots \\
t_N - y_N
\end{bmatrix} \\
&= ({\bf t} - {\bf y})^{\rm T}({\bf t} - {\bf y})
\end{aligned}
$$
as shown.
Here, since the inner product is commutative, ${\bf w}^{\rm T}{\bf x}$ can also be written as ${\bf x}^{\rm T}{\bf w}$. Using this, we rewrite the model equation ${\bf y} = {\bf w}^{\rm T}{\bf x}$ as follows.
$$
\begin{aligned}
{\bf y} =
\begin{bmatrix}
y_1 \\
y_2 \\
\vdots \\
y_N
\end{bmatrix} =
\begin{bmatrix}
{\bf x}_1^{\rm T}{\bf w} \\
{\bf x}_2^{\rm T}{\bf w} \\
\vdots \\
{\bf x}_N^{\rm T}{\bf w}
\end{bmatrix} =
\begin{bmatrix}
{\bf x}_1^{\rm T} \\
{\bf x}_2^{\rm T} \\
\vdots \\
{\bf x}_N^{\rm T}
\end{bmatrix}
{\bf w}
\end{aligned}
$$
Further expanding with ${\bf x}_n^{\rm T} = \bigl[ x_{n0},\ x_{n1},\ x_{n2},\ \dots,\ x_{nM} \bigr]$ $(n = 1, \dots, N)$, we can write this as
$$
\begin{aligned}
{\bf y}
&= \begin{bmatrix}
x_{10} & x_{11} & x_{12} & \cdots & x_{1M} \\
x_{20} & x_{21} & x_{22} & \cdots & x_{2M} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
x_{N0} & x_{N1} & x_{N2} & \cdots & x_{NM}
\end{bmatrix}
\begin{bmatrix}
w_{0} \\
w_{1} \\
w_{2} \\
\vdots \\
w_{M}
\end{bmatrix} \\
&= {\bf X}{\bf w}
\end{aligned}
$$
as above.
Here, the $N \times (M + 1)$ matrix ${\bf X}$ has one row per data point and one column per input variable.
Such a matrix is called a **design matrix**.
Each column corresponds to one kind of input variable, for example the floor area or the distance to the station.
To explain how each row represents a data point, let us give a concrete numerical example.
If we consider the three input variables floor area $= 50{\rm m}^2$, distance to the station $= 600 {\rm m}$, and crime rate $= 2\%$, then $M = 3$, and if this is the $n$-th data point, ${\bf x}_n^{\rm T}$ is
$$
{\bf x}_n^{\rm T} =
\begin{bmatrix}
1 & 50 & 600 & 0.02
\end{bmatrix}
$$
The leading $1$ is there because we set $x_0 = 1$ in Step 1.
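To make this concrete, here is a minimal sketch of how such a design matrix could be assembled with NumPy. Only the first row uses the values from the example above; the other two rows are made up, and the variable names are our own:
```python
import numpy as np

# raw input variables for three rooms:
# [floor area (m^2), distance to station (m), crime rate]
X_raw = np.array([[50.0,  600.0, 0.02],
                  [30.0, 1200.0, 0.01],
                  [70.0,  300.0, 0.03]])

# prepend the constant feature x_0 = 1 to every row
ones = np.ones((X_raw.shape[0], 1))
X = np.hstack([ones, X_raw])
print(X)  # each row now starts with 1, matching x_n^T above
```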
### Step 3: Optimize the parameters (multiple regression)
Let us now find the model parameter vector ${\bf w}$ that minimizes the objective function $L$.
As in simple regression, we differentiate the objective function with respect to the parameters, set the result to 0, and solve for ${\bf w}$.
First, we replace the predictions ${\bf y}$ appearing in the objective function with the expression in terms of the parameters ${\bf w}$:
$$
\begin{aligned}
L
&= ({\bf t} - {\bf y})^{\rm T} ({\bf t} - {\bf y}) \\
&= ({\bf t} - {\bf X}{\bf w})^{\rm T} ({\bf t} - {\bf X}{\bf w}) \\
&= \{ {\bf t}^{\rm T} - ({\bf X}{\bf w})^{\rm T} \} ({\bf t} - {\bf X}{\bf w}) \\
&= ({\bf t}^{\rm T} - {\bf w}^{\rm T}{\bf X}^{\rm T}) ({\bf t} - {\bf X}{\bf w})
\end{aligned}
$$
Here we used the transpose identity $({\bf A}{\bf B})^{\rm T} = {\bf B}^{\rm T}{\bf A}^{\rm T}$.
Expanding further with the distributive law gives
$$
L
= {\bf t}^{\rm T}{\bf t}
- {\bf t}^{\rm T}{\bf X}{\bf w}
- {\bf w}^{\rm T}{\bf X}^{\rm T}{\bf t}
+ {\bf w}^{\rm T}{\bf X}^{\rm T}{\bf X}{\bf w}
$$
This is the expanded objective function, and we will compute its partial derivatives with respect to the parameters ${\bf w}$.
Before doing so, let us tidy the expression up a little more.
First, a scalar does not change when transposed, e.g. $(1)^{\rm T} = 1$.
Also, since ${\bf w} \in \mathbb{R}^{M+1}$ and ${\bf X} \in \mathbb{R}^{N \times (M+1)}$, we have ${\bf X}{\bf w} \in \mathbb{R}^{N}$, so the inner product with ${\bf t} \in \mathbb{R}^{N}$, namely ${\bf t}^{\rm T}{\bf X}{\bf w} \in \mathbb{R}$, is a scalar.
Therefore,
$$
({\bf t}^{\rm T}{\bf X}{\bf w})^{\rm T} = {\bf t}^{\rm T}{\bf X}{\bf w}
$$
holds.
Furthermore, from the transpose identity $({\bf A}{\bf B}{\bf C})^{\rm T} = {\bf C}^{\rm T}{\bf B}^{\rm T}{\bf A}^{\rm T}$,
$$
({\bf t}^{\rm T}{\bf X}{\bf w})^{\rm T} = {\bf w}^{\rm T} {\bf X}^{\rm T} {\bf t}
$$
also holds. Putting these together,
$$({\bf t}^{\rm T}{\bf X}{\bf w})^{\rm T} = {\bf t}^{\rm T}{\bf X}{\bf w} = {\bf w}^{\rm T} {\bf X}^{\rm T} {\bf t}$$
is obtained. Using this relation, the objective function $L$ can be rewritten as
$$
L = {\bf t}^{\rm T}{\bf t} - 2{\bf t}^{\rm T}{\bf X}{\bf w}+ {\bf w}^{\rm T}{\bf X}^{\rm T}{\bf X}{\bf w}
$$
as shown.
Here, to make the partial differentiation with respect to ${\bf w}$ easier to carry out, we group the constants that do not involve ${\bf w}$ into single symbols:
$$
\begin{aligned}
L
&= {\bf t}^{\rm T}{\bf t}
- 2{\bf t}^{\rm T}{\bf X}{\bf w}
+ {\bf w}^{\rm T}{\bf X}^{\rm T}{\bf X}{\bf w} \\
&= {\bf t}^{\rm T}{\bf t}
- 2({\bf X}^{\rm T}{\bf t})^{\rm T} {\bf w}
+ {\bf w}^{\rm T}{\bf X}^{\rm T}{\bf X}{\bf w} \\
&= c + {\bf b}^{\rm T}{\bf w} + {\bf w}^{\rm T}{\bf A}{\bf w}
\end{aligned}
$$
Now the objective is expressed as a quadratic form in ${\bf w}$.
Here we have set
$$
\begin{align}
{\bf A} &= {\bf X}^{\rm T}{\bf X} \\
{\bf b} &= -2 {\bf X}^{\rm T}{\bf t} \\
c &= {\bf t}^{\rm T}{\bf t}
\end{align}
$$
so keep these definitions in mind.
Now let us consider how to find the parameters ${\bf w}$ that minimize the objective function.
The objective function is a quadratic function of the parameters ${\bf w}$.
First, let us substitute concrete numbers for the vectors and matrices other than ${\bf w}$.
For example, set
$$
{\bf w} =
\begin{bmatrix}
w_1 \\ w_2
\end{bmatrix},
{\bf A} =
\begin{bmatrix}
1 & 2 \\
3 & 4
\end{bmatrix},
{\bf b} =
\begin{bmatrix}
1 \\
2
\end{bmatrix},
c = 1
$$
With these values, the objective function becomes
$$
\begin{aligned}
L
&= {\bf w}^{\rm T}{\bf A}{\bf w} +{\bf b}^{\rm T}{\bf w} + c \\
&=
\begin{bmatrix}
w_1 & w_2
\end{bmatrix}
\begin{bmatrix}
1 & 2 \\
3 & 4
\end{bmatrix}
\begin{bmatrix}
w_1 \\
w_2
\end{bmatrix}
+
\begin{bmatrix}
1 & 2
\end{bmatrix}
\begin{bmatrix}
w_1 \\
w_2
\end{bmatrix}
+
1 \\
&=
\begin{bmatrix}
w_1 & w_2
\end{bmatrix}
\begin{bmatrix}
w_1 + 2w_2 \\
3w_1 + 4w_2
\end{bmatrix}
+ w_1 + 2w_2 + 1 \\
&= w_1 (w_1 + 2w_2) + w_2 (3w_1 + 4w_2) + w_1 + 2w_2 + 1 \\
&= w_1^2 + 5 w_1 w_2 + 4 w_2^2 + w_1 + 2 w_2 + 1 \\
\end{aligned}
$$
Rearranging this with respect to $w_1$ and $w_2$ gives
$$
\begin{aligned}
L
&= w_1^2 + (5 w_2 + 1) w_1 + (4 w_2^2 + 2 w_2 + 1) \\
&= 4 w_2^2 + (5 w_1 + 2) w_2 + (w_1^2 + w_1 + 1)
\end{aligned}
$$
so we can see that the objective is a quadratic function of $w_1$ and also of $w_2$.
Viewed as a quadratic function of $w_1$, or of $w_2$, the objective function $L$ has the shape shown in the figure below.
Furthermore, in the three-dimensional space whose axes represent $w_1$, $w_2$, and $L$, the shape of $L$ is as shown in the figure below.
We have illustrated this for the two parameters $w_1$ and $w_2$, but the same holds even when the objective function is characterized by $M + 1$ variables $w_0, w_1, w_2, \dots, w_M$, as long as the objective function is a quadratic form in each parameter.
That is, we need to solve the $M + 1$ simultaneous equations
$$
\begin{cases}
\frac {\partial }{\partial w_0}L = 0 \\
\frac {\partial }{\partial w_1}L = 0 \\
\ \ \ \ \ \vdots \\
\frac {\partial }{\partial w_M}L = 0 \\
\end{cases}
$$
in order to find the minimizer.
Expressed using differentiation with respect to a vector, this is written as follows:
$$
\begin{aligned}
\begin{bmatrix}
\frac {\partial}{\partial w_0} L \\
\frac {\partial}{\partial w_1} L \\
\vdots \\
\frac {\partial}{\partial w_M} L \\
\end{bmatrix}
&=
\begin{bmatrix}
0 \\
0 \\
\vdots \\
0 \\
\end{bmatrix} \\
\Rightarrow \frac {\partial}{\partial {\bf w}} L
&= {\bf 0} \\
\end{aligned}
$$
To solve the equation above for ${\bf w}$, we perform the following algebraic manipulations.
If any part of the derivation is unclear, have another look at [this chapter](https://tutorials.chainer.org/ja/src/05_Basics_of_Linear_Algebra_ja.html).
First, we rearrange the left-hand side:
$$
\begin{aligned}
\frac{\partial}{\partial {\bf w}} L
&= \frac{\partial}{\partial {\bf w}} (c + {\bf b}^{\rm T}{\bf w} + {\bf w}^{\rm T}{\bf A}{\bf w}) \\
&=\frac{\partial}{\partial {\bf w}} (c) + \frac{\partial}{\partial {\bf w}} ({\bf b}^{\rm T}{\bf w}) + \frac{\partial}{\partial {\bf w}} ({\bf w}^{\rm T}{\bf A}{\bf w}) \\
&={\bf 0} + {\bf b} + ({\bf A} + {\bf A}^{\rm T}) {\bf w}
\end{aligned}
$$
Setting this to $0$ and expanding ${\bf A}$ and ${\bf b}$,
$$
\begin{aligned}
-2{\bf X}^{\rm T}{\bf t} + \{ {\bf X}^{\rm T}{\bf X} + ({\bf X}^{\rm T}{\bf X})^{\rm T} \} {\bf w}
&= {\bf 0} \\
-2{\bf X}^{\rm T}{\bf t} + 2{\bf X}^{\rm T}{\bf X}{\bf w}
&= {\bf 0} \\
{\bf X}^{\rm T}{\bf X}{\bf w}& = {\bf X}^{\rm T}{\bf t} \\
\end{aligned}
$$
we can transform the equation as above.
Now, **assuming that ${\bf X}^{\rm T}{\bf X}$ has an inverse**, multiplying both sides from the left by $({\bf X}^{\rm T}{\bf X})^{-1}$ gives
$$
\begin{aligned}
({\bf X}^{\rm T}{\bf X})^{-1} {\bf X}^{\rm T}{\bf X} {\bf w} &= ({\bf X}^{\rm T}{\bf X})^{-1} {\bf X}^{\rm T}{\bf t} \\
{\bf I}{\bf w} &= ({\bf X}^{\rm T}{\bf X})^{-1} {\bf X}^{\rm T}{\bf t} \\
{\bf w} &= ({\bf X}^{\rm T}{\bf X})^{-1}{\bf X}^{\rm T}{\bf t}
\end{aligned}
$$
as the solution. In particular, this last equation is called the **normal equation**.
It computes the optimal parameters ${\bf w}$ from the design matrix ${\bf X}$, whose rows are the given data, and the vector ${\bf t}$, which collects the target values of the data.
${\bf I}$ denotes the identity matrix.
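As a preview of the implementation, the normal equation translates almost directly into NumPy. The sketch below is ours (names included); it uses ***np.linalg.solve*** rather than explicitly forming the inverse, which is mathematically equivalent here but numerically preferable:
```python
import numpy as np

def fit_normal_equation(X, t):
    """Solve (X^T X) w = X^T t for w, assuming X^T X is invertible."""
    return np.linalg.solve(X.T @ X, X.T @ t)
```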
One thing to be careful about when finding ${\bf w}$ is the following example of an incorrect derivation:
$$
\begin{aligned}
{\bf X}^{\rm T}{\bf X}{\bf w} &= {\bf X}^{\rm T}{\bf t} \\
({\bf X}^{\rm T})^{-1} {\bf X}^{\rm T}{\bf X}{\bf w} &= ({\bf X}^{\rm T})^{-1} {\bf X}^{\rm T}{\bf t} \\
{\bf X}{\bf w} &= {\bf t} \\
{\bf X}^{-1}{\bf X}{\bf w} &= {\bf X}^{-1}{\bf t} \\
{\bf w} &= {\bf X}^{-1}{\bf t}
\end{aligned}
$$
This derivation does not hold in general.
Whether it is possible depends on whether ${\bf X}^{-1}$ exists.
When the number of samples $N$ and the number of independent variables $M + 1$ are not equal, ${\bf X} \in \mathbb{R}^{N \times (M + 1)}$ is **not a square matrix**, and therefore has no inverse ${\bf X}^{-1}$.
Consequently, the transformation in the second line above cannot be performed (we omit the more rigorous conditions under which an inverse exists).
On the other hand, ${\bf X}^{\rm T}{\bf X}$ is an $(M + 1) \times (M + 1)$ matrix: it is always square regardless of the number of samples $N$, which is why the derivation uses it instead.
To predict the target value $y_q$ corresponding to a new input ${\bf x}_q = [x_1, \dots, x_M]^{\rm T}$, we use the parameters ${\bf w}$ determined by training and compute
$$
y_q = {\bf w}^{\rm T}{\bf x}_q
$$
in this way.
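In code, this prediction is just an inner product. A tiny sketch follows; the parameter values below are made up for illustration only (in practice ${\bf w}$ comes from the normal equation above):
```python
import numpy as np

# hypothetical fitted parameters [w_0, w_1, w_2, w_3]
w = np.array([10000.0, 2000.0, -20.0, -100000.0])

# new input: [x_0 = 1, floor area, distance to station, crime rate]
x_q = np.array([1.0, 50.0, 600.0, 0.02])

y_q = w @ x_q   # equivalently x_q.T w
print(y_q)
```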
<hr />
<div class="alert alert-info">
**Note 1**
Each item in a dataset is sometimes called a data point (datum). Concretely, this refers to each pair such as $(x_1, t_1)$ in the dataset $\mathcal{D}$ that appeared in the explanation above.
[▲ Back to top](#ref_note1)
</div>
# Assignment 02 Companion Notebook
This notebook contains some exercises to walk you through implementing the linear regression algorithm. We'll pay special attention to debugging and visualization as we go along.
## A Toy Linear Regression Problem Revisited
As we discovered in the last assignment, the idea of a toy problem is very useful for validating that a machine learning algorithm is working as it is intended to. Recall the following basic setup and role for a toy problem:
> Suppose you are given a learning algorithm designed to estimate some model parameters $\textbf{w}$ from some training data.
>
> 1. Generate values for the model parameters $\mathbf{w}$ (e.g., set them to some known values or generate them randomly). If you were applying your algorithm to real data, you would of course not know these parameters, but instead estimate them from data. For our toy problem we'll proceed with values that we generate so we can test our algorithm more easily.
>
> 2. Generate some training input data, $\mathbf{X}$, (random numbers work well for this). Generate the training output data, $\mathbf{y}$, by applying the model with parameters $\mathbf{w}$. For example, in a linear regression problem since $\mathbf{w}$ represents the regression coefficients, then we can generate each training label, $y_i$ as $y_i = \mathbf{x_i}^\top \mathbf{w}$.
>
> 3. Run your learning algorithm on the synthesized training data $\mathbf{X}, \mathbf{y}$ to arrive at estimated values of the model parameters, $\hat{\mathbf{w}}$.
>
> 4. Compare $\mathbf{w}$ and $\hat{\mathbf{w}}$ as a way of understanding whether your learning algorithm is working.
In the next code block, you'll see an example of a toy regression problem where we set $\mathbf{w} = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$ and generate some training data. To make the data a little more interesting, we'll add some noise to the training outputs (you saw this in the last assignment). We'll also visualize the training data.
```python
%matplotlib inline
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
import matplotlib.pyplot as plt
n_points = 50
X = np.random.randn(n_points,2)
w_true = np.array([1, 2])
# we'll apply a Gaussian noise with a standard deviation of 0.5 to the outputs to make it more interesting
y = X.dot(w_true) + np.random.randn(n_points,)*0.5
fig = plt.figure(figsize=(8,8))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(X[:,0], X[:,1], y)
ax.set_xlabel('$x_1$')
ax.set_ylabel('$x_2$')
ax.set_zlabel('y')
plt.show()
```
### *Notebook Exercise 1 (40 minutes)*
Before implementing the algorithm you derived for computing $\mathbf{w}^\star$, let's create a visualization of the sum of squared errors as a function of the entries of $\mathbf{w}$. You should recall from exercises in the assignment document that the sum of squared errors for a particular value of $\mathbf{w}$ can be written as $\left(\mathbf{X}\mathbf{w} - \mathbf{y} \right)^\top \left (\mathbf{X}\mathbf{w} - \mathbf{y} \right)$
(a) Write a function called `sum_of_squared_errors` that takes the parameters `X`, `y`, and `w` and returns the sum of squared errors that this particular value of `w` incurs on the training data `X`, `y`. We have included a skeletal outline of the function along with a unit test (SoftDes flashback!!).
(b) Run the visualization code in the cell below and interpret the resulting output. What do the contour lines represent in the generated plot? Based on the visualization, where is the optimal value of `w` (the one that minizes the squared error)? Does this agree with the setup of the toy problem? If not, why doesn't it match?
```python
def sum_of_squared_errors(X, y, w):
"""
Return the sum of squared errors for the given training data (X, y) and
model parameters w.
>>> sum_of_squared_errors(np.array([[1, 4, 3],\
[2, -1, 4]]),\
np.array([3, 4]),\
np.array([1, 2, 3]))
289
"""
# your code here
pass
import doctest
doctest.testmod()
```
**********************************************************************
File "__main__", line 6, in __main__.sum_of_squared_errors
Failed example:
sum_of_squared_errors(np.array([[1, 4, 3], [2, -1, 4]]), np.array([3, 4]), np.array([1, 2, 3]))
Expected:
289
Got nothing
**********************************************************************
1 items had failures:
1 of 1 in __main__.sum_of_squared_errors
***Test Failed*** 1 failures.
TestResults(failed=1, attempted=1)
```python
w1 = np.linspace(-2, 4, 50)
w2 = np.linspace(-2, 4, 50)
W1, W2 = np.meshgrid(w1, w2)
E = np.array([[sum_of_squared_errors(X, y, np.array([W1[i, j], W2[i, j]])) \
for j in range(W1.shape[1])] \
for i in range(W1.shape[0])])
fig, ax = plt.subplots(figsize=(8,8))
CS = ax.contour(W1, W2, E, colors='black', levels=20)
ax.clabel(CS, inline=1, fontsize=10)
plt.xlabel('$w_1$')
plt.ylabel('$w_2$')
plt.title('Sum of Squared Errors')
plt.show()
```
#### *Expand for Solution*
```python
# ***Solution***
def sum_of_squared_errors(X, y, w):
"""
Return the sum of squared errors for the given training data (X, y) and
model parameters w.
>>> sum_of_squared_errors(np.array([[1, 4, 3],\
[2, -1, 4]]),\
np.array([3, 4]),\
np.array([1, 2, 3]))
289
"""
e = X.dot(w) - y
return e.dot(e)
import doctest
doctest.testmod()
```
```python
# ***Solution***
w1 = np.linspace(-2, 4, 50)
w2 = np.linspace(-2, 4, 50)
W1, W2 = np.meshgrid(w1, w2)
E = np.array([[sum_of_squared_errors(X, y, np.array([W1[i, j], W2[i, j]])) \
for j in range(W1.shape[1])] \
for i in range(W1.shape[0])])
fig, ax = plt.subplots(figsize=(8,8))
CS = ax.contour(W1, W2, E, colors='black', levels=20)
ax.clabel(CS, inline=1, fontsize=10)
plt.xlabel('$w_1$')
plt.ylabel('$w_2$')
plt.title('Sum of Squared Errors')
plt.show()
```
***Solution***
(b) The contour lines represent values of $\mathbf{w}$ that incur equal squared error on the training set. The optimal value of $\mathbf{w}$ (the one that minimizes the error) occurs near $w_1 = 1, w_2 = 2$, which is what we'd expect given the setup of the toy problem.
## Computing the Optimal Weights
Now you're ready to implement the formula that you derived in the assignment document. In that document you should have arrived at the following formula for the optimal weights:
$$\mathbf{w^\star} = \left ( \mathbf{X}^\top \mathbf{X} \right)^{-1} \mathbf{X}^\top \mathbf{y}$$
### Notebook Exercise 2 (20 minutes)
Fill in the body of the function `optimal_weights` below. You've done the hard work to derive this beautiful expression, translating it to code is the last step to glory! Hint: `np.linalg.inv` computes the inverse of a specified matrix. We've included code that will run your code on the training data. Does your code compute sensible values of $\mathbf{w}$ given the setup of the toy problem?
```python
def optimal_weights(X, y):
""" Returns the optimal weights in the least squares sense for the specified
training inputs (X) and training outputs (y) """
# your code here
pass
optimal_weights(X, y)
```
#### Expand for Solution
```python
# ***Solution***
def optimal_weights(X, y):
""" Returns the optimal weights in the least squares sense for the specified
training inputs (X) and training outputs (y) """
return np.linalg.inv(X.T.dot(X)).dot(X.T).dot(y)
optimal_weights(X, y)
```
***Solution***
The output makes sense since it is close to the values used to generate the data, but not identical. We don't expect it to be identical since we added noise to the training outputs.
## Sanity Checking your Implementation
On the first day we talked a lot about evaluating ML models. For instance, we talked about running experiments to see how well they work for some problem. When we are implementing the algorithm ourselves a different and more basic thing we'd like to evaluate is whether we've implemented the algorithm correctly.
While you are probably feeling pretty confident right now that your implementation of linear regression is correct, for more complicated algorithms there have been cases when implementations of algorithms (even in published works) turned out to be incorrect (i.e., they didn't accurately reflect the algorithm that had been derived in the paper). The story round the campfire (by which I mean I heard this from one of my professors in grad school, but I can't seem to find a link online verifying it) is that the initial implementation of the backpropagation algorithm (a foundational algorithm for machine learning in neural networks that we'll be learning about in the coming weeks) was wrong. The experimental results presented in the paper were based on a flawed implementation (although clearly it wasn't so flawed that the results were garbage).
### *Notebook Exercise 3 (30 minutes)*
Let's check out a few strategies that we can use to verify that an implementation of an algorithm is correct.
a. ***Strategy 1: check for local optimality.*** If the machine learning algorithm involves optimizing some function (for example in linear regression you are optimizing squared error), you can verify that the solution you compute is locally optimal. What does it mean for the solution to be locally optimal? One very basic thing we can check is to see whether the value of the error gets strictly higher as we perturb the solution (e.g., add a small delta to the weights computed by your implementation of linear regression). The following not-very-elegant but illustrative code provides an implementation of this optimality check.
As a quick diagnostic of your understanding, what should be true of the output below in order for an implementation to pass the optimality check? Why is it important to test each of the four perturbations below?
```python
w_star = optimal_weights(X, y)
w_star_err = sum_of_squared_errors(X, y, w_star)
perturbation = 10**-5
print(sum_of_squared_errors(X, y, w_star + np.array([perturbation, 0])) - w_star_err)
print(sum_of_squared_errors(X, y, w_star - np.array([perturbation, 0])) - w_star_err)
print(sum_of_squared_errors(X, y, w_star + np.array([0, perturbation])) - w_star_err)
print(sum_of_squared_errors(X, y, w_star - np.array([0, perturbation])) - w_star_err)
```
5.583189910396413e-09
5.583190798574833e-09
6.337681490720115e-09
6.337677938006436e-09
b. ***Strategy 2: check the gradient.*** For many machine learning algorithms that involve optimizing some function (linear regression is a great example) a second sanity check is to verify that the gradient is 0 at a potential solution. Since it is not necessarily straightforward to calculate the gradient of the function we are optimizing, we can instead check that a numerical approximation of the gradient is close to 0. We will use the finite differences method to approximate the gradient.
To help you understand what we mean by finite differences, here is the definition of the derivative of a single variable function.
$$f'(x) = \lim_{h \rightarrow 0} \frac{f(x+h) - f(x)}{h}$$
This definition suggests that we can approximate the derivative using the finite difference method as $f'(x) \approx \frac{f(x+h) - f(x)}{h}$ for some small value of $h$. What typically works even better is to use the method of central differences where we estimate the derivative as $f'(x) \approx \frac{f(x+h) - f(x - h)}{2h}$.
In the code below, we'll apply this idea to estimating the gradient (which consists of two partial derivatives) at the optimal solution returned by your implementation of linear regression.
As a quick check of your understanding, what should be true of the output below in order for an implementation to pass the gradient check?
```python
estimate_partial_w_1 = (sum_of_squared_errors(X, y, w_star + np.array([perturbation, 0])) - sum_of_squared_errors(X, y, w_star - np.array([perturbation, 0])))/(2*perturbation)
estimate_partial_w_2 = (sum_of_squared_errors(X, y, w_star + np.array([0, perturbation])) - sum_of_squared_errors(X, y, w_star - np.array([0, perturbation])))/(2*perturbation)
print(estimate_partial_w_1, estimate_partial_w_2)
```
-4.4408920985006255e-11 1.7763568394002502e-10
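As a side note (not part of the assignment), the same central-difference idea can be wrapped in a small reusable helper that estimates every partial derivative of an arbitrary objective. This is just a sketch, reusing the `X`, `y`, `w_star`, and `sum_of_squared_errors` already defined above:
```python
def numerical_gradient(f, w, h=1e-5):
    """Estimate the gradient of f at w using central differences."""
    grad = np.zeros_like(w, dtype=float)
    for i in range(w.size):
        delta = np.zeros_like(w, dtype=float)
        delta[i] = h
        grad[i] = (f(w + delta) - f(w - delta)) / (2 * h)
    return grad

# the estimated gradient at w_star should be close to the zero vector
print(numerical_gradient(lambda w: sum_of_squared_errors(X, y, w), w_star))
```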
c. ***Strategy 3: compare to a known working implementation.*** Perhaps the most direct approach to validating your implementation would be to compare it to a known working implementation (assuming you have access to one). In the cell below, we call `numpy`'s implementation of linear regression and compare it with your solution.
As a quick check of your understanding, what should be true of the output below in order for an implementation to pass the *compare to a known working implementation* check?
```python
w_known_working, _, _, _ = np.linalg.lstsq(X, y, rcond=-1)
print(w_known_working - w_star)
```
[ 4.44089210e-16 -2.22044605e-15]
#### Expand for Solution
***Solution***
<ol type="a">
<li>In order to pass the test all of the differences printed out should be positive since this corresponds to the error increasing as we move away from the optimal solution we computed. It's important to test all four direction since you want to look along each dimension and in both the positive and negative directions.</li>
<li>The gradient should be close to 0.</li>
<li>The difference between the two vectors should be small.</li>
</ol>
## Training Test Splits: Bikeshare Revisited
In this next section of the notebook we're going to revisit the dataset that we met in the first assignment. Our goals in this activity are twofold.
1. We will introduce the notion of a train / test split for validating machine learning algorithms.
2. We will motivate, derive, and implement an extension to linear regression called [ridge regression](https://en.wikipedia.org/wiki/Tikhonov_regularization).
For your convenience, here is the text from the previous notebook that we used to introduce the dataset.
> The [Bikeshare](https://archive.ics.uci.edu/ml/datasets/bike%20sharing%20dataset) dataset contains daily usage data over a roughly two year period. Along with each record of user counts, there are independent variables that measure various characteristics of the day in question (e.g., whether it was a weekday or a weekend, the air temperature, the wind speed).
The code below loads the dataset and produces hexplots that show various characteristics of the day versus ridership.
```python
import pandas as pd
bikeshare = pd.read_csv('https://raw.githubusercontent.com/kylecho/nd101_p1_neural_network/master/Bike-Sharing-Dataset/day.csv')
X_bikeshare = bikeshare.drop(columns=['instant', 'dteday', 'cnt', 'registered', 'casual'])
y_bikeshare = bikeshare['cnt']
X_bikeshare['bias'] = 1
plt.figure(figsize=(30, 18))
for idx, col in enumerate(X_bikeshare):
plt.subplot(3, 4, idx+1)
plt.hexbin(X_bikeshare[col], y_bikeshare, gridsize=25, cmap='jet')
plt.colorbar()
plt.xlabel(col)
plt.ylabel('rider count')
plt.subplots_adjust(wspace=.2)
plt.show()
```
### Training and Testing Sets
One of the most fundamental ideas in evaluating machine learning algorithms involves partitioning data into a training set (used for fitting a model) and a testing set (used for estimating the performance of the model). There is a pretty comprehensive article on [training, validation, and testing sets](https://en.wikipedia.org/wiki/Training,_validation,_and_test_sets) on Wikipedia, but for now we are not going to be talking about the validation set. Feel free to follow along with our presentation here and keep the link handy for future reference (no need to read the linked article now).
Remember the basic supervised learning problem setup where we are given training data consisting of inputs $\mathbf{x_1}, \mathbf{x_2}, \ldots, \mathbf{x_n}$ and outputs $y_1, y_2, \ldots, y_n$. So far we have been applying our learning algorithms to *all* $n$ of the training data instances. We might then be tempted to estimate how well the resultant model would work on new data by computing the average squared error on these $n$ training instances. It turns out that this approach can wildly overestimate how well the model will work on new data. The reason is that the model parameters (e.g., the weights in linear regression) have been tuned to the training data. Some of these model parameters will reflect genuine relationships between the inputs and outputs, and other model parameters may largely reflect particular quirks of the training data (e.g., noise) that are not applicable to new data.
In order to get an unbiased estimate of the performance of the model on new data we reserve a portion of the training data as a ***testing set***. This testing set is not used to fit the model parameters and is only used to estimate model performance *after the model has been created*.
To clarify what we mean, in the next cell is some code that partitions the Bikeshare data into training and test sets. We will fit the parameters of a linear regression model (using your code!) on the training set and calculate mean squared error on the testing set. To make the code less cluttered, we will be using a helper function from the [`scikit-learn`](https://scikit-learn.org/stable/) library that creates a [training set / testing set split](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html#sklearn.model_selection.train_test_split). This function will ***randomly*** partition the given data into two disjoint sets: the training set and the testing set. The parameter `test_size` controls the fraction of data assigned to the testing set versus the training set. You'll also notice that we divide the `sum_of_squared_errors` by `y_train.shape[0]`, which is the number of training data instances. Dividing the sum of squared errors by the number of training data instances gives us the *mean squared error*. The mean squared error is more interpretable than the sum of squared errors since it controls for the number of data instances.
### Notebook Exercise 4 (30 minutes)
Run the code below several times in order to answer the following questions.
<ol type="a">
<li>What causes the results to change from run to run?</li>
<li>As you run the code multiple times, does there seem to be a trend that the performance on the training set is better (i.e. has lower mean squared error) than the performance on the testing set?</li>
<li>Since the training set was used to fit the model parameters, we might expect the training set to always have better performance than the testing set. It appears that this is not always the case. How is this possible?</li>
</ol>
```python
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
X_train, X_test, y_train, y_test = train_test_split(X_bikeshare, y_bikeshare, test_size=0.5)
w = optimal_weights(X_train, y_train)
print("training set mean squared error=%f" % (sum_of_squared_errors(X_train, y_train, w)/y_train.shape[0]))
print("testing set mean squared error=%f" % (sum_of_squared_errors(X_test, y_test, w)/y_test.shape[0]))
```
training set mean squared error=831323.600458
testing set mean squared error=686501.059788
#### Expand for Solution
***Solution***
<ol type="a">
<li>The changes in output are driven by randomness in the `train_test_split` function. Depending on which instances are assigned to the training versus the testing set, the output will differ.</li>
<li>It does appear that, on average, the performance on the testing set is worse.</li>
<li>On any given run it does not follow that performance on the testing set will always be worse. It could be the case that the testing set happened to contain a lot of easy to predict instances and the training set contained a high number of outliers.</li>
</ol>
### Ridge Regression
So far we've been working with this Bikeshare dataset in cases where we have a relatively high number of training instances compared with the dimensionality of the data. To make this more precise, the shape of `X_train` is 365 by 12, which means we have a ratio of roughly 30:1 training instances to input features. While there are no hard and fast rules about this, a 30:1 ratio is considered pretty good for coming up with good estimates of model parameters.
Suppose instead that we faced a situation where we had very little training data. To simulate this case, below we rerun our experiment with the BikeShare dataset but set the `test_size` to 0.95. You should notice two things when running this code.
1. The performance on the training set is markedly better than the testing set.
2. Occasionally you will get an error message about a singular matrix.
```python
X_train, X_test, y_train, y_test = train_test_split(X_bikeshare, y_bikeshare, test_size=0.95)
print("number of training points %d, number of testing points %d" % (y_train.shape[0], y_test.shape[0]))
w = optimal_weights(X_train, y_train)
print("training set mean squared error=%f" % (sum_of_squared_errors(X_train, y_train, w)/y_train.shape[0]))
print("testing set mean squared error=%f" % (sum_of_squared_errors(X_test, y_test, w)/y_test.shape[0]))
```
number of training points 36, number of testing points 695
training set mean squared error=439139.914045
testing set mean squared error=1518156.344032
The first observation (that the performance on the training set is markedly better than the testing set) is perhaps not very surprising since we now have much less training data to use to reliably estimate the model parameters. To understand the second observation, we need to remind ourselves of the formula for the optimal weights in linear regression.
$$\mathbf{w^\star} = \left ( \mathbf{X}^\top \mathbf{X} \right)^{-1} \mathbf{X}^\top \mathbf{y}$$
The error regarding a singular matrix is coming from the fact that we are computing the inverse of the matrix $\mathbf{X}^\top \mathbf{X}$. One property of a [singular matrix](http://mathworld.wolfram.com/SingularMatrix.html) is that it is not invertible, hence the error message. The reason it is not invertible is that the matrix $\mathbf{X}^\top \mathbf{X}$ is not full rank. This happens when the training data does not properly span the space of the features. This usually happens for a combination of the following reasons:
1. There is too little training data
2. There are features that are defined as linear combinations of each other.
In order to solve this problem, a common approach is to modify the linear regression problem to prefer solutions that have small weights. We do this by penalizing the sum of the squares of the weights themselves. This is called ridge regression (or Tikhonov regularization). Below, we show the original version of ordinary least squares along with ridge regression.
Ordinary least squares:
$$\begin{align}
\mathbf{w^\star} &= \arg\min_\mathbf{w} \sum_{i=1}^n \left ( \mathbf{w}^\top \mathbf{x_i} - y_i \right)^2 \\
&= \arg\min_\mathbf{w} \left ( \mathbf{X}\mathbf{w} - \mathbf{y} \right)^\top \left ( \mathbf{X}\mathbf{w} - \mathbf{y} \right)
\end{align}$$
Ridge regression (note that $\lambda$ is a non-negative parameter that controls how much the algorithm cares about fitting the data and how much it cares about having small weights):
$$\begin{align}
\mathbf{w^\star} &= \arg\min_\mathbf{w} \sum_{i=1}^n \left ( \mathbf{w}^\top \mathbf{x_i} - y_i \right)^2 + \lambda\sum_{i=1}^d w_i^2 \\
&= \arg\min_\mathbf{w} \left ( \mathbf{X}\mathbf{w} - \mathbf{y} \right)^\top \left ( \mathbf{X}\mathbf{w} - \mathbf{y} \right) + \lambda \mathbf{w}^\top \mathbf{w}
\end{align}$$
The penalty term may seem a little arbitrary, but it can be motivated on a conceptual level pretty easily. The basic idea is that in the absence of sufficient training data to suggest otherwise, we should try to make the weights small. Small weights have the property that changes to the input result in minor changes to our predictions, which is a good default behavior.
### Notebook Exercise 5 (60 minutes)
*Note: this one is really a math problem, but we didn't want to send you back to the other document and then back here again. Let us know if you like this or not via NB.*
Derive an expression to compute the optimal weights, $\mathbf{w^\star}$, to the ridge regression problem.
* Hint 1: This is very, very similar to exercise 5 in the assignment document.
* Hint 2: If you follow the same steps as you did in exercise 5, you'll arrive at an expression that looks like this (note: $\mathbf{I}_{d \times d}$ is the $d$ by $d$ identity matrix).
$$\mathbf{w^\star} = \arg\min_\mathbf{w} \mathbf{w}^\top \mathbf{X}^\top \mathbf{X} \mathbf{w} - 2\mathbf{w}^\top \mathbf{X}^\top \mathbf{y} + \mathbf{y}^\top \mathbf{y} + \lambda \mathbf{w}^\top \mathbf{I}_{d \times d} \mathbf{w}$$
* Hint 3: to get $\mathbf{w^\star}$, take the gradient, set it to 0 and solve for $\mathbf{w}$.
#### Expand for Solution
***Solution***
$$\begin{align}\mathbf{w^\star} &= \arg\min_\mathbf{w} \left ( \mathbf{X}\mathbf{w} - \mathbf{y} \right)^\top \left ( \mathbf{X}\mathbf{w} - \mathbf{y} \right) + \lambda \mathbf{w}^\top \mathbf{w} \\
& = \arg\min_\mathbf{w} \mathbf{w}^\top \mathbf{X}^\top \mathbf{X} \mathbf{w} - 2\mathbf{w}^\top \mathbf{X}^\top \mathbf{y} + \mathbf{y}^\top \mathbf{y} + \lambda \mathbf{w}^\top \mathbf{I}_{d \times d} \mathbf{w}\\
&= \arg\min_\mathbf{w} \mathbf{w}^\top \left ( \mathbf{X}^\top \mathbf{X} + \lambda \mathbf{I_{d \times d}} \right )\mathbf{w} - 2\mathbf{w}^\top \mathbf{X}^\top \mathbf{y} + \mathbf{y}^\top \mathbf{y} \\
2 \left ( \mathbf{X}^\top \mathbf{X} + \lambda \mathbf{I_{d \times d}} \right ) \mathbf{w^\star} - 2 \mathbf{X}^\top \mathbf{y} &=0 ~~\mbox{take the gradient and set to 0} \\
\mathbf{w}^\star &= \left ( \mathbf{X}^\top \mathbf{X} + \lambda \mathbf{I_{d \times d}} \right)^{-1} \mathbf{X}^\top \mathbf{y}
\end{align}$$
### Notebook Exercise 6 (20 minutes)
Now we'll be revisiting the Bikeshare dataset and see if ridge regression can help. If you'd like to implement the algorithm yourself, feel free. Since it is a relatively small change from the implementation that you created earlier, we have gone ahead and provided you with implementation below. Here are some questions to test your understanding of the effects of applying ridge regression to the bike share dataset.
<ol type="a">
<li>Run the code below with the default setting of the input `lam`. You should notice that the singular matrix error no longer arises. Make the value of `lam` really large (search over different orders of magnitude to find a value that is really large). What happens to the training and test set errors?</li>
<li>Does there seem to be a value of `lam` that is best (we advise you to search over different orders of magnitude)? How do you define best? What would be a good process for determining a good value of `lam` (we'll be learning about this in much more detail coming up, but we wanted to get you thinking about some possibilities)?</li>
</ol>
```python
def optimal_weights_ridge(X, y, lam):
""" Returns the optimal weights in the least squares sense for the specified
training inputs (X) and training outputs (y) with ridge term `lam` """
return np.linalg.inv(X.T.dot(X) + lam*np.eye(X.shape[1])).dot(X.T).dot(y)
X_train, X_test, y_train, y_test = train_test_split(X_bikeshare, y_bikeshare, test_size=0.95)
print("number of training points %d, number of testing points %d" % (y_train.shape[0], y_test.shape[0]))
w = optimal_weights_ridge(X_train, y_train, 1)
print("training set mean squared error=%f" % (sum_of_squared_errors(X_train, y_train, w)/y_train.shape[0]))
print("testing set mean squared error=%f" % (sum_of_squared_errors(X_test, y_test, w)/y_test.shape[0]))
```
number of training points 36, number of testing points 695
training set mean squared error=608917.228853
testing set mean squared error=942739.384525
#### Expand for Solution
***Solution***
(a) When you make lambda really big, the errors become very large. This is because the model is underfitting to the data. In other words, it cares more about making the weights small than it does about fitting the data.
(b) It's pretty hard to tell, but a value around $0.0001$ seems to work pretty well. We defined best by mentally averaging the testing mean squared error over a few runs. In order to do this more rigorously you'd want to define a space of values to search over, repeat the experiment a number of times, and then choose the value with the best average performance. We'll dig into this more systematically soon, but this is a good answer to arrive at with the tools we have discussed so far.
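To make the search described in (b) concrete, here is a minimal sketch that averages the test mean squared error over several random splits for `lam` values spanning different orders of magnitude. It reuses `optimal_weights_ridge` and `sum_of_squared_errors` from the cells above; the specific grid and number of repeats are arbitrary choices, not a prescribed recipe.

```python
import numpy as np

lams = [10.0**k for k in range(-6, 3)]   # candidate values across orders of magnitude
n_repeats = 20

for lam in lams:
    test_errors = []
    for _ in range(n_repeats):
        X_tr, X_te, y_tr, y_te = train_test_split(X_bikeshare, y_bikeshare, test_size=0.95)
        w = optimal_weights_ridge(X_tr, y_tr, lam)
        test_errors.append(sum_of_squared_errors(X_te, y_te, w) / y_te.shape[0])
    print("lam=%g: mean test MSE over %d splits = %.1f" % (lam, n_repeats, np.mean(test_errors)))
```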
| bc9186a0ba77c4542a8b769de83ae6f3c39d54f5 | 714,876 | ipynb | Jupyter Notebook | M1/Assignment2/Assignment_02_Companion.ipynb | SSModelGit/MachineLearningOlin19 | bd9f4a11f57c6151d266db76193ce9d923fbc4c9 | [
"MIT"
]
| null | null | null | M1/Assignment2/Assignment_02_Companion.ipynb | SSModelGit/MachineLearningOlin19 | bd9f4a11f57c6151d266db76193ce9d923fbc4c9 | [
"MIT"
]
| null | null | null | M1/Assignment2/Assignment_02_Companion.ipynb | SSModelGit/MachineLearningOlin19 | bd9f4a11f57c6151d266db76193ce9d923fbc4c9 | [
"MIT"
]
| null | null | null | 753.293994 | 579,282 | 0.925803 | true | 7,486 | Qwen/Qwen-72B | 1. YES
2. YES | 0.76908 | 0.872347 | 0.670905 | __label__eng_Latn | 0.995804 | 0.397068 |
# ShiftIfYouCan Example Notebook
In this notebook we present a walkthrough example for the ShiftIfYouCan visualisation code.
```python
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
```
<style>.container { width:100% !important; }</style>
### Setup
First we have to import all needed modules.
```python
import numpy as np
# Internal functions
from modules.ext_libraries import f_measure, variations
from modules.operating import operation_count, process_operations, get_variation, get_summary
from modules.plotting import plot_operations
```
By using the magic `%matplotlib notebook`, interaction (zoom in, out, save) with the figure is enabled.
*Note: zoom-in is higly resource-intensive and may be quite slow*
```python
# Uncomment/comment this line if you do/don't need interaction
%matplotlib notebook
import matplotlib.pyplot as plt
FIGSIZE = (18, 2.5)
plt.rcParams['figure.figsize'] = FIGSIZE
```
## Start the Analysis
First we load the beat detections and ground-truth annotations. The examples provided correspond to a highly expressive piece of music.
Then we obtain a list of the corrected detections (`corrected`), that can be saved for further use within a tool such as [Sonic Visualiser](https://www.sonicvisualiser.org/).
We print a summary of the operations in a friendly manner that provides the proposed **annotation efficiency**, defined as follows:
\begin{equation}
ae = t^{+} / (t^{+} + s + f^{+} + f^{-}),
\end{equation}
where $t^{+}$ is the number of true positives, $s$ is the number of shifts, and $f^{-}$ and $f^{+}$ correspond to false negatives and false positives, respectively.
The summary also displays the number of each operation required to transform the detections, thus maximising the (transformed) F-measure.
```python
# Load the beat detections (dets) and the ground-truth annotation (anns)
dets = np.loadtxt('dets.txt')
anns = np.loadtxt('hains006.beats')
# use only the first column (i.e. the time stamp) if these are 2D
if dets.ndim > 1:
dets = dets[:, 0]
if anns.ndim > 1:
anns = anns[:, 0]
```
```python
# Get matrix of operations and annotation efficiency
ops, ann_eff = operation_count(dets, anns)
# Get list of transformed detections
transformed = process_operations(ops)
# Save list of transformed detections
np.savetxt('dets_transformed.txt', transformed, fmt='%.2f')
# Get combined F-measure (tuple with initial F-measure and transformed F-measure)
comb_f_measure = f_measure(dets, anns), f_measure(transformed, anns)
# Display results
print(get_summary('dets', ann_eff, comb_f_measure))
```
- - - - - - - - - - - - - - - - -
dets
- - - - - - - - - - - - - - - - -
annotation_efficiency: 0.877
# "good" detections: 57
# insertions: 3
# deletions: 1
# shifts: 4
(initial) f-measure: 0.905
(transformed) f_measure: 1.000
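As a quick sanity check, the reported annotation efficiency can be reproduced by hand from the printed counts: 57 correct detections, with the 4 shifts, 3 insertions and 1 deletion together making up the $s + f^{+} + f^{-}$ term,

$$ae = \frac{57}{57 + 4 + 3 + 1} = \frac{57}{65} \approx 0.877 \enspace .$$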
# Visualisation
We provide an open-source Python implementation which graphically displays the minimum set and type of operations required to transform a sequence of initial beat detections in such a way as to maximize the F-measure when comparing the transformed detections against a ground truth annotation sequence.
Here we present some examples to demonstrate the usefulness of this rich visualisation of beat tracking evaluation.
All the figures are plotted in interactive mode to allow the user to go into greater detail by zooming in specific parts of the plot.
Note: Besides the configuration provided by the parameterisation of the function plot_operations(), a group of module-level control variables is defined at the top of the module *plotting.py*, where all the plotting functionality is implemented.
```python
# Display visualisation
fig,ax = plot_operations(ops, anns, 'Original')
plt.tight_layout()
plt.show()
```
<IPython.core.display.Javascript object>
We can manually zoom in by using the tools in the navigation bar of the figure. For demonstration purposes, we'll zoom programmatically, e.g. to the area between 68 secs and 73 secs.
In this area of the figure, we can observe the following:
- 1 **correct detection** (at 68.6 secs), clearly within the *inner* tolerance window, depicted as a grey background surrounding the annotation;
- 2 **shifts**, also within the *outer* tolerance window, depicted as a pink background surrounding the annotation; the original ("unshifted") locations of the detections are show as dotted lines (at 70.2 and 71.9 secs) and the final locations of the shifts are depicted as solid lines (at 69.7 and 72.6 secs).
- 1 **insertion** (at 70.8 secs). In this case, since no detection was available within the inner tolerance window (to count as a correct detection), nor within the outer tolerance window (to count as a shift; that window is only drawn when a shift occurs), an **insertion** was counted.
```python
# Zoom the image
ax[0].set_xlim(68, 73) # only when in "interactive" mode
```
(68.0, 73.0)
Finally, we can change the sizes of the inner and outer tolerance windows and update all the calculations and visualisation. We just have to pass the new values for both tolerance windows as (optional) parameters of the corresponding functions.
```python
# Get matrix of operations and annotation efficiency
ops, ann_eff = operation_count(dets, anns, inn_tol_win=0.05, out_tol_win=3.0)
# Get list of transformed detections
transformed = process_operations(ops)
# Save list of transformed detections
np.savetxt('dets_transformed.txt', transformed, fmt='%.2f')
# Get combined F-measure (tuple with initial F-measure and transformed F-measure)
comb_f_measure = f_measure(dets, anns, inn_tol_win=0.05), f_measure(transformed, anns, inn_tol_win=0.05)
# Display results
print(get_summary('dets', ann_eff, comb_f_measure))
# Display visualisation
fig,ax = plot_operations(ops, anns, inn_tol_win=0.05, out_tol_win=3.0)
plt.tight_layout()
plt.show()
```
- - - - - - - - - - - - - - - - -
dets
- - - - - - - - - - - - - - - - -
annotation_efficiency: 0.859
# (correct) detections: 55
# insertions: 2
# deletions: 0
# shifts: 7
(initial) f-measure: 0.873
(transformed) f_measure: 1.000
<IPython.core.display.Javascript object>
### Metrical Ambiguity
To allow for metrical ambiguity in beat tracking evaluation, it is common to create a set of variations of the ground truth by interpolation and sub-sampling operations. In our implementation, we flip this behaviour, and instead create the following variants of the beat detections:
- *Offbeat*: 180 degrees out of phase from the original beat locations;
- *Double*: Beats at 2x the original tempo;
- *Half-odd*: Half of the original tempo, only the odd beats;
- *Half-even*: Half of the original tempo, only the even beats;
- *Triple*: Beats at 3x the original tempo;
- *Third-1*: A Third of the original tempo, 1st beat (1,4,3,2,..)
- *Third-2*: A Third of the original tempo, 2nd beat (2,1,4,3,..)
- *Third-3*: A Third of the original tempo, 3rd beat (3,2,1,4,..)
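For intuition only, the first two variants can be sketched roughly as below. This is an illustration of the idea, not the actual implementation inside `variations()`, which may differ in details (e.g. how the endpoints are handled).

```python
import numpy as np

def offbeat_variant(beats):
    """Midpoints between consecutive beats: roughly 180 degrees out of phase."""
    return 0.5 * (beats[:-1] + beats[1:])

def double_variant(beats):
    """Original beats interleaved with their midpoints: roughly 2x the tempo."""
    return np.sort(np.concatenate([beats, offbeat_variant(beats)]))

# Example usage (assumes `dets` was loaded above):
# print(offbeat_variant(dets)[:5])
# print(double_variant(dets)[:5])
```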
```python
# Get all variations of beat detections
dets_variations, types_variations = variations(dets)
```
Now let's analyse one of the provided variations: e.g. the **Offbeat** variation, which is a version 180 degrees out of phase from the original beat locations.
As we would expect, the values for the annotation efficiency are worse than for the original beat detection: 0.015 vs 0.877.
```python
# Select specific variation of the beat detections
type_variation = 'Offbeat'
dets_variation = get_variation(type_variation, dets_variations, types_variations)
# Get matrix of operations and annotation efficiency
ops, ann_eff = operation_count(dets_variation, anns)
# Get list of transformed detections
transformed = process_operations(ops)
# Get combined f-measure (tuple with initial f-measure and transformed f-measure)
comb_f_measure = f_measure(dets_variation, anns), f_measure(transformed, anns)
# Display summary results
print(get_summary(type_variation, ann_eff, comb_f_measure))
# Get the figure
fig = plot_operations(ops, anns, type_variation)
# Show the plot
plt.tight_layout()
plt.show()
```
- - - - - - - - - - - - - - - - -
Offbeat
- - - - - - - - - - - - - - - - -
annotation_efficiency: 0.015
# (correct) detections: 1
# insertions: 6
# deletions: 3
# shifts: 57
(initial) f-measure: 0.016
(transformed) f_measure: 1.000
<IPython.core.display.Javascript object>
Now let's do the same analysis for the **Double** variation, where the beats occur at 2x the original tempo.
```python
# Select specific variation of the beat detections
type_variation = 'Double'
dets_variation = get_variation(type_variation, dets_variations, types_variations)
# Get matrix of operations and annotation efficiency
ops, ann_eff = operation_count(dets_variation, anns)
# Get list of transformed detections
transformed = process_operations(ops)
# Get combined f-measure (tuple with initial f-measure and transformed f-measure)
comb_f_measure = f_measure(dets_variation, anns), f_measure(transformed, anns)
# Display summary results
print(get_summary(type_variation, ann_eff, comb_f_measure))
# Get the figure
fig = plot_operations(ops, anns, type_variation)
# Show the plot
plt.tight_layout()
plt.show()
```
- - - - - - - - - - - - - - - - -
Double
- - - - - - - - - - - - - - - - -
annotation_efficiency: 0.468
# (correct) detections: 58
# insertions: 1
# deletions: 60
# shifts: 5
(initial) f-measure: 0.620
(transformed) f_measure: 1.000
<IPython.core.display.Javascript object>
| a047a182cb87104e954cc5db7cff8dbd707fc0a9 | 740,338 | ipynb | Jupyter Notebook | ShiftIfYouCan.ipynb | MR-T77/ShiftIfYouCan | 88f08463df909dd0029b5f60f693b8dcadccbf9f | [
"MIT"
]
| 3 | 2020-10-22T13:41:15.000Z | 2022-01-11T13:19:30.000Z | ShiftIfYouCan.ipynb | MR-T77/ShiftIfYouCan | 88f08463df909dd0029b5f60f693b8dcadccbf9f | [
"MIT"
]
| null | null | null | ShiftIfYouCan.ipynb | MR-T77/ShiftIfYouCan | 88f08463df909dd0029b5f60f693b8dcadccbf9f | [
"MIT"
]
| null | null | null | 179.955761 | 182,508 | 0.850684 | true | 2,512 | Qwen/Qwen-72B | 1. YES
2. YES | 0.715424 | 0.721743 | 0.516352 | __label__eng_Latn | 0.97722 | 0.037989 |
# Vibration modes of membranes with convex polygonal shape
Nicolás Guarín Zapata
## Description
The idea is to find the modes of vibration for membranes with (convex) polygonal shape. These are found as eigenvalues for the [Helmholtz equation](http://en.wikipedia.org/wiki/Helmholtz_equation)
$$\left(\nabla^2 + \frac{\omega^2}{c^2}\right) u(x,y) \equiv \left(\nabla^2 + k^2\right) u(x,y) = 0\ \forall\ (x,y)\in\Omega\enspace ,$$
with boundaries fixed, i.e.
$$u(x,y)=0\ \forall\ (x,y) \in \partial\Omega \enspace .$$
This equation is common in Mathematical Physics, e.g., in the solution of Acoustics, Quantum Mechanics and Electromagnetism problems [[1]](#References).
We have an equivalent formulation for this problem, i.e., a [variational](http://en.wikipedia.org/wiki/Calculus_of_variations) (energy) formulation. Starting from
the energy (functional) $E$,
$$ E = U + k^2T \equiv \underbrace{\int\limits_\Omega \nabla u \cdot \nabla u\ dx\ dy}_{U} + k^2\underbrace{\int\limits_\Omega u^2 dx\ dy}_{T} \enspace ,$$
which is equivalent to the original differential equation (under some assumptions) [[2]](#References).
This problem has an analytical solution for some shapes like rectangles, circles and ellipses. So, here we use an approximate method to find the solutions: the [Ritz method](http://en.wikipedia.org/wiki/Ritz_method). The Ritz method is the very same method used in most Finite Element formulations. I will describe the method here, but it should not be read as a formal treatment; it is more like a cartoon-ish formulation.
Let's propose a solution of the form
$$\hat{u}(x,y) = \sum_{n=1}^{N} c_{n} f_n(x,y) \enspace ,$$
where $f_n(x,y)$ are known functions; they should be a subset of a complete basis for the space of solutions... but, for this notebook, they are just polynomials (because they are easy to integrate). So, what we don't know in this equation are the coefficients. Fortunately, there is a theorem that says that the solution of the problem is an extremum (a minimum in this case) of the functional. In our case, that means solving the system of equations
$$\frac{\partial E}{\partial c_{n}} = 0 \enspace .$$
This leads us to
$$[K]\lbrace\mathbf{c}\rbrace = k^2[M]\lbrace\mathbf{c}\rbrace \enspace ,$$
and we know that
$$\left[\frac{\partial^2 U}{\partial c_i \partial c_j }\right]\lbrace \mathbf{c}\rbrace = k^2 \left[\frac{\partial^2 T}{\partial c_i \partial c_j}\right]\lbrace \mathbf{c}\rbrace \enspace .$$
These can be written explicitly as
$$K_{ij} = \int\limits_\Omega \nabla f_i \cdot \nabla f_j \ dx\ dy$$
and
$$M_{ij} = \int\limits_\Omega f_i f_j dx\ dy\, .$$
Hence, we have the formula for computing the components of the (stiffness and mass) matrices and then solve the resulting eigenvalue problem.
## Computing the integrals
The last missing ingredient is the calculation of the integrals $U$ and $T$. We want to compute the integral of known functions $f_n(x,y)$; the main problem is the domain (in other words, the limits of the integrals). The following image shows a (convex) heptagon and a subdivision into triangles that use the centroid as one of the vertices. The idea is to subdivide the integral into integrals over the non-overlapping triangles.
To achieve that (easily) we can transform the domain of integration into a _canonical_ domain, as depicted in the next image.
<center><em>(figure: a triangle with vertices A, B, C mapped to the canonical reference triangle in the (r, s) coordinates)</em></center>
For this simple case the transformation is given by
$$\begin{pmatrix}x\\ y \end{pmatrix} = \mathbf{T}\begin{pmatrix}r\\ s \end{pmatrix} \equiv [J]\begin{pmatrix}r\\ s \end{pmatrix} + \begin{pmatrix}x_A\\ y_A \end{pmatrix} \enspace ,$$
with
$$[J] = \begin{bmatrix} x_B - x_A &x_C - x_A\\ y_B - y_A &y_C - y_A \end{bmatrix} \enspace .$$
And the inverse transformation reads
$$\begin{pmatrix}r\\ s \end{pmatrix} = \mathbf{T}^{-1}\begin{pmatrix}x\\ y \end{pmatrix} \equiv [J^{-1}]\begin{pmatrix}x\\ y \end{pmatrix} + \frac{1}{\det J}\begin{pmatrix}x_A y_C - x_C y_A\\ x_By_A - x_A y_B \end{pmatrix} \enspace ,$$
where
$$[J^{-1}] = \frac{1}{\det J}\begin{bmatrix} y_C - y_A &x_A - x_C\\ y_A - y_B &x_B - x_A \end{bmatrix} \enspace .$$
Then, to compute the integrals we [transform the domain and the functions of interest](http://en.wikipedia.org/wiki/Integration_by_substitution), i.e., $u^2(x,y)$ and $\nabla u(x,y) \cdot \nabla u(x,y)$. And the integrals over one of the triangles are expressed as
$$\int\limits_{\Omega_k} u^2(x,y)\ dx\ dy = \int\limits_{0}^{1}\int\limits_{0}^{1-s} u^2(\phi(r,s))\ \det J\ dr\ ds \enspace ,$$
and
$$\int\limits_{\Omega_k} [\nabla u(x,y))]^2\ dx\ dy = \int\limits_{0}^{1}\int\limits_{0}^{1-s} [\nabla u(\phi(r,s))]^2\ \det J\ dr\ ds \enspace ,$$
in this case we first compute $\nabla u(x,y)$ and later make the change of variable $(x,y) = \phi(r,s)$; it could be done the other way around, but that implies the use of the [chain rule](http://en.wikipedia.org/wiki/Chain_rule). Thus, we loop over the triangles, compute the Jacobian, transform the functions, solve each integral and add them up.
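As a small numerical sanity check of this change of variables (an aside, not part of the derivation): the integral of the constant $1$ over the reference triangle is $1/2$, so multiplying by $\det J$ should recover the area of the original triangle, at least when the vertices are ordered so that $\det J > 0$.

```python
import numpy as np

def triangle_area(A, B, C):
    """Area of the triangle ABC via the shoelace formula."""
    return 0.5 * abs((B[0] - A[0])*(C[1] - A[1]) - (C[0] - A[0])*(B[1] - A[1]))

A, B, C = np.array([0.0, 0.0]), np.array([2.0, 0.0]), np.array([0.5, 1.5])
J = np.column_stack([B - A, C - A])   # same Jacobian as defined above
detJ = np.linalg.det(J)

# integral of 1 over {0 <= r, 0 <= s, r + s <= 1} is 1/2, so the mapped integral is detJ/2
print(detJ * 0.5, triangle_area(A, B, C))   # both should be 1.5
```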
## Algorithm
```python
%matplotlib notebook
```
```python
import numpy as np
from scipy.linalg import eigh
from sympy import (symbols, lambdify, init_printing, cos, sin,
pi, expand, Matrix, diff, integrate, Poly)
from sympy.plotting import plot3d
from sympy.utilities.lambdify import lambdify
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.path import Path
```
```python
x, y, r, s= symbols('x y r s')
init_printing()
```
### Compute the polygon
The polygon is defined as a set of nodes and its connectivity
```python
nsides = 6
poly = [[cos(2*k*pi/nsides), sin(2*k*pi/nsides)] for k in range(0, nsides)]
#poly = [[-1,-1],[1,-1],[1,1],[-1,1]] # Simplest square
npts = len(poly)
lines = [[k,0] if k==npts-1 else [k,k+1] for k in range(npts)]
centroid = [sum([poly[k][0] for k in range(npts)]), sum([poly[k][1] for k in range(npts)])]
```
### Polygon plot
```python
plt.fill(np.array(poly)[:,0], np.array(poly)[:,1], fill=False, ec='k', lw=2, hatch='/')
plt.plot(np.array(poly)[:,0], np.array(poly)[:,1], lw=0, marker='o', ms=8,
mfc="white", mec="black")
plt.axis('image');
plt.xlim(-1.2, 1.2), plt.ylim(-1.2, 1.2);
```
### Boundary conditions
We need our function to satisfy the boundary conditions, i.e.,
$$u(\text{boundary}) = 0 \enspace ,$$
this can be complicated for a general polygon. The easy way to do it is to define a polynomial that is exactly zero when evaluated at the boundary, namely
$$b(x,y) = \prod_{i=1}^{n-1}\left[y - y_i + (x_i - x)\left(\frac{y_i - y_{i+1}}{x_i - x_{i+1}}\right)\right]\left[y- y_n + (x_n - x)\left(\frac{y_n - y_1}{x_n - x_1}\right)\right] \enspace ,$$
being $(x_i,y_i)$ the coordinates of each vertex.
```python
# Polynomial defining the boundaries
def b(x,y,n):
prod = 1
for k in range(0, n):
prod = prod * ((y - poly[lines[k][0]][1])*(poly[lines[k][0]][0] - poly[lines[k][1]][0]) -
(x - poly[lines[k][0]][0])*(poly[lines[k][0]][1] - poly[lines[k][1]][1]))
return prod.expand()
bound = b(x, y, npts)
```
And the boundary function looks like...
```python
b_num = lambdify((x,y), bound, "numpy")
X,Y = np.mgrid[-1:1:50j, -1:1:50j]
Z = b_num(X,Y)
```
```python
fig = plt.figure(figsize=(10, 4))
ax = fig.add_subplot(121, projection='3d')
ax.plot_surface(X, Y, Z, cmap="RdYlBu", lw=0.2, edgecolor="black",
vmin=-2, vmax=2)
surf = ax2 = fig.add_subplot(122)
cont = ax2.contourf(X, Y, Z, 12, cmap="RdYlBu", vmin=-2, vmax=2)
ax2.fill(np.array(poly)[:,0], np.array(poly)[:,1], fill=False, ec='k', lw=1)
ax2.axis("image");
```
<IPython.core.display.Javascript object>
### The function
The function is then given by the product of a boundary function and the linear combinations of (nonredundant) functions over the domain
$$\hat{u}(x,y) = b(x,y)\sum\limits_{n=0}^{N-1} c_{n} W_{n}(x,y) \enspace .$$
The terms $W_{n}(x,y)$ are functions that should be linearly independent, preferably a complete basis for the solution space. In our case we choose polynomials since we want them to be easily integrable.
```python
def w_fun(x, y, m, n):
""" Trial function. """
c = symbols('c:%d' % (m*n)) # This is the way of define the coefficients c_i
w = []
for i in range(0, m):
for j in range(0, n):
w.append(x**i * y**j)
return w, c
def u_fun(x, y, m, n):
""" Complete function. Contains the boundary and trial functions. """
w, c = w_fun(x, y, m, n)
return [b(x, y, npts) * phi for phi in w ], c
m = 2
n = 2
u, c = u_fun(x, y, m, n)
```
### Matrices computation
```python
dudx = [diff(u[k], x) for k in range(len(c))]
dudy = [diff(u[k], y) for k in range(len(c))]
```
```python
Kaux = Matrix(m*n, m*n, lambda ii, jj: dudx[ii]*dudx[jj] + dudy[ii]*dudy[jj])
Maux = Matrix(m*n, m*n, lambda ii, jj: u[ii]*u[jj])
K = Matrix(m*n, m*n, lambda i,j: 0)
M = Matrix(m*n, m*n, lambda i,j: 0)
```
```python
for j in range(len(lines)):
A = [poly[lines[j][0]][0], poly[lines[j][0]][1]]
B = [poly[lines[j][1]][0], poly[lines[j][1]][1]]
C = [centroid[0], centroid[1]]
J = Matrix([[B[0] - A[0], C[0] - A[0]],
[B[1] - A[1], C[1] - A[1]]])
detJ = J.det()
trans = J * Matrix([[r],[s]]) + Matrix(A)
for row in range(m*n):
for col in range(row, m*n):
K_inte = Kaux[row, col].subs({x:trans[0], y:trans[1]})
M_inte = Maux[row, col].subs({x:trans[0], y:trans[1]})
K_inte = integrate(K_inte*detJ, (r, 0, 1-s), (s, 0, 1))
M_inte = integrate(M_inte*detJ, (r, 0, 1-s), (s, 0, 1))
K[row, col] += K_inte
M[row, col] += M_inte
if row != col:
K[col, row] += K_inte
M[col, row] += M_inte
```
So far, everything was done in an analytical fashion. This cannot be the case for the solution of eigenvalue problems, since they [need to](http://en.wikipedia.org/wiki/Abel%E2%80%93Ruffini_theorem) be solved in an iterative way. Thus, we convert our analytical matrices to numpy arrays and proceed
```python
Kn = np.array(K).astype(np.float64)
Mn = np.array(M).astype(np.float64)
```
```python
vals, vecs = eigh(Kn, Mn, eigvals=(0,3))
vals
```
array([ 7.26184612, 18.19131025, 18.19131025, 33.03588824])
### Plot of the modes
They are not particularly neat since the polynomials are also plotted outside their region...
```python
X,Y = np.mgrid[-1:1:200j, -1:1:200j]
verts = np.array(poly, float)
```
```python
path = Path(verts)
mask = 0*X;
for i in range(X.shape[0]):
for j in range(X.shape[1]):
mask[i,j] = path.contains_point((X[i,j],Y[i,j]))
```
```python
for i in range(len(vals)):
U = sum(vecs[j, i]*u[j] for j in range(m*n))
vecU = lambdify((x,y), U, "numpy")
Z = vecU(X,Y)*mask
Z_max = Z.max()
Z_max = max (Z_max, -Z.min())
plt.figure(figsize=(4, 4))
plt.title(r"$k^2=%.2f$" % vals[i], size=16);
plt.fill(np.array(poly)[:,0], np.array(poly)[:,1], fill=False, ec='k', lw=1)
plt.contourf(X, Y, Z, 12, cmap="RdYlBu", vmin=-1.2, vmax=1.2)
plt.axis("image")
plt.colorbar();
```
<IPython.core.display.Javascript object>
<IPython.core.display.Javascript object>
<IPython.core.display.Javascript object>
<IPython.core.display.Javascript object>
## References
1. Arfken, George B., and Hans J. Weber. Mathematical Methods For Physicists International Student Edition. Academic press, 2005.
2. Reddy, J. N. "Applied Functional Analysis And Variational Methods In Engineering, Mcgraw-Hill College Pa." (1986): 546.
```python
from IPython.core.display import HTML
def css_styling():
styles = open('../styles/custom_barba.css', 'r').read()
return HTML(styles)
css_styling()
```
<link href='http://fonts.googleapis.com/css?family=Fenix' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=Alegreya+Sans:100,300,400,500,700,800,900,100italic,300italic,400italic,500italic,700italic,800italic,900italic' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=Source+Code+Pro:300,400' rel='stylesheet' type='text/css'>
<style>
/* Based on Lorena Barba template available at: https://github.com/barbagroup/AeroPython/blob/master/styles/custom.css*/
@font-face {
font-family: "Computer Modern";
src: url('http://mirrors.ctan.org/fonts/cm-unicode/fonts/otf/cmunss.otf');
}
div.cell{
width:800px;
margin-left:16% !important;
margin-right:auto;
}
h1 {
font-family: 'Alegreya Sans', sans-serif;
}
h2 {
font-family: 'Fenix', serif;
}
h3{
font-family: 'Fenix', serif;
margin-top:12px;
margin-bottom: 3px;
}
h4{
font-family: 'Fenix', serif;
}
h5 {
font-family: 'Alegreya Sans', sans-serif;
}
div.text_cell_render{
font-family: 'Alegreya Sans',Computer Modern, "Helvetica Neue", Arial, Helvetica, Geneva, sans-serif;
line-height: 135%;
font-size: 120%;
width:600px;
margin-left:auto;
margin-right:auto;
}
.CodeMirror{
font-family: "Source Code Pro";
font-size: 90%;
}
/* .prompt{
display: None;
}*/
.text_cell_render h1 {
font-weight: 200;
font-size: 50pt;
line-height: 100%;
color:#CD2305;
margin-bottom: 0.5em;
margin-top: 0.5em;
display: block;
}
.text_cell_render h5 {
font-weight: 300;
font-size: 16pt;
color: #CD2305;
font-style: italic;
margin-bottom: .5em;
margin-top: 0.5em;
display: block;
}
.warning{
color: rgb( 240, 20, 20 )
}
</style>
```python
```
| 324922991f7a355dce624c1cdf540127618b46ce | 536,460 | ipynb | Jupyter Notebook | variational/poly_ritz.ipynb | nicoguaro/FEM_resources | 32f032a4e096fdfd2870e0e9b5269046dd555aee | [
"MIT"
]
| 28 | 2015-11-06T16:59:39.000Z | 2022-02-25T18:18:49.000Z | variational/poly_ritz.ipynb | oldninja/FEM_resources | e44f315be217fd78ba95c09e3c94b1693773c047 | [
"MIT"
]
| null | null | null | variational/poly_ritz.ipynb | oldninja/FEM_resources | e44f315be217fd78ba95c09e3c94b1693773c047 | [
"MIT"
]
| 9 | 2018-06-24T22:12:00.000Z | 2022-01-12T15:57:37.000Z | 116.3183 | 200,164 | 0.787835 | true | 4,303 | Qwen/Qwen-72B | 1. YES
2. YES | 0.936285 | 0.867036 | 0.811793 | __label__eng_Latn | 0.791391 | 0.724399 |
$$ \LaTeX \text{ command declarations here.}
\newcommand{\R}{\mathbb{R}}
\renewcommand{\vec}[1]{\mathbf{#1}}
\newcommand{\X}{\mathcal{X}}
\newcommand{\D}{\mathcal{D}}
\newcommand{\vx}{\mathbf{x}}
\newcommand{\vy}{\mathbf{y}}
\newcommand{\vt}{\mathbf{t}}
\newcommand{\vb}{\mathbf{b}}
\newcommand{\vw}{\mathbf{w}}
$$
```python
%pylab inline
import numpy as np
import seaborn as sns
import pandas as pd
from Lec08 import *
```
Populating the interactive namespace from numpy and matplotlib
# EECS 445: Machine Learning
## Lecture 10: Bias-Variance Tradeoff, Cross Validation, ML Advice
* Instructor: **Jacob Abernethy**
* Date: October 10, 2016
## Announcements
- I'm your new lecturer (for about 6 weeks)!
- Course website: https://eecs445-f16.github.io/
- HW3 out later today, due **Saturday 10/22, 5pm**
- We'll release solutions early Sunday, no late submissions after soln's released!
- Midterm exam is **Monday 10/24** in lecture
- We will release a "topic list" and practice exam early next week
- Key point: if you really understand the HW problems, you'll do fine on the exams
## Comments on Recent Piazza discussions
- We are happy to hear your feedback! But please use [Course Survey #2](https://piazza.com/class/issarttijnz3la?cid=185)
- Anonymous Piazza discussions aren't always helpful, and don't reflect overall student needs (Fully-anonymous posting now disallowed).
- The course staff is working very hard, and are investing a lot more time than previous semesters
- Struggling students need to find an OH to get help! If you can't find a time to attend an OH, tell us!
- We will approve all *Late Drop* requests for those who feel they can't catch up.
## Comments on the Mathematical nature of ML
- We know that students who haven't taken a serious Linear Algebra course, as well as a Probability/Stat course, are finding the mathematical aspects to be challenging. We are working to change course prereqs for future semesters.
- ML may not seem like a mathy topic, but **it certainly is**
- This course is near the frontlines of research, and there aren't yet books on the topic that work for EECS445. (But PRML and MLAPP are pretty good...)
- You can't understand the full nature of these algorithmic tools without having a strong grasp of the math concepts underlying them
- It may be painful now, but we're trying to put you all in the elite category of computer scientists who actually know ML
## Review of SVM
```python
plot_svc();
```
- **Separating Hyperplanes**
- **Idea:** divide the vector space $\mathbb{R}^d$ where $d$ is the number of features into 2 "decision regions" with a $\mathbb{R}^{d - 1}$ subspace (a hyperplane).
- Eg. Logistic Regression
- As with other linear classifiers, classification could be achieved by
$$
y = \text{sign}(\vw^T\vx + b)
$$
**Note:** We may use $\vx$ and $\phi(\vx)$ interchangeably to denote features.
- **(Geometric) Margin**
- The distance from a separating hyperplane to the *closest* datapoint of *any* class.
$$
\rho
= \rho(\vw, b)
= \min_{i = 1, ..., n} \frac{| \vw^T\vx_i + b |}{\| \vw \|}
$$
where $\mathbf{x}_i$ is the $i$th datapoint from the training set.
### Finding the Max-Margin Hyperplane
- For dataset $\{\vx_i, t_i \}_{i=1}^n $, maximum margin separating hyperplane is the solution of
$$
\begin{split}
\underset{\vw, b}{\text{maximize}} \quad & \min_{i = 1, ..., n} \frac{| \vw^T\vx_i + b |}{\| \vw \|}\\
\text{subject to} \quad & t_i(\vw^T \vx_i + b) > 0 \quad \forall i \\
\end{split}
$$
of which the constraint ensures every training data is correctly classified
- Note that $t_i \in \{+1, -1\}$ is the label of $i$th training data
- This problem guarantees an optimal hyperplane, but the solution $\vw$ and $b$ is **not** unique:
    - we could scale both $\vw$ and $b$ by an arbitrary scalar without affecting $\mathbb{H} = \{\vx : \vw^T\vx + b = 0\}$
    - so we have infinitely many solutions
### Restatement of Optimization Problem
- Simplifying further, we have
$$
\begin{split}
\underset{\vw, b}{\text{maximize}} \quad & \frac{1}{\| \vw \|}\\
\text{subject to} \quad & t_i(\vw^T \vx_i + b) = 1 \text{ for some } i\\
\quad & t_i(\vw^T \vx_i + b) > 1 \text{ for other } i\\
\end{split}
\Longrightarrow
\begin{split}
\underset{\vw, b}{\text{minimize}} \quad & \frac{1}{2}{\| \vw \|}^2\\\
\text{subject to} \quad & t_i(\vw^T \vx_i + b) \geq 1 \quad \forall i \\
\quad
\end{split}
$$
### Optimal Soft-Margin Hyperplane (OSMH)
- To deal with non-linearly separable case, we could introduce slack variables:
$$
\begin{split}
\underset{\vw, b}{\text{min}} \quad & \frac{1}{2}{\| \vw \|}^2\\\
\text{s.t.} \quad & t_i(\vw^T \vx_i + b) \geq 1 \; \forall i \\
& \\
\end{split}
\;
\Longrightarrow
\;
\begin{split}
\underset{\vw, b, \xi}{\text{min}} \quad & \frac{1}{2}{\| \vw \|}^2 + \frac{C}{n} \sum \nolimits_{i = 1}^n \xi_i\\
\text{s.t.} \quad & t_i(\vw^T\vx_i + b) \geq 1 - \xi_i \; \forall i\\
\quad & \xi_i \geq 0 \; \forall i\\
\end{split}
$$
- New term $\frac{C}{n} \sum_{i = 1}^n \xi_i$ penalizes errors and accounts for the influence of outliers through a constant $C \geq 0$ ($C=\infty$ would lead us back to the hard margin case) and $\mathbf{\xi} = [\xi_1, ..., \xi_n]$ are the "slack" variables.
- **Motivation:**
- The **objective function** ensures margin is large *and* the margin violations are small
- The **first set of constraints** ensures classifier is doing well
* similar to the prev. max-margin constraint, except we now allow for slack
- The **second set of constraints** ensure slack variables are non-negative.
- keeps the optimization problem from *"diverging"*
### OSMH has *Dual* Formulation
- The previous objective function is referred to as the *Primal*
- With $N$ datapoints in $d$ dimensions, the Primal optimizes over $d + 1$ variables ($\vw, b$).
- But the *Dual* of this optimization problem has $N$ variables, one $\alpha_i$ for each example $i$!
$$
\begin{split}
\underset{\alpha}{\text{maximize}} \quad & -\frac12 \sum \nolimits_{i,j = 1}^n \alpha_i \alpha_j t_i t_j \vx_i^T \vx_j + \sum \nolimits_{i = 1}^n \alpha_i\\
\text{subject to} \quad & 0 \leq \alpha_i \leq C/n \quad \forall i\ \\
\quad & \sum \nolimits_{i=1}^n \alpha_i t_i = 0
\end{split}
$$
- Often the Dual problem is easier to solve.
- Once you solve the dual problem for $\alpha^*_1, \ldots, \alpha^*_N$, you get a primal solution as well!
$$
\vw^* = \sum \nolimits_{i=1}^n \alpha_i^* t_i \vx_i \quad \text{and} \quad b^* = t_i - {\vw^*}^T\vx_i \; (\text{ for any } i)
$$
- Note: Generally we can't solve these by hand, one uses optimization packages (such as a QP solver)
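As an aside (not part of the lecture code), scikit-learn's `SVC` solves exactly this kind of dual problem internally. For a linear kernel, the primal weights recovered from the dual coefficients should match the fitted `coef_`; the sketch below uses a toy dataset purely for illustration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_blobs

# toy two-class data; labels mapped to {-1, +1} to match the lecture's convention
X, t = make_blobs(n_samples=100, centers=2, random_state=0)
t = 2*t - 1

clf = SVC(kernel='linear', C=1.0).fit(X, t)

# dual_coef_ stores alpha_i * t_i for the support vectors, so
# w = sum_i alpha_i t_i x_i reduces to dual_coef_ @ support_vectors_
w_from_dual = clf.dual_coef_.dot(clf.support_vectors_)
print(w_from_dual)   # should agree with the primal weights below
print(clf.coef_)
```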
# Statistical Inference
## Loss Functions & Bias-Variance Decomposition
### Estimators
- ML Algorithms can in general be thought of as "estimators."
> **Estimator:** A statistic (a function of data) that is used to infer the value of an unknown parameter in a statistical model.
- Suppose there is a fixed parameter $f$ that needs to be estimated. An estimator of $f$ is a function that maps the sample space to a set of sample estimates, denoted $\hat{f}$.
### Noise
- For most problems in Machine Learning, the relationship is functional but noisy.
- Mathematically, $y = f(x) + \epsilon$
- $\epsilon$ is noise with mean $0$ variance $\sigma^2$
### Mathematical Viewpoint
- Let the training set be $D = \{\mathbf{x}_1, ..., \mathbf{x}_n\}, \mathbf{x}_i \in \mathbb{R}^d$.
- **Goal:** Find $\hat{f}$ that minimizes some **Loss function**, $L(y, \hat{f})$, which measures how good predictions are for **both**
- Points in $D$ (the **sample**), and,
- Points ***out of sample*** (outside $D$).
- Cannot minimize both perfectly because the relationship between $y$ and $\mathbf{x}$ is noisy.
- ***Irreducible error***.
### Loss Functions
There are many loss functions, each with their own use cases and interpretations.
- **Quadratic Loss:** $L(y,\hat{f}) = (y-\hat{f})^2$
- **Absolute Loss:** $L(y,\hat{f}) = |y-\hat{f}|$
Classification-only loss functions:
- **Sigmoid Loss:** $L(y,\hat{f}) = \mathrm{sigmoid}(-y\hat{f})$
- **Zero-One Loss:** $L(y,\hat{f}) = \mathbb{I}(y \neq \hat{f})$
- **Hinge Loss:** $L(y,\hat{f}) = \max(0, 1-y\hat{f})$
- **Logistic Loss:** $L(y,\hat{f}) = \log[ 1 + \exp(-y\hat{f})]$
- **Exponential Loss:** $L(y,\hat{f}) = \exp[ -y \hat{f} ]$
### Choosing a Loss Function
Different loss functions answer the following questions differently:
- How should we treat **outliers**?
- How **"correct"** do we need to be?
- Do we want a **margin** of safety?
- What is our notion of **distance**? What are we predicting?
- Real-world measurements?
- Probabilities?
### Quadratic Loss (aka Square Loss)
- Commonly used for regression
- Heavily influenced by outliers
$$
L(y, \hat{f}) = (y - \hat{f})^2
$$
```python
x = np.linspace(-1, 1, 100);
plt.plot(x, x**2)
plt.xlabel("$y-\hat{f}$", size=18);
```
### Absolute Loss
- Commonly used for regression.
- Robust to outliers.
$$
L(y, \hat{f}) = |y - \hat{f}|
$$
### Absolute Loss: Plot
```python
x = np.linspace(-1, 1, 100);
plt.plot(x, np.abs(x));
plt.xlabel("$y-\hat{f}$", size=18);
plt.ylabel("$|y-\hat{f}|$", size=18);
```
### 0-1 Loss
- Used for classification.
- Not convex!
- Not practical since optimization problems become intractable!
- "Surrogate Loss functions" that are convex and differentiable can be used instead.
$$
L(y, \hat{f}) = \mathbb{I}(y \neq \hat{f})
$$
### Sigmoid Loss
- Differentiable but non-convex! Can be used for classification.
$$L(y,\hat{f}) = \mathrm{sigmoid}(-y\hat{f})$$
```python
x = np.linspace(-6, 6, 100);
plt.plot(x, 1/(1 + np.exp(-x)));
plt.xlabel("$-y\hat{f}$", size=18);
plt.ylabel("$\sigma(-y\hat{f})$", size=18);
```
### Logistic Loss
- Used in Logistic regression.
- Influenced by outliers.
- Provides well calibrated probabilities (can be interpreted as confidence levels).
$$L(y,\hat{f}) = \log[ 1 + \exp(-y\hat{f})]$$
```python
x = np.linspace(-6, 6, 100);
plt.plot(x, np.log2(1 + np.exp(-x)));
plt.xlabel("$y\hat{f}$", size=18);
plt.ylabel("$\log(1 + \exp(-y\hat{f}))$", size=18);
```
### Hinge Loss
- Used in SVMs.
- Robust to outliers.
- Doesn't provide well calibrated probabilities.
$$L(y,\hat{f}) = \max(0, 1-y\hat{f})$$
```python
x = np.linspace(-6, 6, 100);
plt.plot(x, np.where(x < 1, 1 - x, 0));
plt.xlabel("$y\hat{f}$", size=18); plt.ylabel("$\max(0,1-y\hat{f})$", size=18);
```
### Exponential Loss
- Used for Boosting.
- Very susceptible to outliers.
$$L(y,\hat{f}) = \exp(-y\hat{f})$$
```python
x = np.linspace(-3, 3, 100);
plt.plot(x, np.exp(-x));
plt.xlabel("$y\hat{f}$", size=18);
plt.ylabel("$\exp(-y\hat{f})$", size=18);
```
### Loss Functions: Comparison
```python
# adapted from http://scikit-learn.org/stable/auto_examples/linear_model/plot_sgd_loss_functions.html
def plot_loss_functions():
xmin, xmax = -4, 4
xx = np.linspace(xmin, xmax, 100)
plt.plot(xx, xx ** 2, 'm-',
label="Quadratic loss")
plt.plot([xmin, 0, 0, xmax], [1, 1, 0, 0], 'k-',
label="Zero-one loss")
plt.plot(xx, 1/(1 + np.exp(xx)), 'b-',
label="Sigmoid loss")
plt.plot(xx, np.where(xx < 1, 1 - xx, 0), 'g-',
label="Hinge loss")
plt.plot(xx, np.log2(1 + np.exp(-xx)), 'r-',
label="Log loss")
plt.plot(xx, np.exp(-xx), 'c-',
label="Exponential loss")
plt.ylim((0, 8))
plt.legend(loc="best")
plt.xlabel(r"Decision function $f(x)$")
plt.ylabel("$L(y, f)$")
```
```python
# Demonstrate some loss functions
plot_loss_functions()
```
## Break time!
### Risk
**Risk** is the expected loss or error.
- Calculated differently for Bayesian vs. Frequentist Statistics
For now, assume **quadratic loss** $L(y,\hat{f}) = (y-\hat{f})^2$
- Associated risk is $R(\hat{f}) = E_y[L(y, \hat{f})] = E_y[(y-\hat{f})^2]$
### Bias-Variance Decomposition
- Can decompose the expected loss into a **bias** term and **variance** term.
- Depending on samples, learning process can give different results
- ML vs MAP vs Posterior Mean, etc..
- We want to learn a model with
- Small bias (how well a model fits the data on average)
- Small variance (how stable a model is w.r.t. data samples)
### Bias-Variance Decomposition
$$
\begin{align}
\mathbb{E}[(y - \hat{f})^2]
&= \mathbb{E}[y^2 - 2 \cdot y \cdot \hat{f} + {\hat{f}}^2] \\
&= \mathbb{E}[y^2] - \mathbb{E}[2 \cdot y \cdot \hat{f}] + \mathbb{E}[{\hat{f}}^2] \\
&= \mathrm{Var}[y] + {\mathbb{E}[y]}^2 - \mathbb{E}[2 \cdot y \cdot \hat{f}] +
\mathrm{Var}[\hat{f}] + {\mathbb{E}[{\hat{f}}]}^2
\end{align}
$$
since $Var[X] = \mathbb{E}[{X}^2] - {\mathbb{E}[X]}^2 \implies \mathbb{E}[X^2] = Var[X] + {\mathbb{E}[X]}^2$
### Bias-Variance Decomposition
$$\begin{align} \mathbb{E}[y] &= \mathbb{E}[f + \epsilon] \\
&= \mathbb{E}[f] + \mathbb{E}[\epsilon] & \text{ (linearity of expectations)}\\
&= \mathbb{E}[f] + 0 &\text{(zero-mean noise)}\\
&= f & \text{ (} f \text{ is determinstic)}\end{align}$$
### Bias-Variance Decomposition
$$\begin{align} Var[y] &= \mathbb{E}[(y - \mathbb{E}[y])^2] \\
&= \mathbb{E}[(y - f)^2] \\
&= \mathbb{E}[(f + \epsilon - f)^2] \\
&= \mathbb{E}[\epsilon^2] \equiv \sigma^2 \end{align}$$
### Bias-Variance Decomposition
We just showed that:
- $\mathbb{E}[y] = f$
- $\mathrm{Var}[y] = \mathbb{E}[\epsilon^2] = \sigma^2$
Therefore,
$$
\begin{align}
\mathbb{E}[(y - \hat{f})^2]
&= Var[y] + {\mathbb{E}[y]}^2 - \mathbb{E}[2 \cdot y \cdot \hat{f}] + Var[\hat{f}] + {\mathbb{E}[{\hat{f}}]}^2 \\
&= \sigma^2 + f^2 - \mathbb{E}[2 \cdot y \cdot \hat{f}] + Var[\hat{f}] + {\mathbb{E}[{\hat{f}}]}^2
\end{align}
$$
### Bias-Variance Decomposition
- Note $y$ is random ***only*** in $\epsilon$ (again, $f$ is deterministic).
- Also, $\epsilon$ is ***independent*** from $\hat{f}$.
$\begin{align}\mathbb{E}[2 \cdot y \cdot \hat{f}]
&= \mathbb{E}[2 \cdot y \cdot \hat{f}]\\
&= \mathbb{E}[2 \cdot y] \cdot \mathbb{E}[\hat{f}] & \text{ (by independence) }\\
&= 2 \cdot \mathbb{E}[y] \cdot \mathbb{E}[\hat{f}] \\
&= 2 \cdot f \cdot \mathbb{E}[\hat{f}] \end{align}$
Thus, we now have $\mathbb{E}[(y - \hat{f})^2] = \sigma^2 + f^2 - 2 \cdot f \cdot \mathbb{E}[\hat{f}] + Var[\hat{f}] + {\mathbb{E}[{\hat{f}}]}^2$
### Bias-Variance Decomposition
$\mathbb{E}[(y - \hat{f})^2] = \sigma^2 + Var[\hat{f}] + f^2 - 2 \cdot f \cdot \mathbb{E}[\hat{f}] + {\mathbb{E}[{\hat{f}}]}^2$
Now, $f^2 - 2 \cdot f \cdot \mathbb{E}[\hat{f}] + \mathbb{E}[\hat{f}]^2 = (f - \mathbb{E}[\hat{f}])^2$
$\implies \mathbb{E}[(y - \hat{f})^2] = \sigma^2 + Var[\hat{f}] + (f - \mathbb{E}[\hat{f}])^2$
$\begin{align} \text{Finally, } \mathbb{E}[f - \hat{f}]
&= \mathbb{E}[f] - \mathbb{E}[\hat{f}] \text{ (linearity of expectations)} \\
&= f - \mathbb{E}[\hat{f}] \end{align}$
So,
$$\mathbb{E}[(y - \hat{f})^2] = \underbrace{{\sigma^2}}_\text{irreducible error} + \underbrace{{\text{Var}[\hat{f}]}}_\text{Variance} + \underbrace{{\mathbb{E}[f - \mathbb{E}[\hat{f}]]}^2}_{\text{Bias}^2}$$
### Bias-Variance Decomposition
We have
$$\mathbb{E}[(y - \hat{f})^2] = \underbrace{{\sigma^2}}_\text{irreducible error} + \underbrace{{\text{Var}[\hat{f}]}}_\text{Variance} + \underbrace{{\mathbb{E}[f - \mathbb{E}_S[\hat{f}]]}^2}_{\text{Bias}^2}$$
### Bias and Variance Formulae
Bias of an estimator, $B(\hat{f}) = \mathbb{E}[\hat{f}] - f$
Variance of an estimator, $Var(\hat{f}) = \mathbb{E}[(\hat{f} - \mathbb{E}[\hat{f}])^2]$
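A tiny numerical illustration of these two formulae (an aside, not from the slides): for the sample mean of $n$ Gaussian draws, the bias should be approximately $0$ and the variance approximately $\sigma^2/n$.

```python
import numpy as np

np.random.seed(0)
mu, sigma, n, trials = 2.0, 1.0, 10, 20000

# each trial draws a fresh "training set" and computes the estimator (the sample mean)
estimates = np.array([np.mean(np.random.normal(mu, sigma, n)) for _ in range(trials)])

bias = np.mean(estimates) - mu                           # E[f_hat] - f
variance = np.mean((estimates - np.mean(estimates))**2)  # E[(f_hat - E[f_hat])^2]

print("bias     ~ %.4f (expected ~0)" % bias)
print("variance ~ %.4f (expected %.4f)" % (variance, sigma**2 / n))
```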
### An example to explain Bias/Variance and illustrate the tradeoff
- Consider estimating a sinusoidal function.
(The example that follows is inspired by Yaser Abu-Mostafa's CS 156 lecture titled "Bias-Variance Tradeoff".)
```python
import pylab as pl
RANGEXS = np.linspace(0., 2., 300)
TRUEYS = np.sin(np.pi * RANGEXS)
def plot_fit(x, y, p, show,color='k'):
xfit = RANGEXS
yfit = np.polyval(p, xfit)
if show:
axes = pl.gca()
axes.set_xlim([min(RANGEXS),max(RANGEXS)])
axes.set_ylim([-2.5,2.5])
pl.scatter(x, y, facecolors='none', edgecolors=color)
pl.plot(xfit, yfit,color=color)
pl.hold('on')
pl.xlabel('x')
pl.ylabel('y')
```
```python
def calc_errors(p):
x = RANGEXS
errs = []
for i in x:
errs.append(abs(np.polyval(p, i) - np.sin(np.pi * i)) ** 2)
return errs
```
```python
def calculate_bias_variance(poly_coeffs, input_values_x, true_values_y):
# poly_coeffs: a list of polynomial coefficient vectors
# input_values_x: the range of xvals we will see
# true_values_y: the true labels/targes for y
# First we calculate the mean polynomial, and compute the predictions for this mean poly
mean_coeffs = np.mean(poly_coeffs, axis=0)
mean_predicted_poly = np.poly1d(mean_coeffs)
mean_predictions_y = np.polyval(mean_predicted_poly, input_values_x)
# Then we calculate the error of this mean poly
bias_errors_across_x = (mean_predictions_y - true_values_y) ** 2
# To consider the variance errors, we need to look at every output of the coefficients
variance_errors = []
for coeff in poly_coeffs:
predicted_poly = np.poly1d(coeff)
predictions_y = np.polyval(predicted_poly, input_values_x)
# Variance error is the average squared error between the predicted values of y
# and the *average* predicted value of y
variance_error = (mean_predictions_y - predictions_y)**2
variance_errors.append(variance_error)
variance_errors_across_x = np.mean(np.array(variance_errors),axis=0)
return bias_errors_across_x, variance_errors_across_x
```
```python
from matplotlib.pylab import cm
def polyfit_sin(degree=0, iterations=100, num_points=5, show=True):
total = 0
l = []
coeffs = []
errs = [0] * len(RANGEXS)
colors=cm.rainbow(np.linspace(0,1,iterations))
for i in range(iterations):
np.random.seed()
x = np.random.choice(RANGEXS,size=num_points) # Pick random points from the sinusoid
y = np.sin(np.pi * x)
p = np.polyfit(x, y, degree)
y_poly = [np.polyval(p, x_i) for x_i in x]
plot_fit(x, y, p, show,color=colors[i])
total += sum(abs(y_poly - y) ** 2) # calculate Squared Error (Squared Error)
coeffs.append(p)
errs = np.add(calc_errors(p), errs)
return total / iterations, errs / iterations, np.mean(coeffs, axis = 0), coeffs
```
```python
def plot_bias_and_variance(biases,variances,range_xs,true_ys,mean_predicted_ys):
pl.plot(range_xs, mean_predicted_ys, c='k')
axes = pl.gca()
axes.set_xlim([min(range_xs),max(range_xs)])
axes.set_ylim([-3,3])
pl.hold('on')
pl.plot(range_xs, true_ys,c='b')
pl.errorbar(range_xs, mean_predicted_ys, yerr = biases, c='y', ls="None", zorder=0,alpha=1)
pl.errorbar(range_xs, mean_predicted_ys, yerr = variances, c='r', ls="None", zorder=0,alpha=0.1)
pl.xlabel('x')
pl.ylabel('y')
```
## Let's return to fitting polynomials
* Here we generate some samples $x,y$, with $y = \sin(\pi x)$ for $x \in [0, 2]$
* We then fit a *degree-0* polynomial (i.e. a constant function) to the samples
```python
# polyfit_sin() generates 5 samples of the form (x,y) where y=sin(pi*x)
# then it tries to fit a degree=0 polynomial (i.e. a constant func.) to the data
# Ignore return values for now, we will return to these later
_, _, _, _ = polyfit_sin(degree=0, iterations=1, num_points=5, show=True)
```
## We can do this over many datasets
* Let's sample a number of datasets
* How does the fitted polynomial change for different datasets?
```python
# Fit a constant to 5 random samples of sin(pi * x), repeated 5 times
_, _, _, _ = polyfit_sin(0, 5)
```
## What about over lots more datasets?
```python
# Fit a constant to 5 random samples of sin(pi * x), repeated 25 times
_, _, _, _ = polyfit_sin(0, 25)
```
```python
MSE, errs, mean_coeffs, coeffs_list = polyfit_sin(0, 100,num_points = 3,show=False)
biases, variances = calculate_bias_variance(coeffs_list,RANGEXS,TRUEYS)
plot_bias_and_variance(biases,variances,RANGEXS,TRUEYS,np.polyval(np.poly1d(mean_coeffs), RANGEXS))
```
* Decomposition: $\mathbb{E}[(y - \hat{f})^2] = \underbrace{{\sigma^2}}_\text{irreducible error} + \underbrace{{\text{Var}[\hat{f}]}}_\text{Variance} + \underbrace{{\mathbb{E}[f - \mathbb{E}_S[\hat{f}]]}^2}_{\text{Bias}^2}$
* Blue curve: true $f$
* Black curve: $\hat f$, average predicted values of $y$
* Yellow is error due to **Bias**, Red/Pink is error due to **Variance**
## Bias vs. Variance
* We can calculate how much error we suffered due to bias and due to variance
```python
poly_degree = 0
results_list = []
MSE, errs, mean_coeffs, coeffs_list = polyfit_sin(
poly_degree, 500,num_points = 5,show=False)
biases, variances = calculate_bias_variance(coeffs_list,RANGEXS,TRUEYS)
sns.barplot(x='type', y='error',hue='poly_degree', data=pd.DataFrame([
{'error':np.mean(biases), 'type':'bias','poly_degree':0},
{'error':np.mean(variances), 'type':'variance','poly_degree':0}]))
```
## Let's now fit degree=3 polynomials
* Let's sample a dataset of 5 points and fit a cubic poly
```python
MSE, _, _, _ = polyfit_sin(degree=3, iterations=1)
```
## Let's now fit degree=3 polynomials
* What does this look like over 5 different datasets?
```python
_, _, _, _ = polyfit_sin(degree=3,iterations=5,num_points=5,show=True)
```
## Let's now fit degree=3 polynomials
* What does this look like over 50 different datasets?
```python
# Fit a cubic to 5 random samples of sin(pi * x), repeated 50 times
_, _, _, _ = polyfit_sin(degree=3, iterations=50)
```
```python
MSE, errs, mean_coeffs, coeffs_list = polyfit_sin(3,500,show=False)
biases, variances = calculate_bias_variance(coeffs_list,RANGEXS,TRUEYS)
plot_bias_and_variance(biases,variances,RANGEXS,TRUEYS,np.polyval(np.poly1d(mean_coeffs), RANGEXS))
```
$$\mathbb{E}[(y - \hat{f})^2] = \underbrace{{\sigma^2}}_\text{irreducible error} + \underbrace{{\text{Var}[\hat{f}]}}_\text{Variance} + \underbrace{{\mathbb{E}[f - \mathbb{E}_S[\hat{f}]]}^2}_{\text{Bias}^2}$$
* Blue curve: true $f$
* Black curve: $\hat f$, average *prediction* (of the value of $y$)
* Yellow is error due to **Bias**, Red/Pink is error due to **Variance**
```python
results_list = []
for poly_degree in [0,1,3]:
MSE, errs, mean_coeffs, coeffs_list = polyfit_sin(poly_degree,500,num_points=5,show=False)
biases, variances = calculate_bias_variance(coeffs_list,RANGEXS,TRUEYS)
results_list.append({'error':np.mean(biases),
'type':'bias', 'poly_degree':poly_degree})
results_list.append({'error':np.mean(variances),
'type':'variance', 'poly_degree':poly_degree})
sns.barplot(x='type', y='error',hue='poly_degree',data=pd.DataFrame(results_list))
```
### Bias Variance Tradeoff
#### Central problem in supervised learning.
Ideally, one wants to choose a model that both accurately captures the regularities in its training data, but also generalizes well to unseen data. Unfortunately, it is typically impossible to do both simultaneously.
- High Variance:
- Model represents the training set well.
- Overfit to noise or unrepresentative training data.
- Poor generalization performance
- High Bias:
- Simplistic models.
- Fail to capture regularities in the data.
- May give better generalization performance.
### Interpretations of Bias
- Captures the errors caused by the simplifying assumptions of a model.
- Captures the average errors of a model across different training sets.
### Interpretations of Variance
- Captures how much a learning method moves around the mean.
- How different can one expect the hypotheses of a given model to be?
- How sensitive is an estimator to different training sets?
### Complexity of Model
- Simple models generally have high bias and complex models generally have low bias.
- Simple models generally have low variance and complex models generally have high variance.
- Underfitting / Overfitting
- High variance is associated with overfitting.
- High bias is associated with underfitting.
### Training set size
- Decreasing the training set size
    - Is mainly an option when an algorithm has high bias:
        - More data will in general not improve performance.
        - The same performance can be attained with a smaller training sample.
        - Additional advantage of increased speed.
- Increasing the training set size
    - Decreases variance by reducing overfitting.
### Number of features
- Increasing the number of features.
- Decreases bias at the expense of increasing the variance.
- Decreasing the number of features.
- Dimensionality reduction can decrease variance by reducing over-fitting.
### Features
Many techniques for engineering and selecting features (Feature Engineering and Feature Extraction)
- PCA, Isomap, Kernel PCA, Autoencoders, Latent sematic analysis, Nonlinear dimensionality reduction, Multidimensional Scaling
### Features
The importance of features
> "Coming up with features is difficult, time-consuming, requires expert knowledge. Applied machine learning is basically feature engineering"
- Andrew Ng
> "... some machine learning projects succeed and some fail. What makes the difference? Easily the most important factor is the features used."
- Pedro Domingos
### Regularization (Changing $\lambda$ or $C$)
Regularization is designed to impose simplicity by adding a penalty term that depends on the characteristics of the parameters.
- Decrease Regularization.
- Reduces bias (allows the model to be more complex).
- Increase Regularization.
- Reduces variance by reducing overfitting (again, regularization imposes "simplicity.")
### Ideal bias and variance?
- All is not lost. Bias and Variance can both be lowered through some methods:
- Ex: Boosting (learning from weak classifiers).
- The sweet spot for a model is the level of complexity at which the increase in bias is equivalent to the reduction in variance.
# Model Selection
### Model Selection
- ML Algorithms generally have a lot of parameters that must be chosen. A natural question is then "How do we choose them?"
- Examples: Penalty for margin violation (C), Polynomial Degree in polynomial fitting
### Model Selection
- Simple Idea:
- Construct models $M_i, i = 1, ..., n$.
- Train each of the models to get a hypothesis $h_i, i = 1, ..., n$.
- Choose the best.
- Does this work? No! Overfitting. This brings us to **cross validation**.
### Hold-Out Cross Validation
(1) Randomly split the training data $D$ into $D_{train}$ and $D_{val}$, say 70% of the data and 30% of the data respectively.
(2) Train each model $M_i$ on $D_{train}$ only, each time getting a hypothesis $h_i$.
(3) Select and output hypothesis $h_i$ that had the smallest error on the held out validation set.
Disadvantages:
- Wastes a sizable amount of data (30\% in the above scenario), so that fewer training examples are available.
- Using only some data for training and other data for validation.
### K-Fold Cross Validation (Step 1)
Randomly split the training data $D$ into $K$ ***disjoint*** subsets of $N/K$ training samples each.
- Let these subsets be denoted $D_1, ..., D_K$.
### K-Fold Cross Validation (Step 2)
For each model $M_i$, we evaluate the model as follows:
- Train the model $M_i$ on $D \setminus D_k$ (all of the subsets except subset $D_k$) to get hypothesis $h_i(k)$.
- Test the hypothesis $h_i(k)$ on $D_k$ to get the error (or loss) $\epsilon_i(k)$.
- Estimated generalization error for model $M_i$ is then given by $e^g_i = \frac{1}{K} \sum \limits_{k = 1}^K \epsilon_i (k)$
### K-Fold Cross Validation (Step 3)
Pick the model $M_i^*$ with the lowest estimated generalization error $e^{g*}_i$ and retrain the model on the entire training set, thus giving the final hypothesis $h^*$ that is output.
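A minimal sketch of this procedure using scikit-learn's `KFold` for the splits; the model class (`Ridge`) and the candidate hyperparameter values are placeholders for whatever models $M_i$ are actually being compared.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

def estimated_generalization_error(model, X, y, K=5):
    """Average held-out error over K folds (step 2 above)."""
    kf = KFold(n_splits=K, shuffle=True, random_state=0)
    errors = []
    for train_idx, val_idx in kf.split(X):
        model.fit(X[train_idx], y[train_idx])
        errors.append(mean_squared_error(y[val_idx], model.predict(X[val_idx])))
    return np.mean(errors)

# Step 3: pick the best candidate and retrain it on the full training set
# models = {lam: Ridge(alpha=lam) for lam in [0.01, 0.1, 1.0, 10.0]}
# best = min(models, key=lambda lam: estimated_generalization_error(models[lam], X, y))
# final_model = models[best].fit(X, y)
```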
### Three Way Data Splits
- If model selection and true error estimates are to be computed simultaneously, the data needs to be divided into three disjoint sets.
- Training set: A set of examples used for learning
- Validation set: A set of examples used to tune the hyperparameters of a classifier.
- Test Set: A set of examples used *** only *** to assess the performance of a fully-trained model.
### Procedure Outline
1. Divide the available data into training, validation and test set
2. Select a model (and hyperparameters)
3. Train the model using the training set
4. Evaluate the model using the validation set
5. Repeat steps 2 through 4 using different models (and hyperparameters)
6. Select the best model (and hyperparameter) and train it using data from the training and validation set
7. Assess this final model using the test set
### How to choose hyperparameters?
Cross Validation is only useful if we have some number of models. This often means constructing models each with a different combination of hyperparameters.
### Random Search
- Just choose each hyperparameter randomly (possibly within some range for each.)
- Pro: Easy to implement. Viable for models with a small number of hyperparameters and/or low dimensional data.
- Con: Very inefficient for models with a large number of hyperparameters or high dimensional data (curse of dimensionality.)
### Grid Search / Parameter Sweep
- Choose a subset for each of the parameters.
- Discretize real valued parameters with step sizes as necessary.
- Output the model with the best cross validation performance.
- Pro: "Embarassingly Parallel" (Can be easily parallelized)
- Con: Again, curse of dimensionality poses problems.
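A minimal grid-search sketch (the hyperparameter ranges and the SVM model are illustrative; each grid point is scored with 5-fold cross validation):
```python
# Grid search / parameter sweep sketch (illustrative parameter grids).
from itertools import product
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, random_state=0)
Cs = [0.1, 1.0, 10.0]        # discretized values for C
gammas = [0.01, 0.1, 1.0]    # discretized values for gamma

best_score, best_params = -np.inf, None
for C, gamma in product(Cs, gammas):                     # every grid combination
    score = cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=5).mean()
    if score > best_score:
        best_score, best_params = score, (C, gamma)
```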
### Bayesian Optimization
- Assumes that there is a smooth but noisy relation that acts as a mapping from hyperparameters to the objective function.
- Gathers observations so as to evaluate the machine learning model as few times as possible while revealing as much information as possible about the mapping and, in particular, the location of the optimum.
- Exploration vs. Exploitation problem.
### Learning Curves
Provide a visualization for diagnostics such as:
- Bias / variance
- Convergence
```python
# Image from Andrew Ng's Stanford CS229 lecture titled "Advice for applying machine learning"
from IPython.display import Image
Image(filename='images/HighVariance.png', width=800, height=600)
# Testing error still decreasing as the training set size increases. Suggests increasing the training set size.
# Large gap Between Training and Test Error.
```
```python
# Image from Andrew Ng's Stanford CS229 lecture titled "Advice for applying machine learning"
from IPython.display import Image
Image(filename='images/HighBias.png', width=800, height=600)
# Training error is unacceptably high.
# Small gap between training error and testing error.
```
### Convergence
- Approach 1:
- Measure gradient of the learning curve.
- As learning curve gradient approaches 0, the model has been trained. Choose threshold to stop training.
- Approach 2:
- Measure change in the model parameters each iteration of the algorithm.
- One can assume that training is complete when the change in model parameters is below some threshold.
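A tiny sketch of both stopping criteria on a toy quadratic objective (the function, step size, and thresholds are illustrative):
```python
# Convergence checks sketch: gradient norm vs. parameter change (illustrative).
import numpy as np

def grad(w):                 # gradient of f(w) = ||w||^2 / 2
    return w

w = np.array([5.0, -3.0])
lr, tol = 0.1, 1e-6
for it in range(10000):
    g = grad(w)
    if np.linalg.norm(g) < tol:            # Approach 1: gradient close to 0
        break
    w_new = w - lr * g
    if np.linalg.norm(w_new - w) < tol:    # Approach 2: parameters barely change
        w = w_new
        break
    w = w_new
```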
### Diagnostics related to Convergence (1)
- Convergence too slow?
- Try using Newton's method.
- Larger step size.
- Note that too large of a step size could also lead to slow convergence (but the learning curves in general will then suggest instability if "oscillations" are occurring).
- Decrease batch size if using a batch based optimization algorithm.
### Diagnostics related to Convergence (2)
- Are the learning curves stable? If not:
- Switch to a batch style optimization algorithm if not already using one (like minibatch gradient descent / gradient descent).
- Increase batch sizes if already using one.
- Some algorithms guarantee a decrease (or increase) in the objective function at each iteration. Ensure that this is the case if the optimization algorithm being used provides such guarantees.
### Ablative Analysis
- Similar to the idea of cross validation, except for components of a system.
- Example: Simple Logistic Regression on spam classification gives 94% performance.
- 95% with spell correction
- 96% with top 100 most commonly used words removed
- 98% with extra sender and receiver information
- 99% overall performance
| 58107d185c358b12ba15072b8d9bdee191004043 | 938,060 | ipynb | Jupyter Notebook | lecture10_bias-variance-tradeoff/lecture10_bias-variance-tradeoff.ipynb | xipengwang/umich-eecs445-f16 | 298407af9fd417c1b6daa6127b17cb2c34c2c772 | [
"MIT"
]
| 97 | 2016-09-11T23:15:35.000Z | 2022-02-22T08:03:24.000Z | lecture10_bias-variance-tradeoff/lecture10_bias-variance-tradeoff.ipynb | eecs445-f16/umich-eecs445-f16 | 298407af9fd417c1b6daa6127b17cb2c34c2c772 | [
"MIT"
]
| null | null | null | lecture10_bias-variance-tradeoff/lecture10_bias-variance-tradeoff.ipynb | eecs445-f16/umich-eecs445-f16 | 298407af9fd417c1b6daa6127b17cb2c34c2c772 | [
"MIT"
]
| 77 | 2016-09-12T20:50:46.000Z | 2022-01-03T14:41:23.000Z | 447.334287 | 188,380 | 0.927476 | true | 9,614 | Qwen/Qwen-72B | 1. YES
2. YES | 0.760651 | 0.70253 | 0.53438 | __label__eng_Latn | 0.944838 | 0.079873 |
---
layout: page
title: Central Limit Theorem
nav_order: 7
---
[](https://colab.research.google.com/github/icd-ufmg/icd-ufmg.github.io/blob/master/_lessons/07-tcl.ipynb)
# Central Limit Theorem
{: .no_toc .mb-2 }
The theorem underlying our hypothesis tests
{: .fs-6 .fw-300 }
{: .no_toc .text-delta }
Expected Outcomes
1. Review probability concepts related to the normal distribution
1. Review the central limit theorem
1. Understand the central limit theorem
1. Simulate means from any distribution
1. Show how the distribution of the mean follows a normal
---
**Contents**
1. TOC
{:toc}
---
```python
# -*- coding: utf8
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
```
```python
plt.style.use('seaborn-colorblind')
plt.rcParams['figure.figsize'] = (16, 10)
plt.rcParams['axes.labelsize'] = 20
plt.rcParams['axes.titlesize'] = 20
plt.rcParams['legend.fontsize'] = 20
plt.rcParams['xtick.labelsize'] = 20
plt.rcParams['ytick.labelsize'] = 20
plt.rcParams['lines.linewidth'] = 4
```
```python
plt.ion()
```
```python
def despine(ax=None):
if ax is None:
ax = plt.gca()
# Hide the right and top spines
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
# Only show ticks on the left and bottom spines
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
```
## Introduction
One reason the normal distribution is so useful is the central limit theorem, which says (in essence) that a random variable defined as the mean (or sum) of a large number of independent and identically distributed random variables is approximately normally distributed. In other words, the sampling distribution of the mean follows a normal.
In detail, let $X_1, ..., X_n$ be random variables. In particular, all of the RVs were sampled from the same population with (finite) mean $\mu$ and (finite) standard deviation $\sigma$. Moreover, each RV is generated independently of the others, and all are identically distributed. When $n$ is large, then
$$\frac{1}{n}(X_1 + \cdots + X_n)$$
is approximately distributed as a Normal with mean $\mu$ and standard deviation $\sigma/\sqrt{n}$. Equivalently (but often more usefully),
$$Z = \frac{\frac{1}{n}(X_1 + \cdots + X_n) - \mu }{\sigma / \sqrt{n}}$$
is approximately a normal with mean 0 and standard deviation 1.
$$Z \sim Normal(0, 1).$$
### How do we transform RVs?
Remember from the previous class that we know how to estimate:
$$\bar{x} \approx \mu$$
and
$$s^2 \approx \sigma^2$$
Moreover, we know that the variance of the estimator of the mean is:
$$Var(\hat{\mu}) = \frac{\sigma^2}{n}$$
Thus:
\begin{align}
\bar{X} \sim Normal(\mu, \frac{\sigma^2}{n}) \\
\bar{X}- \mu \sim Normal(0, \frac{\sigma^2}{n}) \\
\frac{\bar{X}- \mu}{\sigma / \sqrt{n}} \sim Normal(0, 1) \\
\end{align}
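A quick numerical check of this standardization (the Exponential(1) population here is just an illustrative choice): means of samples of size $n$, centered and scaled as above, should look like a Normal(0, 1).
```python
# CLT standardization check (illustrative population: Exponential with mean 1 and std 1).
import numpy as np

n = 100
mu, sigma = 1.0, 1.0
sample_means = np.random.exponential(mu, size=(10000, n)).mean(axis=1)
z = (sample_means - mu) / (sigma / np.sqrt(n))
print(z.mean(), z.std())   # should be close to 0 and 1
```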
## Coin Example
Consider the case of a coin being flipped. Now, choose a number `n` (the sample size) and generate samples. That is, flip a coin `n` times. Finally, add up how many times the coin lands on `heads` (or `tails`). This is one sum for a sample of size `n`.
The process that generates this data is well captured by a Binomial distribution. Binomial random variables have two parameters, $n$ and $p$. The binomial distribution is useful for counting the number of successes out of $n$ trials, each with probability $p$. For example, how many times out of $n$ a coin with probability $p$ comes up heads. Formally, a Binomial($n, p$) random variable is simply the sum of $n$ independent Bernoulli($p$) random variables, each equal to $1$ with probability $p$ and $0$ with probability $1 - p$.
When we draw one value from a Binomial we are saying "Flip a coin n times and count the heads!". In the case below, flip a coin 5 times and count the heads!
```python
num_caras = np.random.binomial(5, 0.5)
num_caras
```
4
Let's repeat the process several times! Flip a coin 5 times, stop, breathe, then flip 5 more. And so on. Note that we get the count for each experiment of size 5.
```python
np.random.binomial(5, 0.5, size=10)
```
array([3, 2, 3, 1, 1, 3, 2, 3, 2, 1])
Now, let's look at a plot of this experiment!
```python
num_caras_a_cada_5 = np.random.binomial(5, 0.5, size=10000)
plt.hist(num_caras_a_cada_5, bins=[-0.5, 0.5, 1.5, 2.5, 3.5, 4.5, 5.5], edgecolor='k')
plt.xlabel('Jogadas de moedas')
plt.ylabel('Número de caras')
plt.title('1000 jogadas de moedas em grupos de 5')
despine()
```
Same thing with 10 coins
```python
num_caras_a_cada_5 = np.random.binomial(10, 0.5, size=1000000)
ticks = np.linspace(-0.5, 10.5, num=12)
print(ticks)
plt.hist(num_caras_a_cada_5, bins=ticks, edgecolor='k')
plt.xlabel('Jogadas de moedas')
plt.ylabel('Número de caras')
plt.title('1000 jogadas de moedas em grupos de 5')
despine()
```
Recalling your probability class, a population that follows a Binomial($n, p$) random variable has mean $\mu = np$ and standard deviation $\sigma = \sqrt{np(1 - p)}$. If we plot both, you can easily see the resemblance. Observe the plot below of the PDF with the parameters we listed (mean, standard deviation).
```python
import scipy.stats as ss
mean = 10 * 0.5
std = np.sqrt(10 * 0.5 *(1 - 0.5))
x = np.linspace(1, 11, 1000)
y = ss.distributions.norm.pdf(loc=mean, scale=std, x=x)
plt.xlim(0, 10)
plt.plot(x, y)
plt.xlabel('Número caras - x')
plt.ylabel('P(X = x)')
despine()
```
```python
mean = 10 * 0.5
std = np.sqrt(10 * 0.5 *(1 - 0.5))
x = np.linspace(1, 11, 1000)
y = ss.distributions.norm.pdf(loc=mean, scale=std, x=x)
plt.xlim(0, 10)
plt.plot(x, y, label='Aproximação Normal')
num_caras_a_cada_5 = np.random.binomial(10, 0.5, size=1000000)
ticks = np.linspace(-0.5, 10.5, num=12)
plt.hist(num_caras_a_cada_5, bins=ticks, edgecolor='k', label='Dados',
density=True)
plt.plot(x, y)
plt.xlabel('Jogadas de moedas')
plt.ylabel('Número de caras')
plt.title('1000 jogadas de moedas em grupos de 5')
plt.legend()
despine()
```
## Example with Synthetic Enrollment Data
To illustrate with data, consider the example below where we generate a synthetic distribution of 25 thousand UFMG students. The distribution captures the number of courses a student enrolls in during the year. Note that unlike the coin, which only produces heads or tails, each student can enroll in between [min, max] courses. In this example, we assume that every student must enroll in at least one course `min=1` and can enroll in at most 10 courses `max=10`. Now, suppose that each number of courses has probability $p_i$. That is, the chance of enrolling in one course is $p_1$, and so on.
Data of this type is modeled by multinomial distributions. Generalizing the Binomial, a Multinomial counts the number of successes (enrollments) for each $p_i$. It is defined by $n > 0$, the number of samples or enrollments, and $p_1, \ldots, p_k$, the probability of enrolling in $i$ courses. The pmf of a multinomial is given by:
$$P(X = x) = \frac{n!}{x_1!\cdots x_k!} p_1^{x_1} \cdots p_k^{x_k}$$
First, observe the values of $p_i$.
```python
num_materias = np.arange(10) + 1
prob_materias = np.array([6, 7, 16, 25, 25, 25, 10, 12, 2, 11])
prob_materias = prob_materias / prob_materias.sum()
plt.bar(num_materias, prob_materias, edgecolor='k')
plt.xlabel('Número de Matérias no Semestre')
plt.ylabel('Fração de Alunos')
plt.title('Distribuição do número de matérias no semestre')
despine()
```
Now let's answer the question: **How many courses, on average, does a student enroll in?!** Note that our question here is about the **average!!** So, let's assume that we have 25 thousand students at UFMG. For each of these students, we sample from $p_i$ the number of courses that student is enrolled in during the current semester.
```python
amostras = 25000
mats = np.arange(10) + 1
print(mats)
dados = []
for i in range(25000):
n_mat = np.random.choice(mats, p=prob_materias)
dados.append(n_mat)
dados = np.array(dados)
dados
```
[ 1 2 3 4 5 6 7 8 9 10]
array([ 6, 5, 2, ..., 10, 8, 8])
Now let's answer our question. **How many courses, on average, does a student enroll in?!** To compute an average we need a sample. Let's use samples of size 100. So, we sample 100 students, **with replacement**, from our 25 thousand students.
```python
n_amostra = 100
soma = 0
for i in range(n_amostra):
aluno = np.random.randint(0, len(dados))
num_mat = dados[aluno]
soma += num_mat
media = soma / n_amostra
print(media)
```
5.47
Let's repeat the process a few times. Say, 10000 times.
```python
n_amostra = 100
medias = []
for _ in range(10000):
soma = 0
for i in range(n_amostra):
aluno = np.random.randint(0, len(dados))
num_mat = dados[aluno]
soma += num_mat
media = soma / n_amostra
medias.append(media)
medias = np.array(medias)
```
Now let's look at the results!
```python
plt.hist(medias, bins=20, edgecolor='k')
plt.ylabel('P(X = x)')
plt.xlabel('Média das matérias - x')
plt.title('CLT na Prática')
despine()
```
Now, let's compare against our Normal; for this we can use the mean of the means and the standard deviation of the means.
```python
mean = np.mean(medias)
# ddof=1 divides by n-1
std = np.std(medias, ddof=1)
# take 1000 numbers between the min and the max
x = np.linspace(np.min(medias), np.max(medias), 1000)
y = ss.distributions.norm.pdf(loc=mean, scale=std, x=x)
plt.plot(x, y, label='Aproximação Normal')
plt.hist(medias, bins=20, edgecolor='k', density=True)
plt.ylabel('P(X = x)')
plt.xlabel('Média das matérias - x')
plt.title('CLT na Prática')
despine()
```
## With Data
```python
df = pd.read_csv('https://media.githubusercontent.com/media/icd-ufmg/material/master/aulas/03-Tabelas-e-Tipos-de-Dados/nba_salaries.csv')
df.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>PLAYER</th>
<th>POSITION</th>
<th>TEAM</th>
<th>SALARY</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>Paul Millsap</td>
<td>PF</td>
<td>Atlanta Hawks</td>
<td>18.671659</td>
</tr>
<tr>
<th>1</th>
<td>Al Horford</td>
<td>C</td>
<td>Atlanta Hawks</td>
<td>12.000000</td>
</tr>
<tr>
<th>2</th>
<td>Tiago Splitter</td>
<td>C</td>
<td>Atlanta Hawks</td>
<td>9.756250</td>
</tr>
<tr>
<th>3</th>
<td>Jeff Teague</td>
<td>PG</td>
<td>Atlanta Hawks</td>
<td>8.000000</td>
</tr>
<tr>
<th>4</th>
<td>Kyle Korver</td>
<td>SG</td>
<td>Atlanta Hawks</td>
<td>5.746479</td>
</tr>
</tbody>
</table>
</div>
```python
df['SALARY'].sort_values().plot.hist(bins=20, edgecolor='k')
plt.xlabel('Salário na NBA')
plt.ylabel('Número de Linhas')
despine()
```
```python
N = 10000
data = df['SALARY']
medias = []
for i in range(N):
mean = np.random.choice(data, 100).mean()
medias.append(mean)
```
```python
mean = np.mean(medias)
# ddof=1 divides by n-1
std = np.std(medias, ddof=1)
# take 1000 numbers between the min and the max
x = np.linspace(np.min(medias), np.max(medias), 1000)
y = ss.distributions.norm.pdf(loc=mean, scale=std, x=x)
plt.plot(x, y, label='Aproximação Normal')
plt.hist(medias, bins=20, edgecolor='k', density=True)
plt.ylabel('P(X = x)')
plt.xlabel('Salário da NBA - x')
plt.title('CLT na Prática')
despine()
```
## Conditions for the CLT
There are some conditions required to guarantee that the CLT is valid.
1. Independent and identically distributed data.
1. Finite variance.
1. At least around 30 samples
Observe from Wikipedia that a Pareto(1) distribution has infinite variance. This breaks our condition. Notice that the plot below looks nothing like a Normal.
https://en.wikipedia.org/wiki/Pareto_distribution
```python
data = []
for _ in range(10000):
m = np.random.pareto(1, size=100).mean()
data.append(m)
```
```python
plt.hist(data, bins=100, edgecolor='k')
despine()
```
We can also break it with very small samples, such as the Beta(3, 2, size=2) below.
Observe how it is very close to a Normal but has a certain skew to the right.
```python
data = []
for _ in range(10000):
m = np.random.beta(3, 2, size=2).mean()
data.append(m)
plt.hist(data, edgecolor='k')
despine()
```
```python
mean = np.mean(data)
# ddof=1 divides by n-1
std = np.std(data, ddof=1)
# take 1000 numbers between the min and the max
x = np.linspace(np.min(data), np.max(data), 1000)
y = ss.distributions.norm.pdf(loc=mean, scale=std, x=x)
plt.plot(x, y, label='Aproximação Normal')
plt.hist(data, bins=20, edgecolor='k', density=True)
plt.ylabel('P(X = x)')
plt.title('CLT na Prática')
despine()
```
| ae70c1099cf3ec6c700f103f03e985bc20f550ce | 457,162 | ipynb | Jupyter Notebook | _lessons/.ipynb_checkpoints/07-tcl-checkpoint.ipynb | icd-ufmg/icd-ufmg.github.io | 5bc96e818938f8dec09dc93d786e4b291d298a02 | [
"MIT"
]
| 3 | 2019-02-25T18:25:49.000Z | 2021-05-20T19:22:24.000Z | _lessons/.ipynb_checkpoints/07-tcl-checkpoint.ipynb | thiagomrs/icd-ufmg.github.io | f72c0eca5a0f97d83be214aff52715c986b078a7 | [
"MIT"
]
| null | null | null | _lessons/.ipynb_checkpoints/07-tcl-checkpoint.ipynb | thiagomrs/icd-ufmg.github.io | f72c0eca5a0f97d83be214aff52715c986b078a7 | [
"MIT"
]
| 3 | 2021-06-05T20:49:02.000Z | 2022-02-11T20:21:44.000Z | 432.508988 | 68,484 | 0.938394 | true | 4,351 | Qwen/Qwen-72B | 1. YES
2. YES | 0.819893 | 0.749087 | 0.614172 | __label__por_Latn | 0.973616 | 0.265257 |
# Fangohr, Hans. Introduction to Python for Computational Science and Engineering, 2015.
Embleton | 20160910 | Notes
### General Notes
* Use `help()` with a command for details
* Use `dir()` with a command for a list of available methods
## Chapter 2, A Powerful Calculator
```python
import math
```
```python
dir(math)
help(math.exp)
math.pi
math.e
```
Help on built-in function exp in module math:
exp(...)
exp(x)
Return e raised to the power of x.
2.718281828459045
## Chapter 3, Data Types and Structures
* use cmath library to calculate complex results
* Strings are immutable, lists are mutable
* `dir("")` outputs a list of available methods
* Sequences
* `a[i]` returns the ith element of a
* `a[i:j]` returns elements i up to j-1
* `len(a)` returns number of elements in a sequence
* `min(a)` returns the smallest value in a seq.
* `max(a)`
* `x in a` returns True if x is an element in a
* `a + b` concatenates seq. a and seq. b
* `n * a` creates n copies of seq. a
* The `split()` method separates the string where it finds white space, or at a separator character.
* The join method is the opposite of split
* Lists
* Empty list given by `x = []`
* You can mix objects within a list
* You can add lists within lists
* ? Is this the proper method for making tables?
* You can use scipy.arange() or pandas
* `append()` to add an object to the end of a list, opposite is `remove()`
* `range()` command common in for loops. Use: range(start, stop, step size)
* Range is a type!
* Tuple
* Immutable
* Empty tuple given by `t = ()`
* Tuple containing one object `t = (x,)`. The comma is required.
* Indexing
* Use negative numbers to retrieve values from the back of the list
* Slicing
* Slicing is different from indexing as it corresponds to the points between two indices.
* Dictionaries
* empty dictionary: `d = {}`
* check whether a key is present with `key in d` or `d.__contains__(key)`
* method `get(key, default)` to retrieve values or default if key not found
* Keys can be any immutable object
* Dictionaries are very fast when retreiving values (when given the key)
* Passing arguments to functions
* Modifications to values of an argument in a function can affect the value of the original object
* Copying
* `b = a` does not pass a copy of a to b and create two separate objects. Instead b and a refer to the same object. Only the label is copied. To create a copy of a with a different label, use something like `c = a[:]`
* use `id(a)` to determine if objects are the same or different
* Equality Operators
* <, >, ==, >=, <=, !=
* Does not depend on type
* To compare the id use, `a is b`. Objects with different types will not have the same id
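A quick demo of the sequence operations and identity checks summarized above (the values are illustrative):
```python
a = [1, 2, 3]
b = [4, 5]
print(3 in a)                 # membership -> True
print(a + b)                  # concatenation -> [1, 2, 3, 4, 5]
print(2 * b)                  # repetition -> [4, 5, 4, 5]
print(len(a), min(a), max(a))
c = a                         # same object: only the label is copied
d = a[:]                      # a real (shallow) copy
print(a is c, a is d)         # True False
```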
```python
import cmath
cmath.sqrt(-1)
```
1j
```python
a = 'This is a test sentance'
print(a)
print(a.upper())
print(a.split())
```
This is a test sentance
THIS IS A TEST SENTANCE
['This', 'is', 'a', 'test', 'sentance']
```python
b = "The dog is hungry. The cat is bored. The snake is awake."
print(b)
s=b.split(".")
print(s)
print(".".join(s))
print(" STOP".join(s))
```
The dog is hungry. The cat is bored. The snake is awake.
['The dog is hungry', ' The cat is bored', ' The snake is awake', '']
The dog is hungry. The cat is bored. The snake is awake.
The dog is hungry STOP The cat is bored STOP The snake is awake STOP
```python
a = [1, 2, 3]
print(a)
a.append(45)
print(a)
a.remove(2)
print(a)
```
[1, 2, 3]
[1, 2, 3, 45]
[1, 3, 45]
```python
print(range(3, 10))
for i in range(3,11):
print(i**2)
```
range(3, 10)
9
16
25
36
49
64
81
100
```python
a = 100, 200, 'duck'
print(a)
print(type(a))
x, y = 10, 20
print(x)
print(y)
x, y = y, x
print(x)
print(y)
```
(100, 200, 'duck')
<class 'tuple'>
10
20
20
10
```python
a = 'dog cat mouse'
a = a.split()
print(a[0])
print(a[-1])
print(a[-2:])
```
dog
mouse
['cat', 'mouse']
```python
## Dictionaries
d = {}
d['today'] = [1, 2, 3]
d['yesterday'] = '19 deg C'
print(d.keys())
print(d.values())
print(d.items())
print(d)
print(d['today'])
print(d['today'][1])
# Other methods for creating dictionaries
d2 = {2:4, 3:9, 4:16}
print(d2)
d3 = dict(a=1, b=2, c=3)
print(d3)
print(d3['a'])
print(d.__contains__('today'))
d.get('today','unknown')
```
dict_keys(['yesterday', 'today'])
dict_values(['19 deg C', [1, 2, 3]])
dict_items([('yesterday', '19 deg C'), ('today', [1, 2, 3])])
{'yesterday': '19 deg C', 'today': [1, 2, 3]}
[1, 2, 3]
2
{2: 4, 3: 9, 4: 16}
{'a': 1, 'c': 3, 'b': 2}
1
True
[1, 2, 3]
```python
# Dictionary Example
# create an empty directory
order = {}
# add orders as they come in
order['Peter'] = 'Pint of bitter'
order['Paul'] = 'Half pint of Hoegarden'
order['Mary'] = 'Gin Tonic'
# deliver order at bar
for person in order.keys():
print(person, "requests", order[person])
```
Paul requests Half pint of Hoegarden
Mary requests Gin Tonic
Peter requests Pint of bitter
```python
#Copying and Identity
a = [1, 2, 3, 4, 5]
b=a
b[0] = 42
print(a)
c = a[:]
c[1] = 99
print(a)
print(c)
print('id a: ', id(a))
print('id b: ', id(b))
print('id c: ', id(c))
```
[42, 2, 3, 4, 5]
[42, 2, 3, 4, 5]
[42, 99, 3, 4, 5]
id a: 72139272
id b: 72139272
id c: 67450824
## Chapter 4, Introspection
* Magic names start and end with a double underscore
* `isinstance(<object>, <type>)` Returns True if the given object is of the given type.
* `help(<object>)`
* `help()` Starts an interactive help utility
* Provide a docstring for user defined functions
```python
# Example of documenting a user defined function and calling it.
def power2and3(x):
"""Returns the tuple (x**2, x**3)"""
return x**2 ,x**3
print(power2and3(2))
print(power2and3.__doc__)
help(power2and3)
```
(4, 8)
Returns the tuple (x**2, x**3)
Help on function power2and3 in module __main__:
power2and3(x)
Returns the tuple (x**2, x**3)
## Chapter 5, Input and Output
* String specifiers, reprinted table below
* Pg 54-55 for a more elegant method of string formatting used in Python 3
* `fileobject.readlines()` method returns a list of strings
```python
## Copied from pg 52.
AU = 149597870700 #Astronomical unit in [m]
"%g" %AU
```
'1.49598e+11'
|Specifier|Style|Example Output for AU|
|:---:|:---:|:---|
|`%f`|Floating Point|149597870700.000000|
|`%e`|Exponential Notation|1.495979e+11|
|`%g`|Shorter of %e or %f|1.49598e+11|
|`%d`|Integer|149597870700|
|`%s`|String|149597870700|
|`%r`|repr|149597870700L|
```python
a = math.pi
print("Short pi = %.2f. longer pi = %.12f." %(a, a))
```
Short pi = 3.14. longer pi = 3.141592653590.
```python
## Reading and Writing Files
#1. Write a File
out_file = open("test.txt", "w") # 'w' stands for Writing
out_file.write("Writing text to file. This is the first line.\n"
"And the second lineasdfa.")
out_file.close() # close the file
#2. Read a File
in_file = open("test.txt", "r") # 'r' stands for Reading
text = in_file.read()
in_file.close()
#3. Display Data
print (text)
```
Writing text to file. This is the first line.
And the second lineasdfa.
```python
## Readlines Example
myexp = open("myfile.txt", "w") # 'w' stands for Writing
myexp.write("This is the first line.\n"
"This is the second line.\n"
"This is the third and last line.")
myexp.close()
f = open('myfile.txt', "r")
print(len(f.read()))
f.close()
f = open('myfile.txt', "r")
for line in f.readlines():
print("%d characters" %len(line))
f.close()
```
81
24 characters
25 characters
32 characters
## Chapter 6, Control Flow
* If-then-else statements
* For loops
* use logical operators "`and`" and "`or`" to combine conditions
* Read chapter 8 for more on errors and exceptions, help('exceptions')
```python
a = 17
if a == 0:
print("a is zero")
elif a < 0:
print("a is negative")
else:
print("a is positive")
```
a is positive
```python
# for example
for animal in ['dog', 'cat', 'mouse']:
print(animal, animal.upper())
for i in range(5,10):
print(i)
```
dog DOG
cat CAT
mouse MOUSE
5
6
7
8
9
## Chapter 7, Functions and Modules
* A function takes an argument and returns a result or return value.
* Function parameter may have default values.
* ie. `def print_multi_table(n, upto=10):`
* Common to have an `if __name__ == "__main__":` block to output results and capabilities only seen when the program is running on its own.
Generic Function Format:
def my_function(arg1, arg2, ..., argn):
"""Optional docstring."""
#Implementation of the function
return result #optional
#this is not part of the function
some_command
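A minimal sketch of the `__main__` guard mentioned above, reusing the `print_multi_table` example (the body is illustrative):
```python
def print_multi_table(n, upto=10):
    """Print the multiplication table for n up to 'upto'."""
    for i in range(1, upto + 1):
        print("%d * %d = %d" % (n, i, n * i))

if __name__ == "__main__":
    # Runs only when the file is executed directly, not when it is imported.
    print_multi_table(7, upto=5)
```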
## Chapter 8, Functional Tools
* Examples using the tools `filter`, `reduce`, and `lambda`.
* An anonymous function is only needed once or needs no name
* `lambda x : x**2`
* `(lambda x, y, z: (x + y) * z)(10, 20, 2)`
* The map function applies function f to all elements in sequence s, `lst2 = map(f,s)`
* `map(lambda x:x**2, range(10))`
* The filter function applies the function f to all elements in a sequence s, `lst2 = filter(f, lst)`
* The filter function f should return True or False.
* `filter(lambda x:x>5,range(11))`
* List comprehension is an expression followed by a for clause, then zero or more for or if clauses. More concise than the above methods.
```python
## Maps
def f(x):
return x**2
# Two methods to print
print(list(map(f, range(10))))
for ch in map(f, range(10)):
print(ch)
#Combining with Lambda
print(list(map(lambda x:x**2,range(10))))
```
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
0
1
4
9
16
25
36
49
64
81
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```python
## List Comprehensions
vec = [2, 4, 6]
print(vec)
print([3 * x for x in vec])
print([3 * x for x in vec if x >3])
print([3 * x for x in vec if x <2])
print([[x, x**2] for x in vec])
```
[2, 4, 6]
[6, 12, 18]
[12, 18]
[]
[[2, 4], [4, 16], [6, 36]]
## Chapter 9, Common Tasks
* Illustrates many ways to compute a series, demonstrating the different methods previously discussed. Includes a check method and doc file.
* `sorted` returns a sorted copy of a list, while `sort` reorders the list in place
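A quick illustration of the `sorted` / `sort` distinction:
```python
a = [3, 1, 2]
b = sorted(a)    # returns a new sorted list; a is unchanged
print(a, b)      # [3, 1, 2] [1, 2, 3]
a.sort()         # sorts a in place; sort() returns None
print(a)         # [1, 2, 3]
```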
## Chapter 10, Matlab to Python
* Common differences
* The extension library numpy provides matrix functionality similar to Matlab
## Chapter 11, Python Shells
* Useful features of different Python shells
* iPython, IPython Notebook, Spyder
## Chapter 12, Symbolic Computation
* SymPy is the Python Symbolic library, [SymPy Homepage](http://sympy.org) for full and up-to-date documentation
* Very slow compared to floating point operations
* isympy, an executable wrapper around python, convenient for figuring out new features or experimenting interactively
* Rational type, Rational(1,2) represents 1/2.
* Rational class works exactly as opposed to the standard float.
* If SymPy returns a result in an unfamiliar form, subtract the expected form from it and check whether the difference simplifies to zero to determine if they are equivalent.
* Calculate definite integrals with a tuple containing the variable of interest, lower, and upper bounds.
* Results from dsolve are an Equality class, function needs to be evaluated to a number to be plotted.
* Covers series expansion
* LaTeX and Pretty printing
* `preview()` allows you to display rendered output on the screen
* Automatic generation of C code via `codegen()`
```python
## Symbols
import sympy
x, y, z = sympy.symbols('x, y, z')
a = x + 2*y + 3*z - x
print(a)
print(sympy.sqrt(8))
```
2*y + 3*z
2*sqrt(2)
```python
x, y = sympy.symbols('x,y')
a = x + 2*y
print(a.subs(x, 10))
print(a.subs(x,10).subs(y,3))
print(a.subs({x:10, y:3}))
SS_77 = -y + -23.625*x**3 - 5.3065*x**2 + 5.6633*x
SS_52 = -y + -245.67*x**3 + 31.951*x**2 + 4.4341*x
SS_36 = -y + -18.58*x**3 - 5.4025*x**2 + 2.1623*x
#print("t0 = 36, 0.5 Strain at %.2f MPa" % sympy.solve(SS_36.subs(y, 0.5),x)[0])
#print("t0 = 52, 0.5 Strain at %.2f MPa" % sympy.solve(SS_52.subs(y, 0.5),x)[0])
#print("t0 = 77, 0.5 Strain at %.2f MPa" % sympy.solve(SS_77.subs(y, 0.5),x)[0])
print("t0 = 36, 0.01 Stress at %.3f Strain" % sympy.solve(SS_36.subs(x, .01),y)[0])
print("t0 = 52, 0.01 Stress at %.3f Strain" % sympy.solve(SS_52.subs(x, .01),y)[0])
print("t0 = 77, 0.01 Stress at %.3f Strain" % sympy.solve(SS_77.subs(x, .01),y)[0])
```
2*y + 10
16
16
t0 = 36, 0.01 Stress at 0.021 Strain
t0 = 52, 0.01 Stress at 0.047 Strain
t0 = 77, 0.01 Stress at 0.056 Strain
```python
a = sympy.Rational(2,3)
print(a)
print(float(a))
print(a.evalf())
print(a.evalf(50))
```
2/3
0.6666666666666666
0.666666666666667
0.66666666666666666666666666666666666666666666666667
```python
# Differentiation
print(sympy.diff(3*x**4, x))
print(sympy.diff(3*x**4, x, x, x))
print(sympy.diff(3*x**4, x, 3))
```
12*x**3
72*x
72*x
```python
## Integration
from sympy import integrate
print(integrate(sympy.sin(x), y))
print(integrate(sympy.sin(x), x))
# Definite Integrals
print(integrate(x*2, x))
print(integrate(x*2, (x, 0, 2)))
print(integrate(x**2, (x,0,2), (x, 0, 2), (y,0,1)))
```
y*sin(x)
-cos(x)
x**2
4
16/3
```python
## Ordinary Differential Equations
from sympy import Symbol, dsolve, Function, Derivative, Eq
y = Function("y")
x = Symbol('x')
y_ = Derivative(y(x), x)
print(dsolve(y_ + 5*y(x), y(x)))
print(dsolve(Eq(y_ + 5*y(x), 0), y(x)))
print(dsolve(Eq(y_ + 5*y(x), 12), y(x)))
```
Eq(y(x), C1*exp(-5*x))
Eq(y(x), C1*exp(-5*x))
Eq(y(x), C1*exp(-5*x)/5 + 12/5)
```python
## Linear Equations and Matrix Inversion
from sympy import symbols, Matrix
x, y, z = symbols('x,y,z')
A = Matrix(([3,7], [4,-2]))
print(A)
print(A.inv())
```
Matrix([[3, 7], [4, -2]])
Matrix([[1/17, 7/34], [2/17, -3/34]])
```python
## Solving Non Linear Equations
import sympy
x, y, z = sympy.symbols('x,y,z')
eq = x - x**2
print(sympy.solve(eq,x))
```
[0, 1]
## Chapter 14, Numerical Calculation
* Limitations of the different number types: int, float, complex, and long.
* Comparing float and symbolic time to compute.
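A few quick illustrations of these points (a sketch; the exact float limits are platform dependent):
```python
import sys
print(sys.float_info.max)     # largest representable float
print(10**400)                # Python integers have arbitrary precision
print(0.1 + 0.2 == 0.3)       # False: floating point rounding error
```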
## Chapter 15, Numerical Python (numpy): arrays
* The data structure, `array`, allows efficient matrix and vector operation
* An array can only keep elements of the same type, as opposed to lists which can hold a mix.
* Convert a matrix back to a list or tuple using `list(s)` or `tuple(s)`.
* Computing eigenvectors and eigenvalues
* Numpy examples at [SciPy.org](http://www.scipy.org/Numpy_Example_List)
```python
## Vectors (1d-arrays)
import numpy as N
x = N.array([0, 0.5, 1, 1.5])
print(x)
print(N.zeros(4))
a = N.zeros((5,4))
print(a)
print(a.shape)
print(a[2,3])
random_matrix = N.random.rand(5,5)
print(random_matrix)
x = N.random.rand(5)
b = N.dot(random_matrix, x)
print("b= ", b)
```
[ 0. 0.5 1. 1.5]
[ 0. 0. 0. 0.]
[[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]]
(5, 4)
0.0
[[ 0.62986856 0.08414351 0.86029432 0.23673407 0.69039432]
[ 0.09499312 0.46304194 0.83582097 0.80421487 0.99190126]
[ 0.98594909 0.58822546 0.86016599 0.41493799 0.31856799]
[ 0.91495891 0.38045604 0.67692051 0.39180708 0.22073492]
[ 0.65115666 0.67522929 0.69594017 0.13819881 0.62083603]]
b= [ 0.39203078 0.83042469 0.85378184 0.62487284 0.88562137]
```python
## Curve Fitting of Polynomial Example
import numpy
# demo curve fitting: xdata and ydata are input data
xdata = numpy.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
ydata = numpy.array([0.0, 0.8, 0.9, 0.1, -0.8, -1.0])
#now fit for cubic (order = 3) polynomial
z = numpy.polyfit(xdata, ydata, 3)
#z is an array of coefficients, highest first, i.e.
# x^3 X^2 X 0
#z=array([0.08703704, -0.8134, 1.693, -0.0396])
print("z = ", z)
#It is convenient to use `poly1d` objects for dealing with polynomials
p = numpy.poly1d(z) #Creates a polynomial function p from coefficients and p can be evaluated for all x then.
print("p = ",p)
#Create a plot
xs = [0.1 * i for i in range(50)]
ys = [p(x) for x in xs] # evaluates p(x) for all x in list xs
%matplotlib inline
import pylab
pylab.plot(xdata, ydata, 'o', label = 'data')
pylab.plot(xs, ys, label = 'fitted curve')
pylab.ylabel('y')
pylab.xlabel('x')
#pylab.savefig('polyfit.pdf')
pylab.show()
```
## Chapter 15, Visualizing Data
* Need to include all the useful links here
* IPython Inline mode via: `%matplotlib inline`, `%matplotlib qt`, or `%pylab`.
* `help(pylab.legend)` for legend placement information
* `help(pylab.plot)` for line style, color, thickness, etc calls
* Colors can be called out in RGB, Hex, greyscale, etc.
* Subplot to call more than one plot in one figure, `pylab.subplot(numRows, numCols, plotNum)`.
* Multiple figures via: `pylab.figure(figNum)`.
* `pylab.close()` may be used to close one, some, or all figures.
* Use `pyplot.imshow()` to visualize matrix data (heat plot).
* Use different color maps with the module `matplotlib.cm`
* Check out the contour_demo.py example for illustrating `z=f(x,y)`
* Visual Python is a module that allows you to create and animate 3D scenes.
* Visual Python [Home Page](http://vpython.org)
* Useful for illustrating time dependent data
* Visualising 2D and 3D fields as a function of time with the Visualization Toolkit, [VTK](http://vtk.org).
* Other modules: Mayavi, Paraview, and VisIt.
```python
## Plot details example
import pylab
import numpy as N
%matplotlib inline
x = N.arange(-3.14, 3.14, 0.01)
y1 = N.sin(x)
y2 = N.cos(x)
pylab.figure(figsize=(5,5)) #Sets figure size to 5 x 5 in.
pylab.plot(x, y1, label='sin(x)')
pylab.plot(x, y2, label='cos(x)')
pylab.legend()
pylab.grid()
pylab.xlabel('x')
pylab.title('This is the Title')
pylab.axis([-2,2,-1,1])
pylab.show()
```
```python
## Histogram Example
import numpy as np
import matplotlib.mlab as mlab
import matplotlib.pylab as plt
# create data
mu, sigma = 100, 15
x = mu + sigma * np.random.randn(10000)
#histogram of the data
n, bins, patches = plt.hist(x, 50, normed=1, facecolor='green', alpha=0.75)
#fine tuning the plot
plt.xlabel('Smarts')
plt.ylabel('Probability')
#LaTeX strings for labels and titles
plt.title(r'$\mathrm{Histogram\ of\ IQ :}\ \mu=100,\ \sigma=15$')
plt.axis([40, 160, 0, 0.03])
plt.grid(True)
#add a best fit line curve
y = mlab.normpdf(bins, mu, sigma)
l = plt.plot(bins, y, 'r--', linewidth=1)
#Save to file
#plt.savefig('pylabhistogram.pdf')
#then display
plt.show()
```
## Chapter 16, Numerical Methods using Python (scipy)
* The `scipy` package privides numerous numerical algorithms
* All functionality from `numpy` may be available in `scipy`
* Using `help(scipy)` will detail the package structure. You can call specific sections ie. `import scipy.integrate`.
* Use `scipy.quad()` to solve integrals of the form $\int_{a}^{b} f(x)dx$
* Functions that approach +/- inf may be difficult to handle numerically. Plot results with the integrand to check
* Use `scipy.odeint()` to solve differential equations of the type $\frac{\partial y}{\partial t}(t) = f(y,t)$
* `help(scipy.integrate.odeint)` to explore differnet error tolerance options
* Using the `bisect()` method to find roots. Requires arguments f(x), lower limit, and upper limit. Optional xtol parameter.
* Using `fsolve()` to find roots is more efficient, but not guaranteed to converge. Input argument is a starting location suspected close to the root.
* The function `y0 = scipy.interpolate.interp1d(x, y, kind='nearest')` may be used to interpolate the data $(x_i, y_i)$ for all x.
* A generic curve fitting function is provided with `scipy.optimize.curve_fit()`
* FFT example below
* Optimization, using `scipy.optimize.fmin()` to find the minimum of a function. Arguments are the function and starting point.
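A minimal sketch of the root finding and minimization calls mentioned above (the function `f` and the starting values are illustrative):
```python
# Root finding and minimization sketch (illustrative function and starting points).
from scipy.optimize import bisect, fsolve, fmin

def f(x):
    return x**3 - 2*x - 5

root_b = bisect(f, 1, 3)                      # needs a sign change between the limits
root_f = fsolve(f, x0=2.0)                    # needs only a starting guess
minimum = fmin(lambda x: (x - 1.5)**2, x0=0.0, disp=False)
print(root_b, root_f, minimum)
```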
```python
## Integral Example
from math import cos, exp, pi
from scipy.integrate import quad
#function we want to integrate
def f(x):
return exp(cos(-2 * x * pi)) + 3.2
#call quad to integrate f from -2 to 2
res, err = quad(f, -2, 2)
print("The numerical result is {:f} (+/-{:.2g})" .format(res, err))
```
The numerical result is 17.864264 (+/-1.6e-11)
```python
## ODE Example
from scipy.integrate import odeint
import numpy as N
def f(y,t):
"""This returns the RHS of the ODE to integrate, i.e. dy/dt = f(y,t)"""
return(-2 * y * t)
y0 = 1 # initial value
a = 0 # integration limits for t
b = 2
t = N.arange(a, b, 0.01) # values of t for which we require the solution y(t)
y = odeint(f, y0, t) # actual computation of y(t)
import pylab # Plotting of results
%matplotlib inline
pylab.plot(t, y)
pylab.xlabel('t'); pylab.ylabel('y(t)')
pylab.show()
```
```python
## FFT exmaple
# Create a superposition of 50 and 70 Hz and plot the fft.
import scipy
import matplotlib.pyplot as plt
%matplotlib inline
pi = scipy.pi
signal_length = 0.5 # [seconds]
sample_rate = 500 # sampling rate [Hz]
dt = 1./sample_rate # delta t [s]
df = 1/signal_length # frequency between points in the freq. domain [Hz]
t = scipy.arange(0, signal_length, dt) # the time vector
n_t = len(t) # length of the time vector
# create signal
y = scipy.sin(2*pi*50*t) + scipy.sin(2*pi*70*t + pi/4)
# compute the fourier transport
f = scipy.fft(y)
# work out meaningful frequencies in fourier transform
freqs = df*scipy.arange(0,(n_t-1)/2.,dtype='d') #d = double precision float
n_freq = len(freqs)
# plot input data y against time
plt.subplot(2,1,1)
plt.plot(t, y, label='input data')
plt.xlabel('time [s]')
plt.ylabel('signal')
# plot frequency spectrum
plt.subplot(2,1,2)
plt.plot(freqs, abs(f[0:n_freq]), label='abs(fourier transform)')
plt.xlabel('frequency [Hz]')
plt.ylabel('abs(DFT(signal))')
# save plot to disk
#plt.savefig('fft1.pdf')
plt.show()
```
## Chapter 17, Where to go from here?
* A list of additional skills for computational science work.
| 4f615a67405c3dbdfe3c62abf137f1122836a929 | 134,982 | ipynb | Jupyter Notebook | public/ipy/Fangohr_2015/Fangohr_Python_Intro.ipynb | stembl/stembl.github.io | 5108fc33dccd8c321e1840b62a4a493309a6eeff | [
"MIT"
]
| 1 | 2016-12-10T04:04:33.000Z | 2016-12-10T04:04:33.000Z | public/ipy/Fangohr_2015/Fangohr_Python_Intro.ipynb | stembl/stembl.github.io | 5108fc33dccd8c321e1840b62a4a493309a6eeff | [
"MIT"
]
| 3 | 2021-05-18T07:27:17.000Z | 2022-02-26T02:16:11.000Z | public/ipy/Fangohr_2015/Fangohr_Python_Intro.ipynb | stembl/stembl.github.io | 5108fc33dccd8c321e1840b62a4a493309a6eeff | [
"MIT"
]
| null | null | null | 84.628213 | 27,336 | 0.815864 | true | 7,348 | Qwen/Qwen-72B | 1. YES
2. YES | 0.921922 | 0.934395 | 0.861439 | __label__eng_Latn | 0.931547 | 0.839746 |
# DCEGM Upper Envelope
## ["The endogenous grid method for discrete-continuous dynamic choice models with (or without) taste shocks"](https://onlinelibrary.wiley.com/doi/abs/10.3982/QE643)
<p style="text-align: center;"><small><small><small>For the following badges: GitHub does not allow click-through redirects; right-click to get the link, then paste into navigation bar</small></small></small></p>
[](https://mybinder.org/v2/gh/econ-ark/DemARK/master?filepath=notebooks%2FDCEGM-Upper-Envelope.ipynb)
[](https://colab.research.google.com/github/econ-ark/DemARK/blob/master/notebooks/DCEGM-Upper-Envelope.ipynb)
This notebook provides a simple introduction to the upper envelope calculation in the "DCEGM" algorithm <cite data-cite="6202365/4F64GG8F"></cite>. It takes the EGM method proposed in <cite data-cite="6202365/HQ6H9JEI"></cite>, and extends it to the mixed choice (discrete and continuous) case. It handles various constraints. It works on a 1-dimensional problems.
The main challenge in the types of models considered in DCEGM is that the first order conditions to the Bellman equations are no longer sufficient to find an optimum, though they are still necessary in a broad class of models. This means that our EGM step will give us (resource, consumption) pairs that do fulfill the FOCs, but that are sub-optimal (there is another consumption choice for the same initial resources that gives a higher value).
Take a consumption model formulated as:
$$
\max_{\{c_t\}^T_{t=1}} \sum^T_{t=1}\beta^t\cdot u(c_t)
$$
given some initial condition on $x$ and some laws of motion for the states, though explicit references to states are omitted. Then, if we're in the class of models described in EGM, we can show that
$$
c_t = {u_{c}}^{-1}[E_t(u_c(c_{t+1}))]
$$
uniquely determines an optimal consumption today given the expected marginal utility of consuming tomorrow. However, if there is another choice in the choice set, and that choice is discrete, we get
$$
\max_{\{c_t, d_t\}^T_{t=1}} \sum^T_{t=1}\beta^t\cdot u(c_t, d_t)
$$
again given initial conditions and the laws of motion. Then, we can show that
$$
c_t = {u_{c}}^{-1}[E_t(u_c(c_{t+1}))]
$$
will produce solutions that are necessary but not sufficient. Note, that there is no explicit mentioning of the discrete choices in the expectation, but they obviously vary over the realized states in general. For the optimal consumption, it doesn't matter what the choice is exactly, only what expected marginal utility is tomorrow. The algorithm presented in [1] is designed to take advantage of models with this structure.
To visualize the problem, consider the following pictures that show the output of an EGM step from the model in the REMARK [linkhere].
```python
# imports
import numpy as np
import matplotlib.pyplot as plt
```
```python
# here for now, should be
# from HARK import discontools or whatever name is chosen
from HARK.interpolation import LinearInterp
from HARK.dcegm import calcSegments, calcMultilineEnvelope
```
```python
m_common = np.linspace(0,1.0,100)
m_egm = np.array([0.0, 0.04, 0.25, 0.15, 0.1, 0.3, 0.6,0.5, 0.35, 0.6, 0.75,0.85])
c_egm = np.array([0.0, 0.03, 0.1, 0.07, 0.05, 0.36, 0.4, 0.6, 0.8, 0.9,0.9,0.9])
vt_egm = np.array( [0.0, 0.05, 0.1,0.04, 0.02,0.2, 0.7, 0.5, 0.2, 0.9, 1.0, 1.2])
```
```python
plt.plot(m_egm, vt_egm)
plt.xlabel("resources")
plt.ylabel("transformed values")
```
```python
plt.plot(m_egm, c_egm)
plt.xlabel("resources")
plt.ylabel("consumption")
plt.show()
```
The point of DCEGM is to realize that the segments on the `(m, vt)` curve that are decreasing cannot be optimal. This leaves us with a set of increasing line segments, as seen below (`calcSegments` is the function in HARK that calculates the breaks where the curve goes from increasing to decreasing).
```python
rise, fall = calcSegments(m_egm, vt_egm)
```
In `rise` we have all the starting indices for the segments that are "good", that is `(m, vt)` draws an increasing curve.
```python
rise
```
array([0, 4, 8])
We see that `rise` has its first index at `0`, then again at `4`, and lastly at `8`. Let's look at `fall`.
```python
fall
```
array([ 2, 6, 11])
We see that the last segment is increasing (as the last element of `rise` is larger than the last element of `fall`), and we see that `len(fall)` is one larger than number of problematic segments in the plot. The index of the last point in `m_egm`/`c_egm`/`vt_egm` is added for convenience when we do the upper envelope step (and is also convenient below for drawing the segments!).
We can use `fall` and `rise` to draw only the relevant segments that we will use to construct an upper envelope.
```python
for j in range(len(fall)):
idx = range(rise[j],fall[j]+1)
plt.plot(m_egm[idx], vt_egm[idx])
plt.xlabel("resources")
plt.ylabel("transformed values")
plt.show()
```
Let us now use the `calcMultilineEnvelope` function to do the full DCEGM step: find segments and calculate upper envelope in one sweep.
```python
m_upper, c_upper, v_upper = calcMultilineEnvelope(m_egm, c_egm, vt_egm, m_common)
```
```python
for j in range(len(fall)):
idx = range(rise[j],fall[j]+1)
plt.plot(m_egm[idx], vt_egm[idx])
plt.plot(m_upper, v_upper, 'k')
plt.xlabel("resources")
plt.ylabel("transformed values")
plt.show()
```
And there we have it! These functions are the building blocks for univariate discrete choice modeling in HARK, so hopefully this little demo helped better understand what goes on under the hood, or it was a help if you're extending some existing class with a discrete choice.
# An example: writing a will
We now present a basic example to illustrate the use of the previous tools in solving dynamic optimization problems with discrete and continuous decisions.
The model represents an agent that lives for three periods and decides how much of his resources to consume in each of them. On the second period, he must additionally decide whether to hire a lawyer to write a will. Having a will has the upside of allowing the agent to leave a bequest in his third and last period of life, which gives him utility, but has the downside that the lawyer will charge a fraction of his period 3 resources.
On each period, the agent receives a deterministic amount of resources $w$. The problem, therefore, is fully deterministic.
I now present the model formally, solving it backwards.
But first, some setup and calibration:
```python
# Import tools for linear interpolation and finding optimal
# discrete choices.
from HARK.interpolation import calcLogSumChoiceProbs
# Import CRRA utility (and related) functions from HARK
from HARK.utilities import CRRAutility, CRRAutilityP, CRRAutilityP_inv
# Solution method parameters
aGrid = np.linspace(0,8,400) # Savings grid for EGM.
# Model parameters
# Parameters that need to be fixed
# Relative risk aversion. This is fixed at 2 in order to maintain
# the analytical solution that we use, from Carroll (2000)
CRRA = 2
# Parameters that can be changed.
w = 1 # Deterministic wage per period.
willCstFac = 0.35 # Fraction of resources charged by lawyer for writing a will.
DiscFac = 0.98 # Time-discount factor.
# Define utility (and related) functions
u = lambda x: CRRAutility(x,CRRA)
uP = lambda x: CRRAutilityP(x, CRRA)
uPinv = lambda x: CRRAutilityP_inv(x, CRRA)
# Create a grid for market resources
mGrid = (aGrid-aGrid[0])*1.5
mGridPlots = np.linspace(w,10*w,100)
mGridPlotsC = np.insert(mGridPlots,0,0)
# Transformations for value funtion interpolation
vTransf = lambda x: np.exp(x)
vUntransf = lambda x: np.log(x)
```
# The third (last) period of life
In the last period of life, the agent's problem is determined by his total amount of resources $m_3$ and a state variable $W$ that indicates whether he wrote a will ($W=1$) or not ($W=0$).
### The agent without a will
An agent who does not have a will simply consumes all of his available resources. Therefore, his value and consumption functions will be:
\begin{equation}
V_3(m_3,W=0) = u(m_3)
\end{equation}
\begin{equation}
c_3(m_3, W=0) = m_3
\end{equation}
Where $u(\cdot)$ gives the utility from consumption. We assume a CRRA specification $u(c) = \frac{c^{1-\rho}}{1-\rho}$.
### The agent with a will
An agent who wrote a will decides how to allocate his available resources $m_3$ between his consumption and a bequest. We assume an additive specification for the utility of a given consumption-bequest combination that follows a particular case in [Carroll (2000)](http://www.econ2.jhu.edu/people/ccarroll/Why.pdf). The component of utility from leaving a bequest $x$ is assumed to be $\ln (x+1)$. Therefore, the agent's value function is
\begin{equation}
V_3(m_3, W=1) = \max_{0\leq c_3 \leq m_3} u(c_3) + \ln(m_3 - c_3 + 1)
\end{equation}
For ease of exposition we consider the case $\rho = 2$, where [Carroll (2000)](http://www.econ2.jhu.edu/people/ccarroll/Why.pdf) shows that the optimal consumption level is given by
\begin{equation}
c_3(m_3, W=1) = \min \left[m_3, \frac{-1 + \sqrt{1 + 4(m_3+1)}}{2} \right].
\end{equation}
The consumption function shows that $m_3=1$ is the level of resources at which an important change of behavior occurs: agents leave bequests only for $m_3 > 1$. Since an important change of behavior happens at this point, we call it a 'kink-point' and add it to our grids.
```python
# Agent without a will
mGrid3_no = mGrid
cGrid3_no = mGrid
vGrid3_no = u(cGrid3_no)
# Create functions
c3_no = LinearInterp(mGrid3_no, cGrid3_no) # (0,0) is already here.
vT3_no = LinearInterp(mGrid3_no, vTransf(vGrid3_no), lower_extrap = True)
v3_no = lambda x: vUntransf(vT3_no(x))
# Agent with a will
# Define an auxiliary function with the analytical consumption expression
c3will = lambda m: np.minimum(m, -0.5 + 0.5*np.sqrt(1+4*(m+1)))
# Find the kink point
mKink = 1.0
indBelw = mGrid < mKink
indAbve = mGrid > mKink
mGrid3_wi = np.concatenate([mGrid[indBelw],
np.array([mKink]),
mGrid[indAbve]])
cGrid3_wi = c3will(mGrid3_wi)
cAbve = c3will(mGrid[indAbve])
beqAbve = mGrid[indAbve] - c3will(mGrid[indAbve])
vGrid3_wi = np.concatenate([u(mGrid[indBelw]),
u(np.array([mKink])),
u(cAbve) + np.log(1+beqAbve)])
# Create functions
c3_wi = LinearInterp(mGrid3_wi, cGrid3_wi) # (0,0) is already here
vT3_wi = LinearInterp(mGrid3_wi, vTransf(vGrid3_wi), lower_extrap = True)
v3_wi = lambda x: vUntransf(vT3_wi(x))
plt.figure()
plt.plot(mGridPlots, v3_wi(mGridPlots), label = 'Will')
plt.plot(mGridPlots, v3_no(mGridPlots), label = 'No Will')
plt.title('Period 3: Value functions')
plt.xlabel('Market resources')
plt.legend()
plt.show()
plt.plot(mGridPlotsC, c3_wi(mGridPlotsC), label = 'Will')
plt.plot(mGridPlotsC, c3_no(mGridPlotsC), label = 'No Will')
plt.title('Period 3: Consumption Functions')
plt.xlabel('Market resources')
plt.legend()
plt.show()
```
# The second period
On the second period, the agent takes his resources as given (the only state variable) and makes two decisions:
- Whether to write a will or not.
- What fraction of his resources to consume.
These decisions can be seen as happening sequentially: the agent first decides whether to write a will or not, and then consumes optimally in accordance with his previous decision. Since we solve the model backwards in time, we first explore the consumption decision, conditional on the choice of writing a will or not.
## An agent who decides not to write a will
After deciding not to write a will, an agent solves the optimization problem expressed in the following conditional value function
\begin{equation}
\begin{split}
\nu (m_2|w=0) &= \max_{0\leq c \leq m_2} u(c) + \beta V_3(m_3,W=0)\\
s.t.&\\
m_3 &= m_2 - c + w
\end{split}
\end{equation}
We can approximate a solution to this problem through the method of endogenous gridpoints. This yields approximations to $\nu(\cdot|w=0)$ and $c_2(\cdot|w=0)$
```python
# Second period, not writing a will
# Compute market resources at 3 with and without a will
mGrid3_cond_nowi = aGrid + w
# Compute marginal value of assets in period 3 for each amount of savings in 2
vPGrid3_no = uP(c3_no(mGrid3_cond_nowi))
# Get consumption through EGM inversion of the euler equation
cGrid2_cond_no = uPinv(DiscFac*vPGrid3_no)
# Get beginning-of-period market resources
mGrid2_cond_no = aGrid + cGrid2_cond_no
# Compute value function
vGrid2_cond_no = u(cGrid2_cond_no) + DiscFac*v3_no(mGrid3_cond_nowi)
# Create interpolating value and consumption functions
vT2_cond_no = LinearInterp(mGrid2_cond_no, vTransf(vGrid2_cond_no), lower_extrap = True)
v2_cond_no = lambda x: vUntransf(vT2_cond_no(x))
c2_cond_no = LinearInterp(np.insert(mGrid2_cond_no,0,0), np.insert(cGrid2_cond_no,0,0))
```
## An agent who decides to write a will
An agent who decides to write a will also solves for his consumption dynamically. We assume that the lawyer that helps the agent write his will takes some fraction $\tau$ of his total resources in period 3. Therefore, the evolution of resources is given by $m_3 = (1-\tau)(m_2 - c_2 + w)$. The conditional value function of the agent is therefore:
\begin{equation}
\begin{split}
\nu (m_2|w=1) &= \max_{0\leq c \leq m_2} u(c) + \beta V_3(m_3,W=1)\\
s.t.&\\
m_3 &= (1-\tau)(m_2 - c + w)
\end{split}
\end{equation}
We also approximate a solution to this problem using the EGM. This yields approximations to $\nu(\cdot|w=1)$ and $c_2(\cdot|w=1)$.
```python
# Second period, writing a will
# Compute market resources at 3 with and without a will
mGrid3_cond_will = (1-willCstFac)*(aGrid + w)
# Compute marginal value of assets in period 3 for each ammount of savings in 2
vPGrid3_wi = uP(c3_wi(mGrid3_cond_will))
# Get consumption through EGM inversion of the euler equation
cGrid2_cond_wi = uPinv(DiscFac*(1-willCstFac)*vPGrid3_wi)
# Get beginning-of-period market resources
mGrid2_cond_wi = aGrid + cGrid2_cond_wi
# Compute value function
vGrid2_cond_wi = u(cGrid2_cond_wi) + DiscFac*v3_wi(mGrid3_cond_will)
# Create interpolating value and consumption functions
vT2_cond_wi = LinearInterp(mGrid2_cond_wi, vTransf(vGrid2_cond_wi), lower_extrap = True)
v2_cond_wi = lambda x: vUntransf(vT2_cond_wi(x))
c2_cond_wi = LinearInterp(np.insert(mGrid2_cond_wi,0,0), np.insert(cGrid2_cond_wi,0,0))
```
## The decision whether to write a will or not
With the conditional value functions at hand, we can now express and solve the decision of whether to write a will or not, and obtain the unconditional value and consumption functions.
\begin{equation}
V_2(m_2) = \max \{ \nu (m_2|w=0), \nu (m_2|w=1) \}
\end{equation}
\begin{equation}
w^*(m_2) = \arg \max_{w \in \{0,1\}} \{ \nu (m_2|w=w) \}
\end{equation}
\begin{equation}
c_2(m_2) = c_2(m_2|w=w^*(m_2))
\end{equation}
We now construct these objects.
```python
# We use HARK's 'calcLogSumchoiceProbs' to compute the optimal
# will decision over our grid of market resources.
# The function also returns the unconditional value function
# Use transformed values since -given sigma=0- magnitudes are unimportant. This
# avoids NaNs at m \approx 0.
vTGrid2, willChoice2 = calcLogSumChoiceProbs(np.stack((vT2_cond_wi(mGrid),
vT2_cond_no(mGrid))),
sigma = 0)
vGrid2 = vUntransf(vTGrid2)
# Plot the optimal decision rule
plt.plot(mGrid, willChoice2[0])
plt.title('$w^*(m)$')
plt.ylabel('Write will (1) or not (0)')
plt.xlabel('Market resources: m')
plt.show()
# With the decision rule we can get the unconditional consumption function
cGrid2 = (willChoice2*np.stack((c2_cond_wi(mGrid),c2_cond_no(mGrid)))).sum(axis=0)
vT2 = LinearInterp(mGrid, vTransf(vGrid2), lower_extrap = True)
v2 = lambda x: vUntransf(vT2(x))
c2 = LinearInterp(mGrid, cGrid2)
# Plot the conditional and unconditional value functions
plt.plot(mGridPlots, v2_cond_wi(mGridPlots), label = 'Cond. Will')
plt.plot(mGridPlots, v2_cond_no(mGridPlots), label = 'Cond. No will')
plt.plot(mGridPlots, v2(mGridPlots), 'k--',label = 'Uncond.')
plt.title('Period 2: Value Functions')
plt.xlabel('Market resources')
plt.legend()
plt.show()
# Plot the conditional and unconditional consumption
# functions
plt.plot(mGridPlotsC, c2_cond_wi(mGridPlotsC), label = 'Cond. Will')
plt.plot(mGridPlotsC, c2_cond_no(mGridPlotsC), label = 'Cond. No will')
plt.plot(mGridPlotsC, c2(mGridPlotsC), 'k--',label = 'Uncond.')
plt.title('Period 2: Consumption Functions')
plt.xlabel('Market resources')
plt.legend()
plt.show()
```
# The first period
In the first period, the agent simply observes his market resources and decides what fraction of them to consume. His problem is represented by the following value function
\begin{equation}
\begin{split}
V (m_1) &= \max_{0\leq c \leq m_1} u(c) + \beta V_2(m_2)\\
s.t.&\\
m_2 &= m_1 - c + w.
\end{split}
\end{equation}
Although this looks like a simple problem, there are complications introduced by the kink in $V_2(\cdot)$, which is clearly visible in the plot from the previous block. Particularly, note that $V_2'(\cdot)$ and $c_2(\cdot)$ are not monotonic: there are now multiple points $m$ for which the slope of $V_2(m)$ is equal. Thus, the Euler equation becomes a necessary but not sufficient condition for optimality and the traditional EGM inversion step can generate non-monotonic endogenous $m$ gridpoints.
We now illustrate this phenomenon.
```python
# EGM step
# Period 2 resources implied by the exogenous savings grid
mGrid2 = aGrid + w
# Envelope condition
vPGrid2 = uP(c2(mGrid2))
# Inversion of the euler equation
cGrid1 = uPinv(DiscFac*vPGrid2)
# Endogenous gridpoints
mGrid1 = aGrid + cGrid1
vGrid1 = u(cGrid1) + DiscFac*v2(mGrid2)
plt.plot(mGrid1)
plt.title('Endogenous gridpoints')
plt.xlabel('Position: i')
plt.ylabel('Endogenous grid point: $m_i$')
plt.show()
plt.plot(mGrid1,vGrid1)
plt.title('Value function at grid points')
plt.xlabel('Market resources: m')
plt.ylabel('Value function')
plt.show()
```
The previous cell applies the endogenous gridpoints method to the first period problem. The plots illustrate that the sequence of resulting endogenous gridpoints $\{m_i\}_{i=1}^N$ is not monotonic. This results in intervals of market resources over which we have multiple candidate values for the value function. This is the point where we must apply the upper envelope function illustrated above.
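Conceptually, the upper-envelope step can be sketched as follows (a simplified illustration, not HARK's `calcMultilineEnvelope`, which additionally tracks consumption at the selected points; `segments` here stands for hypothetical `(start, stop)` index pairs delimiting the monotone pieces of the endogenous grid):

```python
import numpy as np

def upper_envelope_sketch(m_egm, v_egm, m_common, segments):
    # segments: hypothetical (start, stop) index pairs delimiting monotone pieces of m_egm
    v_candidates = np.full((len(segments), len(m_common)), -np.inf)
    for k, (lo, hi) in enumerate(segments):
        m_seg, v_seg = m_egm[lo:hi], v_egm[lo:hi]
        order = np.argsort(m_seg)                  # np.interp needs increasing x
        v_candidates[k] = np.interp(m_common, m_seg[order], v_seg[order],
                                    left=-np.inf, right=-np.inf)
    return v_candidates.max(axis=0)                # pointwise upper envelope
```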
We finally use the resulting consumption and value grid points to create the first period value and consumption functions.
```python
# Calculate envelope
vTGrid1 = vTransf(vGrid1) # The function operates with *transformed* value grids
rise, fall = calcSegments(mGrid1, vTGrid1)
mGrid1_up, cGrid1_up, vTGrid1_up = calcMultilineEnvelope(mGrid1, cGrid1,
vTGrid1, mGrid)
# Create functions
c1_up = LinearInterp(mGrid1_up, cGrid1_up)
v1T_up = LinearInterp(mGrid1_up, vTGrid1_up)
v1_up = lambda x: vUntransf(v1T_up(x))
# Show that there is a non-monotonicity and that the upper envelope fixes it
plt.plot(mGrid1,vGrid1, label = 'EGM Points')
plt.plot(mGridPlots, v1_up(mGridPlots), 'k--', label = 'Upper Envelope')
plt.title('Period 1: Value function')
plt.legend()
plt.show()
plt.plot(mGrid1,cGrid1, label = 'EGM Points')
plt.plot(mGridPlotsC,c1_up(mGridPlotsC),'k--', label = 'Upper Envelope')
plt.title('Period 1: Consumption function')
plt.legend()
plt.show()
```
# References
[1] Iskhakov, F. , Jørgensen, T. H., Rust, J. and Schjerning, B. (2017), The endogenous grid method for discrete‐continuous dynamic choice models with (or without) taste shocks. Quantitative Economics, 8: 317-365. doi:10.3982/QE643
[2] Carroll, C. D. (2006). The method of endogenous gridpoints for solving dynamic stochastic optimization problems. Economics letters, 91(3), 312-320.
| ec6a3a269034badbc2638b461aa8d82e935d7db1 | 259,651 | ipynb | Jupyter Notebook | notebooks/DCEGM-Upper-Envelope.ipynb | sbenthall/DemARK | ef1c010091d28c7dea2e5d4fa0f746e67c6b23f4 | [
"Apache-2.0"
]
| null | null | null | notebooks/DCEGM-Upper-Envelope.ipynb | sbenthall/DemARK | ef1c010091d28c7dea2e5d4fa0f746e67c6b23f4 | [
"Apache-2.0"
]
| null | null | null | notebooks/DCEGM-Upper-Envelope.ipynb | sbenthall/DemARK | ef1c010091d28c7dea2e5d4fa0f746e67c6b23f4 | [
"Apache-2.0"
]
| null | null | null | 259.132735 | 21,916 | 0.917177 | true | 5,810 | Qwen/Qwen-72B | 1. YES
2. YES | 0.763484 | 0.731059 | 0.558151 | __label__eng_Latn | 0.977673 | 0.135102 |
# Fundamentals of Data Science
Winter Semester 2021
## Prof. Fabio Galasso, Guido D'Amely, Alessandro Flaborea, Luca Franco, Muhammad Rameez Ur Rahman and Alessio Sampieri
<galasso@di.uniroma1.it>, <damely@di.uniroma1.it>, <flaborea@di.uniroma1.it>, <franco@diag.uniroma1.it>, <rahman@di.uniroma1.it>, <alessiosampieri27@gmail.com>
## Exercise 2: Classification
In Exercise 2, you will re-derive and implement logistic regression and optimize the parameters with Gradient Descent and with the Newton's method. Also, in this exercise you will re-derive and implement Gassian Discriminant Analysis.
We will use datasets generated from the make_classification function from the SkLearn library. Its first output contains the feature values $x^{(i)}_1$ and $x^{(i)}_2$ for the $i$-th data sample $x^{(i)}$. The second contains the ground truth label $y^{(i)}$ for each corresponding data sample.
The completed exercise should be handed in as a single notebook file. Use Markdown to provide equations. Use the code sections to provide your scripts and the corresponding plots.
Submit it by sending an email to galasso@di.uniroma1.it, flaborea@di.uniroma1.it, franco@diag.uniroma1.it and alessiosampieri27@gmail.com by Wednesday November 17th 2021, 23:59.
## Notation
- $x^i$ is the $i^{th}$ feature vector
- $y^i$ is the expected outcome for the $i^{th}$ training example
- $m$ is the number of training examples
- $n$ is the number of features
Let's start by setting up our Python environment and importing the required libraries:
```python
%matplotlib inline
import numpy as np # imports a fast numerical programming library
import scipy as sp # imports stats functions, amongst other things
import matplotlib as mpl # this actually imports matplotlib
import matplotlib.cm as cm # allows us easy access to colormaps
import matplotlib.pyplot as plt # sets up plotting under plt
import pandas as pd # lets us handle data as dataframes
from sklearn.datasets import make_classification
import seaborn as sns
# sets up pandas table display
pd.set_option('display.width', 500)
pd.set_option('display.max_columns', 100)
pd.set_option('display.notebook_repr_html', True)
import seaborn as sns # sets up styles and gives us more plotting options
```
## Question 1: Logistic Regression with Gradient Ascent **(10 Points)**
### Code and Theory
#### Exercise 1.a **(3 Points)** Equations for the log likelihood, its gradient, and the gradient ascent update rule.
Write and simplify the likelihood $L(\theta)$ and log-likelihood $l(\theta)$ of the parameters $\theta$.
Recall the probabilistic interpretation of the hypothesis $h_\theta(x)= P(y=1|x;\theta)$ and that $h_\theta(x)=\frac{1}{1+\exp(-\theta^T x)}$.
Also derive the gradient $\frac{\delta l(\theta)}{\delta \theta_j}$ of $l(\theta)$ and write the gradient update equation.
Question: Are we looking for a local minimum or a local maximum using the gradient ascent rule?
################# Do not write above this line #################
Your equations and answers here.
- $P(Y=1|x; \theta) = h_{\theta}(x)$
- $P(Y=0|x; \theta) = 1 - h_{\theta}(x)$
Given these two points, we can say that in general we have:
$P(y|x;\theta)=h_{\theta}(x)^{y} (1-h_{\theta}(x))^{1-y}$
We must observe that for $y=1$ we are left with only $h_{\theta}(x)$, while for $y=0$ we are left with only $(1-h_{\theta}(x))$.
The Likelihood $L(\theta)$ is given by:
$$L(\theta) = P(\vec{y}|x; \theta) \\
= \prod_{i=1}^{m} P(y^{(i)}|x^{(i)};\theta) \\
= \prod_{i=1}^{m} h_{\theta}(x^{(i)})^{y^{(i)}}(1-h_{\theta}(x^{(i)}))^{1-y^{(i)}} \\
= \prod_{i=1}^{m} \bigg(\frac{1}{1+\exp(-\theta^T x^{(i)})}\bigg)^{y^{(i)}}\bigg(1-\frac{1}{1+\exp(-\theta^T x^{(i)})}\bigg)^{1-y^{(i)}}
$$
Instead, in order to find the **Log-likelihood $l(\theta)$**, we must apply the natural logarithm function to the likelihood function previously found, and we get:
$$
l(\theta) = \log L(\theta)
= \sum_{i=1}^{m} y^{(i)}\log h_{\theta}(x^{(i)})+(1-y^{(i)})\log(1-h_{\theta}(x^{(i)})) \\
= \sum_{i=1}^{m} y^{(i)}\log \bigg(\frac{1}{1+\exp(-\theta^T x^{(i)})}\bigg) + (1-y^{(i)})\log\bigg(1-\frac{1}{1+\exp(-\theta^T x^{(i)})}\bigg)
$$
We proceed by deriving the gradient $\frac{\delta l(\theta)}{\delta \theta_j}$ of $l(\theta)$ as follows:
$$
\frac{\delta l(\theta)}{\delta \theta_j} = \frac{\delta}{\delta \theta_j} \Bigg(\sum_{i=1}^{m} y^{(i)}\log \bigg(\frac{1}{1+\exp(-\theta^T x^{(i)})}\bigg) + (1-y^{(i)})\log\bigg(1-\frac{1}{1+\exp(-\theta^T x^{(i)})}\bigg)\Bigg)\\
= \sum_{i=1}^{m}\Bigg[y^{(i)}-\bigg(\frac{1}{1+\exp(-\theta^T x^{(i)})}\bigg)\Bigg]x_j^{(i)}
$$
The corresponding gradient ascent update rule is
$$
\theta_j := \theta_j + \alpha \, \frac{\delta l(\theta)}{\delta \theta_j} = \theta_j + \alpha \sum_{i=1}^{m}\Big(y^{(i)}-h_{\theta}(x^{(i)})\Big)x_j^{(i)}
$$
The gradient ascent rule leads us to a local maximum of our (differentiable) function.
#### Exercise 1.b **(7 Points)** Implementation of logistic regression with Gradient Ascent
Code up the equations above to learn the logistic regression parameters. The dataset used here is created using the make_classification function present in the SkLearn library. $x^{(i)}_1$ and $x^{(i)}_2$ represent the two features for the $i$-th data sample $x^{(i)}$ and $y^{(i)}$ is its ground truth label.
```python
X, y = make_classification(n_samples=500, n_features=2, n_informative=2, n_redundant=0, n_classes=2, random_state=5)
X.shape, y.shape
```
((500, 2), (500,))
```python
sns.scatterplot(x=X[:,0], y=X[:,1], hue=y);
```
Adding a column of 1's to $X$ to take into account the zero intercept
```python
x = np.hstack([np.ones((X.shape[0], 1)), X])
```
```python
[x[:5,:],x[-5:,:]] # Plot the first and last 5 lines of x, now containing features x0 (constant=1), x1 and x2
```
[array([[ 1. , 2.25698215, -1.34710915],
[ 1. , 1.43699308, 1.28420453],
[ 1. , 0.57927295, 0.23690172],
[ 1. , 0.42538132, -0.24611145],
[ 1. , 1.13485101, -0.61162683]]),
array([[ 1. , 1.56638944, 0.81749944],
[ 1. , -1.94913831, -1.90601147],
[ 1. , 1.53440506, -0.11687238],
[ 1. , -0.39243599, 1.39209018],
[ 1. , -0.11881249, 0.96973739]])]
```python
[y[:5],y[-5:]] # Plot the first and last 5 lines of y
```
[array([1, 1, 1, 0, 1]), array([1, 0, 0, 0, 1])]
Define the sigmoid function "sigmoid", the function to compute the gradient of the log likelihood "grad_l" and the gradient ascent algorithm.
################# Do not write above this line #################
```python
def sigmoid(x):
'''
Function to compute the sigmoid of a given input x.
Input:
x: it's the input data matrix. The shape is (N, H)
Output:
g: The sigmoid of the input x
'''
g = 1 / (1 + np.exp(-x))
return g
def log_likelihood(theta,features,target):
'''
Function to compute the log likehood of theta according to data x and label y
Input:
theta: it's the model parameter matrix.
features: it's the input data matrix. The shape is (N, H)
target: the label array
Output:
log_g: the log likehood of theta according to data x and label y
'''
t = features.dot(theta)
log_l= np.sum(target * np.log(sigmoid(t)) + (1 - target) * np.log(1 - sigmoid(t))) / features.shape[0]
return log_l
def predictions(features, theta):
'''
Function to compute the predictions for the input features
Input:
theta: it's the model parameter matrix.
features: it's the input data matrix. The shape is (N, H)
Output:
preds: the predictions of the input features
'''
preds = sigmoid(features.dot(theta))
return preds
def update_theta(theta, target, preds, features, lr):
'''
Function to compute the gradient of the log likelihood
and then return the updated weights
Input:
theta: the model parameter matrix.
target: the label array
preds: the predictions of the input features
features: it's the input data matrix. The shape is (N, H)
lr: the learning rate
Output:
theta: the updated model parameter matrix.
'''
prediction = predictions(features, theta)
theta = theta + ((lr/features.shape[0]) * (features.T).dot(target-prediction))
return theta
def gradient_ascent(theta, features, target, lr, num_steps):
'''
Function to execute the gradient ascent algorithm
Input:
theta: the model parameter matrix.
target: the label array
num_steps: the number of iterations
features: the input data matrix. The shape is (N, H)
lr: the learning rate
Output:
theta: the final model parameter matrix.
log_likelihood_history: the values of the log likelihood during the process
'''
log_likelihood_history = np.zeros(num_steps)
m = len(target)
for it in range(num_steps):
prediction = np.dot(features,theta)
theta = update_theta(theta, target, prediction, features, lr)
log_likelihood_history[it] = log_likelihood(theta,features,target)
return theta, log_likelihood_history
```
################# Do not write below this line #################
Check your grad_l implementation:
grad_l applied to the theta_test (defined below) should provide a value for log_l_test close to the target_value (defined below); in other words the error_test should be 0, up to machine error precision.
```python
target_value = -1.630501731599431
output_test = log_likelihood(np.array([-7,4,1]),x,y)
error_test=np.abs(output_test-target_value)
print("{:f}".format(error_test))
```
0.000000
Let's now apply the function gradient_ascent and print the final theta as well as theta_history
```python
# Initialize theta0
theta0 = np.zeros(x.shape[1])
# Run Gradient Ascent method
n_iter=1000
theta_final, log_l_history = gradient_ascent(theta0,x,y,lr=0.5,num_steps=n_iter)
print(theta_final)
```
[-0.46097042 2.90036399 0.23146846]
Let's plot the log likelihood over iterations
```python
fig,ax = plt.subplots(num=2)
ax.set_ylabel('l(Theta)')
ax.set_xlabel('Iterations')
_=ax.plot(range(len(log_l_history)),log_l_history,'b.')
```
Plot the data and the decision boundary:
```python
# Generate vector to plot decision boundary
x1_vec = np.linspace(X[:,0].min(),X[:,1].max(),2)
# Plot raw data
sns.scatterplot(x=X[:,0], y=X[:,1], hue=y, data=X)
# Plot decision boundary
plt.plot(x1_vec,(-x1_vec*theta_final[1]-theta_final[0])/theta_final[2], color="red")
plt.ylim(X[:,1].min()-1,X[:,1].max()+1)
# Save the theta_final value for later comparisons
theta_GA = theta_final.copy()
```
################# Do not write above this line #################
Discuss these two points:
1. You have implemented the gradient ascent rule. Could we have also used gradient descent instead for the proposed problem? Why/Why not?
- Yes: since the cost function can be derived from the log-likelihood simply by multiplying it by $-1$,
$J(\theta)=-\frac{1}{m}l(\theta)$,
we could have turned the maximization problem into a minimization one and then used gradient descent.
2. Let's now analyze in depth how the learning rate $\alpha$ and the number of iterations affect the final results. Run the algorithm you have written for different values of $\alpha$ and of the number of iterations and look at the outputs you get. Is the decision boundary influenced by changes in these parameters? Why do you think these parameters are affecting/not affecting the results?
- We analyzed how the learning rate $\alpha$ and the number of iterations affect the final results:
1) if we keep the learning rate fixed but use a lower number of iterations, theta_final doesn't change considerably;
2) the same happens if we keep the number of iterations fixed but increase the learning rate $\alpha$, or if we increase both the learning rate and the number of iterations;
3) a small difference in the results appears if we decrease both the learning rate and the number of iterations at the same time, but still nothing considerable;
4) we notice abnormal behaviour if we take a very large value of $\alpha$ (with the same number of iterations).
- We can justify these results as follows: in general, a large learning rate makes learning faster, at the cost of a less optimal result; a smaller learning rate may allow the model to reach a more optimal (or even globally optimal) result, but may take significantly longer to execute.
```python
#EXAMPLE with a too big value of alpha (alpha = 300)
# that gives us a wrong result
# Run Gradient Ascent method
n_iter=1500
theta_final, log_l_history = gradient_ascent(theta0,x,y,lr=300,num_steps=n_iter)
print(theta_final)
```
[-45.43579893 158.26593918 23.81026057]
<ipython-input-10-bfb1284f7296>:30: RuntimeWarning: divide by zero encountered in log
log_l= np.sum(target * np.log(sigmoid(t)) + (1 - target) * np.log(1 - sigmoid(t))) / features.shape[0]
<ipython-input-10-bfb1284f7296>:30: RuntimeWarning: invalid value encountered in multiply
log_l= np.sum(target * np.log(sigmoid(t)) + (1 - target) * np.log(1 - sigmoid(t))) / features.shape[0]
<ipython-input-10-bfb1284f7296>:12: RuntimeWarning: overflow encountered in exp
g = 1 / (1 + np.exp(-x))
################# Do not write below this line #################
## Question 2: Logistic Regression with non linear boundaries (7 points)
#### Exercise 2.a **(4 Points)** Polynomial features for logistic regression
Define new features, e.g. of 2nd and 3rd degrees, and learn a logistic regression classifier by using the new features, by using the gradient ascent optimization algorithm you defined in Question 1.
In particular, we would consider a polynomial boundary with equation:
$f(x_1, x_2) = c_0 + c_1 x_1 + c_2 x_2 + c_3 x_1^2 + c_4 x_2^2 + c_5 x_1 x_2 + c_6 x_1^3 + c_7 x_2^3 + c_8 x_1^2 x_2 + c_9 x_1 x_2^2$
We would therefore compute 7 new features: 3 new ones for the quadratic terms and 4 new ones for the cubic terms.
Create new arrays by stacking x and the new 7 features (in the order x1x1, x2x2, x1x2, x1x1x1, x2x2x2, x1x1x2, x1x2x2). In particular create x_new_quad by additionally stacking with x the quadratic features, and x_new_cubic by additionally stacking with x the quadratic and the cubic features.
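One could also build these stacked arrays directly with numpy (a minimal sketch, assuming `x1 = x[:, [1]]` and `x2 = x[:, [2]]` are taken as column vectors from `x`; the graded solution further below uses a general helper instead):

```python
import numpy as np

# Minimal sketch: build the polynomial features by horizontal stacking.
x1 = x[:, [1]]
x2 = x[:, [2]]
x_quad_sketch  = np.hstack([x, x1*x1, x2*x2, x1*x2])
x_cubic_sketch = np.hstack([x_quad_sketch, x1**3, x2**3, x1*x1*x2, x1*x2*x2])
```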
```python
from sklearn.datasets import make_classification
X, y = make_classification(n_samples=500, n_features=2, n_informative=2, n_redundant=0, n_classes=2, random_state=5)
X.shape, y.shape
```
((500, 2), (500,))
```python
x = np.hstack([np.ones((X.shape[0], 1)), X])
```
```python
import seaborn as sns
import matplotlib.pyplot as plt
sns.scatterplot(x=X[:,0], y=X[:,1], hue=y);
```
```python
# First extract features x1 and x2 from x and reshape them to x1 vector arrays
x1 = x[:,1]
x2 = x[:,2]
x1 = x1.reshape(x1.shape[0], 1)
x2 = x2.reshape(x2.shape[0], 1)
print(x[:5,:]) # For visualization of the first 5 values
print(x1[:5,:]) # For visualization of the first 5 values
print(x2[:5,:]) # For visualization of the first 5 values
```
[[ 1. 2.25698215 -1.34710915]
[ 1. 1.43699308 1.28420453]
[ 1. 0.57927295 0.23690172]
[ 1. 0.42538132 -0.24611145]
[ 1. 1.13485101 -0.61162683]]
[[2.25698215]
[1.43699308]
[0.57927295]
[0.42538132]
[1.13485101]]
[[-1.34710915]
[ 1.28420453]
[ 0.23690172]
[-0.24611145]
[-0.61162683]]
################# Do not write above this line #################
Your code here
```python
def is_in(label, cur_labels):
na, nb = label.count("a"), label.count("b")
for l in cur_labels:
na_l, nb_l = l.count("a"), l.count("b")
if na_l == na and nb_l == nb:
return True
return False
def new_features(x, degree=2):
'''
Function to create n-degree features from the input
Input:
x: the initial features
        degree: the maximum degree you want for the features
Output:
features: the final features.
2nd degree features must have the order [x, x1x1, x1x2, x2x2]
        3rd degree features must have the order [x, x1x1, x1x2, x2x2, x1x1x1, x1x1x2, x1x2x2, x2x2x2]
'''
#features = np.ones(x[:,1].shape[0])
    # 3rd degree features must have the order [x, x1x1, x1x2, x2x2, x1x1x1, x1x1x2, x1x2x2, x2x2x2]
features = []
# initialize grades to degree 0
for triple in x:
c_features = []
a, b = triple[1], triple[2]
cur_factors, cur_labels = [a, b], ["a", "b"]
for degrees in range(2, degree + 1):
# factors list
new_factors, new_labels = [], []
for index in range(len(cur_factors)):
r_a, ra_label = a * cur_factors[index], "a" + cur_labels[index]
r_b, rb_label = b * cur_factors[index], "b" + cur_labels[index]
if not is_in(ra_label, new_labels):
new_factors.append(r_a)
new_labels.append(ra_label)
if not is_in(rb_label, new_labels):
new_factors.append(r_b)
new_labels.append(rb_label)
c_features += new_factors
cur_factors, cur_labels = new_factors, new_labels
features.append([*triple, *c_features])
features = np.array(features, np.float64)
return features
```
################# Do not write below this line #################
```python
x_new_quad = new_features(x, degree=2)
x_new_cubic = new_features(x, degree=3)
#reordering output features
temp = np.copy(x_new_quad[:, -1])
x_new_quad[:, -1] = x_new_quad[:, -2]
x_new_quad[:, -2] = temp
temp = np.copy(x_new_cubic[:, -1])
x_new_cubic[:, -1] = x_new_cubic[:, -2]
x_new_cubic[:, -2] = x_new_cubic[:, -3]
x_new_cubic[:, -3] = temp
```
Now use the gradient ascent optimization algorithm to learn theta by maximizing the log-likelihood, both for the case of x_new_quad and x_new_cubic.
```python
# Initialize theta0, in case of quadratic features
theta0_quad = np.zeros(x_new_quad.shape[1])
theta_final_quad, log_l_history_quad = gradient_ascent(theta0_quad,x_new_quad,y,lr=0.5,num_steps=n_iter)
# Initialize theta0, in case of quadratic and cubic features
theta0_cubic = np.zeros(x_new_cubic.shape[1])
# Run Newton's method, in case of quadratic and cubic features
theta_final_cubic, log_l_history_cubic = gradient_ascent(theta0_cubic,x_new_cubic,y,lr=0.5,num_steps=n_iter)
# check and compare with previous results
print(theta_final_quad)
print(theta_final_cubic)
```
[ 0.07605952 3.33058375 0.27310376 -0.52398852 -0.34168459 -0.05180065]
[ 0.84882395 2.50116735 1.74388569 -1.26222374 -0.34310518 -0.99094786
0.37330203 -0.66243284 0.98447591 1.40258057]
```python
# Plot the log likelihood values in the optimization iterations, in one of the two cases.
fig,ax = plt.subplots(num=2)
ax.set_ylabel('l(Theta)')
ax.set_xlabel('Iterations')
_=ax.plot(range(len(log_l_history_quad)),log_l_history_quad,'b.')
```
#### Exercise 2.b **(3 Points)** Plot the computed non-linear boundary and discuss the questions
First, define a boundary_function to compute the boundary equation for the input feature vectors $x_1$ and $x_2$, according to estimated parameters theta, both in the case of quadratic (theta_final_quad) and of quadratic and cubic features (theta_final_cubic). Refer for the equation to the introductory part of Question 2.
################# Do not write above this line #################
Your code here
```python
def boundary_function(x1_vec, x2_vec, theta_final):
x1_vec, x2_vec = np.meshgrid(x1_vec,x2_vec)
if len(theta_final) == 6:
# boundary function value for features up to quadratic
c_0, c_1, c_2, c_3, c_4, c_5 = theta_final
f = c_0 + c_1*x1_vec + c_2*x2_vec + c_3*(x1_vec**2) + c_4*(x2_vec**2) + c_5*x1_vec*x2_vec
elif len(theta_final) == 10:
# boundary function value for features up to cubic
c_0, c_1, c_2, c_3, c_4, c_5, c_6, c_7, c_8, c_9 = theta_final
f = c_0 + c_1*x1_vec + c_2*x2_vec + c_3*(x1_vec**2) + c_4*(x2_vec**2) + c_5*x1_vec*x2_vec + c_6*(x1_vec**3) + c_7*(x2_vec**3) + c_8*(x1_vec**2)*x2_vec + c_9*(x2_vec**2)*x1_vec
else:
raise("Number of Parameters is not correct")
return x1_vec, x2_vec, f
```
################# Do not write below this line #################
Now plot the decision boundaries corresponding to the theta_final_quad and theta_final_cubic solutions.
```python
x1_vec = np.linspace(X[:,0].min()-1,X[:,0].max()+1,200);
x2_vec = np.linspace(X[:,1].min()-1,X[:,1].max()+1,200);
x1_vec, x2_vec, f = boundary_function(x1_vec, x2_vec, theta_final_quad)
sns.scatterplot(x=X[:,0], y=X[:,1], hue=y, data=X);
plt.contour(x1_vec, x2_vec, f, colors="red", levels=[0])
plt.show()
```
```python
x1_vec = np.linspace(X[:,0].min()-1,X[:,0].max()+1,200);
x2_vec = np.linspace(X[:,1].min()-1,X[:,1].max()+1,200);
x1_vec, x2_vec, f = boundary_function(x1_vec, x2_vec, theta_final_cubic)
sns.scatterplot(x=X[:,0], y=X[:,1], hue=y, data=X);
plt.contour(x1_vec, x2_vec, f, colors="red", levels=[0])
plt.show()
```
#### Confusion Matrix
Here you can see the confusion matrices related to the three models you've implemented.
```python
from sklearn.metrics import confusion_matrix
```
```python
## logistic regression with linear boundary
z = np.dot(x,theta_final)
probabilities = sigmoid(z)
y_hat = np.array(list(map(lambda x: 1 if x>0.5 else 0, probabilities)))
confusion_matrix(y, y_hat)
```
array([[226, 27],
[ 31, 216]], dtype=int64)
```python
## logistic regression with non linear boundary - quadratic
z = np.dot(x_new_quad,theta_final_quad)
probabilities = sigmoid(z)
y_hat = np.array(list(map(lambda x: 1 if x>0.5 else 0, probabilities)))
confusion_matrix(y, y_hat)
```
array([[220, 33],
[ 15, 232]], dtype=int64)
```python
## logistic regression with non linear boundary - cubic
z = np.dot(x_new_cubic,theta_final_cubic)
probabilities = sigmoid(z)
y_hat = np.array(list(map(lambda x: 1 if x>0.5 else 0, probabilities)))
confusion_matrix(y, y_hat)
```
array([[225, 28],
[ 11, 236]], dtype=int64)
################# Do not write above this line #################
Write now your considerations. Discuss in particular:
1. Look back at the plots you have generated. What can you say about the differences between the linear, quadratic, and cubic decision boundaries? Can you say whether the model improves in performance as the degree of the polynomial increases? Do you think you can incur underfitting by increasing the degree more and more?
 - The _linear decision boundary_ shown in the first plot is an example of underfitting: the model is not capable of capturing the underlying structure of the data.
 - In the same way, we can say that the _cubic decision boundary_ overfits the data: the model follows the behaviour of these particular observations too closely, which means it may fail to fit additional data or predict future observations reliably.
 - The _quadratic decision boundary_ fits the data in the most balanced way.
 - We notice that, as the degree grows, the decision boundary fits the data more and more closely. This yields an improvement when we move from the _linear_ boundary to the _quadratic_ one, whereas moving from the _quadratic_ to the _cubic_ one does not improve performance further.
2. Let's now delve into some quantitative analysis. The three tables you have generated represent the confusion matrix for the model you have implemented in the first two questions. What can you say about actual performances? Does the increase of the degree have a high effect on the results?
- As the displayed results show, the metrics do increase, even if only slightly:
about 0.02 each time. We can therefore say that increasing the degree has an effect on all of the metrics: Accuracy, Precision and Recall increase proportionally across the different cases.
| Confusion Matrix | Accuracy | Precision | Recall |
| --- | --- | --- | --- |
|[218, 35],[ 22, 225]| 0.886 | 0.865384 | 0.910931 |
|[220, 33],[ 15, 232]| 0.904 | 0.875471 | 0.939271 |
|[225, 28],[ 11, 236]| 0.922 | 0.893939 | 0.955465 |
################# Do not write below this line #################
## Question 3: Multinomial Classification (Softmax Regression) **(13 Points)**
### Code and Theory **(10 Points)**
### Report **(3 Points)**
#### Exercise 3.a **(4 Points)**
In the multinomial classification we generally have $K>2$ classes. So the label for the $i$-th sample $X_i$ is $y_i\in\{1,...,K\}$, where $i=1,...,N$. The output class for each sample is estimated by returning a score $s_i$ for each of the K classes. This results in a vector of scores of dimension K.
In this exercise we'll use the *Softmax Regression* model, which is the natural extension of *Logistic Regression* for the case of more than 2 classes. The score array is given by the linear model:
\begin{equation}
s_i = X_i \theta
\end{equation}
Scores may be interpreted probabilistically, upon application of the function *softmax*. The position in the vector with the highest probability will be predicted as the output class. The probability of the class k for the $i$-th data sample is:
\begin{equation}
p_{ik} = \frac{\exp(X_i \theta_k)}{\sum_{j=1}^K \exp(X_i \theta_j)}
\end{equation}
We will adopt the *Cross Entropy* loss and optimize the model via *Gradient Descent*.
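As a quick numeric illustration of the softmax step (a minimal sketch with made-up scores, independent of the graded implementation below):

```python
import numpy as np

scores = np.array([[2.0, 1.0, 0.1],    # hypothetical scores X_i @ theta, 2 samples, K=3 classes
                   [0.5, 2.5, 1.0]])
probs = np.exp(scores - scores.max(axis=1, keepdims=True))  # subtract row max for stability
probs /= probs.sum(axis=1, keepdims=True)
print(probs.sum(axis=1))     # each row sums to 1
print(probs.argmax(axis=1))  # predicted class per sample -> [0 1]
```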
In the first part of this exercise we have to:
- Write the equations of the Cross Entropy loss for the Softmax regression model;
- Compute the equation for the gradient of the Cross Entropy loss for the model, in order to use it in the gradient descent algorithm.
#### A bit of notation
* N: is the number of samples
* K: is the number of classes
* X: is the input dataset and it has shape (N, H) where H is the number of features
* y: is the output array with the labels; it has shape (N, 1)
* $\theta$: is the parameter matrix of the model; it has shape (H, K)
################# Do not write above this line #################
Your equations here.
\begin{equation}
L(\theta) = -\frac{1}{N} \sum_{i=1}^{N}\sum_{k=1}^{K} \textbf{1} \{y_{ik} = 1\} \log{\frac{\exp(X_i \theta_k)}{\sum_{j=1}^K\exp(X_i \theta_j)}}
\end{equation}
\begin{equation}
\nabla_{\theta_k} L(\theta) = -\frac{1}{N} \sum_{i=1}^{N} X_i^T \left( \textbf{1} \{y_{ik} = 1\} - \frac{\exp(X_i \theta_k)}{\sum_{j=1}^K\exp(X_i \theta_j)} \right)
\end{equation}
################# Do not write below this line #################
#### Exercise 3.b **(4 Points)**
Now we will implement the code for the equations. Let's implement the functions:
- softmax
- CELoss
- CELoss gradient
- gradient descent
We generate a toy dataset with *sklearn* library. Do not change anything outside the parts provided of your own code (else the provided checkpoint will not work).
```python
from sklearn.datasets import make_classification
X, y = make_classification(n_samples=300, n_features=7, n_informative=7, n_redundant=0, n_classes=3, random_state=1)
X.shape, y.shape
```
((300, 7), (300,))
As a hint for the implementations of your functions: consider the labels $y$ as one-hot vector. This will allow matrix operations (element-wise multiplication and summation).
```python
import scipy
import numpy as np
def class2OneHot(vec):
out_sparse = scipy.sparse.csr_matrix((np.ones(vec.shape[0]), (vec, np.array(range(vec.shape[0])))))
out_onehot = np.array(out_sparse.todense()).T
return out_onehot
y_onehot = class2OneHot(y)
```
Let's visualize the generated dataset. As visualization method we use *Principal Component Analysis* (PCA). PCA summarizes the high-dimensional feature vectors of each sample into 2 features, which we can illustrate with a 2D plot. Looking at the following plot, the 3 generated classes do not seem separable.
```python
from sklearn.decomposition import PCA
import pandas as pd
pca = PCA(n_components=2)
principalComponents = pca.fit_transform(X)
principalDf = pd.DataFrame(data = principalComponents, columns = ['pc1', 'pc2'])
finalDf = pd.concat([principalDf, pd.DataFrame(y, columns = ['target'])], axis = 1)
```
```python
import seaborn as sns
import matplotlib.pyplot as plt
sns.scatterplot(x='pc1', y='pc2', hue='target', data=finalDf);
```
################# Do not write above this line #################
```python
def softmax(theta, X):
'''
Function to compute associated probability for each sample and each class.
Input:
theta: it's the model parameter matrix. The shape is (H, K)
X: it's the input data matrix. The shape is (N, H)
Output:
softmax: it's the matrix containing probability for each sample and each class. The shape is (N, K)
'''
# Create the matrix for the dot products between theta and X
dot = np.zeros((len(X), len(theta[0])), dtype = float)
# Iterate for each column of theta and for each row of X
for k in range(len(theta[0])):
for i in range(len(X)):
# Compute the dot product
dot[i][k] = np.dot(X[i], theta[:,k])
    # Subtract the maximum for numerical stability (avoids overflow in the exponentials)
dot -= np.max(dot)
# Compute the softmax matrix
softmax = ( np.exp(dot).T / np.sum(np.exp(dot), axis = 1) ).T
return softmax
def CELoss(theta, X, y_onehot):
'''
Function to compute softmax regression model and Cross Entropy loss.
Input:
theta: it's the model parameter matrix. The shape is (H, K)
X: it's the input data matrix. The shape is (N, H)
y_onehot: it's the label array in encoded as one hot vector. The shape is (N, K)
Output:
loss: The scalar that is the mean error for each sample.
'''
# Call the softmax function
s = softmax(theta, X)
# Compute loss
l = []
for k in range(len(s[0])):
for i in range(len(s)):
if y_onehot[i][k] == 1:
l.append(-(y_onehot[i][k]) * np.log(s[i][k]))
loss = np.mean(l)
return loss
def CELoss_jacobian(theta, X, y_onehot):
'''
Function to compute gradient of the cross entropy loss with respect the parameters.
Input:
theta: it's the model parameter matrix. The shape is (H, K)
X: it's the input data matrix. The shape is (N, H)
y_onehot: it's the label array in encoded as one hot vector. The shape is (N, K)
Output:
jacobian: A matrix with the partial derivatives of the loss. The shape is (H, K)
'''
# Call the softmax function
s = softmax(theta, X)
# Compute the jacobian matrix
jacobian = (-1/len(X)) * (np.dot(X.T, (y_onehot - s)))
return jacobian
def gradient_descent(theta, X, y_onehot, alpha=0.01, iterations=100):
'''
Function to compute gradient of the cross entropy loss with respect the parameters.
Input:
theta: it's the model parameter matrix. The shape is (H, K)
X: it's the input data matrix. The shape is (N, H)
y_onehot: it's the label array in encoded as one hot vector. The shape is (N, K)
alpha: it's the learning rate, so it determines the speed of each step of the GD algorithm
        iterations: it's the total number of steps the algorithm performs
Output:
theta: it's the updated matrix of the parameters after all the iterations of the optimization algorithm. The shape is (H, K)
loss_history: it's an array with the computed loss after each iteration
'''
# We initialize an empty array to be filled with loss value after each iteration
loss_history = np.zeros(iterations)
# With a for loop we compute the steps of GD algo
for it in range(iterations):
theta = theta - alpha * CELoss_jacobian(theta, X, y_onehot)
loss_history[it] = CELoss(theta, X, y_onehot)
return theta, loss_history
```
################# Do not write below this line #################
```python
# Initialize a theta matrix with random parameters
theta0 = np.random.rand(X.shape[1], len(np.unique(y)))
print("Initial Loss with initialized theta is:", CELoss(theta0, X, y_onehot))
# Run Gradient Descent method
n_iter = 1000
theta_final, log_l_history = gradient_descent(theta0, X, y_onehot, alpha=0.01, iterations=n_iter)
```
Initial Loss with initialized theta is: 1.2646726695234938
```python
theta_final
```
array([[ 0.36863817, 0.43665456, 0.44932945],
[ 0.61032416, 0.46997052, 0.57767904],
[ 0.44826049, 0.46877418, 1.10659579],
[ 0.48684583, 0.15780697, 1.26364796],
[ 0.70460402, 0.1696857 , 0.21803725],
[ 0.90348791, 0.89975282, 0.69354729],
[ 0.99991163, -0.06843561, 1.21751273]])
```python
loss = CELoss(theta_final, X, y_onehot)
loss
```
0.5852421930515843
```python
fig,ax = plt.subplots(num=2)
ax.set_ylabel('loss')
ax.set_xlabel('Iterations')
_=ax.plot(range(len(log_l_history)), log_l_history,'b.')
```
#### Exercise 3.c **(2 Points)**
Let's now evaluate the goodness of the learnt based on accuracy:
\begin{equation}
Accuracy = \frac{Number\ of\ correct\ predictions}{Total\ number\ of\ predictions}
\end{equation}
Implement the compute_accuracy function. You may compare the accuracy achieved with learnt model Vs. a random model (random $\Theta$) or one based on $\Theta$'s filled with zeros.
################# Do not write above this line #################
```python
def compute_accuracy(theta, X, y):
'''
Function to compute accuracy metrics of the softmax regression model.
Input:
theta: it's the final parameter matrix. The one we learned after all the iterations of the GD algorithm. The shape is (H, K)
X: it's the input data matrix. The shape is (N, H)
y: it's the label array. The shape is (N, 1)
Output:
accuracy: Score of the accuracy.
'''
# Call the softmax function
s = softmax(theta, X)
# Get the predicted classes
predict = np.argmax(s, axis = 1)
# Get the number of correct predictions comparing them with the label array
# Divide by the total number of predictions (num of rows of softmax matrix)
accuracy = sum(predict == y)/(float(len(s)))
return accuracy
```
################# Do not write below this line #################
```python
compute_accuracy(theta_final, X, y)
```
0.7966666666666666
```python
theta0 = np.random.rand(X.shape[1], len(np.unique(y)))
compute_accuracy(theta0, X, y)
```
0.4766666666666667
```python
compute_accuracy(np.zeros((X.shape[1], len(np.unique(y)))), X, y)
```
0.3333333333333333
### Report **(3 Points)**
Experiment with different values for the learning rate $\alpha$ and the number of iterations. Look at how the loss plot, the convergence rate, and the resulting accuracy metric change. Report also the execution time of each run. For this last step you could use %%time at the beginning of the cell to display the time needed for the algorithm.
```python
%%time
# Initialize a theta matrix with random parameters
theta0 = np.random.rand(X.shape[1], len(np.unique(y)))
print("Initial Loss with initialized theta is:", CELoss(theta0, X, y_onehot))
# Run Gradient Descent method
n_iter = 100
theta_final, log_l_history = gradient_descent(theta0, X, y_onehot, alpha=0.001, iterations=n_iter)
```
Initial Loss with initialized theta is: 1.748475150783264
Wall time: 697 ms
**Write your Report here**
```python
# Initialize a theta matrix with random parameters
theta0 = np.random.rand(X.shape[1], len(np.unique(y)))
```
Learning rate $\alpha = 0.01$
Iterations = $100$
```python
%%time
# Run Gradient descent method
theta_final, log_l_history = gradient_descent(theta0, X, y_onehot, alpha = 0.01, iterations = 100)
# Compute the accuracy
acc = compute_accuracy(theta_final, X, y)
print('LR:', 0.01, 'Iter:', 100)
print('Accuracy:', acc)
# Loss plot
fig,ax = plt.subplots(num=2)
ax.set_ylabel('loss')
ax.set_xlabel('Iterations')
_=ax.plot(range(len(log_l_history)), log_l_history,'b.')
```
Learning rate $\alpha = 0.01$
Iterations = $500$
```python
%%time
# Run Gradient descent method
theta_final, log_l_history = gradient_descent(theta0, X, y_onehot, alpha = 0.01, iterations = 500)
# Compute the accuracy
acc = compute_accuracy(theta_final, X, y)
print('LR:', 0.01, 'Iter:', 500)
print('Accuracy:', acc)
# Loss plot
fig,ax = plt.subplots(num=2)
ax.set_ylabel('loss')
ax.set_xlabel('Iterations')
_=ax.plot(range(len(log_l_history)), log_l_history,'b.')
```
Learning rate $\alpha = 0.01$
Iterations = $1,000$
```python
%%time
# Run Gradient descent method
theta_final, log_l_history = gradient_descent(theta0, X, y_onehot, alpha = 0.01, iterations = 1000)
# Compute the accuracy
acc = compute_accuracy(theta_final, X, y)
print('LR:', 0.01, 'Iter:', 1000)
print('Accuracy:', acc)
# Loss plot
fig,ax = plt.subplots(num=2)
ax.set_ylabel('loss')
ax.set_xlabel('Iterations')
_=ax.plot(range(len(log_l_history)), log_l_history,'b.')
```
Learning rate $\alpha = 0.005$
Iterations = $100$
```python
%%time
# Run Gradient descent method
theta_final, log_l_history = gradient_descent(theta0, X, y_onehot, alpha = 0.005, iterations = 100)
# Compute the accuracy
acc = compute_accuracy(theta_final, X, y)
print('LR:', 0.005, 'Iter:', 100)
print('Accuracy:', acc)
# Loss plot
fig,ax = plt.subplots(num=2)
ax.set_ylabel('loss')
ax.set_xlabel('Iterations')
_=ax.plot(range(len(log_l_history)), log_l_history,'b.')
```
Learning rate $\alpha = 0.005$
Iterations = $500$
```python
%%time
# Run Gradient descent method
theta_final, log_l_history = gradient_descent(theta0, X, y_onehot, alpha = 0.005, iterations = 500)
# Compute the accuracy
acc = compute_accuracy(theta_final, X, y)
print('LR:', 0.005, 'Iter:', 500)
print('Accuracy:', acc)
# Loss plot
fig,ax = plt.subplots(num=2)
ax.set_ylabel('loss')
ax.set_xlabel('Iterations')
_=ax.plot(range(len(log_l_history)), log_l_history,'b.')
```
Learning rate $\alpha = 0.005$
Iterations = $1,000$
```python
%%time
# Run Gradient descent method
theta_final, log_l_history = gradient_descent(theta0, X, y_onehot, alpha = 0.005, iterations = 1000)
# Compute the accuracy
acc = compute_accuracy(theta_final, X, y)
print('LR:', 0.005, 'Iter:', 1000)
print('Accuracy:', acc)
# Loss plot
fig,ax = plt.subplots(num=2)
ax.set_ylabel('loss')
ax.set_xlabel('Iterations')
_=ax.plot(range(len(log_l_history)), log_l_history,'b.')
```
Learning rate $\alpha = 0.001$
Iterations = $100$
```python
%%time
# Run Gradient descent method
theta_final, log_l_history = gradient_descent(theta0, X, y_onehot, alpha = 0.001, iterations = 100)
# Compute the accuracy
acc = compute_accuracy(theta_final, X, y)
print('LR:', 0.001, 'Iter:', 100)
print('Accuracy:', acc)
# Loss plot
fig,ax = plt.subplots(num=2)
ax.set_ylabel('loss')
ax.set_xlabel('Iterations')
_=ax.plot(range(len(log_l_history)), log_l_history,'b.')
```
Learning rate $\alpha = 0.001$
Iterations = $500$
```python
%%time
# Run Gradient descent method
theta_final, log_l_history = gradient_descent(theta0, X, y_onehot, alpha = 0.001, iterations = 500)
# Compute the accuracy
acc = compute_accuracy(theta_final, X, y)
print('LR:', 0.001, 'Iter:', 500)
print('Accuracy:', acc)
# Loss plot
fig,ax = plt.subplots(num=2)
ax.set_ylabel('loss')
ax.set_xlabel('Iterations')
_=ax.plot(range(len(log_l_history)), log_l_history,'b.')
```
Learning rate $\alpha = 0.001$
Iterations = $1,000$
```python
%%time
# Run Gradient descent method
theta_final, log_l_history = gradient_descent(theta0, X, y_onehot, alpha = 0.001, iterations = 1000)
# Compute the accuracy
acc = compute_accuracy(theta_final, X, y)
print('LR:', 0.001, 'Iter:', 1000)
print('Accuracy:', acc)
# Loss plot
fig,ax = plt.subplots(num=2)
ax.set_ylabel('loss')
ax.set_xlabel('Iterations')
_=ax.plot(range(len(log_l_history)), log_l_history,'b.')
```
Learning rate $\alpha = 0.001$
Iterations = $10,000$
```python
%%time
# Run Gradient descent method
theta_final, log_l_history = gradient_descent(theta0, X, y_onehot, alpha = 0.001, iterations = 10000)
# Compute the accuracy
acc = compute_accuracy(theta_final, X, y)
print('LR:', 0.001, 'Iter:', 10000)
print('Accuracy:', acc)
# Loss plot
fig,ax = plt.subplots(num=2)
ax.set_ylabel('loss')
ax.set_xlabel('Iterations')
_=ax.plot(range(len(log_l_history)), log_l_history,'b.')
```
| LR | Iter | Accuracy | Time |
|---|---|---|---|
| 0.01 | 100 |0.616 | 780 ms |
| 0.01 | 500 | 0.78 | 3.4 s |
| 0.01 |1,000 | 0.79 | 6.57 s |
| 0.005 | 100 | 0.553 | 767 ms |
| 0.005 | 500 | 0.736 | 3.47 s |
| 0.005 | 1,000 | 0.78 | 6.63 s |
| 0.001 | 100 | 0.456 | 679 ms |
| 0.001 | 500 | 0.553 | 3.33 s |
| 0.001 | 1,000 | 0.616 | 6.61 s |
| 0.001 | 10,000 | 0.79 | 1 min 10 s |
From the data we can see that we get the best accuracy score ($0.79$) with $\alpha = 0.01$ and $1,000$ iterations.
We observe that, for the same number of iterations, we obtain better accuracy with higher learning rates.
In fact, with lower learning rates we need more iterations to reach the minimum loss, as shown by the plots.
To confirm this hypothesis, we tried $\alpha = 0.001$ with $10,000$ iterations and we get the same accuracy as with $\alpha = 0.01$ and $1,000$ iterations. So we can say that if we divide our learning rate by $n$, we need to multiply the number of iterations by the same $n$ to reach the same accuracy level.
In addition, it seems that, for the same number of iterations, changing the learning rate does not noticeably affect the time needed to execute the algorithm. Obviously, the time depends mainly on the number of iterations and increases with it.
```python
# Set the learning rates
lr = ['0.01', '0.005', '0.001']
# Get the plot for each different number of iterations
iterations = [100, 500, 1000]
accuracy = np.asarray([[0.616, 0.553, 0.456],
[0.78, 0.736, 0.553],
[0.79, 0.78, 0.616]])
for i in range(len(iterations)):
fig, ax = plt.subplots()
b = ax.bar(lr, accuracy[i], width = 0.5)
ax.set_ylim(0, 1)
ax.set_ylabel('Accuracy')
ax.set_xlabel('Learning Rates')
ax.set_title(f'Iterations: {iterations[i]}')
ax.grid(True, linewidth = 0.5)
plt.show()
```
## Question 4: Multinomial Naive Bayes **(6 Points)**
### Code and Theory
The Naive Bayes classifier is a probabilistic machine learning model often used for classification tasks, e.g. document classification problems.
In the multinomial Naive Bayes classification you generally have $K>2$ classes, and the features are assumed to be generated from a multinomial distribution.
##### __*Example Data*__
Most models treat the input data as raw feature values. MultinomialNB, being used mainly in the field of document classification, instead works with counts: each entry records how many times feature $X_i$ occurs in the sample. Basically, it is a count of each feature within each document.
Taking into account $D=3$ documents and a vocabulary consisting of $N=4$ words, the data are considered as follows.
| | $w_1$ | $w_2$ | $w_3$ | $w_4$ |
|---|---|---|---|---|
| $d_1$ | 3 | 0 | 1 | 1 |
| $d_2$ | 2 | 1 | 3 |0|
| $d_3$ | 2 | 2 | 0 |2|
By randomly generating the class to which each document belongs we have $y=[1,0,1]$
##### __*A bit of notation*__
- $Y =\{y_1, y_2, ... , y_{|Y|}\}$: set of classes
- $V =\{w_1, w_2, ... , w_{|V|}\}$: set of vocabulary
- $D =\{d_1, d_2, ... , d_{|D|}\}$: set of documents
- $N_{yi}$: count of a specific word $w_i$ in each unique class, e.g. for $y=1$ you select $d_1$ and $d_3$, so for the third column you have $N_{y,3}=1$ (see the quick check below)
- $N_y$: total count of features for a specific class, e.g. for $y=1$ you sum all row values whose corresponding label is 1, so $N_y=11$
- $n$: total number of features (words in vocabulary)
- $\alpha$: smoothing parameters
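A quick check of these example counts (a minimal sketch using only the toy table above, independent of the graded implementation below):

```python
import numpy as np

X_toy = np.array([[3, 0, 1, 1],
                  [2, 1, 3, 0],
                  [2, 2, 0, 2]])
y_toy = np.array([1, 0, 1])

N_y_class1 = X_toy[y_toy == 1].sum()          # total feature count for y=1 -> 11
N_y1_word3 = X_toy[y_toy == 1][:, 2].sum()    # count of w_3 in class y=1 -> 1
print(N_y_class1, N_y1_word3)
```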
##### __*Task*__
Find the class $y$ to which the document is most likely to belong given the words $w$.
Use the Bayes formula and the posterior probability for it.
Bayes Formula:
\begin{equation}
P(A|B) = \frac{P(A)*P(B|A)}{P(B)}
\end{equation}
Where:
- P(A): Prior probability of A
- P(B): Prior probability of B
- P(B|A): Likelihood; in multinomial Naive Bayes each word $w_i$ contributes a factor of the form:
\begin{equation}
P(B|A) = \left(\frac{N_{yi}+\alpha}{N_{y}+\alpha*n\_features}\right)^{X_{doc,i}}
\end{equation}
**Reminder: do not change any part of this notebook outside the assigned work spaces**
#### Generate random dataset
```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
X, y = make_classification(n_samples=300, n_features=7, n_informative=7, n_redundant=0, n_classes=3, random_state=1)
X = np.floor(X)-np.min(np.floor(X))
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train.shape, y_train.shape
```
((240, 7), (240,))
#### Step0: $N_y$ and $N_{yi}$
```python
def feature_count(X, y, classes, n_classes, n_features):
'''
Function to compute the count of a specific word in each unique class and the total count.
Input:
X: it's the input data matrix.
y: label array
classes: unique values of y
n_classes: number of classes
n_features: it's the number of word in Vocabulary.
Output:
N_yi: count of a specific word $w_i$ in each unique class
N_y: total count of features for a specific class
'''
N_yi = np.zeros((n_classes, n_features)) # feature count
N_y = np.zeros((n_classes)) # total count
for i in range(len(y)):
# Compute N_y counting the features for each specific class
N_y[y[i]] += np.sum(X[i])
# Compute N_yi adding counting the specific words in each class
N_yi[y[i]] += (X[i])
return N_yi, N_y
```
```python
n_samples_train, n_features = X_train.shape
classes = np.unique(y_train)
n_classes = 3
alpha = 0.1
N_yi, N_y = feature_count(X_train, y_train, classes, n_classes, n_features)
```
#### Step1: Prior Probability
The probability of a document being in a specific category from the given set of documents.
######################################
Your equations here:
\begin{equation}
P(y_j) = \frac{\sum_{i=1}^{n\_samples} \textbf{1} \{ y^{(i)}=y_j \} } {n\_samples}
\end{equation}
######################################
```python
def prior_(X, y, n_classes, n_samples):
"""
Calculates prior for each unique class in y.
Input:
X: it's the input data matrix.
y: label array
n_classes: number of classes
n_samples: number of documents
Output:
P: prior probability for each class. Shape: (, n_classes)
"""
classes = np.unique(y)
P = np.zeros(n_classes)
# Implement Prior Probability P(A)
for j in classes:
P[j] = np.count_nonzero(y == j)/(n_samples)
return P
```
```python
prior_prob = prior_(X_train, y_train, n_classes, n_samples_train)
print(prior_prob)
```
[0.3 0.34583333 0.35416667]
#### Step2
Posterior Probability: The conditional probability of a word occurring in a document given that the document belongs to a particular category.
\begin{equation}
P(w_i|y_j) = \left(\frac{N_{yi}+\alpha}{N_{y}+\alpha*n\_features}\right)^{X_{doc,i}}
\end{equation}
Likelihood for a single document:
######################################
Your equations here:
\begin{equation}
P(w|y_j) = \prod_{i=1}^{n} \left( \frac{N_{yi}+\alpha}{N_{y}+\alpha*n\_features} \right)^{X_{doc,i}}
\end{equation}
######################################
```python
def posterior_(x_i, i, h, N_y, N_yi, n_features, alpha):
"""
Calculates posterior probability. aka P(w_i|y_j) using equation in the notebook.
Input:
x_i: feature x_i
i: feature index.
h: a class in y
N_yi: count of a specific word in each unique class
N_y: total count of features for a specific class
n_features: it's the number of word in Vocabulary.
alpha: smoothing parameter
Output:
posterior: P(xi | y). Float.
"""
# Implement Posterior Probability
posterior = float(((N_yi[h][i] + alpha)/(N_y[h] + alpha * n_features))**x_i)
return posterior
def likelihood_(x, h, N_y, N_yi, n_features, alpha):
"""
Calculates Likelihood P(w|j_i).
Input:
x: a row of test data. Shape(n_features,)
h: a class in y
N_yi: count of a specific word in each unique class
N_y: total count of features for a specific class
n_features: it's the number of word in Vocabulary.
alpha: smoothing parameter
Output:
likelihood: Float.
"""
tmp = []
for i in range(x.shape[0]):
tmp.append(posterior_(x[i], i, h, N_y, N_yi, n_features, alpha))
# Implement Likelihood
likelihood = float(np.prod(tmp))
return likelihood
```
```python
# Example of likelihood for first document
likelihood_(X_test[0], 0, N_y, N_yi, n_features, alpha)
```
2.7754694679413126e-53
#### Step3
Joint Likelihood that, given the words, the documents belongs to specific class
######################################
Your equations here:
\begin{equation}
P(y_i|w) = P(y_i)*P(w|y_i)
\end{equation}
######################################
Finally, from the probability that the document is in each class given the words, take the argument corresponding to the max value.
\begin{equation}
y(D) = argmax_{y \in Y} \frac{P(y|w)}{\sum_{j}P(y_j|w)}
\end{equation}
```python
def joint_likelihood(X, prior_prob, classes, n_classes, N_y, N_yi, n_features, alpha):
"""
Calculates the joint probability P(y_i|w) for each class and makes it probability.
Then take the argmax.
Input:
X: test data
prior_prob:
classes:
n_classes:
N_yi: count of a specific word in each unique class
N_y: total count of features for a specific class
n_features: it's the number of word in Vocabulary.
alpha: smoothing parameter
Output:
predicted_class: Predicted class of the documents. Int. Shape: (,#documents)
"""
samples, features = X.shape
predict_proba = np.zeros((samples,n_classes))
# Calculate Joint Likelihood of each row for each class, then normalize in order to make them probabilities
# Finally take the argmax to have the predicted class for each document
for i in range(samples):
for j in range(n_classes):
l = likelihood_(X[i], j, N_y, N_yi, n_features, alpha)
predict_proba[i][j] = (prior_prob[j]*l)
predicted_class = []
for i in range(samples):
predicted_class.append(np.argmax(predict_proba[i]/np.sum(predict_proba[i])))
return predicted_class
```
```python
yhat = joint_likelihood(X_test, prior_prob, classes, n_classes, N_y, N_yi, n_features, alpha)
```
#### Step4: Calculate the Accuracy Score
```python
print('Accuracy: ', np.round(accuracy_score(yhat, y_test),3))
```
Accuracy: 0.717
**Sanity Check**
Here we use a function from the sklearn library, one of the most widely used in machine learning. MultinomialNB() implements the required algorithm, so the result of your implementation should be equal to the output of the following function.
```python
from sklearn import naive_bayes
clf = naive_bayes.MultinomialNB(alpha=0.1)
clf.fit(X_train,y_train)
sk_y = clf.predict(X_test)
print('Accuracy: ', np.round(accuracy_score(sk_y, y_test),3))
```
Accuracy: 0.717
| aad98649cfd4895ded342e245e173e1b64fa86e5 | 534,927 | ipynb | Jupyter Notebook | .ipynb_checkpoints/FDS_Exercise2_Assignment-checkpoint.ipynb | SimBoex/FDS-homework2 | 4f64f78ceb59bc81f2b966cc9009be4790403879 | [
"MIT"
]
| null | null | null | .ipynb_checkpoints/FDS_Exercise2_Assignment-checkpoint.ipynb | SimBoex/FDS-homework2 | 4f64f78ceb59bc81f2b966cc9009be4790403879 | [
"MIT"
]
| null | null | null | .ipynb_checkpoints/FDS_Exercise2_Assignment-checkpoint.ipynb | SimBoex/FDS-homework2 | 4f64f78ceb59bc81f2b966cc9009be4790403879 | [
"MIT"
]
| null | null | null | 170.576212 | 53,420 | 0.893425 | true | 14,652 | Qwen/Qwen-72B | 1. YES
2. YES | 0.800692 | 0.779993 | 0.624534 | __label__eng_Latn | 0.940911 | 0.289332 |
```python
from sympy import *
init_printing()
```
```python
def skew(l):
    """Return the 3x3 skew-symmetric (cross-product) matrix of the 3-vector l."""
    l1, l2, l3 = l
return Matrix([
[0, -l3, l2],
[l3, 0, -l1],
[-l2, l1, 0]
])
```
```python
# define state variables
x, y, z, eta0, eps1, eps2, eps3, u, v, w, p, q, r = symbols('x y z et0 eps1 eps2 eps3 u v w p q r', real=True)
s = Matrix([x, y, z, eta0, eps1, eps2, eps3, u, v, w, p, q, r])
# position and orientation
eta = Matrix([x, y, z, eta0, eps1, eps2, eps3])
nu = Matrix([u, v, w, p, q, r])
# centre of gravity
xg, yg, zg = symbols('xg yg zg', real=True)
rg = Matrix([xg, yg, zg])
# centre of bouyancy
xb, yb, zb = symbols('xb yb zb', real=True)
rb = Matrix([xb, yb, zb])
# center of pressure
xcp, ycp, zcp = symbols('xcp ycp zcp', real=True)
rcp = Matrix([xcp, ycp, zcp])
# mass matrix
m = symbols('m', real=True, positive=True)
Ixx, Iyy, Izz = symbols('Ixx Iyy Izz')
I0 = diag(Ixx, Iyy, Izz)
M = BlockMatrix([
[m*eye(3), -m*skew(rg)],
[m*skew(rg), I0]
])
M = Matrix(M)
# M = simplify(M)
# Coriolis and centripetal matrix
nu1 = Matrix([u, v, w])
nu2 = Matrix([p, q, r])
crb = BlockMatrix([
[zeros(3), -m*skew(nu1)-m*skew(nu2)*skew(rg)],
[-m*skew(nu1)+m*skew(rg)*skew(nu2), -skew(I0*nu2)]
])
crb = Matrix(crb)
# crb = simplify(crb)
# damping matrix
Xuu, Yvv, Zww, Kpp, Mqq, Nrr = symbols(
'Xuu Yvv Zww Kpp Mqq Nrr', real=True
)
D = Matrix([
[Xuu*abs(u), 0, 0, 0, 0, 0],
[0, Yvv*abs(v), 0, 0, 0, 0],
[0, 0, Zww*abs(w), 0, 0, 0],
[0, -zcp*Yvv*abs(v), ycp*Zww*abs(w), Kpp*abs(p), 0, 0],
[zcp*Xuu*abs(u), 0, -xcp*Zww*abs(w), 0, Mqq*abs(q), 0],
[-ycp*Xuu*abs(u), xcp*Yvv*abs(v), 0, 0, 0, Nrr*abs(r)]
])
# D = simplify(D)
# rotational transform between body and NED quaternions
Tq = Rational(1,2)*Matrix([
[-eps1, -eps2, -eps3],
[eta0, -eps3, eps2],
[eps3, eta0, -eps1],
[-eps2, eps1, eta0]
])
# Tq = simplify(Tq)
Rq = Matrix([
[1-2*(eps2**2+eps3**2), 2*(eps1*eps2-eps3*eta0), 2*(eps1*eps3+eps2*eta0)],
[2*(eps1*eps2+eps3*eta0), 1-2*(eps1**2+eps3**2), 2*(eps2*eps3-eps1*eta0)],
[2*(eps1*eps3-eps2*eta0), 2*(eps2*eps3+eps1*eta0), 1-2*(eps1**2+eps2**2)]
])
Jeta = BlockMatrix([
[Rq, zeros(3)],
[zeros(4,3), Tq]
])
Jeta = Matrix(Jeta)
# Jeta = simplify(Jeta)
# bouyancy in quaternions
W, B = symbols('W B', real=True)
fg = Matrix([0, 0, W])
fb = Matrix([0, 0, -B])
Rqinv = Rq.inv()
geta = Matrix([
Rqinv*(fg+fb),
skew(rg)*Rqinv*fg + skew(rb)*Rqinv*fb
])
# geta = simplify(geta)
```
```python
print(cse(Jeta))
```
([(x0, -2*eps2**2), (x1, 1 - 2*eps3**2), (x2, 2*eps2), (x3, eps1*x2), (x4, 2*eps3), (x5, et0*x4), (x6, eps1*x4), (x7, et0*x2), (x8, -2*eps1**2), (x9, 2*eps1*et0), (x10, eps2*x4), (x11, eps1/2), (x12, -x11), (x13, eps2/2), (x14, -x13), (x15, eps3/2), (x16, -x15), (x17, et0/2)], [Matrix([
[x0 + x1, x3 - x5, x6 + x7, 0, 0, 0],
[x3 + x5, x1 + x8, x10 - x9, 0, 0, 0],
[x6 - x7, x10 + x9, x0 + x8 + 1, 0, 0, 0],
[ 0, 0, 0, x12, x14, x16],
[ 0, 0, 0, x17, x16, x13],
[ 0, 0, 0, x15, x17, x12],
[ 0, 0, 0, x14, x11, x17]])])
```python
# thrust model
Kt0, Kt1 = symbols('Kt0 Kt1', real=True)
Kt = Matrix([Kt0, Kt1])
Qt0, Qt1 = symbols('Qt0 Qt1', real=True)
Qt = Matrix([Qt0, Qt1])
# control inputs
rpm0, rpm1 = symbols('rpm0 rpm1', real=True)
rpm = Matrix([rpm0, rpm1])
de, dr = symbols('de dr', real=True)
control_vector = Matrix([rpm0, rpm1, de, dr])
# control force vector
Ft = Kt.dot(rpm)
Mt = Qt.dot(rpm)
# coefficient for each element in cost function
tauc = Matrix([
Ft*cos(de)*cos(dr),
-Ft*sin(dr),
Ft*sin(de)*cos(dr),
Mt*cos(de)*cos(dr),
-Mt*sin(dr),
Mt*sin(de)*cos(dr)
])
```
```python
etadot = Jeta*nu
nudot = M.inv()*(tauc - (crb + D)*nu - geta)
```
```python
sdot = Matrix([
etadot,
nudot
])
```
```python
print(list(set(sdot.free_symbols) - set(s.free_symbols) - set(control_vector.free_symbols)))
```
```python
# Lagrangian
alpha = symbols('\\alpha', real=True, positive=True)
L = alpha + (1-alpha)*tauc.norm()
```
```python
l = Matrix([symbols('lambda_{}'.format(var)) for var in s])
```
```python
H = l.dot(sdot) + L
```
```python
eq = H.diff(control_vector)
```
```python
sol = solve(eq, control_vector)
```
```python
L.diff(control_vector)
```
```python
tauc.transpose()*tauc
```
```python
```
| d11bfd8ebf74a363f94282abdecb81c8f0a535ac | 36,291 | ipynb | Jupyter Notebook | sam_dynamics/notebooks/dynamics.ipynb | Jollerprutt/sam_common | dd8b43b3c69eee76fe0c35a98db9dfb67f2b79f2 | [
"BSD-3-Clause"
]
| 1 | 2020-06-09T18:23:53.000Z | 2020-06-09T18:23:53.000Z | sam_dynamics/notebooks/dynamics.ipynb | Jollerprutt/sam_common | dd8b43b3c69eee76fe0c35a98db9dfb67f2b79f2 | [
"BSD-3-Clause"
]
| 3 | 2020-10-06T09:46:03.000Z | 2021-03-10T13:40:44.000Z | sam_dynamics/notebooks/dynamics.ipynb | Jollerprutt/sam_common | dd8b43b3c69eee76fe0c35a98db9dfb67f2b79f2 | [
"BSD-3-Clause"
]
| 5 | 2020-01-20T18:33:55.000Z | 2020-12-29T12:34:22.000Z | 109.310241 | 1,695 | 0.658924 | true | 1,916 | Qwen/Qwen-72B | 1. YES
2. YES | 0.839734 | 0.782662 | 0.657228 | __label__krc_Cyrl | 0.291595 | 0.365292 |
# Becoming a Junior Data Analyst | Python Programming
> Functions: Reference Solutions
## 郭耀仁
## In-class exercise: define a function `product(*args)` that returns the product of the sequence formed by `*args`
- Expected input: flexible arguments `*args`
- Expected output: a numeric value
```python
def product(*args):
"""
>>> product(0, 1, 2)
0
>>> product(1, 2, 3, 4, 5)
120
>>> product(1, 3, 5, 7, 9)
945
"""
ans = 1
for i in args:
ans *= i
return ans
```
## In-class exercise: define a function `iso_country(**kwargs)` that lets the user create a dict mapping country Alpha-3 codes to country names
<https://en.wikipedia.org/wiki/List_of_ISO_3166_country_codes>
- Expected input: variable-length keyword arguments `**kwargs`
- Expected output: a dict
```python
def iso_country(**kwargs):
"""
>>> iso_country(TWN='Taiwan')
{'TWN': 'Taiwan'}
>>> iso_country(TWN='Taiwan', USA='United States of America')
{'TWN': 'Taiwan', 'USA': 'United States of America'}
>>> iso_country(TWN='Taiwan', USA='United States of America', JPN='Japan')
{'TWN': 'Taiwan', 'USA': 'United States of America', 'JPN': 'Japan'}
"""
return kwargs
```
## In-class exercise: define a function `mean(*args)` that returns the mean of the sequence formed by `*args`
\begin{equation}
\mu = \frac{\sum_{i=1}^n x_i}{n}
\end{equation}
- Expected input: variable-length arguments `*args`
- Expected output: a number
```python
def mean(*args):
"""
>>> mean(1, 3, 5, 7, 9)
5.0
>>> mean(3, 4, 5, 6, 7)
5.0
>>> mean(3)
3.0
"""
return sum(args) / len(args)
```
## In-class exercise: define a function `std(*args)` that returns the sample standard deviation of the sequence formed by `*args`
\begin{equation}
\sigma = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2}
\end{equation}
<https://en.wikipedia.org/wiki/Standard_deviation>
- Expected input: variable-length arguments `*args`
- Expected output: a number or a string
```python
def std(*args):
"""
>>> std(1, 3, 5, 7, 9)
3.1622776601683795
>>> std(3, 4, 5, 6, 7)
1.5811388300841898
>>> std(3)
'Please input at least 2 numbers.'
"""
n = len(args)
x_bar = sum(args)/n
sse = 0
for i in args:
err = i - x_bar
se = err**2
sse += se
try:
std = (sse/(n-1))**(0.5)
return std
except ZeroDivisionError:
return "Please input at least 2 numbers."
```
## In-class exercise: define a function `fibonacci_list(N, f0=0, f1=1)` that returns a Fibonacci sequence of length `N` whose first two numbers are `f0` and `f1`
\begin{equation}
F_0 = 0, F_1 = 1 \\
F_n = F_{n-1} + F_{n-2} \text{ , For } n > 1
\end{equation}
<https://en.wikipedia.org/wiki/Fibonacci_number>
- Expected input: three integers
- Expected output: a list of length N
```python
def fibonacci_list(N, f0=0, f1=1):
"""
>>> fibonacci_list(5)
[0, 1, 1, 2, 3]
>>> fibonacci_list(5, 1, 2)
[1, 2, 3, 5, 8]
>>> fibonacci_list(10, 1, 2)
[1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
"""
fib = [f0, f1]
while len(fib) < N:
fn = fib[-1]+fib[-2]
fib.append(fn)
return fib
```
```python
# %load ../test_cases/test_cases_07.py
import unittest
class TestFunctions(unittest.TestCase):
def test_product(self):
self.assertEqual(product(0, 1, 2), 0)
self.assertEqual(product(1, 2, 3, 4, 5), 120)
self.assertEqual(product(1, 3, 5, 7, 9), 945)
def test_iso_country(self):
self.assertEqual(iso_country(TWN='Taiwan'), {'TWN': 'Taiwan'})
self.assertEqual(iso_country(TWN='Taiwan', USA='United States of America'), {'TWN': 'Taiwan', 'USA': 'United States of America'})
self.assertEqual(iso_country(TWN='Taiwan', USA='United States of America', JPN='Japan'), {'TWN': 'Taiwan', 'USA': 'United States of America', 'JPN': 'Japan'})
def test_mean(self):
self.assertAlmostEqual(mean(1, 3, 5, 7, 9), 5.0)
self.assertAlmostEqual(mean(3, 4, 5, 6, 7), 5.0)
self.assertAlmostEqual(mean(3), 3.0)
def test_std(self):
self.assertAlmostEqual(std(1, 3, 5, 7, 9), 3.1622776601683795)
self.assertAlmostEqual(std(3, 4, 5, 6, 7), 1.5811388300841898)
self.assertAlmostEqual(std(3), 'Please input at least 2 numbers.')
def test_fibonacci_list(self):
self.assertEqual(fibonacci_list(5), [0, 1, 1, 2, 3])
self.assertEqual(fibonacci_list(5, 1, 2), [1, 2, 3, 5, 8])
self.assertEqual(fibonacci_list(10, 1, 2), [1, 2, 3, 5, 8, 13, 21, 34, 55, 89])
suite = unittest.TestLoader().loadTestsFromTestCase(TestFunctions)
runner = unittest.TextTestRunner(verbosity=2)
test_results = runner.run(suite)
```
test_fibonacci_list (__main__.TestFunctions) ... ok
test_iso_country (__main__.TestFunctions) ... ok
test_mean (__main__.TestFunctions) ... ok
test_product (__main__.TestFunctions) ... ok
test_std (__main__.TestFunctions) ... ok
----------------------------------------------------------------------
Ran 5 tests in 0.009s
OK
| 875cf368ff9f7d637b4d11bf7f2a32b0d567f210 | 8,151 | ipynb | Jupyter Notebook | suggested_answers/07-suggested-answers.ipynb | datainpoint/classroom-introduction-to-python | a5d4036829eda3a0ed1a0a0af752f541e4e015e7 | [
"MIT"
]
| null | null | null | suggested_answers/07-suggested-answers.ipynb | datainpoint/classroom-introduction-to-python | a5d4036829eda3a0ed1a0a0af752f541e4e015e7 | [
"MIT"
]
| null | null | null | suggested_answers/07-suggested-answers.ipynb | datainpoint/classroom-introduction-to-python | a5d4036829eda3a0ed1a0a0af752f541e4e015e7 | [
"MIT"
]
| null | null | null | 25.794304 | 175 | 0.465832 | true | 1,924 | Qwen/Qwen-72B | 1. YES
2. YES | 0.859664 | 0.7773 | 0.668217 | __label__yue_Hant | 0.481112 | 0.390822 |
# Quantization of Signals
*This jupyter notebook is part of a [collection of notebooks](../index.ipynb) on various topics of Digital Signal Processing. Please direct questions and suggestions to [Sascha.Spors@uni-rostock.de](mailto:Sascha.Spors@uni-rostock.de).*
## Quantization Error of a Linear Uniform Quantizer
As illustrated in the [preceding section](linear_uniform_characteristic.ipynb), quantization results in two different types of distortions. Overload distortions are a consequence of exceeding the minimum/maximum amplitude of the quantizer. Granular distortions are a consequence of the quantization process when no clipping occurs. Various measures are used to quantify the distortions of a quantizer. We limit ourselves to the signal-to-noise ratio as commonly used measure.
### Signal-to-Noise Ratio
A quantizer can be evaluated by its [signal-to-noise ratio](https://en.wikipedia.org/wiki/Signal-to-noise_ratio) (SNR), which is defined as the power of the continuous amplitude signal $x[k]$ divided by the power of the quantization error $e[k]$. Under the assumption that both signals are drawn from a zero-mean wide-sense stationary (WSS) process, the average SNR is given as
\begin{equation}
SNR = 10 \cdot \log_{10} \left( \frac{\sigma_x^2}{\sigma_e^2} \right) \quad \text{ in dB}
\end{equation}
where $\sigma_x^2$ and $\sigma_e^2$ denote the variances of the signals $x[k]$ and $e[k]$, respectively. The SNR quantifies the average impact of the distortions introduced by quantization. The statistical properties of the signal $x[k]$ and the quantization error $e[k]$ are required in order to evaluate the SNR of a quantizer. First, a statistical model for the quantization error is introduced.
### Model for the Quantization Error
In order to derive the statistical properties of the quantization error, the probability density functions (PDFs) of the quantized signal $x_\text{Q}[k]$ and the error $e[k]$, as well as its bivariate PDFs have to be derived. The underlying calculus is quite tedious due to the nonlinear nature of quantization. Please refer to [[Widrow](../index.ipynb#Literature)] for a detailed treatment. The resulting model is summarized in the following. We focus on the non-clipping case $x_\text{min} \leq x[k] < x_\text{max}$ first, hence on granular distortions. Here the quantization error is in general bounded $|e[k]| < \frac{Q}{2}$.
Under the assumption that the input signal has a wide dynamic range compared to the quantization step size $Q$, the quantization error $e[k]$ can be approximated by the following statistical model
1. The quantization error $e[k]$ is not correlated with the input signal $x[k]$
2. The quantization error is [white](../random_signals/white_noise.ipynb)
$$ \Phi_{ee}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \sigma_e^2 $$
3. The probability density function (PDF) of the quantization error is given by the zero-mean [uniform distribution](../random_signals/important_distributions.ipynb#Uniform-Distribution)
$$ p_e(\theta) = \frac{1}{Q} \cdot \text{rect} \left( \frac{\theta}{Q} \right) $$
The variance of the quantization error is then [derived from its PDF](../random_signals/important_distributions.ipynb#Uniform-Distribution) as
\begin{equation}
\sigma_e^2 = \frac{Q^2}{12}
\end{equation}
Let's assume that the quantization index is represented as a binary or [fixed-point number](https://en.wikipedia.org/wiki/Fixed-point_arithmetic) with $w$ bits. The common convention for the mid-tread quantizer is that $x_\text{min}$ can be represented exactly. Half of the $2^w$ quantization indexes are used for the negative signal values, the other half for the positive ones including zero. The quantization step is then given as
\begin{equation}
Q = \frac{ |x_\text{min}|}{2^{w-1}} = \frac{ x_\text{max}}{2^{w-1} - 1}
\end{equation}
where $x_\text{max} = |x_\text{min}| - Q$. Introducing the quantization step, the variance of the quantization error can be expressed by the word length $w$ as
\begin{equation}
\sigma_e^2 = \frac{x^2_\text{max}}{3 \cdot 2^{2w}}
\end{equation}
The average power of the quantization error is quartered for each additional bit spent. Introducing the variance into the definition of the SNR yields
\begin{equation}
\begin{split}
SNR &= 10 \cdot \log_{10} \left( \frac{3 \sigma_x^2}{x^2_\text{max}} \right) + 10 \cdot \log_{10} \left( 2^{2w} \right) \\
& \approx 10 \cdot \log_{10} \left( \frac{3 \sigma_x^2}{x^2_\text{max}} \right) + 6.02 w \quad \text{in dB}
\end{split}
\end{equation}
It can now be concluded that the SNR increases by approximately 6 dB per additional bit spent. This is often referred to as the 6 dB per bit rule of thumb for linear uniform quantization. Note, this holds only under the assumptions stated above.
### Uniformly Distributed Signal
A statistical model for the input signal $x[k]$ is required in order to calculate the average SNR of a linear uniform quantizer. For a signal that conforms to a zero-mean uniform distribution and under the assumption $x_\text{max} \gg Q$ its PDF is given as
\begin{equation}
p_x(\theta) = \frac{1}{2 x_\text{max}} \text{rect}\left( \frac{\theta}{2 x_\text{max}} \right)
\end{equation}
Hence, all amplitudes between $-x_\text{max}$ and $x_\text{max}$ occur with the same probability. The variance of the signal is then calculated to
\begin{equation}
\sigma_x^2 = \frac{4 x_\text{max}^2}{12}
\end{equation}
Introducing $\sigma_x^2$ and $\sigma_e^2$ into the definition of the SNR yields
\begin{equation}
SNR = 10 \cdot \log_{10} \left( 2^{2 w} \right) \approx 6.02 \, w \quad \text{in dB}
\end{equation}
The word length $w$ and resulting SNRs for some typical digital signal representations are
| | $w$ | SNR |
|----|:----:|:----:|
| Compact Disc (CD) | 16 bit | 96 dB |
| Digital Video Disc (DVD) | 24 bit | 144 dB |
| Video Signals | 8 bit | 48 dB |
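As a quick numeric check (added here), the exact values of $10 \cdot \log_{10}(2^{2w})$ for the word lengths in the table are straightforward to compute:
```python
import numpy as np
# exact SNR according to the 6 dB per bit rule for typical word lengths
for w in (16, 24, 8):
    print('w = %2d bit: SNR = %6.2f dB' % (w, 10*np.log10(2**(2*w))))
```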
Note that the SNR values hold only if the continuous amplitude signal conforms reasonably well to a uniform PDF and if it uses the full amplitude range of the quantizer. If the latter is not the case this can be considered by introducing the level $0 < A \leq 1$ into above considerations, such that $x_\text{min} \leq \frac{x[k]}{A} < x_\text{max}$. The resulting variance is given as
\begin{equation}
\sigma_x^2 = \frac{4 x_\text{max}^2 A^2}{12}
\end{equation}
introduced into the definition of the SNR yields
\begin{equation}
SNR = 10 \cdot \log_{10} \left( 2^{2 w} \right) + 20 \cdot \log_{10} ( A ) \approx 6.02 \, w + 20 \cdot \log_{10} ( A ) \quad \text{in dB}
\end{equation}
From this it can be concluded that a level of -6 dB is equivalent to a loss of one bit in terms of SNR of the quantized signal.
#### Example - Quantization of a uniformly distributed signal
In this example the linear uniform quantization of a random signal drawn from a uniform distribution is evaluated. The amplitude range of the quantizer is $x_\text{min} = -1$ and $x_\text{max} = 1 - Q$.
```python
import numpy as np
import matplotlib.pyplot as plt
import scipy.signal as sig
w = 8 # wordlength of the quantized signal
xmin = -1  # minimum amplitude of input signal
N = 8192 # number of samples
K = 30 # maximum lag for cross-correlation
def uniform_midtread_quantizer(x, Q):
'''Uniform mid-tread quantizer with limiter.'''
# limiter
x = np.copy(x)
idx = np.where(x <= -1)
x[idx] = -1
idx = np.where(x > 1 - Q)
x[idx] = 1 - Q
# linear uniform quantization
xQ = Q * np.floor(x/Q + 1/2)
return xQ
def analyze_quantizer(x, e):
'''Compute and plot PDF, CCF and PSD of quantizer.'''
# estimated PDF of error signal
pe, bins = np.histogram(e, bins=20, density=True, range=(-Q, Q))
# estimate cross-correlation between input and error
ccf = 1/len(x) * np.correlate(x, e, mode='full')
# estimate PSD of error signal
nf, Pee = sig.welch(e, nperseg=128)
# estimate SNR
SNR = 10*np.log10((np.var(x)/np.var(e)))
print('SNR = %f in dB' % SNR)
# plot statistical properties of error signal
plt.figure(figsize=(9, 4))
plt.subplot(121)
plt.bar((bins[:-1] + bins[1:])/(2*Q), pe*Q, width=2/len(pe))
plt.title('Estimated histogram of quantization error')
plt.xlabel(r'$\theta / Q$')
plt.ylabel(r'$\hat{p}_x(\theta) / Q$')
plt.axis([-1, 1, 0, 1.2])
plt.grid()
plt.subplot(122)
plt.plot(nf*2*np.pi, Pee*6/Q**2)
plt.title('Estimated PSD of quantization error')
plt.xlabel(r'$\Omega$')
plt.ylabel(r'$\hat{\Phi}_{ee}(e^{j \Omega}) / \sigma_e^2$')
plt.axis([0, np.pi, 0, 2])
plt.grid()
plt.tight_layout()
plt.figure(figsize=(10, 6))
ccf = ccf[N-K-1:N+K-1]
kappa = np.arange(-len(ccf)//2, len(ccf)//2)
plt.stem(kappa, ccf, use_line_collection=True)
plt.title('Cross-correlation function between input signal and error')
plt.xlabel(r'$\kappa$')
plt.ylabel(r'$\varphi_{xe}[\kappa]$')
plt.grid()
# quantization step
Q = 1/(2**(w-1))
# compute input signal
np.random.seed(1)
x = np.random.uniform(size=N, low=xmin, high=(-xmin-Q))
# quantize signal
xQ = uniform_midtread_quantizer(x, Q)
e = xQ - x
# analyze quantizer
analyze_quantizer(x, e)
```
SNR = 48.090272 in dB
**Exercise**
* Change the number of bits `w` and check if the derived SNR holds
* How does the SNR change if you lower the magnitude of the minimum amplitude `xmin` of the input signal?
* What happens if you chose the magnitude of the minimum amplitude `xmin` in the range of the quantization step? Why?
Solution: The numerically computed SNR conforms well to the theoretic result derived above. Lowering the magnitude of the minimum amplitude results in a lower SNR as predicted above. The input signal $x[k]$ is correlated to the quantization error $e[k]$ if the magnitude of the minimum amplitude is lowered such that it is close to the quantization step. Here the assumptions made for the statistical model of the quantization error do not hold.
### Harmonic Signal
For a harmonic input signal $x[k] = x_\text{max} \cdot \cos[\Omega_0 k]$ the variance $\sigma_x^2$ is given by its squared [root mean square](https://en.wikipedia.org/wiki/Root_mean_square) (RMS) value
\begin{equation}
\sigma_x^2 = \frac{x_\text{max}^2}{2}
\end{equation}
Introducing this into the definition of the SNR together with the variance $\sigma_e^2$ of the quantization error yields
\begin{equation}
SNR = 10 \cdot \log_{10} \left(2^{2 w} \cdot \frac{3}{2} \right) \approx 6.02 \, w + 1.76 \quad \text{in dB}
\end{equation}
The gain of 1.76 dB with respect to the case of a uniformly distributed input signal is due to the fact that the amplitude distribution of a harmonic signal is not uniform
\begin{equation}
p_x(\theta) = \frac{1}{\pi \sqrt{1 - (\frac{\theta}{x_\text{max}})^2}}
\end{equation}
for $|\theta| < x_\text{max}$. High amplitudes are more likely to occur. The relative power of the quantization error is lower for higher amplitudes which results in an increase of the average SNR.
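As a quick numeric check of this result (an addition to the text, reusing `uniform_midtread_quantizer` and `Q` from the example above with $w=8$), the SNR of a quantized full-scale harmonic signal should be close to $6.02 \cdot 8 + 1.76 \approx 49.9$ dB:
```python
# quantize a full-scale sine and estimate its SNR
n = np.arange(100000)
x = (1 - Q) * np.cos(2*np.pi*0.01*n)  # full-scale harmonic signal (amplitude x_max = 1 - Q)
e = uniform_midtread_quantizer(x, Q) - x
print('SNR = %.2f dB' % (10*np.log10(np.var(x)/np.var(e))))
```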
### Normally Distributed Signal
So far, we have not considered clipping of the input signal $x[k]$, e.g. by ensuring that its minimum/maximum values do not exceed the limits of the quantizer. However, this cannot always be ensured for practical signals. Moreover, many practical signals cannot be modeled by a uniform distribution. For instance, a [normally distributed](../random_signals/important_distributions.ipynb#Normal-Distribution) random signal exceeds a given maximum value with non-zero probability. Hence, clipping will occur for such an input signal. Clipping results in overload distortions whose amplitude can be much higher than $\frac{Q}{2}$. For the overall average SNR both granular and overload distortions have to be included.
The root mean square (RMS) value of the normally distributed input signal is given by its standard deviation $\sigma_x$. The RMS level $A$ of the input signal normalized to the maximum level of the quantizer is given as
\begin{equation}
A = \frac{\sigma_x}{x_\text{max}}
\end{equation}
The probability that clipping occurs can be derived from the [cumulative distribution function](../random_signals/important_distributions.ipynb#Normal-Distribution) (CDF) of the normal distribution as
\begin{equation}
\Pr \{ |x[k]| > x_\text{max} \} = 1 + \text{erf} \left( \frac{-1}{\sqrt{2} A} \right)
\end{equation}
where $x_\text{max} = - x_\text{min}$ was assumed. For a normally distributed signal with a given probability that clipping occurs $\Pr \{ |x[k]| > x_\text{max} \} = 10^{-5}$ the SNR can be approximately calculated to [[Zölzer](../index.ipynb#Literature)]
\begin{equation}
SNR \approx 6.02 \, w - 8.5 \quad \text{in dB}
\end{equation}
The reduction of the SNR by 8.5 dB results from the fact that small signal values are more likely to occur for a normally distributed signal. The relative quantization error for small signals is higher, which results in a lower average SNR. Overload distortions due to clipping result in a further reduction of the average SNR.
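The RMS level implied by this clipping probability can be computed directly (a small check added here; it only accounts for the granular noise, not for the additional overload distortion):
```python
from scipy.special import erfinv
# RMS level A for Pr{|x[k]| > x_max} = 1e-5 and the resulting SNR offset 10*log10(3*A^2)
A1 = -1/(np.sqrt(2)*erfinv(1e-5 - 1))
print('A = %.3f (%.1f dB), SNR offset = %.1f dB' % (A1, 20*np.log10(A1), 10*np.log10(3*A1**2)))
```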
#### Example - Quantization of a normal distributed signal
The following example evaluates the SNR of a linear uniform quantizer with $w=8$ for a normally distributed signal $x[k]$. The SNR is computed and plotted for various RMS levels, the probabilities for clipping are shown additionally.
```python
from scipy.special import erf
w = 8 # wordlength of the quantizer
A = np.logspace(-2, 0, num=500) # RMS levels
N = int(1e6) # number of samples
np.random.seed(1)
def compute_SNR(a):
'''Numerically evaluate SNR of a quantized normally distributed signal.'''
# compute input signal
x = np.random.normal(size=N, scale=a)
# quantize signal
xQ = uniform_midtread_quantizer(x, Q)
e = xQ - x
# compute SNR
SNR = 10*np.log10((np.var(x)/np.var(e)))
return SNR
def plot_SNR(A, SNR):
'''Plot SNR.'''
# plot results
plt.figure(figsize=(8, 4))
plt.plot(20*np.log10(A), SNR)
plt.xlabel(r'RMS level $\sigma_x / x_\mathrm{min}$ in dB')
plt.ylabel('SNR in dB')
plt.grid()
# quantization step
Q = 1/(2**(w-1))
# compute SNR for given RMS levels
SNR = [compute_SNR(a) for a in A]
# plot results
plot_SNR(A, SNR)
# find maximum SNR
Amax = A[np.argmax(SNR)]
Pc = 1 + erf(-1/(np.sqrt(2)*Amax))
print(r'Maximum SNR = {0:2.3f} dB for A = {1:2.1f} dB with clipping probability {2:2.1e}'
.format(np.array(SNR).max(), 20*np.log10(Amax), Pc))
```
Maximum SNR = 40.854 dB for A = -11.7 dB with clipping probability 1.2e-04
**Exercise**
* Can you explain the overall shape of the SNR?
* For which RMS level and probability of clipping is the SNR optimal?
* Change the wordlength `w` of the quantizer. How does the SNR change?
Solution: The SNR is low for low RMS levels of the input signal since the relative level of the quantization error is high. The SNR increases with increasing level until the clipping errors become dominant which make the SNR decay after its maximum. The SNR is optimal for $A \approx -12$ dB which is equivalent to $\Pr \{ |x[k]| > x_\text{max} \} \approx 10^{-4}$. Increasing the wordlength by one bit increases the SNR approximately by 6 dB.
### Laplace Distributed Signal
The [Laplace distribution](../random_signals/important_distributions.ipynb#Laplace-Distribution) is a commonly applied model for speech and music signals. As for the normal distribution, clipping will occur with a non-zero probability. The probability that clipping occurs can be derived from the [cumulative distribution function](../random_signals/important_distributions.ipynb#Laplace-Distribution) (CDF) of the Laplace distribution as
\begin{equation}
\Pr \{ |x[k]| > x_\text{max} \} = e^{- \frac{\sqrt{2}}{A}}
\end{equation}
The SNR for a Laplace distributed signal is in general lower than for a normally distributed signal. The reason is that, compared to the normal distribution, the Laplace distribution features low signal values with higher probability and large signal values with lower probability. The relative quantization error for small signals is higher, which results in a lower average SNR. The probability of overload distortions is also higher compared to the normal distribution.
#### Example - Quantization of a Laplace distributed signal
The following example evaluates the SNR of a linear uniform quantizer with $w=8$ for a Laplace distributed signal $x[k]$. The SNR is computed and plotted for various RMS levels.
```python
w = 8 # wordlength of the quantizer
A = np.logspace(-2, 0, num=500) # relative RMS levels
N = int(1e6) # number of samples
np.random.seed(1)
def compute_SNR(a):
'''Numerically evaluate SNR of a quantized Laplace distributed signal.'''
# compute input signal
x = np.random.laplace(size=N, scale=a/np.sqrt(2))
# quantize signal
xQ = uniform_midtread_quantizer(x, Q)
e = xQ - x
# compute SNR
SNR = 10*np.log10((np.var(x)/np.var(e)))
return SNR
# quantization step
Q = 1/(2**(w-1))
# compute SNR for given RMS levels
SNR = [compute_SNR(a) for a in A]
# plot results
plot_SNR(A, SNR)
# find maximum SNR
Amax = A[np.argmax(SNR)]
Pc = np.exp(-np.sqrt(2)/Amax)
print(r'Maximum SNR = {0:2.3f} dB for A = {1:2.1f} dB with clipping probability {2:2.1e}'
.format(np.array(SNR).max(), 20*np.log10(Amax), Pc))
```
Maximum SNR = 35.581 dB for A = -16.6 dB with clipping probability 7.1e-05
**Exercise**
* Compare the SNR for the Laplace distributed signal to the case of a normally distributed signal. What is different?
Solution: The overall SNR is lower compared to the case of a normally distributed signal. Its maximum is also at lower RMS levels. Both can be explained by the properties of the Laplace distribution discussed above.
**Copyright**
This notebook is provided as [Open Educational Resource](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebook for your own purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Sascha Spors, Digital Signal Processing - Lecture notes featuring computational examples*.
| 5162ef41c22c0714b2b609c75dcc338fbd960a8f | 377,633 | ipynb | Jupyter Notebook | quantization/linear_uniform_quantization_error.ipynb | ZeroCommits/digital-signal-processing-lecture | e1e65432a5617a309ec02327a14962e37a0f7ec5 | [
"MIT"
]
| 630 | 2016-01-05T17:11:43.000Z | 2022-03-30T07:48:27.000Z | quantization/linear_uniform_quantization_error.ipynb | alirezaopmc/digital-signal-processing-lecture | e1e65432a5617a309ec02327a14962e37a0f7ec5 | [
"MIT"
]
| 12 | 2016-11-07T15:49:55.000Z | 2022-03-10T13:05:50.000Z | quantization/linear_uniform_quantization_error.ipynb | alirezaopmc/digital-signal-processing-lecture | e1e65432a5617a309ec02327a14962e37a0f7ec5 | [
"MIT"
]
| 172 | 2015-12-26T21:05:40.000Z | 2022-03-10T23:13:30.000Z | 60.132643 | 28,822 | 0.616382 | true | 5,087 | Qwen/Qwen-72B | 1. YES
2. YES | 0.721743 | 0.835484 | 0.603005 | __label__eng_Latn | 0.984686 | 0.239312 |
# Multibody dynamics of simple biomechanical models
> Marcos Duarte
> Laboratory of Biomechanics and Motor Control ([http://demotu.org/](http://demotu.org/))
> Federal University of ABC, Brazil
The human body is composed of multiple interconnected segments (which can be modeled as rigid or flexible) and each segment may have translational and rotational movement. The part of mechanics for the study of movement and forces of interconnected bodies is called [multibody system](http://en.wikipedia.org/wiki/Multibody_system) or multibody dynamics.
There are different approaches to deduce the kinematics and dynamics of such bodies; the most common are the [Newton-Euler](http://en.wikipedia.org/wiki/Newton%E2%80%93Euler_equations) and the [Lagrangian](http://en.wikipedia.org/wiki/Lagrangian_mechanics) formalisms. The Newton-Euler formalism is based on the well-known Newton-Euler equations. The Lagrangian formalism uses the [principle of least action](http://en.wikipedia.org/wiki/Principle_of_least_action) and describes the movement based on [generalized coordinates](http://en.wikipedia.org/wiki/Generalized_coordinates), a set of parameters (typically, a convenient minimal set) that describes the configuration of the system taking into account its constraints. For a system with multiple bodies and several constraints, e.g., the human body, it is easier to describe the dynamics of such a system using the Lagrangian formalism.
Next, we will study two simple problems of multibody systems in the context of biomechanics which we can handle well using the Newton-Euler approach. First a planar one-link system (which is not a multibody!), which can represent the movement of one limb of the body or the entire body as a single inverted pendulum. Second, a planar two-link system, which can represent the movement of two segments of the body, e.g., upper arm and forearm. Zajac and Gordon (1989) and Zajac (1993) offer excellent discussions about applying multibody system concepts to understanding human body movement.
## Newton-Euler equations
For a two-dimensional movement in the $XY$ plane, the Newton-Euler equations are:
\begin{equation}
\left\{ \begin{array}{l l}
\sum F_X = m \ddot{x}_{cm} \\
\\
\sum F_Y = m \ddot{y}_{cm} \\
\\
\sum M_Z = I_{cm} \ddot{\alpha}_Z
\end{array} \right.
\label{}
\end{equation}
Where the movement is described around the body center of mass ($cm$). $(F_X,\,F_Y)$ and $M_Z$ are, respectively, the forces and moment of forces (torques) acting on the body, $(\ddot{x}_{cm},\,\ddot{y}_{cm})$ and $\ddot{\alpha}_Z$ are, respectively, the linear and angular accelerations, and $I_{cm}$ is the body moment of inertia around the $Z$ axis passing through the body center of mass.
Let's use Sympy to derive some of the characteristics of the systems.
```python
from sympy import Symbol, symbols, cos, sin, Matrix, simplify
from sympy.physics.mechanics import dynamicsymbols, mlatex, init_vprinting
init_vprinting()
from IPython.display import display, Math
```
## One-link system
Let's study the dynamics of a planar inverted pendulum as a model for the movement of a human body segment with an external force acting on the segment (see Figure 1).
<figure><figcaption><i><center>Figure. Planar inverted pendulum with joint actuators (muscles) and corresponding free body diagram. See text for notation convention.</center></i></figcaption>
The following notation convention will be used for this problem:
- $L$ is the length of the segment.
- $d$ is the distance from the joint of the segment to its center of mass position.
- $m$ is the mass of the segment.
- $g$ is the gravitational acceleration (+).
- $\alpha$ is the angular position of the joint w.r.t. the horizontal and $\ddot{\alpha}$ is the corresponding angular acceleration.
- $I$ is the moment of inertia of the segment around its center of mass position.
- $F_{r}$ is the joint reaction force.
- $F_{e}$ is the external force acting on the segment.
- $T$ is the joint moment of force (torque).
Muscles responsible for the movement of the segment are represented as a single pair of antagonistic joint actuators (e.g., flexors and extensors). We will consider that all joint torques are generated only by these muscles (we will disregard the torques generated by ligaments and other tissues) and the total or net joint torque will be the sum of the torques generated by the two muscles:
\begin{equation}
T = T_{net} = T_{extension} - T_{flexion}
\label{}
\end{equation}
Where we considered the extensor torque as positive. In what follows, we will determine only the net torque, we will be unable to decompose the net torque in its components.
### Kinetics
From the free body diagram, the Newton-Euler equations for the planar inverted pendulum are:
\begin{equation}
\begin{array}{l l}
F_{r,x} + F_{e,x} = m\ddot{x} \\
\\
F_{r,y} - mg + F_{e,y} = m\ddot{y} \\
\\
T + dF_{r,x}\sin\alpha - dF_{r,y}\cos\alpha - (L-d)F_{e,x}\sin\alpha + (L-d)F_{e,y}\cos\alpha = I\ddot{\alpha}
\end{array}
\label{}
\end{equation}
However, manually placing the terms in the Newton-Euler equations as we did above, working out the signs of the cross products by hand, is error prone. We can avoid this manual placing by treating the quantities as vectors and expressing them in matrix form:
\begin{equation}
\begin{array}{l l}
\mathbf{F}_r + \mathbf{F}_g + \mathbf{F}_e = m\ddot{\mathbf{r}} \\
\\
\mathbf{T} + \mathbf{r}_{cm,j} \times \mathbf{F}_r + \mathbf{r}_{cm,e} \times \mathbf{F}_e = I\ddot{\mathbf{\alpha}}
\end{array}
\label{}
\end{equation}
Where:
\begin{equation}
\begin{bmatrix} F_{rx} \\ F_{ry} \\ 0 \end{bmatrix} + \begin{bmatrix} 0 \\ -g \\ 0 \end{bmatrix} + \begin{bmatrix} F_{ex} \\ F_{ey} \\ 0 \end{bmatrix} = m\begin{bmatrix} \ddot{x} \\ \ddot{y} \\ 0 \end{bmatrix} , \quad \begin{bmatrix} \hat{i} \\ \hat{j} \\ \hat{k} \end{bmatrix}
\label{}
\end{equation}
\begin{equation}
\begin{bmatrix} 0 \\ 0 \\ T_z \end{bmatrix} + \begin{bmatrix} -d\cos\alpha \\ -d\sin\alpha \\ 0 \end{bmatrix} \times \begin{bmatrix} F_{rx} \\ F_{ry} \\ 0 \end{bmatrix} + \begin{bmatrix} (L-d)\cos\alpha \\ (L-d)\sin\alpha \\ 0 \end{bmatrix} \times \begin{bmatrix} F_{ex} \\ F_{ey} \\ 0 \end{bmatrix} = I_z\begin{bmatrix} 0 \\ 0 \\ \ddot{\alpha} \end{bmatrix} , \quad \begin{bmatrix} \hat{i} \\ \hat{j} \\ \hat{k} \end{bmatrix}
\label{}
\end{equation}
Note that $\times$ represents the cross product, not matrix multiplication. Then, in both symbolic and numeric manipulation we would use the cross product function to perform part of the calculations. There are different computational tools that can be used for the formulation of the equations of motion. For instance, Sympy has a module, [Classical Mechanics](http://docs.sympy.org/dev/modules/physics/mechanics/), and see [this list](http://real.uwaterloo.ca/~mbody/#Software) for other software. Let's continue with the manual formulation of the equations since they are not complex.
We can rewrite the equation for the moments of force in a form that doesn't explicitly involve the joint reaction force expressing the moments of force around the joint center:
\begin{equation}
T - mgd\cos\alpha - LF_{e,x}\sin\alpha + LF_{e,y}\cos\alpha = I_o\ddot{\alpha}
\label{}
\end{equation}
Where $I_o$ is the moment of inertia around the joint, $I_o=I_{cm}+md^2$, using the parallel axis theorem.
The torque due to the joint reaction force does not appear on this equation; this torque is null because by the definition the reaction force acts on the joint. If we want to determine the joint torque and we know the kinematics, we perform inverse dynamics:
\begin{equation}
T = I_o\ddot{\alpha} + mgd \cos \alpha + LF_{e,x}\sin\alpha - LF_{e,y}\cos\alpha
\label{}
\end{equation}
If we want to determine the kinematics and we know the joint torque, we perform direct dynamics:
\begin{equation}
\ddot{\alpha} = I_o^{-1}[T - mgd \cos \alpha - LF_{e,x}\sin\alpha + LF_{e,y}\cos\alpha ]
\label{}
\end{equation}
The expression above is a second-order differential equation which typically is solved numerically. So, unless we are explicitly interested in estimating the joint reaction forces, we don't need to use them for calculating the joint torque or simulate movement. Anyway, let's look at the kinematics of this problem to introduce some important concepts which will be needed later.
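As an illustration (an addition to the text, with made-up parameter values), the direct dynamics equation above can be integrated numerically, for instance with `scipy.integrate.solve_ivp`:
```python
# minimal direct-dynamics sketch for the one-link case with a constant joint
# torque, no external force, and illustrative (made-up) parameters
import numpy as np
from scipy.integrate import solve_ivp

m_, d_, L_, g_ = 2.0, 0.2, 0.45, 9.81  # kg, m, m, m/s2 (illustrative values)
Io_ = m_*(0.3*L_)**2 + m_*d_**2        # moment of inertia around the joint
T_ = 2.0                               # constant joint torque in Nm (assumed)

def onelink_ode(t, s):
    a_, ad_ = s                                 # angular position and velocity
    add_ = (T_ - m_*g_*d_*np.cos(a_))/Io_       # direct dynamics with Fe = 0
    return [ad_, add_]

res = solve_ivp(onelink_ode, [0, 1], [0, 0], max_step=0.01)  # start at rest, horizontal
print('angle after 1 s: %.2f rad' % res.y[0, -1])
```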
### Kinematics
A single planar inverted pendulum has one degree of freedom, the rotation movement of the segment around the pin joint. In this case, if the angular position $\alpha(t)$ is known, the coordinates $x(t)$ and $y(t)$ of the center of mass and their derivatives can be readily determined (a process referred as forward kinematics):
```python
t = Symbol('t')
d, L = symbols('d L', positive=True)
a = dynamicsymbols('alpha')
```
```python
x, y = d*cos(a), d*sin(a)
xd, yd = x.diff(t), y.diff(t)
xdd, ydd = xd.diff(t), yd.diff(t)
display(Math(r'x=' + mlatex(x)))
display(Math(r'\dot{x}=' + mlatex(xd)))
display(Math(r'\ddot{x}=' + mlatex(xdd)))
display(Math(r'y=' + mlatex(y)))
display(Math(r'\dot{y}=' + mlatex(yd)))
display(Math(r'\ddot{y}=' + mlatex(ydd)))
```
$$x=d \operatorname{cos}\left(\alpha\right)$$
$$\dot{x}=- d \operatorname{sin}\left(\alpha\right) \dot{\alpha}$$
$$\ddot{x}=- d \operatorname{sin}\left(\alpha\right) \ddot{\alpha} - d \operatorname{cos}\left(\alpha\right) \dot{\alpha}^{2}$$
$$y=d \operatorname{sin}\left(\alpha\right)$$
$$\dot{y}=d \operatorname{cos}\left(\alpha\right) \dot{\alpha}$$
$$\ddot{y}=- d \operatorname{sin}\left(\alpha\right) \dot{\alpha}^{2} + d \operatorname{cos}\left(\alpha\right) \ddot{\alpha}$$
The terms in $\ddot{x}$ and $\ddot{y}$ proportional to $\dot{\alpha}^2$ are components of the centripetal acceleration on the body. As the name suggests, the [centripetal](http://en.wikipedia.org/wiki/Centripetal_force) acceleration is always directed to the center (towards the joint) when the segment is rotating. See the notebook [Kinematic chain](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/KinematicChain.ipynb) for more on that.
As an exercise, let's go back to the Newton-Euler equation for the sum of torques around the center of mass where the torques due to the joint reaction forces are explicit. From the equation for the the sum of forces, hence we have expressions for the linear accelerations, we can isolate the reaction forces and substitute them on the equation for the torques. With a little help from Sympy:
```python
m, I, g = symbols('m I g', positive=True)
Fex, Fey = symbols('F_ex F_ey')
add = a.diff(t, 2)
```
```python
Frx = m*xdd - Fex
Fry = m*ydd + m*g - Fey
display(Math(r'F_{rx}=' + mlatex(Frx)))
display(Math(r'F_{ry}=' + mlatex(Fry)))
```
$$F_{rx}=- F_{ex} + m \left(- d \operatorname{sin}\left(\alpha\right) \ddot{\alpha} - d \operatorname{cos}\left(\alpha\right) \dot{\alpha}^{2}\right)$$
$$F_{ry}=- F_{ey} + g m + m \left(- d \operatorname{sin}\left(\alpha\right) \dot{\alpha}^{2} + d \operatorname{cos}\left(\alpha\right) \ddot{\alpha}\right)$$
```python
T = I*add - d*sin(a)*Frx + d*cos(a)*Fry + (L-d)*sin(a)*Fex - (L-d)*cos(a)*Fey
display(Math(r'T\quad=\quad ' + mlatex(T)))
```
$$T\quad=\quad F_{ex} \left(L - d\right) \operatorname{sin}\left(\alpha\right) - F_{ey} \left(L - d\right) \operatorname{cos}\left(\alpha\right) + I \ddot{\alpha} - d \left(- F_{ex} + m \left(- d \operatorname{sin}\left(\alpha\right) \ddot{\alpha} - d \operatorname{cos}\left(\alpha\right) \dot{\alpha}^{2}\right)\right) \operatorname{sin}\left(\alpha\right) + d \left(- F_{ey} + g m + m \left(- d \operatorname{sin}\left(\alpha\right) \dot{\alpha}^{2} + d \operatorname{cos}\left(\alpha\right) \ddot{\alpha}\right)\right) \operatorname{cos}\left(\alpha\right)$$
This equation for the torques around the center of mass of only one rotating segment seems too complicated. The equation we derived before for the torques around the joint was much simpler. However, if we look at the terms on this last equation, we can simplify most of them. Let's use Sympy to simplify this equation:
```python
T = simplify(T)
display(Math(r'T=' + mlatex(T)))
```
$$T=F_{ex} L \operatorname{sin}\left(\alpha\right) - F_{ey} L \operatorname{cos}\left(\alpha\right) + I \ddot{\alpha} + d^{2} m \ddot{\alpha} + d g m \operatorname{cos}\left(\alpha\right)$$
And we are back to the more simple equation we've seen before. The first two terms on the right side are the torque due to the external force, the third and fourth are the moment of inertia around the joint (use the theorem of parallel axis) times the acceleration, and the last term is the gravitational torque.
But what happened with all the other terms in the equation?
First, the terms proportional to the angular acceleration were just components from each direction of the 'inertial' torque that when summed resulted in $md^2\ddot{\alpha}$.
Second, the terms proportional to $\dot{\alpha}^2$ are components of the torque due to the centripetal force (acceleration). But the centripetal force passes through the joint as well as through the center of mass, i.e., it has zero lever arm and this torque should be zero. Indeed, when summed these terms are canceled out.
Now let's study a two-link system which can rotate independently around each joint. We will see that now the torque due to the centripetal force will usually not cancel out and a new torque component will appear.
### The Jacobian matrix
Another way to deduce the velocity and acceleration of a point at the rotating link is to use the [Jacobian matrix](http://en.wikipedia.org/wiki/Jacobian_matrix_and_determinant) (see [Kinematic chain](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/KinematicChain.ipynb)). Remember that in the context of kinematic chains, the Jacobian is a matrix of all first-order partial derivatives of the linear position vector of the endpoint with respect to the angular position vector. For the planar one-link case, this means that the Jacobian matrix is:
\begin{equation}
\mathbf{J}=
\begin{bmatrix}
\dfrac{\partial x}{\partial \alpha} \\
\dfrac{\partial y}{\partial \alpha} \\
\end{bmatrix}
\label{}
\end{equation}
```python
r = Matrix((x, y))
J = r.diff(a)
display(Math(r'\mathbf{J}=' + mlatex(J)))
```
$$\mathbf{J}=\left[\begin{matrix}- d \operatorname{sin}\left(\alpha\right)\\d \operatorname{cos}\left(\alpha\right)\end{matrix}\right]$$
And Sympy has a function to calculate the Jacobian:
```python
J = r.jacobian([a])
display(Math(r'\mathbf{J}=' + mlatex(J)))
```
$$\mathbf{J}=\left[\begin{matrix}- d \operatorname{sin}\left(\alpha\right)\\d \operatorname{cos}\left(\alpha\right)\end{matrix}\right]$$
The linear velocity of a point in the link will be given by the product between the Jacobian of the kinematic link and its angular velocity:
\begin{equation}
\mathbf{v} = \mathbf{J} \dot{\alpha}
\label{}
\end{equation}
Using Sympy:
```python
vel = J*a.diff(t)
display(Math(r'\begin{bmatrix} \dot{x} \\ \dot{y} \end{bmatrix}=' + mlatex(vel)))
```
$$\begin{bmatrix} \dot{x} \\ \dot{y} \end{bmatrix}=\left[\begin{matrix}- d \operatorname{sin}\left(\alpha\right) \dot{\alpha}\\d \operatorname{cos}\left(\alpha\right) \dot{\alpha}\end{matrix}\right]$$
And the linear acceleration will be given by the derivative of this last expression:
\begin{equation}
\mathbf{a} = \dot{\mathbf{J}} \dot{\alpha} + \mathbf{J} \ddot{\alpha}
\label{}
\end{equation}
And using Sympy again:
```python
acc = (J*a.diff(t)).diff(t)
display(Math(r'\begin{bmatrix} \ddot{x} \\ \ddot{y} \end{bmatrix}=' + mlatex(acc)))
```
$$\begin{bmatrix} \ddot{x} \\ \ddot{y} \end{bmatrix}=\left[\begin{matrix}- d \operatorname{sin}\left(\alpha\right) \ddot{\alpha} - d \operatorname{cos}\left(\alpha\right) \dot{\alpha}^{2}\\- d \operatorname{sin}\left(\alpha\right) \dot{\alpha}^{2} + d \operatorname{cos}\left(\alpha\right) \ddot{\alpha}\end{matrix}\right]$$
Same expressions as before.
We can also use the Jacobian matrix to calculate the torque due to a force on the link:
\begin{equation}
T = \mathbf{J}^T \begin{bmatrix} F_{ex} \\ F_{ey} \end{bmatrix}
\label{}
\end{equation}
```python
Te = J.T*Matrix((Fex, Fey))
display(Math(r'T_e=' + mlatex(Te[0])))
```
$$T_e=- F_{ex} d \operatorname{sin}\left(\alpha\right) + F_{ey} d \operatorname{cos}\left(\alpha\right)$$
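As a quick sanity check (an addition to the notebook), the same expression follows from the $z$ component of the cross product $\mathbf{r} \times \mathbf{F}$ with the force applied at the center of mass:
```python
# z component of r x F for a force applied at the center of mass
r3 = Matrix((d*cos(a), d*sin(a), 0))
F3 = Matrix((Fex, Fey, 0))
display(Math(r'(\mathbf{r}\times\mathbf{F})_z=' + mlatex(r3.cross(F3)[2])))
```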
## Two-link system
Let's study the dynamics of a planar double inverted pendulum (see Figure 2) as a model of two interconnected segments in the human body with an external force acting on the distal segment. Once again, we will consider that there are muscles around each joint and they generate torques.
<figure><figcaption><i><center>Figure. Planar double inverted pendulum with joint actuators (muscles) and corresponding free body diagrams. See text for notation convention.</center></i></figcaption>
The following notation convention will be used for this problem:
- Subscript $i$ runs 1 or 2 meaning first (most proximal) or second joint when referring to angles, joint moments, or joint reaction forces, or meaning first or second segment when referring to everything else.
- $L_i$ is the length of segment $i$.
- $d_i$ is the distance from the proximal joint of segment $i$ to its center of mass position.
- $m_i$ is the mass of segment $i$.
- $g$ is the gravitational acceleration (+).
- $\alpha_i$ is the angular position of joint $i$ in the joint space, $\ddot{\alpha_i}$ is the corresponding angular acceleration.
- $\theta_i$ is the angular position of joint $i$ in the segmental space w.r.t. horizontal, $\theta_1=\alpha_1$ and $\theta_2=\alpha_1+\alpha_2$.
- $I_i$ is the moment of inertia of segment $i$ around its center of mass position.
- $F_{ri}$ is the reaction force of joint $i$.
- $F_{e}$ is the external force acting on the distal segment.
- $T_i$ is the moment of force (torque) of joint $i$.
Since we know we will need the linear accelerations to solve the Newton-Euler equations, let's deduce them first.
### Kinematics
Once again, if the angular positions $\alpha_1(t)$ and $\alpha_2(t)$ are known, the coordinates $(x_1(t), y_1(t))$ and $(x_2(t), y_2(t))$ and their derivatives can be readily determined (by forward kinematics):
#### Link 1
```python
t = Symbol('t')
d1, d2, L1, L2 = symbols('d1, d2, L_1 L_2', positive=True)
a1, a2 = dynamicsymbols('alpha1 alpha2')
a1d, a2d = a1.diff(t), a2.diff(t)
a1dd, a2dd = a1.diff(t, 2), a2.diff(t, 2)
```
```python
x1, y1 = d1*cos(a1), d1*sin(a1)
x1d, y1d = x1.diff(t), y1.diff(t)
x1dd, y1dd = x1d.diff(t), y1d.diff(t)
display(Math(r'x_1=' + mlatex(x1)))
display(Math(r'\dot{x}_1=' + mlatex(x1d)))
display(Math(r'\ddot{x}_1=' + mlatex(x1dd)))
display(Math(r'y_1=' + mlatex(y1)))
display(Math(r'\dot{y}_1=' + mlatex(y1d)))
display(Math(r'\ddot{y}_1=' + mlatex(y1dd)))
```
$$x_1=d_{1} \operatorname{cos}\left(\alpha_{1}\right)$$
$$\dot{x}_1=- d_{1} \operatorname{sin}\left(\alpha_{1}\right) \dot{\alpha}_{1}$$
$$\ddot{x}_1=- d_{1} \operatorname{sin}\left(\alpha_{1}\right) \ddot{\alpha}_{1} - d_{1} \operatorname{cos}\left(\alpha_{1}\right) \dot{\alpha}_{1}^{2}$$
$$y_1=d_{1} \operatorname{sin}\left(\alpha_{1}\right)$$
$$\dot{y}_1=d_{1} \operatorname{cos}\left(\alpha_{1}\right) \dot{\alpha}_{1}$$
$$\ddot{y}_1=- d_{1} \operatorname{sin}\left(\alpha_{1}\right) \dot{\alpha}_{1}^{2} + d_{1} \operatorname{cos}\left(\alpha_{1}\right) \ddot{\alpha}_{1}$$
#### Link 2
```python
x2, y2 = L1*cos(a1) + d2*cos(a1+a2), L1*sin(a1) + d2*sin(a1+a2)
x2d, y2d = x2.diff(t), y2.diff(t)
x2dd, y2dd = x2d.diff(t), y2d.diff(t)
display(Math(r'x_2=' + mlatex(x2)))
display(Math(r'\dot{x}_2=' + mlatex(x2d)))
display(Math(r'\ddot{x}_2=' + mlatex(x2dd)))
display(Math(r'y_2=' + mlatex(y2)))
display(Math(r'\dot{y}_2=' + mlatex(y2d)))
display(Math(r'\ddot{y}_2=' + mlatex(y2dd)))
```
$$x_2=L_{1} \operatorname{cos}\left(\alpha_{1}\right) + d_{2} \operatorname{cos}\left(\alpha_{1} + \alpha_{2}\right)$$
$$\dot{x}_2=- L_{1} \operatorname{sin}\left(\alpha_{1}\right) \dot{\alpha}_{1} - d_{2} \left(\dot{\alpha}_{1} + \dot{\alpha}_{2}\right) \operatorname{sin}\left(\alpha_{1} + \alpha_{2}\right)$$
$$\ddot{x}_2=- L_{1} \operatorname{sin}\left(\alpha_{1}\right) \ddot{\alpha}_{1} - L_{1} \operatorname{cos}\left(\alpha_{1}\right) \dot{\alpha}_{1}^{2} - d_{2} \left(\dot{\alpha}_{1} + \dot{\alpha}_{2}\right)^{2} \operatorname{cos}\left(\alpha_{1} + \alpha_{2}\right) - d_{2} \left(\ddot{\alpha}_{1} + \ddot{\alpha}_{2}\right) \operatorname{sin}\left(\alpha_{1} + \alpha_{2}\right)$$
$$y_2=L_{1} \operatorname{sin}\left(\alpha_{1}\right) + d_{2} \operatorname{sin}\left(\alpha_{1} + \alpha_{2}\right)$$
$$\dot{y}_2=L_{1} \operatorname{cos}\left(\alpha_{1}\right) \dot{\alpha}_{1} + d_{2} \left(\dot{\alpha}_{1} + \dot{\alpha}_{2}\right) \operatorname{cos}\left(\alpha_{1} + \alpha_{2}\right)$$
$$\ddot{y}_2=- L_{1} \operatorname{sin}\left(\alpha_{1}\right) \dot{\alpha}_{1}^{2} + L_{1} \operatorname{cos}\left(\alpha_{1}\right) \ddot{\alpha}_{1} - d_{2} \left(\dot{\alpha}_{1} + \dot{\alpha}_{2}\right)^{2} \operatorname{sin}\left(\alpha_{1} + \alpha_{2}\right) + d_{2} \left(\ddot{\alpha}_{1} + \ddot{\alpha}_{2}\right) \operatorname{cos}\left(\alpha_{1} + \alpha_{2}\right)$$
Inspecting the equations above, we see a new kind of acceleration, proportional to $\dot{\alpha}_1\dot{\alpha}_2$. This acceleration is due to the [Coriolis effect](http://en.wikipedia.org/wiki/Coriolis_effect) and is present only when there is movement in both joints.
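We can isolate this Coriolis component with Sympy (a small check added to the text):
```python
# Coriolis component of the acceleration of the center of mass of link 2
display(Math(r'\ddot{x}_2\big|_{Coriolis}=' + mlatex(x2dd.expand().coeff(a1d*a2d)*a1d*a2d)))
display(Math(r'\ddot{y}_2\big|_{Coriolis}=' + mlatex(y2dd.expand().coeff(a1d*a2d)*a1d*a2d)))
```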
### Kinetics
From the free body diagrams, the Newton-Euler equations for the planar double inverted pendulum are:
#### Link 2
\begin{equation}
\begin{array}{l l}
F_{r2x} + F_{e,x} = m_2\ddot{x}_{2} \\
\\
F_{r2y} - m_2g + F_{e,y} = m_2\ddot{y}_{2} \\
\\
T_2 + d_2F_{r2x}\sin(\alpha_1+\alpha_2) - d_2F_{r2y}\cos(\alpha_1+\alpha_2) - (L_2-d_2)F_{e,x}\sin(\alpha_1+\alpha_2) + (L_2-d_2)F_{e,y}\cos(\alpha_1+\alpha_2) = I_{2}(\ddot{\alpha}_1+\ddot{\alpha}_2)
\end{array}
\label{}
\end{equation}
#### Link 1
\begin{equation}
\begin{array}{l l}
F_{r1x} - F_{r2x} = m_1\ddot{x}_{1} \\
\\
F_{r1y} - F_{r2y} - m_1g = m_1\ddot{y}_{1} \\
\\
T_1 - T_2 + d_1F_{r1x}\sin\alpha_1 - d_1F_{r1y}\cos\alpha_1 + (L_1-d_1)F_{r2x}\sin\alpha_1 - (L_1-d_1)F_{r2y}\cos\alpha_1 = I_{1}\ddot{\alpha}_1
\end{array}
\label{}
\end{equation}
If we want to determine the joint torques and we know the kinematics of the links, the inverse dynamics approach, we isolate the joint torques in the equations above, start solving for link 2 and then link 1. To determine the kinematics knowing the joint torques, the direct dynamics approach, we isolate the joint angular accelerations in the equations above and solve the ordinary differential equations.
Let's express the equations for the torques substituting the terms we know:
```python
m1, m2, I1, I2, g = symbols('m_1, m_2, I_1 I_2 g', positive=True)
```
```python
# link 2
Fr2x = m2*x2dd - Fex
Fr2y = m2*y2dd + m2*g - Fey
T2 = I2*(a1dd+a2dd) - d2*Fr2x*sin(a1+a2) + d2*Fr2y*cos(a1+a2) + (L2-d2)*Fex*sin(a1+a2) - (L2-d2)*Fey*cos(a1+a2)
T2 = simplify(T2)
# link 1
Fr1x = m1*x1dd + Fr2x
Fr1y = m1*y1dd + Fr2y + m1*g
T1 = I1*a1dd + T2 - d1*Fr1x*sin(a1) + d1*Fr1y*cos(a1) - (L1-d1)*Fr2x*sin(a1) + (L1-d1)*Fr2y*cos(a1)
T1 = simplify(T1)
```
The expressions for the joint moments of force are:
```python
display(Math(r'T_1\quad = \quad ' + mlatex(T1)))
display(Math(r'T_2\quad = \quad ' + mlatex(T2)))
```
$$T_1\quad = \quad F_{ex} L_{1} \operatorname{sin}\left(\alpha_{1}\right) + F_{ex} L_{2} \operatorname{sin}\left(\alpha_{1} + \alpha_{2}\right) - F_{ey} L_{1} \operatorname{cos}\left(\alpha_{1}\right) - F_{ey} L_{2} \operatorname{cos}\left(\alpha_{1} + \alpha_{2}\right) + I_{1} \ddot{\alpha}_{1} + I_{2} \ddot{\alpha}_{1} + I_{2} \ddot{\alpha}_{2} + L_{1}^{2} m_{2} \ddot{\alpha}_{1} - 2 L_{1} d_{2} m_{2} \operatorname{sin}\left(\alpha_{2}\right) \dot{\alpha}_{1} \dot{\alpha}_{2} - L_{1} d_{2} m_{2} \operatorname{sin}\left(\alpha_{2}\right) \dot{\alpha}_{2}^{2} + 2 L_{1} d_{2} m_{2} \operatorname{cos}\left(\alpha_{2}\right) \ddot{\alpha}_{1} + L_{1} d_{2} m_{2} \operatorname{cos}\left(\alpha_{2}\right) \ddot{\alpha}_{2} + L_{1} g m_{2} \operatorname{cos}\left(\alpha_{1}\right) + d_{1}^{2} m_{1} \ddot{\alpha}_{1} + d_{1} g m_{1} \operatorname{cos}\left(\alpha_{1}\right) + d_{2}^{2} m_{2} \ddot{\alpha}_{1} + d_{2}^{2} m_{2} \ddot{\alpha}_{2} + d_{2} g m_{2} \operatorname{cos}\left(\alpha_{1} + \alpha_{2}\right)$$
$$T_2\quad = \quad F_{ex} L_{2} \operatorname{sin}\left(\alpha_{1} + \alpha_{2}\right) - F_{ey} L_{2} \operatorname{cos}\left(\alpha_{1} + \alpha_{2}\right) + I_{2} \ddot{\alpha}_{1} + I_{2} \ddot{\alpha}_{2} + L_{1} d_{2} m_{2} \operatorname{sin}\left(\alpha_{2}\right) \dot{\alpha}_{1}^{2} + L_{1} d_{2} m_{2} \operatorname{cos}\left(\alpha_{2}\right) \ddot{\alpha}_{1} + d_{2}^{2} m_{2} \ddot{\alpha}_{1} + d_{2}^{2} m_{2} \ddot{\alpha}_{2} + d_{2} g m_{2} \operatorname{cos}\left(\alpha_{1} + \alpha_{2}\right)$$
There is an elegant form to display the equations for the torques using generalized coordinates, $q=[\alpha_1, \alpha_2]^T$ and grouping the terms proportional to common quantities in matrices, see for example, Craig (2005, page 180), Pandy (2001), and Zatsiorsky (2002, page 383):
\begin{equation}
\begin{array}{l l}
\tau = M(q)\ddot{q} + C(q,\dot{q}) + G(q) + E(q,\dot{q})
\end{array}
\label{}
\end{equation}
Where $\tau$ is a matrix (2x1) of joint torques; $M$ is the mass or inertia matrix (2x2); $\ddot{q}$ is a matrix (2x1) of angular accelerations; $C$ is a matrix (2x1) of [centripetal](http://en.wikipedia.org/wiki/Centripetal_force) and [Coriolis](http://en.wikipedia.org/wiki/Coriolis_effect) torques; $G$ is a matrix (2x1) of gravitational torques; and $E$ is a matrix (2x1) of external torques.
Let's use Sympy to display the equations in this new form:
```python
T1, T2 = T1.expand(), T2.expand()
q1, q2 = dynamicsymbols('q_1 q_2')
q1d, q2d = q1.diff(t), q2.diff(t)
q1dd, q2dd = q1.diff(t, 2), q2.diff(t, 2)
T1 = T1.subs({a1:q1, a2:q2, a1d:q1d, a2d:q2d, a1dd:q1dd, a2dd:q2dd})
T2 = T2.subs({a1:q1, a2:q2, a1d:q1d, a2d:q2d, a1dd:q1dd, a2dd:q2dd})
```
```python
M = Matrix(((simplify(T1.coeff(q1dd)), simplify(T1.coeff(q2dd))),
(simplify(T2.coeff(q1dd)), simplify(T2.coeff(q2dd)))))
C = Matrix((simplify(T1.coeff(q1d**2)*q1d**2 + T1.coeff(q2d**2)*q2d**2 + T1.coeff(q1d*q2d)*q1d*q2d),
simplify(T2.coeff(q1d**2)*q1d**2 + T2.coeff(q2d**2)*q2d**2 + T2.coeff(q1d*q2d)*q1d*q2d)))
G = Matrix((simplify(T1.coeff(g)*g),
simplify(T2.coeff(g)*g)))
E = Matrix((simplify(T1.coeff(Fex)*Fex + T1.coeff(Fey)*Fey),
simplify(T2.coeff(Fex)*Fex + T2.coeff(Fey)*Fey)))
display(Math(r'\begin{eqnarray}\tau&=&\begin{bmatrix}\tau_1\\ \tau_2\\ \end{bmatrix} \\' +
r'M(q)&=&' + mlatex(M) + r'\\' +
r'\ddot{q}&=&' + mlatex(Matrix((q1dd, q2dd))) + r'\\' +
r'C(q,\dot{q})&=&' + mlatex(C) + r'\\' +
r'G(q)&=&' + mlatex(G) + r'\\' +
r'E(q,\dot{q})&=&' + mlatex(E) + r'\end{eqnarray}'))
```
$$\begin{eqnarray}\tau&=&\begin{bmatrix}\tau_1\\ \tau_2\\ \end{bmatrix} \\M(q)&=&\left[\begin{matrix}I_{1} + I_{2} + L_{1}^{2} m_{2} + 2 L_{1} d_{2} m_{2} \operatorname{cos}\left(q_{2}\right) + d_{1}^{2} m_{1} + d_{2}^{2} m_{2} & I_{2} + L_{1} d_{2} m_{2} \operatorname{cos}\left(q_{2}\right) + d_{2}^{2} m_{2}\\I_{2} + L_{1} d_{2} m_{2} \operatorname{cos}\left(q_{2}\right) + d_{2}^{2} m_{2} & I_{2} + d_{2}^{2} m_{2}\end{matrix}\right]\\\ddot{q}&=&\left[\begin{matrix}\ddot{q}_{1}\\\ddot{q}_{2}\end{matrix}\right]\\C(q,\dot{q})&=&\left[\begin{matrix}- L_{1} d_{2} m_{2} \left(2 \dot{q}_{1} + \dot{q}_{2}\right) \operatorname{sin}\left(q_{2}\right) \dot{q}_{2}\\L_{1} d_{2} m_{2} \operatorname{sin}\left(q_{2}\right) \dot{q}_{1}^{2}\end{matrix}\right]\\G(q)&=&\left[\begin{matrix}g \left(L_{1} m_{2} \operatorname{cos}\left(q_{1}\right) + d_{1} m_{1} \operatorname{cos}\left(q_{1}\right) + d_{2} m_{2} \operatorname{cos}\left(q_{1} + q_{2}\right)\right)\\d_{2} g m_{2} \operatorname{cos}\left(q_{1} + q_{2}\right)\end{matrix}\right]\\E(q,\dot{q})&=&\left[\begin{matrix}F_{ex} \left(L_{1} \operatorname{sin}\left(q_{1}\right) + L_{2} \operatorname{sin}\left(q_{1} + q_{2}\right)\right) - F_{ey} \left(L_{1} \operatorname{cos}\left(q_{1}\right) + L_{2} \operatorname{cos}\left(q_{1} + q_{2}\right)\right)\\L_{2} \left(F_{ex} \operatorname{sin}\left(q_{1} + q_{2}\right) - F_{ey} \operatorname{cos}\left(q_{1} + q_{2}\right)\right)\end{matrix}\right]\end{eqnarray}$$
With this convention, to perform inverse dynamics we would calculate:
\begin{equation}
\tau = M(q)\ddot{q} + C(q,\dot{q}) + G(q) + E(q,\dot{q})
\label{}
\end{equation}
And for direct dynamics we would solve the differential equation:
\begin{equation}
\ddot{q} = M(q)^{-1} \left[\tau - C(q,\dot{q}) - G(q) - E(q,\dot{q}) \right]
\label{}
\end{equation}
The advantage of calculating analytically the derivatives of the position vector as function of the joint angles and using the notation above is that each term that contributes to each joint torque or acceleration can be easily identified.
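For instance, a minimal sketch of direct dynamics for this two-link system (an addition to the text; the parameter values and the constant joint torques below are illustrative, not taken from the notebook) could lambdify the matrices above and integrate the resulting first-order system:
```python
# sketch: numerical direct dynamics qdd = M^-1 [tau - C - G - E] with solve_ivp
from sympy import lambdify
from scipy.integrate import solve_ivp
import numpy as np

Q1_, Q2_, Q1d_, Q2d_ = symbols('Q1_ Q2_ Q1d_ Q2d_')
dummy = {q1.diff(t): Q1d_, q2.diff(t): Q2d_}
pars = {I1: 0.01, I2: 0.01, L1: 0.3, L2: 0.35, d1: 0.15, d2: 0.18,
        m1: 2.0, m2: 1.5, g: 9.81, Fex: 0, Fey: 0}  # assumed values
Mfun = lambdify((Q1_, Q2_), M.subs(dummy).subs({q1: Q1_, q2: Q2_}).subs(pars), 'numpy')
hfun = lambdify((Q1_, Q2_, Q1d_, Q2d_),
                (C + G + E).subs(dummy).subs({q1: Q1_, q2: Q2_}).subs(pars), 'numpy')

def twolink_ode(t, s, tau=np.array([1.0, 0.5])):  # constant joint torques (assumed)
    q, qd = s[:2], s[2:]
    qdd = np.linalg.solve(Mfun(*q), tau - hfun(*q, *qd).ravel())
    return np.r_[qd, qdd]

res2 = solve_ivp(twolink_ode, [0, 1], [0, 0, 0, 0], max_step=0.01)
```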
#### Coupling (or interaction) effects
The terms off the main diagonal in the inertia matrix (which are the same) and the centripetal and Coriolis terms represent the effects of the movement (nonzero velocity) of one joint over the other. These torques are referred to as coupling or interaction effects (see for example Hollerbach and Flash (1982) for an application of this concept in the study of the motor control of the upper limb movement).
#### Planar double pendulum
Using the same equations above, one can represent a planar double pendulum (hanging from the top, not inverted) considering the angles $\alpha_1$ and $\alpha_2$ negative, e.g., at $\alpha_1=-90^o$ and $\alpha_2=0$ the pendulum is hanging vertical.
#### WARNING: $F_r$ is not the actual joint reaction force!
For these two examples, in the Newton-Euler equations based on the free body diagrams we represented the consequences of all possible muscle forces on a joint as a net muscle torque and all forces acting on a joint as a resultant joint reaction force. That is, all forces between segments were represented as a resultant force that doesn't generate torque and a force couple (or free moment) that only generates torque. This is an important principle in mechanics of rigid bodies, see for example [this text](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/FreeBodyDiagram.ipynb). However, this principle creates the unrealistic notion that the sum of forces is applied directly on the joint (which has no further implication for a rigid body), but it is inaccurate for the understanding of the local effects on the joint. So, if we are trying to understand the stress on the joint or mechanisms of joint injury, the forces acting on the joint and on the rest of the segment must be considered individually.
#### Determination of muscle force
The torque $T$ exerted by a muscle is given by the product between the muscle-tendon moment arm $r$ and its force $F$. In the human body, more than one muscle crosses each joint and there are several joints. In this case, the torques due to the muscles are expressed in the following matrix form considering $n$ joints and $m$ muscles:
\begin{eqnarray}
\begin{bmatrix} T_1 \\ \vdots \\ T_n \end{bmatrix} = \begin{bmatrix} r_{11} & \cdots & r_{1m} \\ \vdots & \ddots & \vdots \\ r_{n1} & \cdots & r_{nm} \end{bmatrix} \begin{bmatrix} F_1 \\ \vdots \\ F_m \end{bmatrix}
\label{}
\end{eqnarray}
Where $r_{nm}$ is the moment arm about joint $n$ of the muscle $m$.
In the example of the two-link system, we sketched two uniarticular muscles for each of the two joints, consequently:
\begin{eqnarray}
\begin{bmatrix} T_1 \\ T_2 \end{bmatrix} = \begin{bmatrix} r_{1,ext} & r_{1,flex} & 0 & 0 \\ 0 & 0 & r_{2,ext} & r_{2,flex} \end{bmatrix} \begin{bmatrix} F_{1,ext} \\ -F_{1,flex} \\ F_{2,ext} \\ -F_{2,flex} \end{bmatrix}
\label{}
\end{eqnarray}
The moment arm of a muscle varies with the motion of the joints it crosses. In this case, using the [virtual work principle](http://en.wikipedia.org/wiki/Virtual_work) the moment arm can be given by (Sherman et al., 2013; Nigg and Herzog, 2006, page 634):
\begin{equation}
r(q) = \dfrac{\partial L_{MT}(q)}{\partial q}
\label{}
\end{equation}
Where $L_{MT}(q)$ is the length of the muscle-tendon unit expressed as a function of angle $q$.
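As an illustration of this relation (an addition to the text; the muscle path below is a made-up geometric example, not an anatomical model), consider a muscle spanning a single joint with origin and insertion at distances $o$ and $i$ from the joint center; its length follows from the law of cosines and the moment arm from the derivative w.r.t. the joint angle:
```python
# moment arm as the derivative of the muscle-tendon length w.r.t. the joint angle
from sympy import sqrt
o_, i_, q_ = symbols('o i q', positive=True)
Lmt = sqrt(o_**2 + i_**2 - 2*o_*i_*cos(q_))  # law of cosines for the muscle path
r_mt = Lmt.diff(q_)
display(Math(r'L_{MT}(q)=' + mlatex(Lmt)))
display(Math(r'r(q)=' + mlatex(simplify(r_mt))))
```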
For the simulation of human movement, muscles can be modeled as [Hill-type muscles](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/MuscleSimulation.ipynb), the torques they generate are given by the matrix above, and this matrix is entered in the ODE for a multibody system dynamics we deduced before:
\begin{equation}
\ddot{q} = M(q)^{-1} \left[R_{MT}(q)F_{MT}(a,L_{MT},\dot{L}_{MT}) - C(q,\dot{q}) - G(q) - E(q,\dot{q}) \right]
\label{}
\end{equation}
Where $R_{MT}$ and $F_{MT}$ are matrices for the moment arms and muscle-tendon forces, respectively.
This ODE is then solved numerically given initial values; but this problem is far from trivial for a simulation with several segments and muscles.
## Numerical simulation
Let's simulate a voluntary movement of the upper limb using the planar two-link system as a model in order to visualize the contribution of each torque term. We will ignore the muscle dynamics and we will calculate the joint torques necessary to move the upper limb from one point to another under the assumption that the movement is performed with the smoothest trajectory possible. I.e., the movement is performed with a [minimum-jerk trajectory](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/MinimumJerkHypothesis.ipynb), a hypothesis about control of voluntary movements proposed by Flash and Hogan (1985).
Once we determine the desired trajectory, we can calculate the velocity and acceleration of the segments and combine with anthropometric measures to calculate the joint torques necessary to move the segments. This means we will perform inverse dynamics.
Let's simulate a slow (4 s) and a fast (0.5 s) movement of the upper limb starting at the anatomical neutral position (upper limb at the side of the trunk) and ending with the upper arm forward at horizontal and elbow flexed at 90 degrees.
First, let's import the necessary Python libraries and customize the environment:
```python
import numpy as np
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
matplotlib.rcParams['lines.linewidth'] = 3
matplotlib.rcParams['font.size'] = 13
matplotlib.rcParams['lines.markersize'] = 5
matplotlib.rc('axes', grid=False, labelsize=14, titlesize=16, ymargin=0.05)
matplotlib.rc('legend', numpoints=1, fontsize=11)
import sys
sys.path.insert(1, r'./../functions') # add to pythonpath
```
Let's take the anthropometric data from Dempster's model (see [Body segment parameters](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/BodySegmentParameters.ipynb)):
```python
height, mass = 1.70, 70 # m, kg
L1n, L2n = 0.188*height, 0.253*height
d1n, d2n = 0.436*L1n, 0.682*L2n
m1n, m2n = 0.0280*mass, 0.0220*mass
rg1n, rg2n = 0.322, 0.468
I1n, I2n = m1n*(rg1n*L1n)**2, m2n*(rg2n*L2n)**2
```
Considering these lengths, the initial and final positions of the endpoint (finger tip) for the simulated movement will be:
```python
xi, yi = 0, -L1n-L2n
xf, yf = L1n, L2n
gn = 9.81 # gravity acceleration m/s2
```
### Slow movement
```python
duration = 4 # seconds
```
The endpoint minimum jerk trajectory will be (see [Kinematic chain in a plane (2D)](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/KinematicChain.ipynb)):
```python
from minjerk import minjerk
```
```python
time, rlin, vlin, alin, jlin = minjerk([xi, yi], [xf, yf], duration=duration)
```
Let's find the joint angles to produce this minimum-jerk trajectory (inverse kinematics):
```python
from invkin2_2d import invkin
```
```python
rang = invkin(time, rlin, L1=L1n, L2=L2n)
```
For the joint torques, we need to calculate the angular velocity and acceleration. Let's do that using numerical differentiation:
```python
def diff_c(ang, duration):
"""Numerical differentiations using the central difference for the angular data.
"""
# central difference (f(x+h)-f(x-h))/(2*h)
dt = duration/(ang.shape[0]-1)
    vang = np.empty_like(ang)
    aang = np.empty_like(ang)
    vang[:, 0] = np.gradient(ang[:, 0], dt)
    vang[:, 1] = np.gradient(ang[:, 1], dt)
    aang[:, 0] = np.gradient(vang[:, 0], dt)
    aang[:, 1] = np.gradient(vang[:, 1], dt)
    _, ax = plt.subplots(1, 3, sharex=True, figsize=(10, 3))
    ax[0].plot(time, ang*180/np.pi)
ax[0].legend(['Ang 1', 'Ang 2'], framealpha=.5, loc='best')
ax[1].plot(time, vang*180/np.pi)
ax[2].plot(time, aang*180/np.pi)
ylabel = [r'Displacement [$\mathrm{^o}$]', r'Velocity [$\mathrm{^o/s}$]', r'Acceleration [$\mathrm{^o/s^2}$]']
for i, axi in enumerate(ax):
axi.set_xlabel('Time [$s$]')
axi.set_ylabel(ylabel[i])
axi.xaxis.set_major_locator(plt.MaxNLocator(4))
axi.yaxis.set_major_locator(plt.MaxNLocator(4))
plt.tight_layout()
plt.show()
return vang, aang
vang, aang = diff_c(rang, duration)
```
```python
def dyna(time, L1n, L2n, d1n, d2n, m1n, m2n, gn, I1n, I2n, q1, q2, rang, vang, aang, Fexn, Feyn, M, C, G, E):
"""Numerical calculation and plot for the torques of a planar two-link system.
"""
from sympy import lambdify, symbols
Mfun = lambdify((I1, I2, L1, L2, d1, d2, m1, m2, q1, q2), M, 'numpy')
Mn = Mfun(I1n, I2n, L1n, L2n, d1n, d2n, m1n, m2n, rang[:, 0], rang[:, 1])
M00 = Mn[0, 0][:, 0]*aang[:, 0]
M01 = Mn[0, 1][:, 0]*aang[:, 1]
M10 = Mn[1, 0][:, 0]*aang[:, 0]
M11 = Mn[1, 1]*aang[:, 1]
Q1d, Q2d = symbols('Q1d Q2d')
dicti = {q1.diff(t, 1):Q1d, q2.diff(t, 1):Q2d}
C0fun = lambdify((L1, d2, m2, q2, Q1d, Q2d), C[0].subs(dicti), 'numpy')
C0 = C0fun(L1n, d2n, m2n, rang[:, 1], vang[:, 0], vang[:, 1])
C1fun = lambdify((L1, d2, m2, q2, Q1d, Q2d), C[1].subs(dicti), 'numpy')
C1 = C1fun(L1n, d2n, m2n, rang[:, 1], vang[:, 0], vang[:, 1])
G0fun = lambdify((L1, d1, d2, m1, m2, g, q1, q2), G[0], 'numpy')
G0 = G0fun(L1n, d1n, d2n, m1n, m2n, gn, rang[:, 0], rang[:, 1])
G1fun = lambdify((L1, d1, d2, m1, m2, g, q1, q2), G[1], 'numpy')
G1 = G1fun(L1n, d1n, d2n, m1n, m2n, gn, rang[:, 0], rang[:, 1])
E0fun = lambdify((L1, L2, q1, q2, Fex, Fey), E[0], 'numpy')
E0 = E0fun(L1n, L2n, rang[:, 0], rang[:, 1], 0, 0)
E1fun = lambdify((L1, L2, q1, q2, Fex, Fey), E[1], 'numpy')
E1 = E1fun(L1n, L2n, rang[:, 0], rang[:, 1], Fexn, Feyn)
_, ax = plt.subplots(1, 2, sharex=True, squeeze=True, figsize=(10, 4))
ax[0].plot(time, M00+M01)
ax[0].plot(time, C0)
ax[0].plot(time, G0)
ax[0].plot(time, E0)
ax[0].plot(time, M00+M01+C0+G0, 'k--', linewidth=4)
ax[0].set_ylabel(r'Torque [Nm]')
ax[0].set_title('Joint 1')
ax[1].plot(time, M10+M11, label='Mass/Inertia')
ax[1].plot(time, C1, label='Centripetal/Coriolis')
ax[1].plot(time, G1, label='Gravitational')
ax[1].plot(time, E1, label='External')
ax[1].plot(time, M10+M11+C1+G1, 'k--', linewidth=4, label='Muscular (sum)')
ax[1].set_title('Joint 2')
ax[1].legend(framealpha=.5, loc='upper right', bbox_to_anchor=(1.6, 1))
for i, axi in enumerate(ax):
axi.set_xlabel('Time [$s$]')
axi.xaxis.set_major_locator(plt.MaxNLocator(4))
axi.yaxis.set_major_locator(plt.MaxNLocator(4))
plt.tight_layout()
plt.show()
return M00, M01, M10, M11, C0, C1, G0, G1, E0, E1
```
```python
Fexn, Feyn = 0, 0
M00, M01, M10, M11, C0, C1, G0, G1, E0, E1 = dyna(time, L1n, L2n, d1n, d2n, m1n, m2n, gn, I1n, I2n,
q1, q2, rang, vang, aang, Fexn, Feyn, M, C, G, E)
```
The joint torques essentially compensate the gravitational torque.
### Fast movement
Let's see what is changed for a fast movement:
```python
duration = 0.5 # seconds
time, rlin, vlin, alin, jlin = minjerk([xi, yi], [xf, yf], duration=duration)
rang = invkin(time, rlin, L1=L1n, L2=L2n)
vang, aang = diff_c(rang, duration)
M00, M01, M10, M11, C0, C1, G0, G1, E0, E1 = dyna(time, L1n, L2n, d1n, d2n, m1n, m2n, gn, I1n, I2n,
q1, q2, rang, vang, aang, Fexn, Feyn, M, C, G, E)
```
The interaction torques are larger than the gravitational torques for most of the movement.
### Fast movement in the horizontal plane
Let's simulate a fast movement in the horizontal plane:
```python
gn = 0 # gravity acceleration m/s2
M00, M01, M10, M11, C0, C1, G0, G1, E0, E1 = dyna(time, L1n, L2n, d1n, d2n, m1n, m2n, gn, I1n, I2n,
q1, q2, rang, vang, aang, Fexn, Feyn, M, C, G, E)
```
## Exercises
1. Derive the equations of motion for a single pendulum (not inverted).
2. Derive the equations of motion for a double pendulum (not inverted).
3. Consider the double pendulum moving in the horizontal plane and with no external force. Find out the type of movement and which torque terms are changed when:
a) $\dot{\alpha}_1=0^o$
b) $\alpha_2=0^o$
c) $\dot{\alpha}_2=0^o$
d) $2\alpha_1+\alpha_2=180^o$ (hint: a two-link system with this configuration is called polar manipulator)
4. Derive the equations of motion and the torque terms using angles in the segmental space $(\theta_1,\,\theta_2)$.
5. Run the numerical simulations for the torques with different parameters.
## References
- Craig JJ (2005) [Introduction to Robotics: Mechanics and Control](http://books.google.com.br/books?id=MqMeAQAAIAAJ). 3rd Edition. Prentice Hall.
- Hollerbach JM, Flash T (1982) [Dynamic interactions between limb segments during planar arm movement](http://link.springer.com/article/10.1007%2FBF00353957). Biological Cybernetics, 44, 67-77.
- Nigg BM and Herzog W (2006) [Biomechanics of the Musculo-skeletal System](https://books.google.com.br/books?id=hOIeAQAAIAAJ&dq=editions:ISBN0470017678). 3rd Edition. Wiley.
- Pandy MG (2001) [Computer modeling and simulation](https://drive.google.com/open?id=0BxbW72zV7WmUbXZBR2VRMnF5UTA&authuser=0). Annu. Rev. Biomed. Eng., 3, 245–73.
- Sherman MA, Seth A, Delp SL (2013) [What is a moment arm? Calculating muscle effectiveness in biomechanical models using generalized coordinates](http://simtk-confluence.stanford.edu:8080/download/attachments/3376330/ShermanSethDelp-2013-WhatIsMuscleMomentArm-Final2-DETC2013-13633.pdf?version=1&modificationDate=1369103515834) in Proc. ASME Int. Design Engineering Technical Conferences (IDETC), Portland, OR, USA.
- Zajac FE (1993) [Muscle coordination of movement: a perspective](http://e.guigon.free.fr/rsc/article/Zajac93.pdf). J Biomech., 26, Suppl 1:109-24.
- Zajac FE, Gordon ME (1989) [Determining muscle's force and action in multi-articular movement](https://drive.google.com/open?id=0BxbW72zV7WmUcC1zSGpEOUxhWXM&authuser=0). Exercise and Sport Sciences Reviews, 17, 187-230.
- Zatsiorsky VM (2002) [Kinetics of human motion](http://books.google.com.br/books?id=wp3zt7oF8a0C). Human Kinetics.
| 5968af1262beb97547e26c2c8747bd6aef4d318d | 330,696 | ipynb | Jupyter Notebook | courses/modsim2018/tasks/Tasks_DuringLecture18/BMC-master/notebooks/MultibodyDynamics.ipynb | raissabthibes/bmc | 840800fb94ea3bf188847d0771ca7197dfec68e3 | [
"MIT"
]
| null | null | null | courses/modsim2018/tasks/Tasks_DuringLecture18/BMC-master/notebooks/MultibodyDynamics.ipynb | raissabthibes/bmc | 840800fb94ea3bf188847d0771ca7197dfec68e3 | [
"MIT"
]
| null | null | null | courses/modsim2018/tasks/Tasks_DuringLecture18/BMC-master/notebooks/MultibodyDynamics.ipynb | raissabthibes/bmc | 840800fb94ea3bf188847d0771ca7197dfec68e3 | [
"MIT"
]
| null | null | null | 192.601048 | 43,712 | 0.874483 | true | 14,392 | Qwen/Qwen-72B | 1. YES
2. YES | 0.718594 | 0.73412 | 0.527534 | __label__eng_Latn | 0.896528 | 0.063968 |
## Gaussian Process Regression
## Part I - Multivariate Gaussian Distribution
## 2nd Machine Learning in Heliophysics
## Boulder, CO
### 21 - 25 March 2022
### Enrico Camporeale (University of Colorado, Boulder & NOAA Space Weather Prediction Center)
#### enrico.camporeale@noaa.gov
This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.
Gaussian Process is a powerful technique for regression and classification
+ It is a <strong>non-parametric</strong> method
+ It has a much simpler algorithm than parametric equivalents (neural networks, etc.)
+ But it is harder to understand...
The output of GP is a fully probabilistic prediction in terms of Gaussian distributions (mean and variance)
# References
## The bible of GP
Available online (legally!)
http://www.gaussianprocess.org/gpml/chapters/
We will cover mostly Chapter 2 (Regression), Chapter 4 (Covariance Functions), and Chapter 5 (Hyperparameters)
# Gaussian distribution
<em>There are over 100 topics all named after Gauss</em>
https://en.wikipedia.org/wiki/List_of_things_named_after_Carl_Friedrich_Gauss
## Starting with one variable
The Gaussian distribution is arguably the most ubiquitous distribution in statistics, physics, social sciences, economy, etc.
+ Central Limit Theorem
+ Thermodynamical equilibrium (Maxwell–Boltzmann distribution)
+ Brownian motion
+ etc.
Also called <strong> Normal distribution </strong>
$$p(x|\mu,\sigma) = \frac{1}{\sqrt{2\pi}\sigma}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$$
```python
%matplotlib inline
from ipywidgets import interactive
import matplotlib.pyplot as plt
import numpy as np
def f(sigma, mu):
plt.figure(2)
x = np.linspace(-10, 10, num=1000)
plt.plot(x, 1/np.sqrt(2*np.pi)/sigma * np.exp(-0.5*(x-mu)**2/sigma**2))
plt.ylim(-0.1, 1)
plt.show()
interactive_plot = interactive(f, sigma=(0, 3.0), mu=(-3, 3, 0.5))
output = interactive_plot.children[-1]
output.layout.height = '350px'
interactive_plot
```
interactive(children=(FloatSlider(value=1.5, description='sigma', max=3.0), FloatSlider(value=0.0, description…
Why does the peak of the distribution change?
The distribution is normalized:
$$\frac{1}{\sqrt{2\pi}\sigma}\int_{-\infty}^\infty \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)dx=1$$
$$p(x|\mu,\sigma) = \frac{1}{\sqrt{2\pi}\sigma}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$$
The mean (expectation) value of a random variable $x$ normally distributed is
$\mathbb{E}(x) = \int_{-\infty}^\infty p(x) x dx = \frac{1}{\sqrt{2\pi}\sigma}\int_{-\infty}^\infty \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right) x dx = \frac{1}{\sqrt{2\pi}\sigma}\int_{-\infty}^\infty \exp\left(-\frac{z^2}{2\sigma^2}\right) (z+\mu) dz$ =
$$\mu$$
$$p(x|\mu,\sigma) = \frac{1}{\sqrt{2\pi}\sigma}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$$
The variance of a random variable $x$ is defined as
$var(x) = \mathbb{E}(x^2) - \mathbb{E}(x)^2$
When $x$ is normally distributed
$\mathbb{E}(x^2) = \frac{1}{\sqrt{2\pi}\sigma}\int_{-\infty}^\infty \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right) x^2 dx = \sigma^2 + \mu^2$
$var(x) = \sigma^2$
## Gaussian distribution of 2 variables
If two variables $x$ and $y$ are independent, their <strong> joint</strong> probability is
$$p(x,y) = p(x)p(y)$$
$$p(x,y) = \frac{1}{{2\pi}\sigma_x\sigma_y}\exp\left(-\frac{(x-\mu_x)^2}{2\sigma_x^2}-\frac{(y-\mu_y)^2}{2\sigma_y^2}\right)$$
```python
%matplotlib inline
def f(sigma_x, sigma_y):
fig = plt.figure(figsize=(10, 10))
xx, yy = np.mgrid[-10:10:0.2, -10:10:0.2]
    f = 1/(2*np.pi)/sigma_x/sigma_y * np.exp(-0.5*(xx**2/sigma_x**2+yy**2/sigma_y**2))
ax = plt.axes(projection='3d')
surf = ax.plot_surface(xx, yy, f, rstride=1, cstride=1, cmap='coolwarm', edgecolor='none')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('PDF')
fig.colorbar(surf, shrink=0.5, aspect=5) # add color bar indicating the PDF
interactive_plot = interactive(f, sigma_x=(0, 3.0), sigma_y=(0,3.0))
output = interactive_plot.children[-1]
interactive_plot
```
interactive(children=(FloatSlider(value=1.5, description='sigma_x', max=3.0), FloatSlider(value=1.5, descripti…
A better way of displaying 2D distributions is by using contour lines (isocontours).
What family of curves is represented by this equation?
$\frac{(x-\mu_x)^2}{2\sigma_x^2}+\frac{(y-\mu_y)^2}{2\sigma_y^2}=const$
```python
def f(sigma_x, sigma_y):
fig = plt.figure(figsize=(7, 7))
xx, yy = np.mgrid[-10:10:0.2, -10:10:0.2]
    f = 1/(2*np.pi)/sigma_x/sigma_y * np.exp(-0.5*(xx**2/sigma_x**2+yy**2/sigma_y**2))
ax = fig.gca()
ax.set_xlim(-10, 10)
ax.set_ylim(-10, 10)
cfset = ax.contourf(xx, yy, f, cmap='coolwarm')
ax.imshow(np.rot90(f), cmap='coolwarm', extent=[-10,10,-10,10]),
cset = ax.contour(xx, yy, f, colors='k')
ax.clabel(cset, inline=1, fontsize=10)
ax.set_xlabel('x')
ax.set_ylabel('y')
interactive_plot = interactive(f, sigma_x=(0, 3.0), sigma_y=(0,3.0))
output = interactive_plot.children[-1]
#output.layout.height = '500px'
```
```python
%matplotlib inline
interactive_plot
```
interactive(children=(FloatSlider(value=1.5, description='sigma_x', max=3.0), FloatSlider(value=1.5, descripti…
# Matrix form
$$p(x,y) = \frac{1}{{2\pi}\sigma_x\sigma_y}\exp\left(-\frac{(x-\mu_x)^2}{2\sigma_x^2}-\frac{(y-\mu_y)^2}{2\sigma_y^2}\right)$$
The 2D normal distribution can be rewritten as
$$p(x,y) = \frac{1}{2\pi\sigma_x\sigma_y}\exp\left(-\frac{1}{2}\left(\begin{bmatrix}x \\ y \end{bmatrix} - \begin{bmatrix}\mu_x \\ \mu_y \end{bmatrix}\right)^T \begin{bmatrix} \sigma_x^2 & 0 \\ 0 & \sigma_y^2 \end{bmatrix}^{-1} \left(\begin{bmatrix}x \\ y \end{bmatrix} - \begin{bmatrix}\mu_x \\ \mu_y \end{bmatrix}\right) \right)$$
that is
$$p(x,y) = \frac{1}{2\pi|\boldsymbol{D}|^{1/2}}\exp\left(-\frac{1}{2}(\boldsymbol{x}-\boldsymbol{\mu})^T \boldsymbol{D}^{-1}(\boldsymbol{x}-\boldsymbol{\mu}) \right)$$
where $\boldsymbol{x} = \begin{bmatrix} x \\ y \end{bmatrix}$ , $\boldsymbol{\mu} = \begin{bmatrix} \mu_x \\ \mu_y \end{bmatrix}$, $\boldsymbol{D}=\begin{bmatrix} \sigma_x^2 & 0 \\ 0 & \sigma_y^2 \end{bmatrix}$
We can now introduce a rotation of the coordinates $(x,y)$ via a rotation matrix $\boldsymbol{R}$ such that
$\boldsymbol{x}\rightarrow\boldsymbol{Rx}$
$$p(x,y) = \frac{1}{2\pi|\boldsymbol{D}|^{1/2}}\exp\left(-\frac{1}{2}(\boldsymbol{Rx}-\boldsymbol{R\mu})^T \boldsymbol{D}^{-1}(\boldsymbol{Rx}-\boldsymbol{R\mu}) \right)$$
which finally reduces to
$$p(x,y) = \frac{1}{2\pi|\boldsymbol{\Sigma}|^{1/2}}\exp\left(-\frac{1}{2}(\boldsymbol{x}-\boldsymbol{\mu})^T \boldsymbol{\Sigma}^{-1}(\boldsymbol{x}-\boldsymbol{\mu}) \right)$$
with $\boldsymbol{\Sigma}^{-1} = \boldsymbol{R^T}\boldsymbol{D}^{-1}\boldsymbol{R}$
$\boldsymbol{R}$ is a rotation matrix, so it is unitary: $\boldsymbol{R}\boldsymbol{R}^T=\boldsymbol{I}$, hence:
$$\boldsymbol{\Sigma} = \boldsymbol{R^T}\boldsymbol{D}\boldsymbol{R}$$
(proof: $\boldsymbol{I}=\boldsymbol{\Sigma}\boldsymbol{\Sigma}^{-1} = \boldsymbol{R}^T\boldsymbol{D}\boldsymbol{R}\boldsymbol{R}^T\boldsymbol{D}^{-1}\boldsymbol{R}=\boldsymbol{I}$)
This can now be generalized to any number of variables $D$, and we have then derived the general formula for a multivariate Gaussian distribution
$$p(\boldsymbol{x}) = \frac{1}{(2\pi)^{D/2}|\boldsymbol{\Sigma}|^{1/2}}\exp\left(-\frac{1}{2}(\boldsymbol{x}-\boldsymbol{\mu})^T \boldsymbol{\Sigma}^{-1}(\boldsymbol{x}-\boldsymbol{\mu}) \right)$$
for which there is always an appropriate tranformation of variables (rotation) that makes the variables independent.
The general rotation matrix for an angle $\theta$ in 2D is
$R=\begin{bmatrix}\cos\theta & -\sin\theta \\ \sin\theta & \cos\theta\end{bmatrix}$
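A quick numerical check of the construction $\boldsymbol{\Sigma} = \boldsymbol{R}^T\boldsymbol{D}\boldsymbol{R}$ (a sketch with assumed values for the variances and the rotation angle):

```python
import numpy as np

theta = np.deg2rad(30)                           # assumed rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
D = np.diag([2.0**2, 0.5**2])                    # assumed variances of the independent variables
Sigma = R.T @ D @ R
print(np.allclose(Sigma, Sigma.T))               # covariance stays symmetric: True
print(np.sort(np.linalg.eigvalsh(Sigma)))        # eigenvalues recover the variances [0.25, 4.0]
```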
```python
def f(sigma_x, sigma_y, theta=0):
fig = plt.figure(figsize=(7, 7))
xx, yy = np.mgrid[-5:5:0.1, -5:5:0.1]
theta = theta /180*np.pi
R=np.matrix([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
D_inv=np.matrix([[1/sigma_x**2,0],[0, 1/sigma_y**2]])
D=np.matrix([[sigma_x**2,0],[0, sigma_y**2]])
Sigma = np.matmul(np.matmul(np.transpose(R),D),R)
Sigma_inv = np.matmul(np.matmul(np.transpose(R),D_inv),R)
    f = 1/(2*np.pi)/sigma_x/sigma_y * np.exp(-0.5*(xx**2*Sigma_inv[0,0]+ 2*xx*yy*Sigma_inv[0,1]+yy**2*Sigma_inv[1,1]))
ax = fig.gca()
ax.set_xlim(-5, 5)
ax.set_ylim(-5, 5)
cfset = ax.contourf(xx, yy, f, cmap='coolwarm')
ax.imshow(np.rot90(f), cmap='coolwarm', extent=[-10,10,-10,10]),
cset = ax.contour(xx, yy, f, colors='k')
ax.clabel(cset, inline=1, fontsize=10)
ax.set_xlabel('x', fontsize=16)
ax.set_ylabel('y', fontsize=16)
ax.text(-4,3,np.core.defchararray.add('$\Sigma^{-1}=$\n',np.array_str(Sigma_inv)), fontsize=16)
ax.text(-4,-3.5,np.core.defchararray.add('$\Sigma=$\n',np.array_str(Sigma)), fontsize=16)
interactive_plot = interactive(f, sigma_x=(0, 3.0), sigma_y=(0,3.0),theta=(0,180))
output = interactive_plot.children[-1]
```
```python
interactive_plot
```
interactive(children=(FloatSlider(value=1.5, description='sigma_x', max=3.0), FloatSlider(value=1.5, descripti…
Something peculiar about these matrices?
They are symmetric!
What if instead we choose to use a matrix $\Sigma^{-1}$ that is NOT symmetric?
(Note: any matrix can be decomposed in the sum of a symmetric and an anti-symmetric matrix)
Exercise: show that the anti-symmetric part disappears from the exponent in the Gaussian
Hence: without loss of generality $\Sigma^{-1}$ can be taken as symmetric.
The inverse of a symmetric matrix is symmetric: $\Sigma$ is also symmetric
$\boldsymbol{\Sigma}$ is called the <strong> Covariance matrix</strong>
$\boldsymbol{\Sigma}^{-1}$ is called the <strong> Precision matrix</strong>
## Covariance
If we have a set of random variables $\boldsymbol{X}=\{X_1,X_2,\ldots,X_D\}$ the <strong>covariance</strong> between two variables is defined as:
$$cov(X_i,X_j)=\mathbb{E}[(X_i-\mathbb{E}[X_i]) (X_j-\mathbb{E}[X_j])]$$
and the covariance matrix is the corresponding matrix of elements $\boldsymbol{\Sigma}_{i,j}=cov(X_i,X_j)$. Hence the diagonal entries of the covariance matrix are the variances of each element of $\mathbf{X}$.
$\mathbf{\Sigma}=\begin{bmatrix}cov(X_1,X_1) & cov(X_1,X_2) & \cdots & cov(X_1,X_D) \\ cov(X_2,X_1) & cov(X_2,X_2) & \cdots & cov(X_2,X_D)\\ \vdots & \vdots & \vdots & \vdots\\ cov(X_D,X_1) & cov(X_D,X_2) & \cdots & cov(X_D,X_D) \end{bmatrix}$
Exercise: show that if two random variables $X$ and $Y$ are independent, their covariance is equal to zero.
## Partitioned covariance and precision matrices
Assume we split our $D-$ dimensional set of random variables $\boldsymbol{X}$ in two sets $\boldsymbol{x_a}$ and $\boldsymbol{x_b}$ (each can be multi-dimensional).
Likewise, we can split the mean values in two corresponding sets $\boldsymbol{\mu_a}$ and $\boldsymbol{\mu_b}$.
The vectors $\boldsymbol{X}$ and $\boldsymbol{\mu}$ can then be expressed as:
$\boldsymbol{X}=\begin{bmatrix}\boldsymbol{x_a}\\ \boldsymbol{x_b}\end{bmatrix} $, $\boldsymbol{\mu}=\begin{bmatrix}\boldsymbol{\mu_a}\\ \boldsymbol{\mu_b}\end{bmatrix} $.
The covariance matrix $\boldsymbol{\Sigma}$ can be partitioned as
$\boldsymbol{\Sigma}=\begin{bmatrix} \boldsymbol{\Sigma}_{aa} & \boldsymbol{\Sigma}_{ab}\\ \boldsymbol{\Sigma}_{ba} & \boldsymbol{\Sigma}_{bb}\end{bmatrix}$ Notice that $\boldsymbol{\Sigma}_{aa}$ and $\boldsymbol{\Sigma}_{bb}$ are still symmetric, while $\boldsymbol{\Sigma}_{ab}=\boldsymbol{\Sigma}_{ba}^T$
We can also introduce a similar partition for the precision matrix $\boldsymbol\Lambda=\boldsymbol\Sigma^{-1}$:
$\boldsymbol{\Lambda}=\begin{bmatrix} \boldsymbol{\Lambda}_{aa} & \boldsymbol{\Lambda}_{ab}\\ \boldsymbol{\Lambda}_{ba} & \boldsymbol{\Lambda}_{bb}\end{bmatrix}$
However, keep in mind that the partition of the inverse is not equal to the inverse of a partition!
$\boldsymbol{\Lambda}_{aa}\ne\boldsymbol{\Sigma}_{aa}^{-1}$
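A quick numerical check of this caveat (a sketch with an arbitrary $3\times3$ covariance matrix, taking $\boldsymbol x_a$ to be the first variable):

```python
import numpy as np

Sigma = np.array([[2.0, 0.6, 0.3],
                  [0.6, 1.5, 0.4],
                  [0.3, 0.4, 1.0]])
Lambda = np.linalg.inv(Sigma)                 # precision matrix
print(Lambda[:1, :1])                         # partition of the inverse ...
print(np.linalg.inv(Sigma[:1, :1]))           # ... is not the inverse of the partition (0.5)
```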
## When Gaussian always Gaussian
We can now reason in terms of the D-dimensional multivariate Gaussian distribution defined over the joint set $(\boldsymbol x_a,\boldsymbol x_b)$ as $p(\boldsymbol x_a,\boldsymbol x_b) = \mathcal{N}(\boldsymbol x|\boldsymbol\mu,\boldsymbol{\Sigma})$.
The Gaussian distribution has unique properties!
+ The marginal distribution $p(\boldsymbol x_a) = \int p(\boldsymbol x_a,\boldsymbol x_b) d\boldsymbol x_b$ is Gaussian
+ The conditional distribution $p(\boldsymbol x_a|\boldsymbol x_b) = \frac{p(\boldsymbol x_a,\boldsymbol x_b)}{p(\boldsymbol x_b)} = \frac{p(\boldsymbol x_a,\boldsymbol x_b)}{\int p(\boldsymbol x_a,\boldsymbol x_b) d\boldsymbol x_a}$ is Gaussian
## Marginal distribution
$p(\boldsymbol x_a, \boldsymbol x_b)=\mathcal{N}\left(\boldsymbol x|\begin{bmatrix}\boldsymbol{\mu_a}\\ \boldsymbol{\mu_b}\end{bmatrix} ,\begin{bmatrix} \boldsymbol{\Sigma}_{aa} & \boldsymbol{\Sigma}_{ab}\\ \boldsymbol{\Sigma}_{ba} & \boldsymbol{\Sigma}_{bb}\end{bmatrix}\right)$
The marginal distribution is obtained when we 'marginalize' (i.e., integrate) the distribution over a set of random variables. In the 2D graphical representation this can be understood as collapsing the distribution onto one axis.
What are the mean and covariance matrix of the marginal distribution ?
$p(\boldsymbol x_a) = \int p(\boldsymbol x_a,\boldsymbol x_b) d\boldsymbol x_b = \mathcal{N}(\boldsymbol x_a| ?, ?)$
$p(\boldsymbol x_a) = \int p(\boldsymbol x_a,\boldsymbol x_b) d\boldsymbol x_b = \mathcal{N}(\boldsymbol x_a| \boldsymbol \mu_a, \boldsymbol \Sigma_{aa})$
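A quick sampling check of this result (a sketch with assumed values):

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0])
Sigma = np.array([[1.5, 0.8],
                  [0.8, 1.0]])
samples = rng.multivariate_normal(mu, Sigma, size=200_000)
x_a = samples[:, 0]                 # marginalizing is just ignoring x_b
print(x_a.mean(), x_a.var())        # close to mu_a = 1.0 and Sigma_aa = 1.5
```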
## Conditional distribution
Whereas the result for the marginal distribution is somewhat intuitive, a less intuitive result holds for the conditional distribution, which we derive here.
The conditional distribution $p(\boldsymbol x_a| \boldsymbol x_b)$ is simply evaluated by considering the joint distribution $p(\boldsymbol x_a,\boldsymbol x_b)$ and considering $\boldsymbol x_b$ as a constant.
Using the partioning introduced above for $\boldsymbol x$, $\boldsymbol \mu$ and the precision matrix $\boldsymbol\Lambda$, we have:
$(\boldsymbol{x}-\boldsymbol{\mu})^T\boldsymbol{\Sigma}^{-1}(\boldsymbol{x}-\boldsymbol{\mu})=(\boldsymbol{x_a}-\boldsymbol{\mu_a})^T\boldsymbol{\Lambda}_{aa}(\boldsymbol{x_a}-\boldsymbol{\mu_a})+2(\boldsymbol{x_a}-\boldsymbol{\mu_a})^T\boldsymbol{\Lambda}_{ab}(\boldsymbol{x_b}-\boldsymbol{\mu_b})+(\boldsymbol{x_b}-\boldsymbol{\mu_b})^T\boldsymbol{\Lambda}_{bb}(\boldsymbol{x_b}-\boldsymbol{\mu_b})$
Now, we expect $p(\boldsymbol x_a| \boldsymbol x_b)\sim\mathcal N(\boldsymbol x_a|\boldsymbol\mu_{a|b},\boldsymbol\Sigma_{a|b})$.
A general form for the argument of the exponent is
$(\boldsymbol{x}_a-\boldsymbol{\mu_{a|b}})^T\boldsymbol{\Sigma}^{-1}_{a|b}(\boldsymbol{x}_a-\boldsymbol{\mu}_{a|b})=\boldsymbol x_a^T \boldsymbol{\Sigma}^{-1}_{a|b} \boldsymbol x_a -2 \boldsymbol x_a^T \boldsymbol{\Sigma}^{-1}_{a|b}\boldsymbol \mu_{a|b} + \boldsymbol \mu^T_{a|b} \boldsymbol{\Sigma}^{-1}_{a|b} \boldsymbol \mu_{a|b}$ (where we have used the symmetry of $\boldsymbol\Sigma_{a|b}$).
$(\boldsymbol{x}-\boldsymbol{\mu})^T\boldsymbol{\Sigma}^{-1}(\boldsymbol{x}-\boldsymbol{\mu})=(\boldsymbol{x_a}-\boldsymbol{\mu_a})^T\boldsymbol{\Lambda}_{aa}(\boldsymbol{x_a}-\boldsymbol{\mu_a})+2(\boldsymbol{x_a}-\boldsymbol{\mu_a})^T\boldsymbol{\Lambda}_{ab}(\boldsymbol{x_b}-\boldsymbol{\mu_b})+(\boldsymbol{x_b}-\boldsymbol{\mu_b})^T\boldsymbol{\Lambda}_{bb}(\boldsymbol{x_b}-\boldsymbol{\mu_b})$
$(\boldsymbol{x}_a-\boldsymbol{\mu_{a|b}})^T\boldsymbol{\Sigma}^{-1}_{a|b}(\boldsymbol{x}_a-\boldsymbol{\mu}_{a|b})=\boldsymbol x_a^T \boldsymbol{\Sigma}^{-1}_{a|b} \boldsymbol x_a -2 \boldsymbol x_a^T \boldsymbol{\Sigma}^{-1}_{a|b}\boldsymbol \mu_{a|b} + \boldsymbol \mu^T_{a|b} \boldsymbol{\Sigma}^{-1}_{a|b} \boldsymbol \mu_{a|b}$
It is now sufficient to equate equal terms in $\boldsymbol x_a$ in the above two equations.
Terms in $\boldsymbol x_a^2\longrightarrow$: $\boldsymbol x_a^T \boldsymbol{\Sigma}^{-1}_{a|b} \boldsymbol x_a = \boldsymbol x_a^T \boldsymbol{\Lambda}_{aa} \boldsymbol x_a$, from which $\boldsymbol{\Sigma}_{a|b} = \boldsymbol{\Lambda}_{aa}^{-1}$
Terms in $\boldsymbol x_a\longrightarrow$: $2\boldsymbol x_a^T(-\boldsymbol\Lambda_{aa}\boldsymbol\mu_a+\boldsymbol\Lambda_{ab}\boldsymbol (\boldsymbol x_b-\boldsymbol \mu_b))= -2\boldsymbol x_a^T\boldsymbol\Sigma^{-1}_{a|b}\boldsymbol\mu_{a|b}$ from which $\boldsymbol\mu_{a|b}=\Sigma_{a|b}(\boldsymbol\Lambda_{aa}\boldsymbol\mu_a-\boldsymbol\Lambda_{ab}\boldsymbol (\boldsymbol x_b-\boldsymbol \mu_b))=\boldsymbol\mu_a-\boldsymbol\Lambda_{aa}^{-1}\boldsymbol\Lambda_{ab}\boldsymbol (\boldsymbol x_b-\boldsymbol \mu_b)$
So far we have:
$\boldsymbol{\Sigma}_{a|b} = \boldsymbol{\Lambda}_{aa}^{-1}$
$\boldsymbol\mu_{a|b}=\boldsymbol\mu_a-\boldsymbol\Lambda_{aa}^{-1}\boldsymbol\Lambda_{ab}\boldsymbol (\boldsymbol x_b-\boldsymbol \mu_b)$
However, we would like to express the covariance matrix and the mean of the conditional distribution $p(\boldsymbol x_a| \boldsymbol x_b)$ in terms of the partitioned covariance matrix and mean of the full distribution. We need to use the following identity that relates the inverse of a partitioned matrix with the partition of the inverse:
$\begin{bmatrix}A & B \\ C & D\end{bmatrix}^{-1} = \begin{bmatrix}(A-BD^{-1}C)^{-1} & -(A-BD^{-1}C)^{-1}BD^{-1} \\-D^{-1}C(A-BD^{-1}C)^{-1} & D^{-1}+D^{-1}C(A-BD^{-1}C)^{-1}BD^{-1} \end{bmatrix}$
In our case
$\boldsymbol{\begin{bmatrix}\boldsymbol\Sigma_{aa} & \boldsymbol\Sigma_{ab} \\ \boldsymbol\Sigma_{ba} & \boldsymbol\Sigma_{bb}\end{bmatrix}^{-1} = \begin{bmatrix}\boldsymbol\Lambda_{aa} & \boldsymbol\Lambda_{ab} \\ \boldsymbol\Lambda_{ba} & \boldsymbol\Lambda_{bb}\end{bmatrix}}$
Hence: $\boldsymbol\Lambda_{aa} = (\boldsymbol\Sigma_{aa}- \boldsymbol\Sigma_{ab}\boldsymbol\Sigma_{bb}^{-1}\boldsymbol\Sigma_{ba})^{-1}$ and $\boldsymbol\Lambda_{ab} = - (\boldsymbol\Sigma_{aa}- \boldsymbol\Sigma_{ab}\boldsymbol\Sigma_{bb}^{-1}\boldsymbol\Sigma_{ba})^{-1}\boldsymbol\Sigma_{ab}\boldsymbol\Sigma_{bb}^{-1}$
and finally:
\begin{equation}\boxed{\boldsymbol\mu_{a|b}=\boldsymbol\mu_a+\boldsymbol\Sigma_{ab}\boldsymbol\Sigma_{bb}^{-1}(\boldsymbol x_b - \boldsymbol\mu_b) \\
\boldsymbol\Sigma_{a|b} = \boldsymbol\Sigma_{aa} - \boldsymbol\Sigma_{ab}\boldsymbol\Sigma_{bb}^{-1}\boldsymbol\Sigma_{ba}} \end{equation}
```python
## Example with a 2D distribution
def f(mu_a, mu_b, x_b=0, sigma_aa=1, sigma_bb=1, sigma_ab=0):
fig = plt.figure(figsize=(7, 7))
xx, yy = np.mgrid[-5:5:0.1, -5:5:0.1]
y=np.linspace(-5,5,100)
Sigma = np.matrix([[sigma_aa,sigma_ab],[sigma_ab, sigma_bb]])
Sigma_inv = np.linalg.inv(Sigma)
Sigma_det = np.linalg.det(Sigma)
    f = 1/(2*np.pi)/np.sqrt(Sigma_det) * np.exp(-0.5*((xx-mu_a)**2*Sigma_inv[0,0]+ 2*(xx-mu_a)*(yy-mu_b)*Sigma_inv[0,1]+(yy-mu_b)**2*Sigma_inv[1,1]))
mu_ab = mu_a +sigma_ab/sigma_bb*(x_b-mu_b)
Sigma_cond = sigma_aa-sigma_ab**2/sigma_bb
ax = fig.gca()
ax.set_xlim(-5, 5)
ax.set_ylim(-5, 5)
cfset = ax.contourf(xx, yy, f, cmap='coolwarm')
cset = ax.contour(xx, yy, f, colors='k')
ax.clabel(cset, inline=1, fontsize=10)
ax.plot([-5, 5],[x_b,x_b], color='black')
    ax.plot(y, x_b + 1/np.sqrt(2*np.pi*Sigma_cond) * np.exp(-0.5*(y-mu_ab)**2/Sigma_cond), color='yellow', linewidth=2)  # Sigma_cond is a variance
ax.set_xlabel('x_a', fontsize=16)
ax.set_ylabel('x_b', fontsize=16)
ax.text(-4,3,np.core.defchararray.add('$\Sigma^{-1}=$\n',np.array_str(Sigma_inv)), fontsize=16)
ax.text(-4,-3.5,np.core.defchararray.add('$\Sigma=$\n',np.array_str(Sigma)), fontsize=16)
interactive_plot = interactive(f, sigma_aa=(0, 3.0), sigma_bb=(0,3.0), sigma_ab=(0,3.0), mu_a=(-2.0,2.0), mu_b=(-2.0,2.0),x_b=(-2.0,2.0))
output = interactive_plot.children[-1]
```
```python
interactive_plot
```
interactive(children=(FloatSlider(value=-0.6, description='mu_a', max=2.0, min=-2.0), FloatSlider(value=0.8, d…
What are the interesting properties of $\boldsymbol\mu_{a|b}$ and $\boldsymbol\Sigma_{a|b}$ ??
$\boldsymbol\mu_{a|b}$ depends linearly on $\boldsymbol x_b$
$\boldsymbol\Sigma_{a|b}$ depends on all partitions of $\boldsymbol\Sigma$ but it is INDEPENDENT of $\boldsymbol x_b$
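Both properties are easy to verify numerically (a sketch with assumed values):

```python
import numpy as np

mu_a, mu_b = 0.5, -0.2                                   # assumed means
Sigma = np.array([[1.5, 0.8],
                  [0.8, 1.0]])                           # assumed covariance, order (a, b)
Sigma_cond = Sigma[0, 0] - Sigma[0, 1] / Sigma[1, 1] * Sigma[1, 0]
for x_b in (-1.0, 0.0, 2.0):
    mu_cond = mu_a + Sigma[0, 1] / Sigma[1, 1] * (x_b - mu_b)
    print(x_b, mu_cond, Sigma_cond)   # mean shifts linearly with x_b, variance does not change
```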
## End of Part 1 !!
| 532c2541c0383c41e39764c4c8e93ac355cb4786 | 138,451 | ipynb | Jupyter Notebook | Gaussian Process Regression Part 1.ipynb | ecamporeale/GP_lecture_MLHelio | 194fd9f2c2908bd286c5945d9f243a91163e1397 | [
"MIT"
]
| 8 | 2022-03-21T21:43:24.000Z | 2022-03-30T12:40:47.000Z | Gaussian Process Regression Part 1.ipynb | ecamporeale/GP_lecture_MLHelio | 194fd9f2c2908bd286c5945d9f243a91163e1397 | [
"MIT"
]
| null | null | null | Gaussian Process Regression Part 1.ipynb | ecamporeale/GP_lecture_MLHelio | 194fd9f2c2908bd286c5945d9f243a91163e1397 | [
"MIT"
]
| 1 | 2022-03-28T13:44:21.000Z | 2022-03-28T13:44:21.000Z | 145.279119 | 58,188 | 0.875573 | true | 6,793 | Qwen/Qwen-72B | 1. YES
2. YES | 0.903294 | 0.774583 | 0.699677 | __label__eng_Latn | 0.405657 | 0.463915 |
# Example Notebook for sho1d.py
Import the sho1d.py file as well as the test_sho1d.py file
```
from sympy import *
from IPython.display import display_pretty
from sympy.physics.quantum import *
from sympy.physics.quantum.sho1d import *
from sympy.physics.quantum.tests.test_sho1d import *
init_printing(pretty_print=False, use_latex=False)
```
### Printing Of Operators
Create a raising and lowering operator and make sure they print correctly
```
ad = RaisingOp('a')
a = LoweringOp('a')
```
```
ad
```
RaisingOp(a)
```
a
```
a
```
print(latex(ad))
print(latex(a))
```
a^{\dag}
a
```
display_pretty(ad)
display_pretty(a)
```
RaisingOp(a)
a
```
print(srepr(ad))
print(srepr(a))
```
RaisingOp(Symbol('a'))
LoweringOp(Symbol('a'))
```
print(repr(ad))
print(repr(a))
```
RaisingOp(a)
a
### Printing of States
Create a simple harmonic state and check its printing
```
k = SHOKet('k')
b = SHOBra('b')
```
```
k
```
|k>
```
b
```
<b|
```
print(pretty(k))
print(pretty(b))
```
❘k⟩
⟨b❘
```
print(latex(k))
print(latex(b))
```
{\left|k\right\rangle }
{\left\langle b\right|}
```
print(srepr(k))
print(srepr(b))
```
SHOKet(Symbol('k'))
SHOBra(Symbol('b'))
### Properties
Take the dagger of the raising and lowering operators. They should return each other.
```
Dagger(ad)
```
a
```
Dagger(a)
```
RaisingOp(a)
Check Commutators of the raising and lowering operators
```
Commutator(ad,a).doit()
```
-1
```
Commutator(a,ad).doit()
```
1
Take a look at the dual states of the bra and ket
```
k.dual
```
<k|
```
b.dual
```
|b>
Taking the InnerProduct of the bra and ket will return the KroneckerDelta function
```
InnerProduct(b,k).doit()
```
KroneckerDelta(k, b)
Take a look at how the raising and lowering operators act on states. We use qapply to apply an operator to a state
```
qapply(ad*k)
```
sqrt(k + 1)*|k + 1>
```
qapply(a*k)
```
sqrt(k)*|k - 1>
But the states may have an explicit energy level. Let's look at the ground and first excited states
```
kg = SHOKet(0)
kf = SHOKet(1)
```
```
qapply(ad*kg)
```
|1>
```
qapply(ad*kf)
```
sqrt(2)*|2>
```
qapply(a*kg)
```
0
```
qapply(a*kf)
```
|0>
Notice that a*kg is 0 and a*kf is |0>, the ground state.
### NumberOp & Hamiltonian
Let's look at the Number Operator and Hamiltonian Operator
```
k = SHOKet('k')
ad = RaisingOp('a')
a = LoweringOp('a')
N = NumberOp('N')
H = Hamiltonian('H')
```
The number operator is simply expressed as ad*a
```
N().rewrite('a').doit()
```
RaisingOp(a)*a
The number operator expressed in terms of the position and momentum operators
```
N().rewrite('xp').doit()
```
-1/2 + (m**2*omega**2*X**2 + Px**2)/(2*hbar*m*omega)
It can also be expressed in terms of the Hamiltonian operator
```
N().rewrite('H').doit()
```
-1/2 + H/(hbar*omega)
The Hamiltonian operator can be expressed in terms of the raising and lowering operators, position and momentum operators, and the number operator
```
H().rewrite('a').doit()
```
hbar*omega*(1/2 + RaisingOp(a)*a)
```
H().rewrite('xp').doit()
```
(m**2*omega**2*X**2 + Px**2)/(2*m)
```
H().rewrite('N').doit()
```
hbar*omega*(1/2 + N)
The raising and lowering operators can also be expressed in terms of the position and momentum operators
```
ad().rewrite('xp').doit()
```
sqrt(2)*(m*omega*X - I*Px)/(2*sqrt(hbar)*sqrt(m*omega))
```
a().rewrite('xp').doit()
```
sqrt(2)*(m*omega*X + I*Px)/(2*sqrt(hbar)*sqrt(m*omega))
### Properties
Let's take a look at how the NumberOp and Hamiltonian act on states
```
qapply(N*k)
```
k*|k>
Applying the Number operator to a state returns the eigenvalue (the quantum number) times the ket
```
ks = SHOKet(2)
qapply(N*ks)
```
2*|2>
```
qapply(H*k)
```
hbar*k*omega*|k> + hbar*omega*|k>/2
Let's see how the operators commute with each other
```
Commutator(N,ad).doit()
```
RaisingOp(a)
```
Commutator(N,a).doit()
```
-a
```
Commutator(N,H).doit()
```
0
### Representation
We can express the operators in the NumberOp basis. There are several ways to represent a matrix in Python; we will use three: SymPy, NumPy, and SciPy sparse.
#### Sympy
```
represent(ad, basis=N, ndim=4, format='sympy')
```
[0, 0, 0, 0]
[1, 0, 0, 0]
[0, sqrt(2), 0, 0]
[0, 0, sqrt(3), 0]
#### Numpy
```
represent(ad, basis=N, ndim=5, format='numpy')
```
array([[ 0. , 0. , 0. , 0. , 0. ],
[ 1. , 0. , 0. , 0. , 0. ],
[ 0. , 1.41421356, 0. , 0. , 0. ],
[ 0. , 0. , 1.73205081, 0. , 0. ],
[ 0. , 0. , 0. , 2. , 0. ]])
#### Scipy.Sparse
```
represent(ad, basis=N, ndim=4, format='scipy.sparse', spmatrix='lil')
```
<4x4 sparse matrix of type '<type 'numpy.float64'>'
with 3 stored elements in Compressed Sparse Row format>
```
print(represent(ad, basis=N, ndim=4, format='scipy.sparse', spmatrix='lil'))
```
(1, 0) 1.0
(2, 1) 1.41421356237
(3, 2) 1.73205080757
The same can be done for the other operators
```
represent(a, basis=N, ndim=4, format='sympy')
```
[0, 1, 0, 0]
[0, 0, sqrt(2), 0]
[0, 0, 0, sqrt(3)]
[0, 0, 0, 0]
```
represent(N, basis=N, ndim=4, format='sympy')
```
[0, 0, 0, 0]
[0, 1, 0, 0]
[0, 0, 2, 0]
[0, 0, 0, 3]
```
represent(H, basis=N, ndim=4, format='sympy')
```
[hbar*omega/2, 0, 0, 0]
[ 0, 3*hbar*omega/2, 0, 0]
[ 0, 0, 5*hbar*omega/2, 0]
[ 0, 0, 0, 7*hbar*omega/2]
#### Ket and Bra Representation
```
k0 = SHOKet(0)
k1 = SHOKet(1)
b0 = SHOBra(0)
b1 = SHOBra(1)
```
```
print(represent(k0, basis=N, ndim=5, format='sympy'))
```
[1]
[0]
[0]
[0]
[0]
```
print(represent(k1, basis=N, ndim=5, format='sympy'))
```
[0]
[1]
[0]
[0]
[0]
```
print(represent(b0, basis=N, ndim=5, format='sympy'))
```
[1, 0, 0, 0, 0]
```
print(represent(b1, basis=N, ndim=5, format='sympy'))
```
[0, 1, 0, 0, 0]
```
```
| 28679ff682c327f2bb6cfc5b53d15cf3d24b8cf3 | 25,888 | ipynb | Jupyter Notebook | examples/notebooks/sho1d_example.ipynb | utkarshdeorah/sympy | dcdf59bbc6b13ddbc329431adf72fcee294b6389 | [
"BSD-3-Clause"
]
| 8,323 | 2015-01-02T15:51:43.000Z | 2022-03-31T13:13:19.000Z | examples/notebooks/sho1d_example.ipynb | utkarshdeorah/sympy | dcdf59bbc6b13ddbc329431adf72fcee294b6389 | [
"BSD-3-Clause"
]
| 15,102 | 2015-01-01T01:33:17.000Z | 2022-03-31T22:53:13.000Z | examples/notebooks/sho1d_example.ipynb | utkarshdeorah/sympy | dcdf59bbc6b13ddbc329431adf72fcee294b6389 | [
"BSD-3-Clause"
]
| 4,490 | 2015-01-01T17:48:07.000Z | 2022-03-31T17:24:05.000Z | 19.686692 | 154 | 0.408336 | true | 2,251 | Qwen/Qwen-72B | 1. YES
2. YES | 0.891811 | 0.819893 | 0.73119 | __label__eng_Latn | 0.797938 | 0.537131 |
```python
# Check that have our correct Kernel running
import sys
print(sys.executable)
print(sys.version)
print(sys.version_info)
```
/opt/conda/envs/python/bin/python
3.8.3 (default, Jul 2 2020, 16:21:59)
[GCC 7.3.0]
sys.version_info(major=3, minor=8, micro=3, releaselevel='final', serial=0)
# Predicting Solids in Rivers
## Background
Equations used to compute bed load transport in rivers and streams are based upon regression analysis of data collected for variables related to solids load calculations.
However, these methods have been found to be difficult to use, too complex for practical use, and/or inadequate in terms of estimation precision.
Previous work developed and deployed a database-search engine used to find approximate solutions of solids load transport based on model-specific input data.
The tool is available via a web interface in an attempt to simplify and streamline sediment transport estimation.
The search processing was all server-side, operating on a single, central database.
Here we will explore the underlying database and replicate some parts of the original work.
## Concept of Distance in N-Dimensional Space
The concept of distance is vital to the search engine. In the screening tool the search input values, S, Q, U, and D50 are compared to their commensurate values in the database, and a distance is computed from the search values to values in the database. The nearest values in N-dimensional distance are selected (actually by a sort) and then used for the estimation of unit solids discharge for the search values. The search engine has several different kinds of distances that the engineer may select.
### Minkowski Distance
The distance between the search values and a database record is computed using (Tan et al., 2008).
\begin{equation}
L_p= (|x_{1,data} - x_{1,search}|^p + |x_{2,data} - x_{2,search}|^p + \dots + |x_{N,data} - x_{N,search}|^p)^{\frac{1}{p}}
\end{equation}
`p` represents a parameter that modifies the Minkowski distance based upon its magnitude.
When p is a positive integer, the result is the quantity known as the Minkowski distance.
The Euclidean distance between the two vectors xdata and xsearch, which is the hypotenuse-type distance that engineers are readily familiar with, is the special case of the equation above when p = 2, and the distance itself is called the L2-norm.
In many situations, the Euclidean distance is insufficient for capturing the actual distances in a given high-dimensional space if traverse of that space along a hypotenuse is infeasible.
For example, taxi drivers in Manhattan should measure distance not in terms of the length of the straight line to their destination, but in terms of the Manhattan (taxi) distance, which takes into account that streets are either orthogonal or parallel to each other.
The taxi distance is also called the L1 norm and is the special case of the equation above when p = 1.
This distance measures the shortest path along Cartesian axes (like city streets).
When some elements are unknown (as may be the case in our searches) or the noise in the elements is substantial, the Euclidean distance is not the most appropriate measure of distance, hence the value of p is left as a variable (Erickson, 2010).
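A minimal sketch of this distance computation (variable and function names here are illustrative, not the ones used in the screening tool):

```python
import numpy as np

def minkowski_distance(x_search, x_data, p=2):
    """L_p distance between one search vector and each record (row) of x_data."""
    x_search = np.asarray(x_search, dtype=float)
    x_data = np.asarray(x_data, dtype=float)
    return np.sum(np.abs(x_data - x_search)**p, axis=1)**(1.0 / p)

# p=2 gives the familiar Euclidean (L2) distance; p=1 gives the Manhattan/taxi (L1) distance.
```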
## Data Value Standardization
The variables in the database are not expressed in the same magnitude, range, and scale.
For example, discharge values are several orders of magnitude larger in the database than median grain diameter, hence the two are not directly comparable when computing a distance for the search algorithm.
In such a case, one way to facilitate direct interpretation for comparing composite indices of the original data having different magnitudes and unit systems is to use normalization.
Normalization serves the purpose of bringing the indicators into the same unit scale or unit base and makes distance computations appropriate.
Normalizing data is done using various standardization techniques to assign a value to each variable so that they may be directly compared without unintentional bias due to differences in unit scale.
### Z-score Standardization
Z-score standardization is a commonly used normalization method that converts all indicators to a common scale with an average of zero and standard deviation of one. This transformation is the same as computing a standard-normal score for each data value.
The average of zero avoids the introduction of aggregation distortions stemming from differences in indicators’ means.
The scaling factor is the standard deviation of the indicator across, for instance, the velocities, slopes or unit solids discharges being ranked.
Thus, an indicator with extreme values will have intrinsically a greater effect on the composite indicator.
The raw score on each data entry is converted to a Z-score, then distances are calculated using the Z-scores for each variable rather than the raw value.
Upon completion of the distance calculations and selection of the nearest neighbors, the results are transformed back into the raw values for subsequent presentation.
The basic z-score formula used to normalize data sets (TIBCO) is:
$$z = \frac{x - \mu}{\sigma}$$
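A minimal sketch of this standardization with pandas (the column names follow the database loaded later in this notebook and are used here only as an assumption):

```python
import pandas as pd

def zscore_columns(df, cols=('S_m_m', 'Q_m3_s', 'U_m_s', 'D50_m')):
    """Return a copy of df with the search columns converted to z-scores."""
    out = df.copy()
    for c in cols:
        out[c] = (df[c] - df[c].mean()) / df[c].std()
    return out
```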
### Unit-Interval [0,1] Standardization
An alternate approach considered for the screening algorithm is an option to use a mapping of each variable in the database to a [0,1] scale and linearly weight within the scale.
This standardization has the same goal as Z-score, which is to prevent one variable from overwhelming the distance computations because of its relative magnitude.
The unit interval [0,1] standardization technique differs from the Z-score in that the variability is governed by the minimum and maximum value for each variable, and hence extrapolation is not feasible.
Because extrapolation is likely necessary until new records are added to the database, this standardization method is not appropriate.
### Unstandardized
The unstandardized approach is not appropriate because discharge and/or velocity completely dominate any search algorithm, almost to the exclusion of the other variables.
The option was useful for method testing and database error detection but is not useful for production application.
### Download Current Database
Download using http:get method to access the public database from the source URL.
```python
import requests # Module to process http/https requests
remote_url="http://54.243.252.9/engr-1330-webroot/9-MyJupyterNotebooks/43-SolidsInRivers/solids_in_rivers.csv" # set the url
rget = requests.get(remote_url, allow_redirects=True, verify=False) # get the remote resource, follow imbedded links, ignore the certificate
open('solids_in_rivers.csv','wb').write(rget.content) # extract from the remote the contents, assign to a local file same name
import pandas as pd # Module to process dataframes (not absolutely needed but somewhat easier than using primatives, and gives graphing tools)
```
```python
riverdb = pd.read_csv("solids_in_rivers.csv")
#riverdb.head()
#print(riverdb["D50_m"])
riverdb.describe()
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>ID</th>
<th>Q_m3_s</th>
<th>q_m2_s</th>
<th>U_m_s</th>
<th>W_m</th>
<th>H_m</th>
<th>R_m</th>
<th>S_m_m</th>
<th>D16_m</th>
<th>D50_m</th>
<th>...</th>
<th>Record_Number</th>
<th>Froude</th>
<th>GammaS_N_m3</th>
<th>GammaF_N_m3</th>
<th>Tau0_kg_m_s2</th>
<th>TauStar</th>
<th>Ustar_m_s</th>
<th>ManningN</th>
<th>WP_m</th>
<th>A_m2</th>
</tr>
</thead>
<tbody>
<tr>
<th>count</th>
<td>12081.000000</td>
<td>12401.000000</td>
<td>12401.000000</td>
<td>11270.000000</td>
<td>12401.000000</td>
<td>11704.000000</td>
<td>9675.000000</td>
<td>12289.000000</td>
<td>7359.000000</td>
<td>12401.000000</td>
<td>...</td>
<td>12401.000000</td>
<td>11269.000000</td>
<td>12401.000000</td>
<td>12401.000000</td>
<td>9563.000000</td>
<td>9563.000000</td>
<td>9563.000000</td>
<td>9270.000000</td>
<td>9382.000000</td>
<td>11270.000000</td>
</tr>
<tr>
<th>mean</th>
<td>1940.571724</td>
<td>195.722908</td>
<td>0.967272</td>
<td>0.891899</td>
<td>30.219469</td>
<td>0.738820</td>
<td>0.606682</td>
<td>0.011017</td>
<td>0.019633</td>
<td>0.036385</td>
<td>...</td>
<td>6380.274897</td>
<td>0.536185</td>
<td>25629.979244</td>
<td>9805.728190</td>
<td>21.922912</td>
<td>0.210218</td>
<td>0.114883</td>
<td>0.039846</td>
<td>29.447613</td>
<td>158.835167</td>
</tr>
<tr>
<th>std</th>
<td>2949.308625</td>
<td>1294.763761</td>
<td>2.747765</td>
<td>0.545093</td>
<td>94.382748</td>
<td>1.807773</td>
<td>1.543387</td>
<td>0.018917</td>
<td>0.021628</td>
<td>0.048645</td>
<td>...</td>
<td>3696.599235</td>
<td>0.369873</td>
<td>2214.983569</td>
<td>128.285394</td>
<td>31.256190</td>
<td>0.375793</td>
<td>0.093251</td>
<td>0.042504</td>
<td>103.538520</td>
<td>992.809279</td>
</tr>
<tr>
<th>min</th>
<td>1.000000</td>
<td>0.000500</td>
<td>0.001094</td>
<td>0.047000</td>
<td>0.076200</td>
<td>0.000000</td>
<td>0.007562</td>
<td>0.000002</td>
<td>0.000066</td>
<td>0.000011</td>
<td>...</td>
<td>1.000000</td>
<td>0.026609</td>
<td>10100.800000</td>
<td>9745.510000</td>
<td>0.027512</td>
<td>0.005681</td>
<td>0.005246</td>
<td>0.001850</td>
<td>0.121153</td>
<td>0.001200</td>
</tr>
<tr>
<th>25%</th>
<td>37.000000</td>
<td>0.041300</td>
<td>0.071941</td>
<td>0.490000</td>
<td>0.710000</td>
<td>0.124400</td>
<td>0.070938</td>
<td>0.001240</td>
<td>0.002000</td>
<td>0.000640</td>
<td>...</td>
<td>3101.000000</td>
<td>0.319346</td>
<td>25987.600000</td>
<td>9794.920000</td>
<td>1.310125</td>
<td>0.036348</td>
<td>0.036224</td>
<td>0.014014</td>
<td>0.800300</td>
<td>0.065400</td>
</tr>
<tr>
<th>50%</th>
<td>114.000000</td>
<td>0.699000</td>
<td>0.197778</td>
<td>0.760000</td>
<td>4.880000</td>
<td>0.270000</td>
<td>0.153412</td>
<td>0.004000</td>
<td>0.014000</td>
<td>0.011700</td>
<td>...</td>
<td>6487.000000</td>
<td>0.446229</td>
<td>25987.600000</td>
<td>9803.980000</td>
<td>6.218710</td>
<td>0.067029</td>
<td>0.078964</td>
<td>0.021891</td>
<td>1.676300</td>
<td>0.763333</td>
</tr>
<tr>
<th>75%</th>
<td>3554.000000</td>
<td>8.080000</td>
<td>0.643761</td>
<td>1.152320</td>
<td>13.140000</td>
<td>0.590000</td>
<td>0.450000</td>
<td>0.013000</td>
<td>0.033000</td>
<td>0.063000</td>
<td>...</td>
<td>9587.000000</td>
<td>0.637954</td>
<td>25987.600000</td>
<td>9803.980000</td>
<td>33.615450</td>
<td>0.222178</td>
<td>0.183112</td>
<td>0.048671</td>
<td>13.457700</td>
<td>7.174412</td>
</tr>
<tr>
<th>max</th>
<td>9825.000000</td>
<td>28822.300000</td>
<td>101.618000</td>
<td>4.100000</td>
<td>1109.420000</td>
<td>77.000000</td>
<td>16.246300</td>
<td>0.200000</td>
<td>0.098000</td>
<td>0.220000</td>
<td>...</td>
<td>12687.000000</td>
<td>5.687350</td>
<td>41384.100000</td>
<td>13366.500000</td>
<td>207.570000</td>
<td>7.761280</td>
<td>0.455660</td>
<td>0.372269</td>
<td>1142.270000</td>
<td>18225.400000</td>
</tr>
</tbody>
</table>
<p>8 rows × 27 columns</p>
</div>
```python
riverdb.plot.scatter(x='D16_m',y='D50_m')
```
```python
riverdb.plot.scatter(x='R_m',y='q_m2_s')
```
```python
## Now delete the local copy
! rm solids_in_rivers.csv
```
```python
import math
def phi_value(diameter):
# diameter in meters
diameter = diameter*1000.0 # convert to millimeters
# print(diameter)
phi_value = 1.0*math.log2(diameter)
return phi_value
def d_value(phi_val):
# phi_val is log2(diam), return diam
d_value = 2.0**phi_val
return d_value
```
```python
mu = phi_value(0.00032004)
sigma = math.sqrt(1.22)/math.sqrt(2.)
myguess = mu
print(myguess)
print(d_value(myguess))
from scipy.optimize import newton
def f(x):
global mu,sigma
quantile = 0.5
argument = (x - mu)/(math.sqrt(2.0)*sigma)
normdist = (1.0 + math.erf(argument))/2.0
return normdist - quantile
phi50 = newton(f, myguess)
def f(x):
global mu,sigma
quantile = 0.05
argument = (x - mu)/(math.sqrt(2.0)*sigma)
normdist = (1.0 + math.erf(argument))/2.0
return normdist - quantile
phi05 = newton(f, myguess)
def f(x):
global mu,sigma
quantile = 0.16
argument = (x - mu)/(math.sqrt(2.0)*sigma)
normdist = (1.0 + math.erf(argument))/2.0
return normdist - quantile
phi16 = newton(f, myguess)
def f(x):
global mu,sigma
quantile = 0.84
argument = (x - mu)/(math.sqrt(2.0)*sigma)
normdist = (1.0 + math.erf(argument))/2.0
return normdist - quantile
phi84 = newton(f, myguess)
def f(x):
global mu,sigma
quantile = 0.90
argument = (x - mu)/(math.sqrt(2.0)*sigma)
normdist = (1.0 + math.erf(argument))/2.0
return normdist - quantile
phi90 = newton(f, myguess)
print('d05 = ',d_value(phi05)/1000,' meters')
print('d16 = ',d_value(phi16)/1000,' meters')
print('d50 = ',d_value(phi50)/1000,' meters')
print('d84 = ',d_value(phi84)/1000,' meters')
print('d90 = ',d_value(phi90)/1000,' meters')
```
-1.6436758641647295
0.32004
    d05 =  0.00013136495766569197  meters
    d16 =  0.00018680794242288665  meters
    d50 =  0.00032004  meters
    d84 =  0.0005482936125281758  meters
    d90 =  0.000640489979928618  meters
```python
# Example database record (grain sizes in meters):
#        D16_m  D50_m  D84_m  D90_m
# 7185   0.043  0.126  0.28   0.339
```
```python
import math
def normdensity(x,mu,sigma):
weight = 1.0 /(sigma * math.sqrt(2.0*math.pi))
argument = ((x - mu)**2)/(2.0*sigma**2)
normdensity = weight*math.exp(-1.0*argument)
return normdensity
def normdist(x,mu,sigma):
argument = (x - mu)/(math.sqrt(2.0)*sigma)
normdist = (1.0 + math.erf(argument))/2.0
return normdist
```
```python
math.sqrt(2)
```
1.4142135623730951
| 8632f402860d35e1cb6d5ac1b17d5c0ba2c1f1da | 50,609 | ipynb | Jupyter Notebook | 9-MyJupyterNotebooks/43-SolidsInRivers/SolidsInRivers.ipynb | dustykat/engr-1330-psuedo-course | 3e7e31a32a1896fcb1fd82b573daa5248e465a36 | [
"CC0-1.0"
]
| null | null | null | 9-MyJupyterNotebooks/43-SolidsInRivers/SolidsInRivers.ipynb | dustykat/engr-1330-psuedo-course | 3e7e31a32a1896fcb1fd82b573daa5248e465a36 | [
"CC0-1.0"
]
| null | null | null | 9-MyJupyterNotebooks/43-SolidsInRivers/SolidsInRivers.ipynb | dustykat/engr-1330-psuedo-course | 3e7e31a32a1896fcb1fd82b573daa5248e465a36 | [
"CC0-1.0"
]
| null | null | null | 70.095568 | 13,408 | 0.722026 | true | 4,837 | Qwen/Qwen-72B | 1. YES
2. YES | 0.682574 | 0.752013 | 0.513304 | __label__eng_Latn | 0.897069 | 0.030906 |
## **Viscoelastic wave equation implementation on a staggered grid**
This is a first attempt at implementing the viscoelastic wave equation as described in [1]. See also the FDELMODC implementation by Jan Thorbecke [2].
In the following example, a three-dimensional toy problem will be introduced, consisting of a single Ricker source located at (100, 50, 35) in a 200 m $\times$ 100 m $\times$ 100 m domain.
```python
# Required imports:
import numpy as np
import sympy as sp
from devito import *
from examples.seismic.source import RickerSource, TimeAxis
from examples.seismic import ModelViscoelastic, plot_image
```
The model domain is now constructed. It consists of an upper layer of water, 50 m in depth, and a lower rock layer separated by a 4 m thick sediment layer.
```python
# Domain size:
extent = (200., 100., 100.) # 200 x 100 x 100 m domain
h = 1.0 # Desired grid spacing
shape = (int(extent[0]/h+1), int(extent[1]/h+1), int(extent[2]/h+1))
# Model physical parameters:
vp = np.zeros(shape)
qp = np.zeros(shape)
vs = np.zeros(shape)
qs = np.zeros(shape)
rho = np.zeros(shape)
# Set up three horizontally separated layers:
vp[:,:,:int(0.5*shape[2])+1] = 1.52
qp[:,:,:int(0.5*shape[2])+1] = 10000.
vs[:,:,:int(0.5*shape[2])+1] = 0.
qs[:,:,:int(0.5*shape[2])+1] = 0.
rho[:,:,:int(0.5*shape[2])+1] = 1.05
vp[:,:,int(0.5*shape[2])+1:int(0.5*shape[2])+1+int(4/h)] = 1.6
qp[:,:,int(0.5*shape[2])+1:int(0.5*shape[2])+1+int(4/h)] = 40.
vs[:,:,int(0.5*shape[2])+1:int(0.5*shape[2])+1+int(4/h)] = 0.4
qs[:,:,int(0.5*shape[2])+1:int(0.5*shape[2])+1+int(4/h)] = 30.
rho[:,:,int(0.5*shape[2])+1:int(0.5*shape[2])+1+int(4/h)] = 1.3
vp[:,:,int(0.5*shape[2])+1+int(4/h):] = 2.2
qp[:,:,int(0.5*shape[2])+1+int(4/h):] = 100.
vs[:,:,int(0.5*shape[2])+1+int(4/h):] = 1.2
qs[:,:,int(0.5*shape[2])+1+int(4/h):] = 70.
rho[:,:,int(0.5*shape[2])+1+int(4/h):] = 2.
```
Now create a Devito vsicoelastic model generating an appropriate computational grid along with absorbing boundary layers:
```python
# Create model
origin = (0, 0, 0)
spacing = (h, h, h)
so = 4 # FD space order (Note that the time order is by default 1).
nbl = 20 # Number of absorbing boundary layers cells
model = ModelViscoelastic(space_order=so, vp=vp, qp=qp, vs=vs, qs=qs,
rho=rho, origin=origin, shape=shape, spacing=spacing,
nbl=nbl)
```
Operator `initdamp` run in 0.05 s
Operator `padfunc` run in 0.02 s
Operator `padfunc` run in 0.01 s
Operator `padfunc` run in 0.01 s
Operator `padfunc` run in 0.01 s
Operator `padfunc` run in 0.02 s
The source frequency is now set along with the required model parameters:
```python
# Source freq. in MHz (note that the source is defined below):
f0 = 0.12
# Thorbecke's parameter notation
l = model.lam
mu = model.mu
ro = model.irho
k = 1.0/(l + 2*mu)
pi = l + 2*mu
t_s = (sp.sqrt(1.+1./model.qp**2)-1./model.qp)/f0
t_ep = 1./(f0**2*t_s)
t_es = (1.+f0*model.qs*t_s)/(f0*model.qs-f0**2*t_s)
```
```python
# Time step in ms and time range:
t0, tn = 0., 30.
dt = model.critical_dt
time_range = TimeAxis(start=t0, stop=tn, step=dt)
```
Generate Devito time functions for the velocity, stress and memory variables appearing in the viscoelastic model equations. By default, the initial data of each field will be set to zero.
```python
# PDE fn's:
x, y, z = model.grid.dimensions
damp = model.damp
# Staggered grid setup:
# Velocity:
v = VectorTimeFunction(name="v", grid=model.grid, time_order=1, space_order=so)
# Stress:
tau = TensorTimeFunction(name='t', grid=model.grid, space_order=so, time_order=1)
# Memory variable:
r = TensorTimeFunction(name='r', grid=model.grid, space_order=so, time_order=1)
s = model.grid.stepping_dim.spacing # Symbolic representation of the model grid spacing
```
And now the source and PDE's are constructed:
```python
# Source
src = RickerSource(name='src', grid=model.grid, f0=f0, time_range=time_range)
src.coordinates.data[:] = np.array([100., 50., 35.])
# The source injection term
src_xx = src.inject(field=tau[0, 0].forward, expr=src*s)
src_yy = src.inject(field=tau[1, 1].forward, expr=src*s)
src_zz = src.inject(field=tau[2, 2].forward, expr=src*s)
# Particle velocity
u_v = Eq(v.forward, model.damp * (v + s*ro*div(tau)))
# Stress equations:
u_t = Eq(tau.forward, model.damp * (s*r.forward + tau +
s * (l * t_ep / t_s * diag(div(v.forward)) +
mu * t_es / t_s * (grad(v.forward) + grad(v.forward).T))))
# Memory variable equations:
u_r = Eq(r.forward, damp * (r - s / t_s * (r + l * (t_ep/t_s-1) * diag(div(v.forward)) +
mu * (t_es/t_s-1) * (grad(v.forward) + grad(v.forward).T) )))
```
We now create and then run the operator:
```python
# Create the operator:
op = Operator([u_v, u_r, u_t] + src_xx + src_yy + src_zz,
subs=model.spacing_map)
```
```python
#NBVAL_IGNORE_OUTPUT
# Execute the operator:
op(dt=dt)
```
Operator `Kernel` run in 62.86 s
Before plotting some results, let us first look at the shape of the data stored in one of our time functions:
```python
v[0].data.shape
```
(2, 241, 141, 141)
Since our functions are first order in time, the time dimension is of length 2. The spatial extent of the data includes the absorbing boundary layers in each dimension (i.e. each spatial dimension is padded by 20 grid points to the left and to the right).
The total number of instances in time considered is obtained from:
```python
time_range.num
```
136
Hence 136 time instances were computed. Thus the final time step will be stored in the index given by:
```python
np.mod(time_range.num,2)
```
0
Now, let us plot some 2D slices of the fields `vx` and `szz` at the final time step:
```python
#NBVAL_SKIP
# Mid-points:
mid_x = int(0.5*(v[0].data.shape[1]-1))+1
mid_y = int(0.5*(v[0].data.shape[2]-1))+1
# Plot some selected results:
plot_image(v[0].data[1, :, mid_y, :], cmap="seismic")
plot_image(v[0].data[1, mid_x, :, :], cmap="seismic")
plot_image(tau[2, 2].data[1, :, mid_y, :], cmap="seismic")
plot_image(tau[2, 2].data[1, mid_x, :, :], cmap="seismic")
```
```python
#NBVAL_IGNORE_OUTPUT
assert np.isclose(norm(v[0]), 0.102959, atol=1e-4, rtol=0)
```
# References
[1] Johan O. A. Robertsson, *et al.* (1994). "Viscoelastic finite-difference modeling", GEOPHYSICS, 59(9), 1444-1456.
[2] https://janth.home.xs4all.nl/Software/fdelmodcManual.pdf
| 3026496855823994183880899015961f2f72dffe | 161,153 | ipynb | Jupyter Notebook | examples/seismic/tutorials/09_viscoelastic.ipynb | rhodrin/devito | cd1ae745272eb0315aa1c36038a3174f1817e0d0 | [
"MIT"
]
| 1 | 2020-06-08T20:44:35.000Z | 2020-06-08T20:44:35.000Z | examples/seismic/tutorials/09_viscoelastic.ipynb | rhodrin/devito | cd1ae745272eb0315aa1c36038a3174f1817e0d0 | [
"MIT"
]
| null | null | null | examples/seismic/tutorials/09_viscoelastic.ipynb | rhodrin/devito | cd1ae745272eb0315aa1c36038a3174f1817e0d0 | [
"MIT"
]
| 1 | 2021-01-05T07:27:35.000Z | 2021-01-05T07:27:35.000Z | 330.231557 | 43,560 | 0.931947 | true | 2,130 | Qwen/Qwen-72B | 1. YES
2. YES | 0.899121 | 0.682574 | 0.613717 | __label__eng_Latn | 0.780155 | 0.2642 |
# **Is there a reasonable (physical) interpretation of neural network weights - or is this even a thing to care about?**
## **Maybe yes, maybe no, definitely sometimes**
## **NOTES:**
- Okay .... I'm just going to assume everybody knows some basics of NNets
- I ended up going down this path because of the work I was doing when I started working
### NNet weights - the innocent explanation
**Most of the "stuff" I've seen has described NNet weights in the following way**
- The neurons in a neural network are like the neurons of the brain
- Disclaimer: I'm not a neurologist, but I hear this isn't really accurate
- The weights between the neurons represent the "connection strength" between neurons.
- Now ... maybe accurate, but:
- Connections between "what"
- Isn't there some better explanation
### Point of this little presentation: Indeed, one can sometimes understand/interpret these NNet neurons in a physically meaningful way
## Simple case 1:
So ... let's investigate a little toy problem.
In this problem we have a function where points are "inside the ring" or "outside the ring"
Let's see if we can recover the simple function (hint we know it, it's obvious) with a baby nnet ... and see what we learn.
Oh, btw: I'll be measuring the angles here in: $\tau = 2\pi\,\mathrm{rad} = 360^\circ$
```python
# Imports and setup
from numpy import cos, logical_and, sin, stack, pi as PI, round
from numpy.random import rand, seed
from sklearn.neural_network import MLPClassifier
from sympy import init_printing, Matrix, symbols
from sympy.functions.elementary.piecewise import Piecewise
from sympy.functions.elementary.exponential import exp
from sympy.plotting.plot import plot
from sympy.printing import pprint
from sympy.simplify import simplify
init_printing()
TRAIN_SIZE = 10000
TEST_SIZE = 2000
INNER_SIZE=0.5
OUTER_SIZE=0.6
```
### Training and test coordinates in Spherical/Polar coordinates
```python
pol_train = rand(TRAIN_SIZE, 2)
pol_test = rand(TEST_SIZE, 2)
train_sol = logical_and(INNER_SIZE <= pol_train[:, 0], pol_train[:, 0] <= OUTER_SIZE).astype(int)
test_sol = logical_and(INNER_SIZE <= pol_test[:, 0], pol_test[:, 0] <= OUTER_SIZE).astype(int)
```
### And in Cartesian coordinates
```python
xy_train = stack(
(
pol_train[:, 0] * cos(2 * pol_train[:, 1]),
pol_train[:, 0] * sin(2 * pol_train[:, 1]),
),
axis=1,
)
xy_test = stack(
(
pol_test[:, 0] * cos(2 * pol_test[:, 1]),
pol_test[:, 0] * sin(2 * pol_test[:, 1]),
),
axis=1,
)
```
### Okay let's train our little network in polar coordinates
```python
seed(15)
pol_clf = MLPClassifier(
hidden_layer_sizes=(2,),
activation="relu",
solver="lbfgs",
max_iter=2000,
tol=0.000001,
alpha=0,
)
pol_clf.fit(pol_train, train_sol)
print(f"Loss: {pol_clf.loss_}")
print(f"Test score: {pol_clf.score(pol_test, test_sol)}")
```
Loss: 2.0598994096204845e-09
Test score: 0.9995
**Cool, let's look at our weights and biases for the 2 layers of connections**
```python
print("Weights:")
for idx, weight in enumerate(pol_clf.coefs_):
print(f"Layer {idx}:")
print("\t{}".format(str(weight).replace("\n", "\n\t")))
print("Biases:")
for idx, bias in enumerate(pol_clf.intercepts_):
print(f"Layer {idx}:")
print("\t{}".format(str(bias).replace("\n", "\n\t")))
```
Weights:
Layer 0:
[[ 7.66819542e+02 -4.42665991e+02]
[-1.00513087e+00 7.08175869e-02]]
Layer 1:
[[-533.3279756 ]
[-330.69901043]]
Biases:
Layer 0:
[-458.57175954 222.95733456]
Layer 1:
[548.96814356]
Wow ... (well not really so surprising) ... there's like 4 orders of magnitude difference in the contributions between the $r$ terms and $\theta$ terms.
Let's do some simple rounding, write out the equations, and plot the simple graphs:
```python
input_vals = Matrix([symbols("r θ")])
pol_coefs = [round(_, 0) for _ in pol_clf.coefs_]
pol_ints = [round(_, 0) for _ in pol_clf.intercepts_]
layer_0_top, layer_0_bottom = input_vals * Matrix(pol_coefs[0]) + Matrix(pol_ints[0]).transpose()
print("Layer 0 before activation:")
pprint(layer_0_top)
pprint(layer_0_bottom)
```
Layer 0 before activation:
767.0⋅r - θ - 459.0
223.0 - 443.0⋅r
```python
layer_0 = Matrix(
[
[
Piecewise((layer_0_top, simplify(layer_0_top > 0)), (0, True)),
Piecewise((layer_0_bottom, simplify(layer_0_bottom > 0)), (0, True))
]
]
)
layer_0=layer_0.subs("θ", 0)
layer_0
plot(*layer_0, (symbols("r"), -2, 2))
plot(*layer_0, (symbols("r"), 0.42, 0.65))
layer_0
```
So ... we can definitely see something different is happening between $\sim 0.5$ and $\sim 0.6$.
Let's go through the rest and see what pops out.
```python
layer_1_pre_activation=simplify((layer_0*Matrix(pol_coefs[1]) + Matrix(pol_ints[1]))[0])
layer_1=simplify(Piecewise((layer_1_pre_activation, layer_1_pre_activation > 0), (0, True)))
layer_1
```
$\displaystyle \begin{cases} \begin{cases} 146633.0 r - 73264.0 & \text{for}\: r < 0.503386004514673 \\245196.0 - 408811.0 r & \text{for}\: r > 0.598435462842242 \\549.0 & \text{otherwise} \end{cases} & \text{for}\: r > 0.499641963268841 \wedge r < 0.599778381697166 \\0 & \text{otherwise} \end{cases}$
Let's plot this and binarize
```python
plot(layer_1, (symbols("r"), 0.25, 0.75))
binarized_output = simplify(Piecewise((1, layer_1 > 0), (0, True)))
print("Binarized output:")
pprint(binarized_output)
plot(binarized_output, (symbols("r"), 0, 1))
```
#### Surprise surprise, we've recovered our "original" equation
- One cannot really replicate this with cartesian co-ordinates
- Think about encoding: $r_0^2 <= x^2 + y ^2 <= r_1^2$
- One does not get nice simple results if you don't use `relu` as the activation function
- The nice (mathematically `unsmooth`, `continuous`) edges, which the other scikit-learn activation functions don't have, are needed
- **In certain conditions one can get meaningful information from the weights/biases of the NNet rather than just from the output layer**
- In the above scenario the two components of the original equation ($r \geq 0.5$ and $r \leq 0.6$) can more-or-less just be read off straight from the weights.
**So, what do we learn?**
- The way one encodes the problem is important
- Architecting the NNet so that it reflects what's happening in the problem can dramatically simplify things
- With the Cartesian coordinates something like 2 - 3 layers of 6 - 8 neurons were needed for good accuracy (still worse accuracy, though, than the single 2-neuron layer with polar coordinates).
- **In some circumstances it is possible to get meaningful information from the weights of the NNet**
## Part 2:
Q: What did I start off doing?
A: Building seismic models for mines.
```python
import lattice_v2
lattice_v2.run_sim()
```
### What does this look like if we plot things slightly differently?
Sorry: I was unable to put this together ... hopefully you can see it in your mind's eye, though.
**The imagining:**
- How would this look if we plotted the nodes of the model not spatially, _but according to the time at which they first deviated from their rest state_ (i.e. time ordered)?
- I hope in your imagination it looks something like this:
**Am I lying to you?**
Well, you can decide for yourself. Let's take some sentences from an important paper$^{[1]}$.
- Lailly and Tarantola recast the migration imaging principle ... as a local optimization problem, the aim of which is least-squares minimization of the misfit between recorded and modeled data.
- ... the gradient of the misfit function ... can be built by crosscorrelating the incident wavefield ... and the back-propagated residual wavefields.
- As widely stressed, FWI is an ill-posed problem, meaning that an infinite number of models matches the data. Some regularizations are conventionally applied to the inversion to make it better posed.
- Alternatively, the inverse of the Hessian in equation 11 can be replaced by a scalar $\alpha$, the so-called step length, leading to the gradient or steepest-descent method.
**So the keywords are:**
- Forward-problem
- Back propogation
- Gradient descent
- Regularization
**Does this sound familiar?**
Going through the calculations, one finds that, indeed, it is extremely similar to "normal" NNets (though somewhat more involved).
- Below is a successful (though synthetic) example of one of my inversions:
```python
from IPython.display import Video
Video("./resources/Beta_Inversion.mp4", width=600)
```
### Is seismology a special case here?
- No
- With hardly any effort I dug up a paper $^{[2]}$ using the same technique but for robotic arms (in reinforcement learning):
- In this case the control equations for the robotic arm were the governing equations of the NNet
- The parameters estimated were the parameters the robotic arm needed to operate as desired
- Then I found this gem $^{[3]}$
- This demonstrates that a technique like this can be applied in many cases where the equations controlling the system are known
### Is this ideal (back to the seismology case)?
- No ... there is a huge need for regularization
- Why?
- NNets often need to be regularized as they are underdetermined
- There is no surprise here
- The system as described by the "raw" physical equations is <i><u>even more underdetermined</u></i> than most NNets
- Why?
- Look again at that very simple 2D seismic wave simulation
- Imagine it without boundaries
- How many of those "rock elements" / "neurons" actually contribute to what is sensed at the sensor?
- Answer: very few
- But modifying/reformulating the standard problem gives far better results (see [3] and [4])
# Conclusion / Point of Interest
- **In some situations, given a neural-network that describes a system, one can interpret not only the input's and outputs as "physically" meaningful, but one can even map the neural network weights and connections to physically meaningful parameters and equations controlling and describing the system.**
- **If one knows the equations describing the system, there is a chance one can reformulate the problem so that the weights of the NNet can be interpreted as physically meaningful parameters.**
1. J. Virieux and S. Operto
An overview of full-waveform inversion in exploration geophysics
Geophysics v**74.6** (2009)
2. [Neural Networks with Physical Meaning: Representation of Kinematic Equations from Robot Arms Using a Neural Network Topology](https://ieeexplore.ieee.org/document/8588585)
3. K. Kashinath, M. Mustafa, A. Albert et al.
Physics-informed machine learning: case studies for weather and climate modelling
Philosophical Transactions of the Royal Society (15 Feb. 2021)
Available url: https://royalsocietypublishing.org/doi/10.1098/rsta.2020.0093 (as of 2022-01-13)
4. Y. Wu and Y. Lin
InversionNet: A Real-Time and Accurate Full Waveform Inversion with CNNs and continuous CRFs
IEEE TRANSACTIONS ON COMPUTATIONAL IMAGING (5 Jan 2019)
(Available URL: https://arxiv.org/abs/1811.07875 as of 17 Jan 2022)
| 08cda0e44a6f534a87ee16f2686537fe3c69e9a0 | 112,245 | ipynb | Jupyter Notebook | NNet_weight_interpretation.ipynb | Vincent-de-Comarmond/phys-wght-interp | fdb971f81ec9415c4e148b2195b8db67a3ae9974 | [
"MIT"
]
| null | null | null | NNet_weight_interpretation.ipynb | Vincent-de-Comarmond/phys-wght-interp | fdb971f81ec9415c4e148b2195b8db67a3ae9974 | [
"MIT"
]
| null | null | null | NNet_weight_interpretation.ipynb | Vincent-de-Comarmond/phys-wght-interp | fdb971f81ec9415c4e148b2195b8db67a3ae9974 | [
"MIT"
]
| null | null | null | 144.088575 | 36,644 | 0.872654 | true | 3,004 | Qwen/Qwen-72B | 1. YES
2. YES | 0.746139 | 0.845942 | 0.631191 | __label__eng_Latn | 0.989774 | 0.304798 |
# Google Page Rank Algorithm
In this notebook, we learn and code up a simplified version of Google's Page Rank Algorithm, which is a direct application of Eigenvectors and Eigenvalues we learnt in Linear Algebra.
Reference to the original paper: $\href{http://ilpubs.stanford.edu:8090/422/1/1999-66.pdf}{here}$
Reference to blog:
$\href{https://www.dhruvonmath.com/2019/03/20/pagerank/}{here}$
Google Page Rank Algorithm predicts a rank for each webpage on the internet. This rank depends on the number of ingoing and outgoing links to a webpage.
You can think of rank of a webpage as follows: If a web page gets rank $1$, it means that a random web searcher who clicks links randomly would spend the most amount of time in this web page. If a web page gets rank $2$, it means that a random web searcher who keeps clicking links randomly would spend the second most amount of time in this web page and so on and so forth.
Given the structure of the web, it is very obvious to model the web as a Graph Data structure. The nodes of the graph are the web pages. Edges of the graph represent links between the web pages.
Note that the graph is a directed graph. There maybe a link from webpage 'A' to webpage 'B', it may not always be the case that there exists a link from webpage 'B' to webpage 'A'.
A graph with $N$ nodes can be represented with an $N \times N$ matrix. This matrix is known as the adjacency matrix.
$A_{ij}$ represents the weight of the edge connecting the $j^{th}$ node to the $i^{th}$ node.
We can think of this adjacency matrix as concatenation of $N$ column vectors. The $i^{th}$ column defines the edges from node $i$ to all other nodes.
**Exercise:**
Can you draw the graph corresponding to the following adjacency matrix?
\begin{equation}
A = \begin{pmatrix}
0 & 1/2 & 0 & 0\\
1/3 & 0 & 0 & 1/2\\
1/3 & 0 & 0 & 1/2\\
1/3 & 1/2 & 1 & 0\\
\end{pmatrix}
\end{equation}
**Exercise:**
Can you write down the adjacency matrix for the following graph?
Let us normalize each column of the adjacency matrix so that the entries sum up to $1$. This is because we want to output the rank as a probability representing the amount of time spent on that webpage.
We start with equiprobable ranks for all webpages.
We update the ranks($r$) as follows:
\begin{equation}
r(i) = \underset{j}{\sum} r(j) * A(i, j)
\end{equation}
\begin{equation}
r' = A.r
\end{equation}
The above is a recursive definition.
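As a quick illustration of one such update (a standalone sketch, not part of the assignment functions below), applying $r' = A.r$ once to the column-normalized matrix from the first exercise redistributes the equiprobable starting ranks along the links:
```python
import numpy as np

# Column-normalized adjacency matrix from the first exercise above
A = np.array([[0.,   1./2, 0., 0.  ],
              [1./3, 0.,   0., 1./2],
              [1./3, 0.,   0., 1./2],
              [1./3, 1./2, 1., 0.  ]])

r = np.array([0.25, 0.25, 0.25, 0.25])  # equiprobable starting ranks
r_new = A @ r                           # one update: r' = A.r
print(r_new)                            # updated ranks, still summing to 1
```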
```python
import numpy as np
```
```python
# Complete the below function to return A after normalizing
# each of it's columns
# Normalizing a column means sum of entries in each column adds to 1.
# PLEASE USE VECTORISED CODE FOR EFFICIENCY
def normalize_columns(A):
sums = A.sum(axis=0)
return (A/sums)
```
```python
A = np.array([[0, 0, 1, 1], [1, 0, 0, 0], [1, 1, 0, 1], [1, 1, 0, 0]])
print(normalize_columns(A))
```
[[0. 0. 1. 0.5 ]
[0.33333333 0. 0. 0. ]
[0.33333333 0.5 0. 0.5 ]
[0.33333333 0.5 0. 0. ]]
**Expected Output:**
[[0. 0. 1. 0.5 ]
[0.33333333 0. 0. 0. ]
[0.33333333 0.5 0. 0.5 ]
[0.33333333 0.5 0. 0. ]]
```python
# Complete the below function to take a matrix A and a vector r.
# Return the updated rank.
# PLEASE USE VECTORISED CODE WITHOUT LOOPS
def update_rank(A, r):
r = np.dot(A,r)
return r
```
```python
# Complete the below function to check if two vectors a and b are equal
# Since we are dealing with real numbers,
# we say two elements(x and y) are equal if abs(x - y) <= epsilon
# PLEASE USE VECTORISED CODE WITHOUT LOOPS
ep = 1e-8
def check_equality(a, b):
val = np.abs(a - b) <= ep
if(np.any(val[:]== False)):
return False
return True
```
```python
# Complete the below function to compute ranks iteratively until
# ranks stabilise.
# We say ranks become stabilised when the after updation the ranks
# do not change.
# Use the functions defined above.
def compute_iteratively(A, initial_rank):
curr_rank = initial_rank
prev_rank=np.zeros(initial_rank.shape[0])
while(check_equality(curr_rank,prev_rank)!=True):
prev_rank = curr_rank
curr_rank = update_rank(A,curr_rank)
return curr_rank
```
```python
# Complete the below function to compute final ranks at one shot using
# eigen values and eigen vectors
# You may use inbuilt functions to compute eigenvectors and eigenvalues
import scipy.linalg as la
def compute_using_eig(A, initial_rank):
eigenvals, eigenvecs = la.eig(A)
egi = eigenvals.astype(int)
i = np.where(egi==1)
i = i[0][0]
return normalize_columns(eigenvecs[:,i])
```
```python
A = np.array([[0, 1, 0, 0], [1, 0, 0, 1], [1, 0, 0, 1], [1, 1, 1, 0]])
A = normalize_columns(A)
r = np.array([0.25, 0.25, 0.25, 0.25])
print("Rank computed iteratively: \n", compute_iteratively(A, r))
print("Rank computed using eigen values and eigen vectors: \n", compute_using_eig(A, r))
```
Rank computed iteratively:
[0.12 0.24 0.24 0.4 ]
Rank computed using eigen values and eigen vectors:
[0.12 0.24 0.24 0.4 ]
/usr/lib/python3/dist-packages/ipykernel_launcher.py:7: ComplexWarning: Casting complex values to real discards the imaginary part
import sys
**Expected Output:**
Rank computed iteratively:
[0.12 0.24 0.24 0.39999999]
Rank computed using eigen values and eigen vectors:
[0.12 0.24 0.24 0.4 ]
```python
A = np.array([[0, 0, 1, 1], [1, 0, 0, 0], [1, 1, 0, 1], [1, 1, 0, 0]])
A = normalize_columns(A)
r = np.array([0.25, 0.25, 0.25, 0.25])
print("Rank computed iteratively: \n", compute_iteratively(A, r))
print("Rank computed using eigen values and eigen vectors: \n", compute_using_eig(A, r))
```
Rank computed iteratively:
[0.38709677 0.12903226 0.29032258 0.19354839]
Rank computed using eigen values and eigen vectors:
[0.38709677+0.j 0.12903226+0.j 0.29032258+0.j 0.19354839+0.j]
/usr/lib/python3/dist-packages/ipykernel_launcher.py:7: ComplexWarning: Casting complex values to real discards the imaginary part
import sys
**Expected Output:**
Rank computed iteratively:
[0.38709677 0.12903226 0.29032258 0.19354839]
Rank computed using eigen values and eigen vectors:
[0.38709677 0.12903226 0.29032258 0.19354839]
| f109cf6edb299e272771fba288461a5b6090e987 | 10,235 | ipynb | Jupyter Notebook | day8/morning/Google Page Rank Algorithm Assignment/Page Rank Algorithm.ipynb | avani17101/CVIT-Workshop | 0339021123b82dfa55c6f6fa4d8c4322ecf7e687 | [
"MIT"
]
| 4 | 2020-06-27T06:38:10.000Z | 2021-06-01T15:37:33.000Z | day8/morning/Google Page Rank Algorithm Assignment/Page Rank Algorithm.ipynb | avani17101/CVIT-Workshop | 0339021123b82dfa55c6f6fa4d8c4322ecf7e687 | [
"MIT"
]
| 4 | 2020-06-08T18:41:11.000Z | 2020-07-27T10:25:24.000Z | day8/morning/Google Page Rank Algorithm Assignment/Page Rank Algorithm.ipynb | avani17101/CVIT-Workshop | 0339021123b82dfa55c6f6fa4d8c4322ecf7e687 | [
"MIT"
]
| null | null | null | 30.46131 | 383 | 0.542648 | true | 1,933 | Qwen/Qwen-72B | 1. YES
2. YES | 0.94079 | 0.91118 | 0.857229 | __label__eng_Latn | 0.953623 | 0.829962 |
# GPyTorch Regression Tutorial
<a href="https://colab.research.google.com/github/jwangjie/gpytorch/blob/master/examples/01_Exact_GPs/Simple_GP_Regression.ipynb" target="_parent"></a>
## Introduction
In this notebook, we demonstrate many of the design features of GPyTorch using the simplest example, training an RBF kernel Gaussian process on a simple function. We'll be modeling the function
\begin{align}
y &= \sin(2\pi x) + \epsilon \\
\epsilon &\sim \mathcal{N}(0, 0.04)
\end{align}
with 100 training examples, and testing on 51 test examples.
**Note:** this notebook is not necessarily intended to teach the mathematical background of Gaussian processes, but rather how to train a simple one and make predictions in GPyTorch. For a mathematical treatment, Chapter 2 of Gaussian Processes for Machine Learning provides a very thorough introduction to GP regression (this entire text is highly recommended): http://www.gaussianprocess.org/gpml/chapters/RW2.pdf
```python
# COMMENT this if not used in colab
!pip3 install gpytorch
```
Collecting gpytorch
Downloading https://files.pythonhosted.org/packages/9c/5f/ce79e35c1a36deb25a0eac0f67bfe85fb8350eb8e19223950c3d615e5e9a/gpytorch-1.0.1.tar.gz (229kB)
|████████████████████████████████| 235kB 2.8MB/s
Building wheels for collected packages: gpytorch
Building wheel for gpytorch (setup.py) ... done
Created wheel for gpytorch: filename=gpytorch-1.0.1-py2.py3-none-any.whl size=390441 sha256=4c1c86a4228d2a6b7a30cf734f9a1b61de06c54de6177c0c84e34ae7d7b02939
Stored in directory: /root/.cache/pip/wheels/10/2f/7a/3328e5713d796daeec2ce8ded141d5f3837253fc3c2a5c62e0
Successfully built gpytorch
Installing collected packages: gpytorch
Successfully installed gpytorch-1.0.1
```python
import math
import torch
import gpytorch
from matplotlib import pyplot as plt
%matplotlib inline
%load_ext autoreload
%autoreload 2
```
### Set up training data
In the next cell, we set up the training data for this example. We'll be using 100 regularly spaced points on [0,1] which we evaluate the function on and add Gaussian noise to get the training labels.
```python
# Training data is 100 points in [0,1] inclusive regularly spaced
train_x = torch.linspace(0, 1, 100)
# True function is sin(2*pi*x) with Gaussian noise
train_y = torch.sin(train_x * (2 * math.pi)) + torch.randn(train_x.size()) * math.sqrt(0.04)
```
```python
plt.plot(train_x.numpy(), train_y.numpy())
plt.show()
```
## Setting up the model
The next cell demonstrates the most critical features of a user-defined Gaussian process model in GPyTorch. Building a GP model in GPyTorch is different in a number of ways.
First in contrast to many existing GP packages, we do not provide full GP models for the user. Rather, we provide *the tools necessary to quickly construct one*. This is because we believe, analogous to building a neural network in standard PyTorch, it is important to have the flexibility to include whatever components are necessary. As can be seen in more complicated examples, this allows the user great flexibility in designing custom models.
For most GP regression models, you will need to construct the following GPyTorch objects:
1. A **GP Model** (`gpytorch.models.ExactGP`) - This handles most of the inference.
1. A **Likelihood** (`gpytorch.likelihoods.GaussianLikelihood`) - This is the most common likelihood used for GP regression.
1. A **Mean** - This defines the prior mean of the GP. (If you don't know which mean to use, a `gpytorch.means.ConstantMean()` is a good place to start.)
1. A **Kernel** - This defines the prior covariance of the GP. (If you don't know which kernel to use, a `gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())` is a good place to start).
1. A **MultivariateNormal** Distribution (`gpytorch.distributions.MultivariateNormal`) - This is the object used to represent multivariate normal distributions.
### The GP Model
The components of a user built (Exact, i.e. non-variational) GP model in GPyTorch are, broadly speaking:
1. An `__init__` method that takes the training data and a likelihood, and constructs whatever objects are necessary for the model's `forward` method. This will most commonly include things like a mean module and a kernel module.
2. A `forward` method that takes in some $n \times d$ data `x` and returns a `MultivariateNormal` with the *prior* mean and covariance evaluated at `x`. In other words, we return the vector $\mu(x)$ and the $n \times n$ matrix $K_{xx}$ representing the prior mean and covariance matrix of the GP.
This specification leaves a large amount of flexibility when defining a model. For example, to compose two kernels via addition, you can either add the kernel modules directly:
```python
self.covar_module = ScaleKernel(RBFKernel() + WhiteNoiseKernel())
```
Or you can add the outputs of the kernel in the forward method:
```python
covar_x = self.rbf_kernel_module(x) + self.white_noise_module(x)
```
```python
# We will use the simplest form of GP model, exact inference
class ExactGPModel(gpytorch.models.ExactGP):
def __init__(self, train_x, train_y, likelihood):
super(ExactGPModel, self).__init__(train_x, train_y, likelihood)
self.mean_module = gpytorch.means.ConstantMean()
self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())
def forward(self, x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
# initialize likelihood and model
likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = ExactGPModel(train_x, train_y, likelihood)
```
### Plot before optimizing the hyperparameters
Let's take a look at the model parameters. More information is <a href="https://colab.research.google.com/github/jwangjie/gpytorch/blob/master/examples/00_Basic_Usage/Hyperparameters.ipynb">here</a>.
```python
model.state_dict()
```
OrderedDict([('likelihood.noise_covar.raw_noise', tensor([0.])),
('mean_module.constant', tensor([0.])),
('covar_module.raw_outputscale', tensor(0.)),
('covar_module.base_kernel.raw_lengthscale', tensor([[0.]]))])
Jump ahead to run [Make predictions with the model](https://colab.research.google.com/github/jwangjie/gpytorch/blob/master/examples/01_Exact_GPs/Simple_GP_Regression.ipynb#scrollTo=S1gMlb1TCM7i), and then run `Plot the model fit`.
```python
# Get into evaluation (predictive posterior) mode
model.eval()
likelihood.eval()
# Test points are regularly spaced along [0,1]
# Make predictions by feeding model through likelihood
with torch.no_grad(), gpytorch.settings.fast_pred_var():
test_x = torch.linspace(0, 1, 51)
observed_pred = likelihood(model(test_x))
with torch.no_grad():
# Initialize plot
f, ax = plt.subplots(1, 1, figsize=(8, 6))
# Get upper and lower confidence bounds
lower, upper = observed_pred.confidence_region()
# Plot training data as black stars
ax.plot(train_x.numpy(), train_y.numpy(), 'k*')
    # Plot predictive means as red line
ax.plot(test_x.numpy(), observed_pred.mean.numpy(), 'r')
# Shade between the lower and upper confidence bounds
ax.fill_between(test_x.numpy(), lower.numpy(), upper.numpy(), alpha=0.3)
ax.set_ylim([-3, 3])
ax.legend(['Observed Data', 'Mean', 'Confidence'])
```
### Model modes
Like most PyTorch modules, the `ExactGP` has a `.train()` and `.eval()` mode.
- `.train()` mode is for optimizing model hyperparameters.
- `.eval()` mode is for computing predictions through the model posterior.
## Training the model
In the next cell, we handle using Type-II MLE to train the hyperparameters of the Gaussian process.
The most obvious difference here compared to many other GP implementations is that, as in standard PyTorch, the core training loop is written by the user. In GPyTorch, we make use of the standard PyTorch optimizers as from `torch.optim`, and all trainable parameters of the model should be of type `torch.nn.Parameter`. Because GP models directly extend `torch.nn.Module`, calls to methods like `model.parameters()` or `model.named_parameters()` function as you might expect coming from PyTorch.
In most cases, the boilerplate code below will work well. It has the same basic components as the standard PyTorch training loop:
1. Zero all parameter gradients
2. Call the model and compute the loss
3. Call backward on the loss to fill in gradients
4. Take a step on the optimizer
However, defining custom training loops allows for greater flexibility. For example, it is easy to save the parameters at each step of training, or use different learning rates for different parameters (which may be useful in deep kernel learning for example).
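As a minimal sketch of that second idea (the parameter split and learning rates here are illustrative choices, not part of this tutorial's loop), standard PyTorch parameter groups let you give the likelihood noise a smaller step than the mean and kernel hyperparameters of the model defined above:
```python
# Group parameters by name: the ExactGP registers the likelihood as a submodule,
# so its parameters appear under the 'likelihood.' prefix seen in state_dict() above.
noise_params = [p for n, p in model.named_parameters() if n.startswith('likelihood')]
other_params = [p for n, p in model.named_parameters() if not n.startswith('likelihood')]

optimizer = torch.optim.Adam([
    {'params': other_params, 'lr': 0.1},   # mean and kernel hyperparameters
    {'params': noise_params, 'lr': 0.01},  # likelihood noise takes smaller steps
])
```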
```python
# this is for running the notebook in our testing framework
import os
smoke_test = ('CI' in os.environ)
training_iter = 2 if smoke_test else 50
# Find optimal model hyperparameters
model.train()
likelihood.train()
# Use the adam optimizer
optimizer = torch.optim.Adam([
{'params': model.parameters()}, # Includes GaussianLikelihood parameters
], lr=0.1)
# "Loss" for GPs - the marginal log likelihood
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
for i in range(training_iter):
# Zero gradients from previous iteration
optimizer.zero_grad()
# Output from model
output = model(train_x)
# Calc loss and backprop gradients
loss = -mll(output, train_y)
loss.backward()
print('Iter %d/%d - Loss: %.3f lengthscale: %.3f noise: %.3f' % (
i + 1, training_iter, loss.item(),
model.covar_module.base_kernel.lengthscale.item(),
model.likelihood.noise.item()
))
optimizer.step()
```
Iter 1/50 - Loss: 0.906 lengthscale: 0.693 noise: 0.693
Iter 2/50 - Loss: 0.873 lengthscale: 0.644 noise: 0.644
Iter 3/50 - Loss: 0.839 lengthscale: 0.598 noise: 0.598
Iter 4/50 - Loss: 0.802 lengthscale: 0.554 noise: 0.554
Iter 5/50 - Loss: 0.762 lengthscale: 0.513 noise: 0.513
Iter 6/50 - Loss: 0.717 lengthscale: 0.475 noise: 0.474
Iter 7/50 - Loss: 0.671 lengthscale: 0.438 noise: 0.437
Iter 8/50 - Loss: 0.624 lengthscale: 0.404 noise: 0.402
Iter 9/50 - Loss: 0.579 lengthscale: 0.371 noise: 0.370
Iter 10/50 - Loss: 0.537 lengthscale: 0.341 noise: 0.339
Iter 11/50 - Loss: 0.498 lengthscale: 0.315 noise: 0.311
Iter 12/50 - Loss: 0.462 lengthscale: 0.292 noise: 0.284
Iter 13/50 - Loss: 0.427 lengthscale: 0.273 noise: 0.260
Iter 14/50 - Loss: 0.393 lengthscale: 0.258 noise: 0.237
Iter 15/50 - Loss: 0.359 lengthscale: 0.247 noise: 0.216
Iter 16/50 - Loss: 0.325 lengthscale: 0.238 noise: 0.197
Iter 17/50 - Loss: 0.291 lengthscale: 0.233 noise: 0.180
Iter 18/50 - Loss: 0.257 lengthscale: 0.229 noise: 0.163
Iter 19/50 - Loss: 0.223 lengthscale: 0.228 noise: 0.149
Iter 20/50 - Loss: 0.190 lengthscale: 0.229 noise: 0.135
Iter 21/50 - Loss: 0.157 lengthscale: 0.232 noise: 0.123
Iter 22/50 - Loss: 0.126 lengthscale: 0.236 noise: 0.112
Iter 23/50 - Loss: 0.096 lengthscale: 0.241 noise: 0.102
Iter 24/50 - Loss: 0.067 lengthscale: 0.248 noise: 0.093
Iter 25/50 - Loss: 0.040 lengthscale: 0.256 noise: 0.084
Iter 26/50 - Loss: 0.016 lengthscale: 0.264 noise: 0.077
Iter 27/50 - Loss: -0.005 lengthscale: 0.274 noise: 0.070
Iter 28/50 - Loss: -0.024 lengthscale: 0.283 noise: 0.064
Iter 29/50 - Loss: -0.039 lengthscale: 0.292 noise: 0.059
Iter 30/50 - Loss: -0.051 lengthscale: 0.300 noise: 0.054
Iter 31/50 - Loss: -0.059 lengthscale: 0.307 noise: 0.050
Iter 32/50 - Loss: -0.065 lengthscale: 0.311 noise: 0.046
Iter 33/50 - Loss: -0.069 lengthscale: 0.312 noise: 0.043
Iter 34/50 - Loss: -0.071 lengthscale: 0.310 noise: 0.040
Iter 35/50 - Loss: -0.072 lengthscale: 0.306 noise: 0.038
Iter 36/50 - Loss: -0.071 lengthscale: 0.300 noise: 0.036
Iter 37/50 - Loss: -0.069 lengthscale: 0.293 noise: 0.034
Iter 38/50 - Loss: -0.066 lengthscale: 0.285 noise: 0.033
Iter 39/50 - Loss: -0.063 lengthscale: 0.278 noise: 0.032
Iter 40/50 - Loss: -0.059 lengthscale: 0.271 noise: 0.031
Iter 41/50 - Loss: -0.056 lengthscale: 0.265 noise: 0.030
Iter 42/50 - Loss: -0.054 lengthscale: 0.260 noise: 0.030
Iter 43/50 - Loss: -0.053 lengthscale: 0.256 noise: 0.029
Iter 44/50 - Loss: -0.053 lengthscale: 0.253 noise: 0.029
Iter 45/50 - Loss: -0.054 lengthscale: 0.251 noise: 0.030
Iter 46/50 - Loss: -0.056 lengthscale: 0.250 noise: 0.030
Iter 47/50 - Loss: -0.058 lengthscale: 0.250 noise: 0.031
Iter 48/50 - Loss: -0.061 lengthscale: 0.251 noise: 0.031
Iter 49/50 - Loss: -0.064 lengthscale: 0.252 noise: 0.032
Iter 50/50 - Loss: -0.066 lengthscale: 0.253 noise: 0.033
```python
model.state_dict()
```
OrderedDict([('likelihood.noise_covar.raw_noise', tensor([-3.3824])),
('mean_module.constant', tensor([0.0588])),
('covar_module.raw_outputscale', tensor(-0.4108)),
('covar_module.base_kernel.raw_lengthscale',
tensor([[-1.2372]]))])
## Make predictions with the model
In the next cell, we make predictions with the model. To do this, we simply put the model and likelihood in eval mode, and call both modules on the test data.
Just as a user defined GP model returns a `MultivariateNormal` containing the prior mean and covariance from forward, a trained GP model in eval mode returns a `MultivariateNormal` containing the posterior mean and covariance. Thus, getting the predictive mean and variance, and then sampling functions from the GP at the given test points could be accomplished with calls like:
```python
f_preds = model(test_x)
y_preds = likelihood(model(test_x))
f_mean = f_preds.mean
f_var = f_preds.variance
f_covar = f_preds.covariance_matrix
f_samples = f_preds.sample(sample_shape=torch.Size([1000]))
```
The `gpytorch.settings.fast_pred_var` context is not needed, but here we are giving a preview of using one of our cool features, getting faster predictive distributions using [LOVE](https://arxiv.org/abs/1803.06058).
```python
# Get into evaluation (predictive posterior) mode
model.eval()
likelihood.eval()
# Test points are regularly spaced along [0,1]
# Make predictions by feeding model through likelihood
with torch.no_grad(), gpytorch.settings.fast_pred_var():
test_x = torch.linspace(0, 1, 51)
observed_pred = likelihood(model(test_x))
```
## Plot the model fit
In the next cell, we plot the mean and confidence region of the Gaussian process model. The `confidence_region` method is a helper method that returns 2 standard deviations above and below the mean.
```python
with torch.no_grad():
# Initialize plot
f, ax = plt.subplots(1, 1, figsize=(8, 6))
# Get upper and lower confidence bounds
lower, upper = observed_pred.confidence_region()
# Plot training data as black stars
ax.plot(train_x.numpy(), train_y.numpy(), 'k*')
# Plot predictive means as blue line
ax.plot(test_x.numpy(), observed_pred.mean.numpy(), 'r')
# Shade between the lower and upper confidence bounds
ax.fill_between(test_x.numpy(), lower.numpy(), upper.numpy(), alpha=0.3)
ax.set_ylim([-3, 3])
ax.legend(['Observed Data', 'Mean', 'Confidence'])
```
```python
```
| addaabc764b7f6f2b769f1222d4ab1729fd6342d | 103,854 | ipynb | Jupyter Notebook | examples/01_Exact_GPs/Simple_GP_Regression.ipynb | jwangjie/gpytorch | 15979dacf1997af7daf0fdeddbdbfcef0730b007 | [
"MIT"
]
| 2 | 2021-10-30T03:50:28.000Z | 2022-02-22T22:01:14.000Z | examples/01_Exact_GPs/Simple_GP_Regression.ipynb | jwangjie/gpytorch | 15979dacf1997af7daf0fdeddbdbfcef0730b007 | [
"MIT"
]
| null | null | null | examples/01_Exact_GPs/Simple_GP_Regression.ipynb | jwangjie/gpytorch | 15979dacf1997af7daf0fdeddbdbfcef0730b007 | [
"MIT"
]
| 3 | 2020-09-18T18:58:12.000Z | 2021-05-27T15:39:00.000Z | 161.514774 | 28,066 | 0.855875 | true | 4,546 | Qwen/Qwen-72B | 1. YES
2. YES | 0.712232 | 0.849971 | 0.605377 | __label__eng_Latn | 0.932383 | 0.244824 |
# Problem Setup
In the cart-pole (CartPole) game, we want to use reinforcement learning to train an agent that keeps moving the cart left and right for as long as possible, so that the pole on the cart does not fall over. We first define the CartPole game:
The CartPole game is the environment of the reinforcement learning model. It interacts with the agent, updates the state in real time, and defines the reward function internally. The state is defined as follows:
$$
state \in \mathbb{R}^4
$$
Each dimension of the state represents:
- Cart position, ranging from -2.4 to 2.4
- Cart velocity, ranging from negative infinity to positive infinity
- Pole angle, ranging from -41.8° to 41.8°
- Pole angular velocity, ranging from negative infinity to positive infinity
The action is a 2-dimensional vector whose dimensions correspond to moving left and moving right.
$$
action \in \mathbb{R}^2
$$
Each time the cart moves left or right it earns 1 point; this is the reward function. However, the game ends if the pole angle exceeds ±12°, the cart position exceeds ±2.4, or the number of actions exceeds 200. We want the score at the end of the game to be as large as possible.
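A quick way to inspect these two spaces, assuming the standard `gym` CartPole-v0 environment that this notebook uses later:
```python
import gym

env = gym.make('CartPole-v0')
print(env.observation_space)        # 4-dimensional Box: position, velocity, angle, angular velocity
print(env.action_space)             # Discrete(2): push left or push right
print(env.observation_space.shape)  # (4,)
print(env.action_space.n)           # 2
```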
# Policy Gradient
We design a network whose input is the state and whose output is the probability of each action, and train it iteratively with the policy gradient (PolicyGradient) method.
We first define $\tau$ as the trajectory of one episode:
$$
\tau = \{s_1, a_1, r_1, \cdots, s_T, a_T, r_T \} \\
$$
$R(\tau)$ is the sum of the rewards along this trajectory:
$$
R(\tau) = \sum^{T}_{t=1} r_t
$$
Intuitively, we want to maximize:
$$
\bar{R}_{\theta} = \sum_{\tau} R(\tau) P(\tau \lvert \theta) \approx \frac{1}{N} \sum^{N}_{n=1} R(\tau^{n})
$$
We first take the gradient of $\bar{R}_{\theta}$:
$$
\begin{align}
\nabla \bar{R}_{\theta} &= \sum_{\tau} R(\tau) \nabla P(\tau \lvert \theta) \\
&= \sum_{\tau} R(\tau) P(\tau \lvert \theta) \cdot \frac{\nabla P(\tau \lvert \theta)}{P(\tau \lvert \theta)} \\
&= \sum_{\tau} R(\tau) P(\tau \lvert \theta) \cdot \nabla \log P(\tau \lvert \theta) \\
&\approx \frac{1}{N} \sum^{N}_{n=1} R(\tau^n) \cdot \nabla \log P(\tau^n \lvert \theta)
\end{align}
$$
The probability $P(\tau^n \lvert \theta)$ can be expanded as follows:
$$
\begin{align}
p(\tau^n \lvert \theta) &= p(s_1)p(a_1 \lvert s_1, \theta)p(r_1, s_2 \lvert s_1, a_1)p(a_2 \lvert s_2, \theta) \cdots p(a_t \lvert s_t, \theta)p(r_t, s_{t+1} \lvert s_t, a_t) \\
&= p(s_1) \prod_{t} p(a_t \lvert s_t, \theta)p(r_t, s_{t+1} \lvert s_t, a_t)
\end{align}
$$
Substituting the above into $\log P(\tau^n \lvert \theta)$:
$$
\begin{align}
\nabla \log P(\tau^n \lvert \theta) &= \nabla \log \left (p(s_1) \prod_{t} p(a_t \lvert s_t, \theta)p(r_t, s_{t+1} \lvert s_t, a_t) \right) \\
&= \nabla \log p(s_1) + \sum^{T}_{t=1} \nabla \log p(a_t \lvert s_t, \theta) + \sum^{T}_{t=1} \nabla p(r_t, s_{t+1} \lvert s_t, a_t) \\
&= \sum^{T}_{t=1} \nabla \log p(a_t \lvert s_t, \theta)
\end{align}
$$
Finally, $\nabla \bar{R}_{\theta}$ can be rewritten as:
$$
\begin{align}
\nabla \bar{R}_{\theta} &\approx \frac{1}{N} \sum^{1}_{N} R(\tau^n) \cdot \nabla \log P(\tau^n \lvert \theta) \\
&= \frac{1}{N} \sum^{N}_{n=1} R(\tau^n) \sum^{T_n}_{t=1} \nabla \log p(a_t \lvert s_t, \theta) \\
&= \frac{1}{N} \sum^{N}_{n=1} \sum^{T_n}_{t=1} R(\tau^n) \nabla \log p(a_t \lvert s_t, \theta)
\end{align}
$$
In essence, this minimizes the cross entropy between the actions sampled over N episodes and the actions output by the network, weighted by $R(\tau^n)$:
$$
- \sum^{N}_{n=1} R(\tau^n) \cdot a_i \log p_i
$$
Note that $R(\tau^n)$ is computed differently for different problems. In CartPole we care more about the rewards at the beginning of an episode, because they directly determine whether we get the chance to take as many further actions as possible, so in this problem $R(\tau^n)$ is computed as follows:
```
# Copy r_buffer
r_buffer = self.r_buffer
# Init r_tau
r_tau = 0
# Calculate r_tau
for index in reversed(range(0, len(r_buffer))):
    r_tau = r_tau * self.gamma + r_buffer[index]
    self.r_buffer[index] = r_tau
```
# Code Implementation
First, import the necessary packages:
```python
import tensorflow as tf
import numpy as np
import gym
import sys
sys.path.append('.')
```
Implement the Agent class:
```python
class Agent(object):
def __init__(self, a_space, s_space, **options):
self.session = tf.Session()
self.a_space, self.s_space = a_space, s_space
self.s_buffer, self.a_buffer, self.r_buffer = [], [], []
self._init_options(options)
self._init_input()
self._init_nn()
self._init_op()
def _init_input(self):
self.s = tf.placeholder(tf.float32, [None, self.s_space])
self.r = tf.placeholder(tf.float32, [None, ])
self.a = tf.placeholder(tf.int32, [None, ])
def _init_nn(self):
# Kernel init.
w_init = tf.random_normal_initializer(.0, .3)
# Dense 1.
dense_1 = tf.layers.dense(self.s,
32,
tf.nn.relu,
kernel_initializer=w_init)
# Dense 2.
dense_2 = tf.layers.dense(dense_1,
32,
tf.nn.relu,
kernel_initializer=w_init)
# Action logits.
self.a_logits = tf.layers.dense(dense_2,
self.a_space,
kernel_initializer=w_init)
# Action prob.
self.a_prob = tf.nn.softmax(self.a_logits)
def _init_op(self):
# One hot action.
action_one_hot = tf.one_hot(self.a, self.a_space)
# Calculate cross entropy.
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=action_one_hot, logits=self.a_logits)
self.loss_func = tf.reduce_mean(cross_entropy * self.r)
self.train_op = tf.train.AdamOptimizer(self.learning_rate).minimize(self.loss_func)
self.session.run(tf.global_variables_initializer())
def _init_options(self, options):
try:
self.learning_rate = options['learning_rate']
except KeyError:
self.learning_rate = 0.001
try:
self.gamma = options['gamma']
except KeyError:
self.gamma = 0.95
def predict(self, state):
action_prob = self.session.run(self.a_prob, feed_dict={self.s: state[np.newaxis, :]})
return np.random.choice(range(action_prob.shape[1]), p=action_prob.ravel())
def save_transition(self, state, action, reward):
self.s_buffer.append(state)
self.a_buffer.append(action)
self.r_buffer.append(reward)
def train(self):
# Copy r_buffer
r_buffer = self.r_buffer
# Init r_tau
r_tau = 0
# Calculate r_tau
for index in reversed(range(0, len(r_buffer))):
r_tau = r_tau * self.gamma + r_buffer[index]
self.r_buffer[index] = r_tau
# Minimize loss.
_, loss = self.session.run([self.train_op, self.loss_func], feed_dict={
self.s: self.s_buffer,
self.a: self.a_buffer,
self.r: self.r_buffer,
})
self.s_buffer, self.a_buffer, self.r_buffer = [], [], []
```
# Experiment Results
Initialize the `CartPole` environment with `gym` and run the training:
```python
import matplotlib.pyplot as plt
%matplotlib inline
env = gym.make('CartPole-v0')
env.seed(1)
env = env.unwrapped
model = Agent(env.action_space.n, env.observation_space.shape[0])
r_sum_list, r_episode_sum = [], None
for episode in range(500):
# Reset env.
s, r_episode = env.reset(), 0
# Start episode.
while True:
# if episode > 80:
# env.render()
# Predict action.
a = model.predict(s)
# Iteration.
s_n, r, done, _ = env.step(a)
if done:
r = -5
r_episode += r
# Save transition.
model.save_transition(s, a, r)
s = s_n
if done:
if r_episode_sum is None:
r_episode_sum = sum(model.r_buffer)
else:
r_episode_sum = r_episode_sum * 0.99 + sum(model.r_buffer) * 0.01
r_sum_list.append(r_episode_sum)
break
# Start train.
model.train()
if episode % 50 == 0:
print("Episode: {} | Reward is: {}".format(episode, r_episode))
```
WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype.
Episode: 0 | Reward is: 17.0
Episode: 50 | Reward is: 71.0
Episode: 100 | Reward is: 26.0
Episode: 150 | Reward is: 50.0
Episode: 200 | Reward is: 102.0
Episode: 250 | Reward is: 194.0
Episode: 300 | Reward is: 197.0
Episode: 350 | Reward is: 71.0
Episode: 400 | Reward is: 147.0
Episode: 450 | Reward is: 182.0
Finally, plot the total reward against the episode number:
```python
plt.plot(np.arange(len(r_sum_list)), r_sum_list)
plt.title('Actor Only on CartPole')
plt.xlabel('Episode')
plt.ylabel('Total Reward')
plt.show()
```
| 611d2fd6c368bf41c1c4612a5f263f8883a71915 | 30,868 | ipynb | Jupyter Notebook | note/PolicyGradient.ipynb | Ceruleanacg/Learning-Notes | 1b2718dc85e622e35670fffbb525bb50d385f9a3 | [
"MIT"
]
| 95 | 2018-06-01T03:57:39.000Z | 2021-12-31T04:51:21.000Z | note/PolicyGradient.ipynb | Ceruleanacg/Descent | 1b2718dc85e622e35670fffbb525bb50d385f9a3 | [
"MIT"
]
| 1 | 2020-02-28T13:27:15.000Z | 2020-02-28T13:27:15.000Z | note/PolicyGradient.ipynb | Ceruleanacg/Descent | 1b2718dc85e622e35670fffbb525bb50d385f9a3 | [
"MIT"
]
| 15 | 2018-06-24T07:33:29.000Z | 2020-10-03T04:12:27.000Z | 60.054475 | 17,316 | 0.751782 | true | 2,850 | Qwen/Qwen-72B | 1. YES
2. YES | 0.743168 | 0.695958 | 0.517214 | __label__eng_Latn | 0.125451 | 0.039991 |
# Neural Network Fundamentals
## Gradient Descent Introduction:
https://www.youtube.com/watch?v=IxBYhjS295w
```python
from IPython.display import YouTubeVideo
YouTubeVideo("IxBYhjS295w")
```
```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
np.random.seed(1)
%matplotlib inline
np.random.seed(1)
```
```python
N = 100
x = np.random.rand(N,1)*5
# Let the following command be the true function
y = 2.3 + 5.1*x
# Get some noisy observations
y_obs = y + 2*np.random.randn(N,1)
```
```python
plt.scatter(x,y_obs,label='Observations')
plt.plot(x,y,c='r',label='True function')
plt.legend()
plt.show()
```
## Gradient Descent
We are trying to minimise $\sum \xi_i^2$.
\begin{align}
\mathcal{L} & = \frac{1}{N}\sum_{i=1}^N (y_i-f(x_i,w,b))^2 \\
\frac{\delta\mathcal{L}}{\delta w} & = -\frac{1}{N}\sum_{i=1}^N 2(y_i-f(x_i,w,b))\frac{\delta f(x_i,w,b)}{\delta w} \\
& = -\frac{1}{N}\sum_{i=1}^N 2\xi_i\frac{\delta f(x_i,w,b)}{\delta w}
\end{align}
where $\xi_i$ is the error term $y_i-f(x_i,w,b)$ and
$$
\frac{\delta f(x_i,w,b)}{\delta w} = x_i
$$
Similar expression can be found for $\frac{\delta\mathcal{L}}{\delta b}$ (exercise).
Finally the weights can be updated as $w_{new} = w_{current} - \gamma \frac{\delta\mathcal{L}}{\delta w}$ where $\gamma$ is a learning rate between 0 and 1.
```python
# Helper functions
def f(w,b):
return w*x+b
def loss_function(e):
L = np.sum(np.square(e))/N
return L
def dL_dw(e,w,b):
return -2*np.sum(e*df_dw(w,b))/N
def df_dw(w,b):
return x
def dL_db(e,w,b):
return -2*np.sum(e*df_db(w,b))/N
def df_db(w,b):
return np.ones(x.shape)
```
```python
# The Actual Gradient Descent
def gradient_descent(iter=100,gamma=0.1):
# get starting conditions
w = 10*np.random.randn()
b = 10*np.random.randn()
params = []
loss = np.zeros((iter,1))
for i in range(iter):
# from IPython.core.debugger import Tracer; Tracer()()
params.append([w,b])
e = y_obs - f(w,b) # Really important that you use y_obs and not y (you do not have access to true y)
loss[i] = loss_function(e)
#update parameters
w_new = w - gamma*dL_dw(e,w,b)
b_new = b - gamma*dL_db(e,w,b)
w = w_new
b = b_new
return params, loss
params, loss = gradient_descent()
```
```python
iter=100
gamma = 0.1
w = 10*np.random.randn()
b = 10*np.random.randn()
params = []
loss = np.zeros((iter,1))
for i in range(iter):
# from IPython.core.debugger import Tracer; Tracer()()
params.append([w,b])
e = y_obs - f(w,b) # Really important that you use y_obs and not y (you do not have access to true y)
loss[i] = loss_function(e)
#update parameters
w_new = w - gamma*dL_dw(e,w,b)
b_new = b - gamma*dL_db(e,w,b)
w = w_new
b = b_new
```
```python
dL_dw(e,w,b)
```
0.007829640537794828
```python
plt.plot(loss)
```
```python
params = np.array(params)
plt.plot(params[:,0],params[:,1])
plt.title('Gradient descent')
plt.xlabel('w')
plt.ylabel('b')
plt.show()
```
```python
params[-1]
```
array([4.98991104, 2.72258102])
## Multivariate case
We are trying to minimise $\sum \xi_i^2$. This time $ f = Xw$ where $w$ is Dx1 and $X$ is NxD.
\begin{align}
\mathcal{L} & = \frac{1}{N} (y-Xw)^T(y-Xw) \\
\frac{\delta\mathcal{L}}{\delta w} & = -\frac{1}{N} 2\left(\frac{\delta f(X,w)}{\delta w}\right)^T(y-Xw) \\
& = -\frac{2}{N} \left(\frac{\delta f(X,w)}{\delta w}\right)^T\xi
\end{align}
where $\xi_i$ is the error term $y_i-f(X,w)$ and
$$
\frac{\delta f(X,w)}{\delta w} = X
$$
Finally the weights can be updated as $w_{new} = w_{current} - \gamma \frac{\delta\mathcal{L}}{\delta w}$ where $\gamma$ is a learning rate between 0 and 1.
```python
N = 1000
D = 5
X = 5*np.random.randn(N,D)
w = np.random.randn(D,1)
y = X.dot(w)
y_obs = y + np.random.randn(N,1)
```
```python
w
```
array([[ 0.93774813],
[-2.62540124],
[ 0.74616483],
[ 0.67411002],
[ 1.0142675 ]])
```python
X.shape
```
(1000, 5)
```python
w.shape
```
(5, 1)
```python
(X*w.T).shape
```
(1000, 5)
```python
# Helper functions
def f(w):
return X.dot(w)
def loss_function(e):
L = e.T.dot(e)/N
return L
def dL_dw(e,w):
return -2*X.T.dot(e)/N
```
```python
def gradient_descent(iter=100,gamma=1e-3):
# get starting conditions
w = np.random.randn(D,1)
params = []
loss = np.zeros((iter,1))
for i in range(iter):
params.append(w)
e = y_obs - f(w) # Really important that you use y_obs and not y (you do not have access to true y)
loss[i] = loss_function(e)
#update parameters
w = w - gamma*dL_dw(e,w)
return params, loss
params, loss = gradient_descent()
```
```python
plt.plot(loss)
```
```python
params[-1]
```
array([[ 0.94792987],
[-2.60989696],
[ 0.72929842],
[ 0.65272494],
[ 1.01038855]])
```python
model = LinearRegression(fit_intercept=False)
model.fit(X,y)
model.coef_.T
```
array([[ 0.93774813],
[-2.62540124],
[ 0.74616483],
[ 0.67411002],
[ 1.0142675 ]])
```python
# compare parameters side by side
np.hstack([params[-1],model.coef_.T])
```
array([[ 0.94792987, 0.93774813],
[-2.60989696, -2.62540124],
[ 0.72929842, 0.74616483],
[ 0.65272494, 0.67411002],
[ 1.01038855, 1.0142675 ]])
## Stochastic Gradient Descent
```python
def dL_dw(X,e,w):
return -2*X.T.dot(e)/len(X)
def gradient_descent(gamma=1e-3, n_epochs=100, batch_size=20, decay=0.9):
epoch_run = int(len(X)/batch_size)
# get starting conditions
w = np.random.randn(D,1)
params = []
loss = np.zeros((n_epochs,1))
for i in range(n_epochs):
params.append(w)
for j in range(epoch_run):
idx = np.random.choice(len(X),batch_size,replace=False)
e = y_obs[idx] - X[idx].dot(w) # Really important that you use y_obs and not y (you do not have access to true y)
#update parameters
w = w - gamma*dL_dw(X[idx],e,w)
loss[i] = e.T.dot(e)/len(e)
gamma = gamma*decay #decay the learning parameter
return params, loss
params, loss = gradient_descent()
```
```python
plt.plot(loss)
```
```python
np.hstack([params[-1],model.coef_.T])
```
array([[ 0.94494132, 0.93774813],
[-2.6276984 , -2.62540124],
[ 0.74654537, 0.74616483],
[ 0.66766209, 0.67411002],
[ 1.00760747, 1.0142675 ]])
```python
```
| 579d49b063b104ea1866841d861d00303db60ae5 | 98,058 | ipynb | Jupyter Notebook | jupyter/Keras_TensorFlow_Course/Lesson 02 - GradientDescent.ipynb | multivacplatform/multivac-dl | 54cb33960ba14f32ed9ac185a4c151a6b72a97ca | [
"MIT"
]
| 1 | 2018-11-24T10:47:49.000Z | 2018-11-24T10:47:49.000Z | jupyter/Keras_TensorFlow_Course/Lesson 02 - GradientDescent.ipynb | multivacplatform/multivac-dl | 54cb33960ba14f32ed9ac185a4c151a6b72a97ca | [
"MIT"
]
| null | null | null | jupyter/Keras_TensorFlow_Course/Lesson 02 - GradientDescent.ipynb | multivacplatform/multivac-dl | 54cb33960ba14f32ed9ac185a4c151a6b72a97ca | [
"MIT"
]
| null | null | null | 138.304654 | 27,052 | 0.88621 | true | 2,212 | Qwen/Qwen-72B | 1. YES
2. YES | 0.9659 | 0.851953 | 0.822901 | __label__eng_Latn | 0.488653 | 0.750207 |
<!--NAVIGATION-->
< [Biological Computing in Python I](05-Python_I.ipynb) | [Main Contents](Index.ipynb) | [Biological Computing in R](07-R.ipynb) >
# Biological Computing in Python II <span class="tocSkip"> <a name="chap:python_II"></a>
>> ...some things in life are bad. They can really make you mad. Other things just make you swear and curse. When you're chewing on life's gristle, don't grumble; give a whistle, and this'll help things turn out for the best. And... always look on the bright side of life...
— Guess who?
<h1>Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Numerical-computing-in-Python" data-toc-modified-id="Numerical-computing-in-Python-1"><span class="toc-item-num">1 </span>Numerical computing in Python</a></span><ul class="toc-item"><li><span><a href="#Indexing-and-accessing-arrays" data-toc-modified-id="Indexing-and-accessing-arrays-1.1"><span class="toc-item-num">1.1 </span>Indexing and accessing arrays</a></span></li><li><span><a href="#Manipulating-arrays" data-toc-modified-id="Manipulating-arrays-1.2"><span class="toc-item-num">1.2 </span>Manipulating arrays</a></span><ul class="toc-item"><li><span><a href="#Replacing,-adding-or-deleting-elements" data-toc-modified-id="Replacing,-adding-or-deleting-elements-1.2.1"><span class="toc-item-num">1.2.1 </span>Replacing, adding or deleting elements</a></span></li><li><span><a href="#Flattening-or-reshaping-arrays" data-toc-modified-id="Flattening-or-reshaping-arrays-1.2.2"><span class="toc-item-num">1.2.2 </span>Flattening or reshaping arrays</a></span></li></ul></li><li><span><a href="#Pre-allocating-arrays" data-toc-modified-id="Pre-allocating-arrays-1.3"><span class="toc-item-num">1.3 </span>Pre-allocating arrays</a></span></li><li><span><a href="#numpy-matrices" data-toc-modified-id="numpy-matrices-1.4"><span class="toc-item-num">1.4 </span><code>numpy</code> matrices</a></span><ul class="toc-item"><li><span><a href="#Matrix-vector-operations" data-toc-modified-id="Matrix-vector-operations-1.4.1"><span class="toc-item-num">1.4.1 </span>Matrix-vector operations</a></span></li></ul></li></ul></li><li><span><a href="#Two-particularly-useful-scipy-sub-packages" data-toc-modified-id="Two-particularly-useful-scipy-sub-packages-2"><span class="toc-item-num">2 </span>Two particularly useful <code>scipy</code> sub-packages</a></span><ul class="toc-item"><li><span><a href="#sc.stats" data-toc-modified-id="sc.stats-2.1"><span class="toc-item-num">2.1 </span><code>sc.stats</code></a></span></li><li><span><a href="#Numerical-integration-using--scipy" data-toc-modified-id="Numerical-integration-using--scipy-2.2"><span class="toc-item-num">2.2 </span>Numerical integration using <code>scipy</code></a></span><ul class="toc-item"><li><span><a href="#The-Lotka-Volterra-model" data-toc-modified-id="The-Lotka-Volterra-model-2.2.1"><span class="toc-item-num">2.2.1 </span>The Lotka-Volterra model</a></span></li></ul></li></ul></li><li><span><a href="#Plotting-in-Python" data-toc-modified-id="Plotting-in-Python-3"><span class="toc-item-num">3 </span>Plotting in Python</a></span></li><li><span><a href="#Practicals" data-toc-modified-id="Practicals-4"><span class="toc-item-num">4 </span>Practicals</a></span></li><li><span><a href="#The-need-for-speed:-profiling-code" data-toc-modified-id="The-need-for-speed:-profiling-code-5"><span class="toc-item-num">5 </span>The need for speed: profiling code</a></span><ul class="toc-item"><li><span><a href="#Profiling-in-Python" data-toc-modified-id="Profiling-in-Python-5.1"><span class="toc-item-num">5.1 </span>Profiling in Python</a></span></li><li><span><a href="#Quick-profiling-with-timeit" data-toc-modified-id="Quick-profiling-with-timeit-5.2"><span class="toc-item-num">5.2 </span>Quick profiling with <code>timeit</code></a></span></li></ul></li><li><span><a href="#Practicals" data-toc-modified-id="Practicals-6"><span class="toc-item-num">6 </span>Practicals</a></span><ul class="toc-item"><li><span><a href="#Lotka-Volterra-model-problem" data-toc-modified-id="Lotka-Volterra-model-problem-6.1"><span 
class="toc-item-num">6.1 </span>Lotka-Volterra model problem</a></span></li><li><span><a href="#Extra-Credit-problems" data-toc-modified-id="Extra-Credit-problems-6.2"><span class="toc-item-num">6.2 </span>Extra Credit problems</a></span></li></ul></li><li><span><a href="#Networks-in-Python" data-toc-modified-id="Networks-in-Python-7"><span class="toc-item-num">7 </span>Networks in Python</a></span><ul class="toc-item"><li><span><a href="#Food-web-network-example" data-toc-modified-id="Food-web-network-example-7.1"><span class="toc-item-num">7.1 </span>Food web network example</a></span></li></ul></li><li><span><a href="#Practicals" data-toc-modified-id="Practicals-8"><span class="toc-item-num">8 </span>Practicals</a></span></li><li><span><a href="#Regular-expressions-in-Python" data-toc-modified-id="Regular-expressions-in-Python-9"><span class="toc-item-num">9 </span>Regular expressions in Python</a></span><ul class="toc-item"><li><span><a href="#Metacharacters-vs.-regular-characters" data-toc-modified-id="Metacharacters-vs.-regular-characters-9.1"><span class="toc-item-num">9.1 </span>Metacharacters vs. regular characters</a></span></li><li><span><a href="#regex-elements" data-toc-modified-id="regex-elements-9.2"><span class="toc-item-num">9.2 </span>regex elements</a></span></li><li><span><a href="#Regex-in-python" data-toc-modified-id="Regex-in-python-9.3"><span class="toc-item-num">9.3 </span>Regex in <code>python</code></a></span></li></ul></li><li><span><a href="#Practicals:-Some-RegExercises" data-toc-modified-id="Practicals:-Some-RegExercises-10"><span class="toc-item-num">10 </span>Practicals: Some RegExercises</a></span><ul class="toc-item"><li><span><a href="#Grouping-regex-patterns" data-toc-modified-id="Grouping-regex-patterns-10.1"><span class="toc-item-num">10.1 </span>Grouping regex patterns</a></span></li></ul></li><li><span><a href="#Useful-re-commands" data-toc-modified-id="Useful-re-commands-11"><span class="toc-item-num">11 </span>Useful <code>re</code> commands</a></span><ul class="toc-item"><li><span><a href="#Finding-all-matches" data-toc-modified-id="Finding-all-matches-11.1"><span class="toc-item-num">11.1 </span>Finding all matches</a></span></li><li><span><a href="#Finding-in-files" data-toc-modified-id="Finding-in-files-11.2"><span class="toc-item-num">11.2 </span>Finding in files</a></span></li><li><span><a href="#Groups-within-multiple-matches" data-toc-modified-id="Groups-within-multiple-matches-11.3"><span class="toc-item-num">11.3 </span>Groups within multiple matches</a></span></li><li><span><a href="#Extracting-text-from-webpages" data-toc-modified-id="Extracting-text-from-webpages-11.4"><span class="toc-item-num">11.4 </span>Extracting text from webpages</a></span></li><li><span><a href="#Replacing-text" data-toc-modified-id="Replacing-text-11.5"><span class="toc-item-num">11.5 </span>Replacing text</a></span></li></ul></li><li><span><a href="#Practicals" data-toc-modified-id="Practicals-12"><span class="toc-item-num">12 </span>Practicals</a></span><ul class="toc-item"><li><span><a href="#Blackbirds-problem" data-toc-modified-id="Blackbirds-problem-12.1"><span class="toc-item-num">12.1 </span>Blackbirds problem</a></span></li></ul></li><li><span><a href="#Using-Python-to-build-workflows" data-toc-modified-id="Using-Python-to-build-workflows-13"><span class="toc-item-num">13 </span>Using Python to build workflows</a></span><ul class="toc-item"><li><span><a href="#Using-subprocess" data-toc-modified-id="Using-subprocess-13.1"><span 
class="toc-item-num">13.1 </span>Using <code>subprocess</code></a></span><ul class="toc-item"><li><span><a href="#Running-processes" data-toc-modified-id="Running-processes-13.1.1"><span class="toc-item-num">13.1.1 </span>Running processes</a></span></li></ul></li><li><span><a href="#Handling-directory-and-file-paths" data-toc-modified-id="Handling-directory-and-file-paths-13.2"><span class="toc-item-num">13.2 </span>Handling directory and file paths</a></span></li><li><span><a href="#Running-R" data-toc-modified-id="Running-R-13.3"><span class="toc-item-num">13.3 </span>Running <code>R</code></a></span></li></ul></li><li><span><a href="#Practicals" data-toc-modified-id="Practicals-14"><span class="toc-item-num">14 </span>Practicals</a></span><ul class="toc-item"><li><span><a href="#Using-os-problem-1" data-toc-modified-id="Using-os-problem-1-14.1"><span class="toc-item-num">14.1 </span>Using <code>os</code> problem 1</a></span></li><li><span><a href="#Using-os-problem-2" data-toc-modified-id="Using-os-problem-2-14.2"><span class="toc-item-num">14.2 </span>Using <code>os</code> problem 2</a></span></li></ul></li><li><span><a href="#Readings-and-Resources" data-toc-modified-id="Readings-and-Resources-15"><span class="toc-item-num">15 </span>Readings and Resources</a></span></li></ul></div>
In this chapter, we will build on the [first Python Chapter](05-Python_I.ipynb). We cover some more advanced topics that will round-off your training in Biological Computing in Python.
## Numerical computing in Python
The python package `scipy` allows you to do serious number crunching, including:
* Linear algebra (matrix and vector operations)
* Numerical integration (solving ODEs)
* Fourier transforms
* Interpolation
* Calculating special functions (incomplete Gamma, Bessel, etc.)
* Generation of random numbers
* Using statistical functions and transformations
In the following, we will use the `numpy array` data structure for data manipulations and calculations. These
arrays are similar in some respects to python lists, but are more naturally multidimensional, homogeneous in type (the default is float), and allow efficient (fast) manipulations. Thus numpy arrays are analogous to the R `matrix` data object/structure. We will use the `scipy` package, which includes `numpy`, and a lot more.
So let's try `scipy`:
```python
import scipy as sc
```
```python
a = sc.array(range(5)) # a one-dimensional array
a
```
array([0, 1, 2, 3, 4])
```python
print(type(a))
```
<class 'numpy.ndarray'>
```python
print(type(a[0]))
```
<class 'numpy.int64'>
Thus the last two outputs tell you that firstly, there is a data structure type (and a class) called `numpy.ndarray`, and secondly, that at position `0` (remember, Python indexing starts at 0) it holds a [64-bit integer](https://en.wikipedia.org/wiki/9,223,372,036,854,775,807). All elements in `a` will be of type `int` because that is what `range()` returns (try `?range`).
<figure>
<small>
<center>
(Source: [http://pages.physics.cornell.edu/~myers/teaching/ComputationalMethods/python/arrays.html](http://pages.physics.cornell.edu/~myers/teaching/ComputationalMethods/python/arrays.html))
<figcaption>
A graphical depiction of numpy arrays, which can have multiple dimensions (even greater than 3).
</figcaption>
</center>
</small>
</figure>
You can also specify the data type of the array:
```python
a = sc.array(range(5), float)
a
```
array([ 0., 1., 2., 3., 4.])
```python
a.dtype # Check type
```
dtype('float64')
You can also get 1-D arrays as follows:
```python
x = sc.arange(5)
x
```
array([0, 1, 2, 3, 4])
```python
x = sc.arange(5.) #directly specify float using decimal
x
```
array([ 0., 1., 2., 3., 4.])
As with other Python variables (e.g., created as a list or a dictionary), you can apply methods to variables created as numpy arrays. For example, type `x.` and hit TAB to see all methods you can apply to`x`. To see dimensions of `x`:
```python
x.shape
```
(5,)
Remember, you can type `?x.methodname` to get info on a particular method. For example, try `?x.shape`.
You can also convert to and from Python lists:
```python
b = sc.array([i for i in range(10) if i%2==1]) #odd numbers between 1 and 10
b
```
array([1, 3, 5, 7, 9])
```python
c = b.tolist() #convert back to list
c
```
[1, 3, 5, 7, 9]
To make a matrix, you need a 2-D numpy array:
```python
mat = sc.array([[0, 1], [2, 3]])
mat
```
array([[0, 1],
[2, 3]])
```python
mat.shape
```
(2, 2)
### Indexing and accessing arrays
As with other Python data objects such as lists, numpy array elements can be accessed using square brackets (`[ ]`) with the usual `[row,column]` reference. Indexing of numpy arrays works like that for other data structures, with index values starting at 0. So, you can obtain all the elements of a particular row as:
```python
mat[1] # accessing whole 2nd row, remember indexing starts at 0
```
array([2, 3])
```python
mat[:,1] #accessing whole second column
```
array([1, 3])
And accessing particular elements:
```python
mat[0,0] # 1st row, 1st column element
```
0
```python
mat[1,0] # 2nd row, 1st column element
```
2
Note that, as in other programming languages, the row index always comes before the column index. That is, `mat[1]` is always going to mean "whole second row", and `mat[1,1]` means the 2nd row, 2nd column element. Therefore, to access the whole first column, you need:
```python
mat[:,0] #accessing whole first column
```
array([0, 2])
Python indexing also accepts negative values, which count back from the end of an array:
```python
mat[0,1]
```
1
```python
mat[0,-1] #interesting!
```
1
```python
mat[0,-2] #very interesting, but rather useless for this simple matrix!
```
0
### Manipulating arrays
Manipulating numpy arrays is pretty straightforward.
---
> **Why numpy arrays are computationally efficient:** The data associated with a numpy array object (its metadata – number of dimensions, shape, data type, etc – as well as the actual data) are stored in a homogeneous and contiguous block of memory (a "data buffer"), at a particular address in the system's RAM (Random Access Memory). This makes numpy arrays more efficient than pure Python data structures such as lists, whose data are scattered across the system memory.
---
#### Replacing, adding or deleting elements
Let's look at how you can replace, add, or delete an array element (a single entry, or whole row(s) or whole column(s)):
```python
mat[0,0] = -1 #replace a single element
mat
```
array([[-1, 1],
[ 2, 3]])
```python
mat[:,0] = [12,12] #replace whole column
mat
```
array([[12, 1],
[12, 3]])
```python
sc.append(mat, [[12,12]], axis = 0) #append row, note axis specification
```
array([[12, 1],
[12, 3],
[12, 12]])
```python
sc.append(mat, [[12],[12]], axis = 1) #append column
```
array([[12, 1, 12],
[12, 3, 12]])
```python
newRow = [[12,12]] #create new row
```
```python
mat = sc.append(mat, newRow, axis = 0) #append that existing row
mat
```
array([[12, 1],
[12, 3],
[12, 12]])
```python
sc.delete(mat, 2, 0) #Delete 3rd row
```
array([[12, 1],
[12, 3]])
And concatenation:
```python
mat = sc.array([[0, 1], [2, 3]])
mat0 = sc.array([[0, 10], [-1, 3]])
sc.concatenate((mat, mat0), axis = 0)
```
array([[ 0, 1],
[ 2, 3],
[ 0, 10],
[-1, 3]])
#### Flattening or reshaping arrays
You can also "flatten" or "melt" arrays, that is, change array dimensions (e.g., from a matrix to a vector):
```python
mat.ravel() # NOTE: ravel is row-priority - happens row by row
```
array([0, 1, 2, 3])
```python
mat.reshape((4,1)) # this is different from ravel - check ?sc.reshape
```
array([[0],
[1],
[2],
[3]])
```python
mat.reshape((1,4)) # NOTE: reshaping is also row-priority
```
array([[0, 1, 2, 3]])
```python
mat.reshape((3, 1)) # But total elements must remain the same!
```
This is a bit different than how [`R` behaves](07-R.ipynb#Recycling), where you won't get an error (R "recycles" data), which can be dangerous!
### Pre-allocating arrays
As in other computer languages, it is usually more efficient to preallocate an array rather than append / insert / concatenate additional elements, rows, or columns. *Why?* Because you might run out of contiguous space in the specific system memory (RAM) address where the current array is stored. Preallocation requests all the memory you need in a single call, while resizing the array (through append, insert, concatenate, resize, etc.) may require copying the array to a larger block of memory, slowing things down, and significantly so if the matrix/array is very large.
For example, if you know the size of your matrix or array, you can initialize it with ones or zeros:
```python
sc.ones((4,2)) #(4,2) are the (row,col) array dimensions
```
array([[ 1., 1.],
[ 1., 1.],
[ 1., 1.],
[ 1., 1.]])
```python
sc.zeros((4,2)) # or zeros
```
array([[ 0., 0.],
[ 0., 0.],
[ 0., 0.],
[ 0., 0.]])
```python
m = sc.identity(4) #create an identity matrix
m
```
array([[ 1., 0., 0., 0.],
[ 0., 1., 0., 0.],
[ 0., 0., 1., 0.],
[ 0., 0., 0., 1.]])
```python
m.fill(16) #fill the matrix with 16
m
```
array([[ 16., 16., 16., 16.],
[ 16., 16., 16., 16.],
[ 16., 16., 16., 16.],
[ 16., 16., 16., 16.]])
### `numpy` matrices
Scipy/Numpy also has a `matrix` data structure class. Numpy matrices are strictly 2-dimensional, while numpy arrays are N-dimensional. Matrix objects are a subclass of numpy arrays, so they inherit all the attributes and methods of numpy arrays (ndarrays).
The main advantage of numpy matrices is that they provide a convenient notation for matrix multiplication: if `a` and `b` are matrices, then `a * b` is their matrix product.
#### Matrix-vector operations
Now let's perform some common matrix-vector operations on arrays (you can also try the same using matrices instead of arrays):
```python
mm = sc.arange(16)
mm = mm.reshape(4,4) #Convert to matrix
mm
```
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11],
[12, 13, 14, 15]])
```python
mm.transpose()
```
array([[ 0, 4, 8, 12],
[ 1, 5, 9, 13],
[ 2, 6, 10, 14],
[ 3, 7, 11, 15]])
```python
mm + mm.transpose()
```
array([[ 0, 5, 10, 15],
[ 5, 10, 15, 20],
[10, 15, 20, 25],
[15, 20, 25, 30]])
```python
mm - mm.transpose()
```
array([[ 0, -3, -6, -9],
[ 3, 0, -3, -6],
[ 6, 3, 0, -3],
[ 9, 6, 3, 0]])
```python
mm * mm.transpose() ## Note: Elementwise multiplication!
```
array([[ 0, 4, 16, 36],
[ 4, 25, 54, 91],
[ 16, 54, 100, 154],
[ 36, 91, 154, 225]])
```python
mm // mm.transpose()
```
/usr/local/lib/python3.5/dist-packages/ipykernel_launcher.py:1: RuntimeWarning: divide by zero encountered in floor_divide
"""Entry point for launching an IPython kernel.
array([[0, 0, 0, 0],
[4, 1, 0, 0],
[4, 1, 1, 0],
[4, 1, 1, 1]])
Note that we used integer division `//`. Note also the warning you get (because of division by zero). So let's avoid the divide by zero:
```python
mm // (mm+1).transpose()
```
array([[0, 0, 0, 0],
[2, 0, 0, 0],
[2, 1, 0, 0],
[3, 1, 1, 0]])
```python
mm * sc.pi
```
array([[ 0. , 3.14159265, 6.28318531, 9.42477796],
[ 12.56637061, 15.70796327, 18.84955592, 21.99114858],
[ 25.13274123, 28.27433388, 31.41592654, 34.55751919],
[ 37.69911184, 40.8407045 , 43.98229715, 47.1238898 ]])
```python
mm.dot(mm) # MATRIX MULTIPLICATION
```
array([[ 56, 62, 68, 74],
[152, 174, 196, 218],
[248, 286, 324, 362],
[344, 398, 452, 506]])
```python
mm = sc.matrix(mm) # convert to scipy matrix class
mm
```
matrix([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11],
[12, 13, 14, 15]])
```python
print(type(mm))
```
<class 'numpy.matrixlib.defmatrix.matrix'>
```python
mm * mm # now matrix multiplication is syntactically easier
```
matrix([[ 56, 62, 68, 74],
[152, 174, 196, 218],
[248, 286, 324, 362],
[344, 398, 452, 506]])
We can do a lot more by importing the `linalg` sub-package: `sc.linalg`. Try it.
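As a quick, purely illustrative taster (assuming the imports above; the matrix `m2` here is made up for the example), `linalg` gives you standard linear-algebra operations:

```python
import scipy.linalg

m2 = sc.array([[1., 2.], [3., 4.]])
scipy.linalg.det(m2)  # determinant
scipy.linalg.inv(m2)  # matrix inverse
scipy.linalg.eig(m2)  # eigenvalues and eigenvectors
```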
## Two particularly useful `scipy` sub-packages
Two particularly useful `scipy` sub-packages are `sc.integrate` (*what will I need this for?*) and `sc.stats`. *Why not use `R` for this?* — because often you might just want to calculate some summary stats of your simulation results within Python.
### `sc.stats`
Let's take a quick spin in `sc.stats`.
```python
import scipy.stats
```
```python
scipy.stats.norm.rvs(size = 10) # 10 samples from N(0,1)
```
array([-1.18737619, 0.97466982, 0.24744395, -0.13007556, -1.91779308,
1.39418314, 0.66301452, 2.50881111, 0.79540968, -0.35377855])
```python
scipy.stats.randint.rvs(0, 10, size = 7) # 7 random integers between 0 and 9
```
array([3, 7, 4, 2, 1, 8, 1])
### Numerical integration using `scipy`
Numerical integration is the approximate computation of an integral using numerical techniques. You need numerical integration whenever you have a complicated function that cannot be integrated analytically using anti-derivatives; for example, when calculating the area under a curve. A particularly useful application is solving ordinary differential equations (ODEs), commonly used for modelling biological systems.
#### The Lotka-Volterra model
Let's try numerical integration in Python for solving a classical model in biology — the Lotka-Volterra model for a predator-prey system in two-dimensional space (e.g., on land).
The Lotka-Volterra (LV) model is:
\begin{aligned}
\frac{dR}{dt} &= r R - a C R \\
\frac{dC}{dt} &= - z C + e a C R
\end{aligned}
where $C$ and $R$ are consumer (e.g., predator) and resource (e.g., prey) population abundance (either number $\times$ area$^{-1}$ ), $r$ is the intrinsic (per-capita) growth rate of the resource population (time$^{-1}$), $a$ is per-capita "search rate" for the resource ($\text{area}\times \text{time}^{-1}$) multiplied by its attack success probability, which determines the encounter and consumption rate of the consumer on the resource, $z$ is mortality rate ($\text{time}^{-1}$) and $e$ is the consumer's efficiency (a fraction) in converting resource to consumer biomass.
We have already imported scipy above (`import scipy as sc`) so we can proceed to solve the LV model using numerical integration.
First, import `scipy`'s `integrate` submodule:
```python
import scipy.integrate as integrate
```
Now define a function that returns the growth rate of consumer and resource population at any given time step.
```python
def dCR_dt(pops, t=0):
R = pops[0]
C = pops[1]
dRdt = r * R - a * R * C
dCdt = -z * C + e * a * R * C
return sc.array([dRdt, dCdt])
```
```python
type(dCR_dt)
```
function
Assign some parameter values:
```python
r = 1.
a = 0.1
z = 1.5
e = 0.75
```
Define the time vector; let's integrate from time point 0 to 15, using 1000 sub-divisions of time:
```python
t = sc.linspace(0, 15, 1000)
```
Note that the units of time are arbitrary here.
Set the initial conditions for the two populations (10 resources and 5 consumers per unit area), and convert the two into an array (because our `dCR_dt` function takes an array as input).
```python
R0 = 10
C0 = 5
RC0 = sc.array([R0, C0])
```
Now numerically integrate this system forward from those starting conditions:
```python
pops, infodict = integrate.odeint(dCR_dt, RC0, t, full_output=True)
```
```python
pops
```
array([[ 10. , 5. ],
[ 10.07578091, 4.94421976],
[ 10.1529783 , 4.88948321],
...,
[ 9.99869712, 17.56204194],
[ 9.8872779 , 17.3642589 ],
[ 9.78000354, 17.16658946]])
So `pops` contains the result (the population trajectories). Also check what's in `infodict` (it's a dictionary with additional information).
```python
type(infodict)
```
dict
```python
infodict.keys()
```
dict_keys(['mused', 'message', 'tsw', 'nst', 'imxer', 'nje', 'lenrw', 'nfe', 'tcur', 'hu', 'nqu', 'tolsf', 'leniw'])
Check what the `infodict` output is by reading the help documentation with `?scipy.integrate.odeint`. For example, you can return a message to screen about whether the integration was successful:
```python
infodict['message']
```
'Integration successful.'
So it worked, great! But we would like to visualize the results. Let's do it using the `matplotlib` package.
## Plotting in Python
To visualize the results of your numerical simulations in Python (or for data exploration/analyses), you can use `matplotlib`, which uses Matlab like plotting syntax.
First let's import the package:
```python
import matplotlib.pylab as p
```
Now open an empty figure object (analogous to an R graphics object).
```python
f1 = p.figure()
```
<matplotlib.figure.Figure at 0x7fccb45185c0>
```python
p.plot(t, pops[:,0], 'g-', label='Resource density') # Plot
p.plot(t, pops[:,1] , 'b-', label='Consumer density')
p.grid()
p.legend(loc='best')
p.xlabel('Time')
p.ylabel('Population density')
p.title('Consumer-Resource population dynamics')
p.show()# To display the figure
```
Finally, save the figure as a pdf:
```python
f1.savefig('../results/LV_model.pdf') #Save figure
```
You can use many other output formats; check the documentation with `?p.savefig`.
## Practicals
1. Create a self-standing script using the above example and save it as `LV1.py` in your code directory. In addition to generating the above figure, it should also generate the following figure:
<figure>
<small>
<center>
<figcaption>
Generate this figure as part of the `LV1.py` script.
</figcaption>
</center>
</small>
</figure>
It should save both figures in pdf to the `results` directory, *without displaying them on screen*.
## The need for speed: profiling code
Donald Knuth says: *Premature optimization is the root of all evil*.
Indeed, computational speed may not be your initial concern. Also, you should focus on developing clean, reliable, reusable code rather than worrying first about how fast your code runs. However, speed will become an issue when and if your analysis or modeling becomes complex enough (e.g., food web or large network simulations). In that case, knowing which parts of your code take the most time is useful – optimizing those parts may save you lots of time.
To find out what is slowing down your code you need to "profile" your code: locate the sections of your code where speed bottlenecks exist.
### Profiling in Python
Profiling is easy in `ipython` – simply use the command:
```python
%run -p your_function_name
```
Let's write an illustrative program (name it `profileme.py`):
```python
def my_squares(iters):
out = []
for i in range(iters):
out.append(i ** 2)
return out
def my_join(iters, string):
out = ''
for i in range(iters):
out += string.join(", ")
return out
def run_my_funcs(x,y):
print(x,y)
my_squares(x)
my_join(x,y)
return 0
run_my_funcs(10000000,"My string")
```
Now run it with `%run -p profileme.py`, and you should see something like:
```bash
20000063 function calls (20000062 primitive calls) in 9.026 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
1 3.335 3.335 3.732 3.732 profileme.py:1(my_squares)
1 2.698 2.698 5.200 5.200 profileme.py:7(my_join)
10000001 2.502 0.000 2.502 0.000 {method 'join' of 'str' objects}
10000008 0.397 0.000 0.397 0.000 {method 'append' of 'list' objects}
1 0.093 0.093 9.025 9.025 profileme.py:13(run_my_funcs)
[more output]
```
Now you can see that the `my_join` function is hogging most of the time, followed by `my_squares`. Furthermore, it's the string method `join` that is clearly slowing `my_join` down, and the list method `append` that is slowing `my_squares` down. In other words, `.join`ing the string again and again, and `.append`ing values to a list, are both not particularly fast.
Can we do better? *Yes!*
Let's try this alternative approach to writing the program (save it as `profileme2.py`):
```python
def my_squares(iters):
out = [i ** 2 for i in range(iters)]
return out
def my_join(iters, string):
out = ''
for i in range(iters):
out += ", " + string
return out
def run_my_funcs(x,y):
print(x,y)
my_squares(x)
my_join(x,y)
return 0
run_my_funcs(10000000,"My string")
```
We did two things: converted the loop to a list comprehension, and replaced the `.join` with an explicit string concatenation.
Now profile this program (`%run -p profileme2.py`), and you should get something like:
```bash
64 function calls (63 primitive calls) in 4.585 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
1 2.497 2.497 2.497 2.497 profileme2.py:2(<listcomp>)
1 1.993 1.993 1.993 1.993 profileme2.py:5(my_join)
1 0.094 0.094 4.584 4.584 profileme2.py:11(run_my_funcs)
[more output]
```
Woo hoo! So we about halved the time! Not quite enough to grab a pint, but ah well...
Another approach would be to preallocate a `numpy` array instead of using a list for `my_squares`. Try it.
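For example, a minimal sketch of such a preallocated (or fully vectorised) version might look like this; the function names are just illustrative:

```python
import numpy as np

def my_squares_prealloc(iters):
    out = np.zeros(iters)  # preallocate the array up front
    for i in range(iters):
        out[i] = i ** 2
    return out

def my_squares_vectorised(iters):
    return np.arange(iters) ** 2  # no explicit Python loop at all
```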
### Quick profiling with `timeit`
Alternatively, you can use the `timeit` module if you want to figure out what the best way to do something specific as part of a larger program (say a particular command or a loop) might be.
Type and run the following code in a python script called `timeitme.py`:
```python
##############################################################################
# loops vs. list comprehensions: which is faster?
##############################################################################
iters = 1000000
import timeit
from profileme import my_squares as my_squares_loops
from profileme2 import my_squares as my_squares_lc
# %timeit my_squares_loops(iters)
# %timeit my_squares_lc(iters)
##############################################################################
# loops vs. the join method for strings: which is faster?
##############################################################################
mystring = "my string"
from profileme import my_join as my_join_join
from profileme2 import my_join as my_join
# %timeit(my_join_join(iters, mystring))
# %timeit(my_join(iters, mystring))
```
Note how we imported the functions using `from profileme import my_squares as my_squares_loops`, etc., which highlights the convenience of Python's elegant object-oriented approach.
Now run the two sets of comparisons using `%timeit` in ipython and make sure every line makes sense. Note that I have commented out the `%timeit()` commands in the script because having a magic command inside a script will not work.
Of course, a simple approach would have been to time the functions like this:
```python
import time
start = time.time()
my_squares_loops(iters)
print("my_squares_loops takes %f s to run." % (time.time() - start))
start = time.time()
my_squares_lc(iters)
print("my_squares_lc takes %f s to run." % (time.time() - start))
```
But you'll notice that if you run it multiple times, the time taken changes each time. So `timeit` takes a sample of runs and returns the average, which is better.
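If you want to use `timeit` inside a plain python script (where the `%timeit` magic is unavailable), here is a minimal sketch; it assumes the functions and `iters` defined in `timeitme.py` are already in scope:

```python
import timeit

# Run each function 5 times and report the best (fastest) run.
t_loop = min(timeit.repeat("my_squares_loops(iters)", globals=globals(), number=1, repeat=5))
t_lc = min(timeit.repeat("my_squares_lc(iters)", globals=globals(), number=1, repeat=5))
print("loops: %f s; list comprehension: %f s" % (t_loop, t_lc))
```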
*But remember, don't go crazy with profiling for the sake of shaving a couple of milliseconds, tempting as that may be!*
## Practicals
### Lotka-Volterra model problem
Copy and modify `LV1.py` into another script called `LV2.py` that does the following:
* Take arguments for the four LV model parameters $r$, $a$, $z$ ,$e$ from the command line:
```
LV2.py arg1 arg2 ... etc
```
* Runs the Lotka-Volterra model with prey density dependence $r R \left(1 - \frac{R} {K}\right)$, which changes the coupled ODEs to,
\begin{aligned}
\frac{dR}{dt} &= r R \left(1 - \frac{R} {K}\right) - a C R \\
\frac{dC}{dt} &= - z C + e a C R
\end{aligned}
* Saves the plot as `.pdf` in an appropriate location.
* The chosen parameter values should show in the plot (e.g., $r = 1, a = .5$, etc.). You can change the time length $t$ too.
* Include a script in `code` that will run both `LV1.py` and `LV2.py` with appropriate arguments. This script should also profile the two scripts and print the results to screen for each of the scripts using the `%run -p` approach. Look at and compare the speed bottlenecks in `LV1.py` and `LV2.py`. *Think about how you could further speed up the scripts.*
### Extra Credit problems
*Write every subsequent extra credit script file with a new name such as `LV3.py`,`LV4.py`, etc.*
* **Extra credit**: Choose appropriate values for the parameters such that both predator and prey persist with prey density dependence — the final (non-zero) population values should be printed to screen.
* **Extra-extra credit**: Write a discrete-time version of the LV model called `LV3.py`. The discrete-time model is:
\begin{align}
R_{t+1} &= R_t (1 + r \left(1 - \frac{R_t}{K}\right) - a C_t)\\
C_{t+1} &= C_t (1 - z + e a R_t)
\end{align}
Include this script in `run_LV.py`, and profile it as well.
* **Extra-extra-extra credit**: Write a version of the discrete-time model (which you implemented in `LV3.py`) simulation with a random gaussian fluctuation in resource's growth rate at each time-step:
\begin{aligned}
R_{t+1} &= R_t (1 + (r + \epsilon) \left(1 - \frac{R_t}{K}\right)- a C_t)\\
C_{t+1} &= C_t (1 - z + e a R_t)
\end{aligned}
where $\epsilon$ is a random fluctuation drawn from a gaussian distribution (use `sc.stats`). Include this
script in `run_LV.py`, and profile it as well. You can also add fluctuations to both populations simultaneously this way:
\begin{aligned}
R_{t+1} &= R_t (1 + \epsilon + r + \left(1 - \frac{R_t}{K}\right) - a C_t)\\
C_{t+1} &= C_t (1 - z + \epsilon + e a R_t)
\end{aligned}
*As always, test, add, commit and push all your new code and data to your git repository.*
## Networks in Python
ALL biological systems have a network representation, consisting of nodes for the biological entities of interest, and edges or links for the relationships between them. Here are some examples:
* Metabolic networks
* Gene regulatory networks
* Individual-Individual (e.g., social networks)
* Who-eats-whom (Food web) networks
* Mutualistic (e.g., plant-pollinator) networks
*Can you think of a few more examples from biology?*
You can easily simulate, analyze, and visualize biological networks in both `python` and `R` using some nifty packages. A full network analysis tutorial is out of the scope of our Python module's objectives, but let's try a simple visualization using the ` networkx` python package.
For this you need to first install the package, for example, by using:
```bash
sudo apt-get install python3-networkx
```
### Food web network example
As an example, let's plot a food web network.
The best way to store a food web dataset is as an "adjacency list" of who eats whom: a matrix with consumer name/id in 1st column, and resource name/id in 2nd column, and a separate matrix of species names/ids and properties such as biomass (node's abundance), or average body mass. You will see what these data structures look like below.
First, import the necessary modules:
```python
import networkx as nx
import scipy as sc
import matplotlib.pyplot as p
```
Let's generate a "synthetic" food web. We can do this with the following function that generates a random adjacency list of a $N$-species food web with "connectance probability" $C$: the probability of having a link between any pair of species in the food web.
```python
def GenRdmAdjList(N = 2, C = 0.5):
"""
"""
Ids = range(N)
ALst = []
for i in Ids:
if sc.random.uniform(0,1,1) < C:
Lnk = sc.random.choice(Ids,2).tolist()
if Lnk[0] != Lnk[1]: #avoid self (e.g., cannibalistic) loops
ALst.append(Lnk)
return ALst
```
Note that we are using a random number drawn from a uniform distribution on `[0,1]` to decide, with probability `C`, whether each candidate link is created.
Now assign the number of species (`MaxN`) and connectance (`C`):
```python
MaxN = 30
C = 0.75
```
Now generate an adjacency list representing a random food web:
```python
AdjL = sc.array(GenRdmAdjList(MaxN, C))
AdjL
```
array([[18, 20],
[15, 29],
[ 0, 3],
[12, 3],
[ 3, 8],
[ 8, 26],
[ 8, 17],
[ 4, 25],
[ 1, 20],
[22, 11],
[19, 15],
[28, 22],
[ 8, 29],
[15, 9],
[ 7, 15],
[27, 29],
[21, 13],
[14, 15],
[12, 28],
[21, 19],
[11, 23],
[19, 23]])
So that's what an adjacency list looks like. The two columns of numbers correspond to the consumer and resource ids, respectively.
Now generate species (node) data:
```python
Sps = sc.unique(AdjL) # get species ids
```
Now generate body sizes for the species. We will use a log$_{10}$ scale because species body sizes tend to be [log-normally distributed](08-Data_R.ipynb#Histograms).
```python
SizRan = ([-10,10]) #use log10 scale
Sizs = sc.random.uniform(SizRan[0],SizRan[1],MaxN)
Sizs
```
array([ 7.9092565 , 8.28890769, 1.93290082, 5.65741057, 3.20689454,
2.70967583, 5.00598443, 0.90134135, -8.38467277, 2.64012453,
4.97781765, -4.00465023, -9.93293439, 7.90936016, -8.22276944,
-4.86729704, 1.7039879 , 0.44887105, 0.20853699, -8.99008497,
-4.74009949, -4.24718942, -9.93293894, -2.73320298, 0.04017755,
-5.55501357, -6.83177169, 2.72087488, -6.51475447, 3.5965115 ])
Let's visualize the size distribution we have generated.
```python
p.hist(Sizs) #log10 scale
```
```python
p.hist(10 ** Sizs) #raw scale
```
Now let's plot the network, with node sizes proportional to (log) body size:
```python
p.close('all') # close all open plot objects
```
Let's use a circular configuration. For this, we need to calculate the coordinates, easily done using networkx:
```python
pos = nx.circular_layout(Sps)
```
See `networkx.layout` for inbuilt functions to compute other types of node coordinates.
Now generate a networkx graph object:
```python
G = nx.Graph()
```
Now add the nodes and links (edges) to it:
```python
G.add_nodes_from(Sps)
G.add_edges_from(tuple(AdjL)) # this function needs a tuple input
```
Generate node sizes that are proportional to (log) body sizes:
```python
NodSizs= 1000 * (Sizs-min(Sizs))/(max(Sizs)-min(Sizs))
```
Now render (plot) the graph:
```python
nx.draw_networkx(G, pos, node_size = NodSizs)
```
Some of you might get the warning above, or a different one. In that case, just try upgrading the networkx package.
## Practicals
1. Type the above code for plotting a food web network in a program file called `DrawFW.py`. This file should save the plotted network as a pdf.
2. (**Extra Credit**) You can also do nice network visualizations in R. Here you will convert a network visualization script written in `R` using the `igraph` package to a python script that does the same thing.
* First copy the script file called `Nets.R` and the data files it calls and run it. This script visualizes the [QMEE CDT collaboration network](http://www.imperial.ac.uk/qmee-cdt), coloring the nodes by the type of node (organization type: "University","Hosting Partner", "Non-hosting Partner").
* Now, convert this script to a `python` script that does the same thing, including writing to an `.svg` file using the same QMEE CDT link and node data. You can use `networkx` or some other python network visualization package.
## Regular expressions in Python
Let's shift gears now, and look at a very important skill that you should learn, or at least be aware of — *Regular expressions*.
Regular expressions (regex) are a tool to find patterns (not just a particular sequence of characters) in strings. For example, `your@email.com` is a specific sequence of characters, but, in fact, all email addresses have such a pattern: alphanumeric characters, a "@", alphanumeric characters, a ".", alphanumeric characters. Using regex, you can search for all email addresses in a text file by searching for this pattern.
There are many uses of regex, such as:
* Parsing (reading) text files and finding and replacing or deleting specific patterns
* Finding DNA motifs in sequence data
* Navigating through files in a directory
* Extracting information from html and xml files
Thus, if you are interested in data mining, need to clean or process data in any other way, or convert a bunch of information into usable data, knowing regex is absolutely necessary.
<figure>
<small>
<center>
(Source: [www.xkcd.com](https://www.xkcd.com/208/))
<figcaption>
Regular expressions could change your life!
</figcaption>
</center>
</small>
</figure>
Regex packages are available for most programming languages (recall [`grep` in UNIX](01-Unix.ipynb#Using-`grep`); that is how regex first became popular).
### Metacharacters vs. regular characters
A regex may consist of a combination of "metacharacters" (modifiers) and "regular" or literal characters. There are 14 metacharacters:
<center>
<code>[</code> <code>]</code> <code>{</code> <code>}</code> <code>(</code> <code>)</code> <code>\</code> <code>^</code> <code>$</code> <code>.</code> <code>|</code> <code>?</code> <code>*</code> <code>+</code>
</center>
These metacharacters do special things, for example:
* `[12]` means match target to *1* and if that does not match then match target to *2*
* `[0-9]` means match to any character in range *0* to *9*
* `[^Ff]` means anything except upper or lower case *f* and `[^a-z]` means everything except lower case *a* to *z*
Everything else is interpreted literally (e.g., *a* is matched by entering `a` in the regex).
`[` and `]` specify a character "class" — the set of characters that you wish to match. Metacharacters are not active inside classes. For example, <code>[a-z$]</code> will match any of the characters `a` to `z`, but also <code>$</code>, because inside a character class it loses its special metacharacter status.
### regex elements
A useful (not exhaustive) list of regex elements is:
|Regex|Description|
|:-|:-|
|`\`| inhibit the "specialness" of a (meta)character so that it can be interpreted literally. So, for example, use `\.` to match a period or `\\` to match a backslash|
|`aX9`| match the character string *aX9* exactly (case sensitively)|
|`8`| match the number *8*|
|`\n`| match a newline|
|`\t`| match a tab |
|`\s`| match a whitespace |
|`.`| match any character except line break (newline)|
|`\w`| match a single "word" character: any alphanumeric character (including underscore)|
|`\W`| match any character not covered by `\w`, i.e., match any non-alphanumeric character excluding underscore, such as `?`, `!`, `+`, `<`, etc. |
|`\d`| match a numeric (integer) character|
|`\D`| match any character not covered by ` \d` (i.e., match a non-digit)|
|`[atgc]` | match any character listed: `a`, `t`, `g`, `c`|
| <code>at|gc</code> | match `at` or `gc`|
|`[^atgc]`| match any character not listed: any character except `a`, `t`, `g`, `c`|
|`?`| match the preceding pattern element zero or one times|
|`*`| match the preceding pattern element zero or more times|
|`+`| match the preceding pattern element one or more times|
|`{n}`| match the preceding pattern element exactly `n` times|
|`{n,}`| match the preceding pattern element at least `n` times|
|`{n,m}`| match the preceding pattern element at least `n` but not more than `m` times|
|`^`| match the start of a string|
|<code>$</code>| match the end of a string|
### Regex in `python`
Regex functions in python are in the module `re`.
Let's import it:
```python
import re
```
The simplest `python` regex function is `re.search`, which searches the string for a match to a given pattern — returns a *match object* if a match is found and `None` if not. Thus, the command `match = re.search(pat, str)` finds matches of the pattern `pat` in the given string `str` and stores the search result in a variable named `match`.
> **Always** put `r` in front of your regex — it tells python to read the regex in its "raw" (literal) form. Without raw string notation (`r"text"`), every backslash (`\`) in a regular expression would have to be prefixed with another one to escape it. Read more about this [here](https://docs.python.org/3.5/library/re.html).
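To see why the raw-string prefix matters, here is a quick illustration (you do not need to add this to `regexs.py`): without `r''`, every backslash in the pattern has to be doubled.

```python
import re

re.search('\\d', 'it takes 2 to tango').group()  # works, but needs an escaped backslash
re.search(r'\d', 'it takes 2 to tango').group()  # the raw string reads exactly as the regex
```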
OK, let's try some regexes (type all that follows in `regexs.py`):
```python
my_string = "a given string"
```
Find a space in the string:
```python
match = re.search(r'\s', my_string)
print(match)
```
<_sre.SRE_Match object; span=(1, 2), match=' '>
That's only telling you that a match was found (the object was created successfully).
To see the match, use:
```python
match.group()
```
' '
Now let's try another pattern:
```python
match = re.search(r'\d', my_string)
```
```python
print(match)
```
None
No surprise, because there are no numeric characters in our string!
To know whether a pattern was matched, we can use an `if`:
```python
MyStr = 'an example'
match = re.search(r'\w*\s', MyStr) # what pattern is this?
if match:
print('found a match:', match.group())
else:
print('did not find a match')
```
found a match: an
Here are some more regexes (add all that follows to `regexs.py`):
```python
match = re.search(r'2' , "it takes 2 to tango")
match.group()
```
'2'
```python
match = re.search(r'\d' , "it takes 2 to tango")
match.group()
```
'2'
```python
match = re.search(r'\d.*' , "it takes 2 to tango")
match.group()
```
'2 to tango'
```python
match = re.search(r'\s\w{1,3}\s', 'once upon a time')
match.group()
```
' a '
```python
match = re.search(r'\s\w*$', 'once upon a time')
match.group()
```
' time'
Let's switch to a more compact syntax, directly appending `.group()` to the search call so that the matched group is returned in one step.
```python
re.search(r'\w*\s\d.*\d', 'take 2 grams of H2O').group()
```
'take 2 grams of H2'
```python
re.search(r'^\w*.*\s', 'once upon a time').group() # 'once upon a '
```
'once upon a '
Note that `*`, `+`, and `{ }` are all "greedy": They repeat the previous regex token as many times as possible.
As a result, they may match more text than you want. To make it non-greedy and terminate at the first found instance of a pattern, use `?`:
```python
re.search(r'^\w*.*?\s', 'once upon a time').group()
```
'once '
To further illustrate greediness in regexes, let's try matching an HTML tag:
```python
re.search(r'<.+>', 'This is a <EM>first</EM> test').group()
```
'<EM>first</EM>'
But we wanted just `<EM>`!
It's because `+` is greedy. Instead, we can make `+` "lazy":
```python
re.search(r'<.+?>', 'This is a <EM>first</EM> test').group()
```
'<EM>'
OK, moving on from greed and laziness...
```python
re.search(r'\d*\.?\d*','1432.75+60.22i').group()
```
'1432.75'
Note `\` before the `.`, to be able to find a literal `.`
Otherwise, `re.search` will consider it to be a regex element (`.` means "match any character except newline").
A couple more examples:
```python
re.search(r'[AGTC]+', 'the sequence ATTCGT').group()
```
'ATTCGT'
```python
re.search(r'\s+[A-Z]\w+\s*\w+', "The bird-shit frog's name is Theloderma asper.").group()
```
' Theloderma asper'
<figure>
<small>
<center>
<figcaption>
In case you were wondering what *Theloderma asper*, the "bird-shit frog", looks like. I snapped this one in North-east India ages ago
</figcaption>
</center>
</small>
</figure>
How about looking for email addresses in a string? For example, let's try matching a string consisting of an academic's name, email address and research area or interest (no need to type this into any python file):
```python
MyStr = 'Samraat Pawar, s.pawar@imperial.ac.uk, Systems biology and ecological theory'
match = re.search(r"[\w\s]+,\s[\w\.@]+,\s[\w\s]+",MyStr)
match.group()
```
'Samraat Pawar, s.pawar@imperial.ac.uk, Systems biology and ecological theory'
Note the use of `[ ]`'s: for example, `[\w\s]` ensures that any combination of word characters and spaces is found.
Let's see if this regex works on a different pattern of email addresses:
```python
MyStr = 'Samraat Pawar, s-pawar@imperial.ac.uk, Systems biology and ecological theory'
```
```python
match = re.search(r"[\w\s]+,\s[\w\.@]+,\s[\w\s&]+",MyStr)
match.group()
```
Nope! So let's make the email address part of the regex more robust:
```python
match = re.search(r"[\w\s]+,\s[\w\.-]+@[\w\.-]+,\s[\w\s&]+",MyStr)
match.group()
```
'Samraat Pawar, s-pawar@imperial.ac.uk, Systems biology and ecological theory'
## Practicals: Some RegExercises
The following exercises are not for submission as part of your coursework, but we will discuss them in class on a subsequent day.
1. Try the regex we used above for finding names (`[\w\s]+`) for cases where the person's name has something unexpected, like a `?` or a `+`. Does it work? How can you make it more robust?
* Translate the following regular expressions into regular English:
* `r'^abc[ab]+\s\t\d'`
* `r'^\d{1,2}\/\d{1,2}\/\d{4}$'`
* `r'\s*[a-zA-Z,\s]+\s*'`
* Write a regex to match dates in format YYYYMMDD, making sure that:
* Only seemingly valid dates match (i.e., year greater than 1900)
* First digit in month is either 0 or 1
* First digit in day $\leq 3$
### Grouping regex patterns
You can group regex patterns into meaningful blocks using parentheses. Let's look again at the example of finding email addresses.
```python
MyStr = 'Samraat Pawar, s.pawar@imperial.ac.uk, Systems biology and ecological theory'
match = re.search(r"[\w\s]+,\s[\w\.-]+@[\w\.-]+,\s[\w\s&]+",MyStr)
match.group()
```
'Samraat Pawar, s.pawar@imperial.ac.uk, Systems biology and ecological theory'
Without grouping the regex:
```python
match.group(0)
```
'Samraat Pawar, s.pawar@imperial.ac.uk, Systems biology and ecological theory'
Now create groups using `( )`:
```python
match = re.search(r"([\w\s]+),\s([\w\.-]+@[\w\.-]+),\s([\w\s&]+)",MyStr)
if match:
print(match.group(0))
print(match.group(1))
print(match.group(2))
print(match.group(3))
```
Samraat Pawar, s.pawar@imperial.ac.uk, Systems biology and ecological theory
Samraat Pawar
s.pawar@imperial.ac.uk
Systems biology and ecological theory
Nice! This is very handy for extracting specific patterns from text data. Note that we excluded the `,`'s and the `\s`'s from the grouping parentheses because we don't want them to be returned in the match group list.
Have a look at `re4.py` in the TheMulQuaBio's code repository for more on parsing email addresses using regexes.
## Useful `re` commands
Here are some important functions in the `re` module:
|Command|What it does|
|:-|:-|
| `re.search(reg, text)`| Scans the string and finds the first match of the pattern, returning a `match` object if successful and `None` otherwise.|
| `re.match(reg, text)`| Like `re.search`, but only matches the beginning of the string.|
| `re.compile(reg)`| Compiles (stores) a regular expression for repeated use, improving efficiency.|
| `re.split(ref, text)`| Splits the text by the occurrence of the pattern described by the regular expression.|
| `re.findall(ref, text)`| Like `re.search`, but returns a list of all matches. If groups are present, returns a list of groups.|
| `re.finditer(ref, text)`| Like `re.findall`, but returns an iterator containing the match objects over which you can iterate. Useful for "crawling" efficiently through text till you find the necessary number of matches.|
| `re.sub(ref, repl, text)`| Substitutes each non-overlapping occurrence of the match with the text in `repl`.|
|||
Many of these commands also work on the whole contents of files. We will look at an example of this below. Let us now try some particularly useful applications of these commands.
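For instance, here is a quick look at `re.split()` and `re.match()`, which are not demonstrated elsewhere below (the example strings are made up purely for illustration):

```python
import re

re.split(r',\s*', "Quercus, robur, 56.3")  # split a csv-like line on commas: ['Quercus', 'robur', '56.3']
print(re.match(r'\d', 'it takes 2 to tango'))           # None, because re.match only looks at the start
print(re.match(r'\w+', 'it takes 2 to tango').group())  # 'it'
```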
### Finding all matches
Above we used `re.search()` to find the first match for a pattern. In many scenarios, you will need to find *all* the matches of a pattern. The function `re.findall()` does precisely this and returns all matches as a list of strings, with each string representing one match.
Let's try this on an extension of the email example above for some data with multiple addresses:
```python
MyStr = "Samraat Pawar, s.pawar@imperial.ac.uk, Systems biology and ecological theory; Another academic, a-academic@imperial.ac.uk, Some other stuff thats equally boring; Yet another academic, y.a_academic@imperial.ac.uk, Some other stuff thats even more boring"
```
Now `re.findall()` returns a list of all the emails found:
```python
emails = re.findall(r'[\w\.-]+@[\w\.-]+', MyStr)
for email in emails:
print(email)
```
s.pawar@imperial.ac.uk
a-academic@imperial.ac.uk
y.a_academic@imperial.ac.uk
Nice!
### Finding in files
You will generally be wanting to apply regex searches to whole files. You might be tempted to write a loop to iterate over the lines of the file, calling `re.findall()` on each line. However, `re.findall()` can return a list of all the matches in a single step.
Let's try finding all species names that correspond to Oaks in a data file:
```python
f = open('../data/TestOaksData.csv', 'r')
found_oaks = re.findall(r"Q[\w\s].*\s", f.read())
found_oaks
```
['Quercus, robur\n', 'Quercus, cerris\n', 'Quercus, petraea\n']
```python
for name in found_oaks:
    print(name.replace(",",""))
```
This works because (recall) `f.read()` returns the whole text of a file as a single string. Also, remember to close the file after reading (or use a `with open(...)` block so it is closed automatically).
### Groups within multiple matches
Grouping pattern matches using `( )` as you learned above, can be combined with `re.findall()`. If the pattern includes *two or more* groups, then instead of returning a list of strings, `re.findall()` returns a list of tuples. Each tuple represents one match of the pattern, and inside the tuple is group(1), group(2), etc.
Let's try it:
```python
MyStr = "Samraat Pawar, s.pawar@imperial.ac.uk, Systems biology and ecological theory; Another academic, a.academic@imperial.ac.uk, Some other stuff thats equally boring; Yet another academic, y.a.academic@imperial.ac.uk, Some other stuff thats even more boring"
found_matches = re.findall(r"([\w\s]+),\s([\w\.-]+@[\w\.-]+)", MyStr)
found_matches
```
[('Samraat Pawar', 's.pawar@imperial.ac.uk'),
(' Another academic', 'a.academic@imperial.ac.uk'),
(' Yet another academic', 'y.a.academic@imperial.ac.uk')]
```python
for item in found_matches:
print(item)
```
('Samraat Pawar', 's.pawar@imperial.ac.uk')
(' Another academic', 'a.academic@imperial.ac.uk')
(' Yet another academic', 'y.a.academic@imperial.ac.uk')
### Extracting text from webpages
OK, let's step up the ante here. How about extracting text from a web page to create your own data? Let's try extracting data from [this page](https://www.imperial.ac.uk/silwood-park/academic-staff/).
You will need a new package `urllib3`. Install it, and import it (also `import re` if needed).
```python
import urllib3
```
```python
conn = urllib3.PoolManager() # open a connection
r = conn.request('GET', 'https://www.imperial.ac.uk/silwood-park/academic-staff/')
webpage_html = r.data #read in the webpage's contents
```
This is returned as bytes (not strings).
```python
type(webpage_html)
```
bytes
So decode it (remember, the default decoding that this method applies is *utf-8*):
```python
My_Data = webpage_html.decode()
#print(My_Data)
```
That's a lot of potentially useful information! Let's extract all the names of academics:
```python
pattern = r"Dr\s+\w+\s+\w+"
regex = re.compile(pattern) # example use of re.compile(); you can also ignore case with re.IGNORECASE
for match in regex.finditer(My_Data): # example use of re.finditer()
print(match.group())
```
Dr Arkhat Abzhanov
Dr Arkhat Abzhanov
Dr Cristina Banks
Dr Tom Bell
Dr Martin Bidartondo
Dr Martin Bidartondo
Dr Martin Brazeau
Dr Lauren Cator
Dr Matteo Fumagalli
Dr Matteo Fumagalli
Dr Richard Gill
Dr Richard Gill
Dr Jason Hodgson
Dr Andrew Knight
Dr Andrew Knight
Dr Morena Mills
Dr Morena Mills
Dr Samraat Pawar
Dr Julia Schroeder
Dr Julia Schroeder
Dr Joseph Tobias
Dr Joseph Tobias
Dr Mike Tristem
Dr Mike Tristem
Dr Magda Charalambous
Dr Magda Charalambous
Dr Rebecca Kordas
Dr Rebecca Kordas
Dr Vassiliki Koufopanou
Dr Eoin O
Dr Eoin O
Dr David Orme
Dr James Rosindell
Dr Chris Wilson
Dr Oliver Windram
Dr Colin Clubbe
Dr George McGavin
Dr George McGavin
Dr Michael Themis
Dr Michael Themis
Again, nice! However, it's not perfect. You can improve this by:
* Extracting Prof names as well
* Eliminating the repeated matches
* Grouping to separate title from first and second names
* Extracting names that have unexpected characters (e.g., "O'Gorman", which is currently not being matched properly)
*Try making these improvements.*
Of course, you can match and extract other types of patterns as well, such as urls and email addresses (though this example web page does not have email addresses).
### Replacing text
Let's try using the `re.sub` command on the same web page data (`My_Data`) to replace text:
```python
New_Data = re.sub(r'\t'," ", My_Data) # replace all tabs with a space
# print(New_Data)
```
## Practicals
### Blackbirds problem
Complete the code `blackbirds.py` that you find in the `TheMulQuaBio` (necessary data file is also there).
*As always, test, add, commit and push all your new code and data to your git repository.*
## Using Python to build workflows
You can use python to build an automated data analysis or simulation workflow that involves multiple languages, especially the ones you have already learnt: R, $\LaTeX$, and UNIX bash. For example, you could, in theory, write a single Python script to generate and update your masters dissertation, tables, plots, and all. Python is ideal for building such workflows because it has packages for practically every purpose.
*Thus this topic may be useful for your [Miniproject](Appendix-MiniProj.ipynb), which will involve building a reproducible computational workflow.*
### Using `subprocess`
For building a workflow in Python the `subprocess` module is key. With this module you can run non-Python commands and scripts, obtain their outputs, and also crawl through and manipulate directories.
First, import the module (this is part of the python standard library, so you won't need to install it):
```python
import subprocess
```
#### Running processes
There are two main ways to run commands through subprocess: `run` (available in Python 3.5 onwards) for basic usage, and `Popen` (`P`rocess `open`) for more advanced usage. We will work directly with `Popen` because `run()` is a wrapper around `Popen`. Using `Popen` directly gives more control over how the command is run, and how its input and output are processed.
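Purely for reference, a minimal sketch of the `run` interface looks like this (everything that follows uses `Popen`):

```python
import subprocess

result = subprocess.run(["echo", "Hello from run()"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
print(result.returncode)       # 0 indicates success
print(result.stdout.decode())  # the captured output, decoded from bytes
```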
Let's try running some commands in the UNIX bash.
$\star$ In a terminal, first `cd` to your `code` directory, launch `ipython3`, and then type:
```python
p = subprocess.Popen(["echo", "I'm talkin' to you, bash!"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
```
This creates an object `p`, from which you can extract the output and other information about the command you ran. Before we do anything more, let's look at our `subprocess.Popen` call carefully.
* The command line arguments were passed as a list of strings, which avoids the need for escaping quotes or other special characters that might be interpreted by the shell (for example, in this case, there are apostrophes in the string that is being `echo`ed in bash).
* `stdout` is the output from the process "spawned" by your command. This is a bytes sequence (which you will need to decode - more on this below).
* `stderr` is the error output from the process (from which you can check whether the process ran successfully or not). `subprocess.PIPE` creates a new "pipe" to the "child process".
```python
stdout, stderr = p.communicate()
```
```python
stderr
```
b''
Nothing here, because the `echo` command did not return any error. The `b` indicates that the output is a bytes object (not yet decoded into a string). By default, stdout, stderr (and other outputs of `p.communicate`) are returned in binary (byte) format.
Now check what's in `stdout`:
```python
stdout
```
b"I'm talkin' to you, bash!\n"
Let's decode and print it.
```python
print(stdout.decode())
```
I'm talkin' to you, bash!
You can also use `universal_newlines=True` so that these outputs are returned as encoded text (default being *utf-8* usually), with line endings converted to '\n'. For more information [see the documentation](https://docs.python.org/3.5/library/subprocess.html).
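For example (a minimal sketch of the same `echo` call as above, with `universal_newlines` switched on):

```python
p = subprocess.Popen(["echo", "No decoding needed this time!"],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                     universal_newlines=True)
stdout, stderr = p.communicate()
print(stdout)  # already a str, so no .decode() is required
```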
Let's try something else:
```python
p = subprocess.Popen(["ls", "-l"], stdout=subprocess.PIPE)
stdout, stderr = p.communicate()
# print(stdout.decode())
```
Recall that the `ls -l` command lists all files in a long listing format.
You can also use `subprocess` to run another python script as a separate process (!):
```python
p = subprocess.Popen(["python", "boilerplate.py"], stdout=subprocess.PIPE, stderr=subprocess.PIPE) # A bit silly!
stdout, stderr = p.communicate()
print(stdout.decode())
```
This is a boilerplate
Similarly, to compile a $\LaTeX$ document (using `pdflatex` in this case), you can do something like:
```python
subprocess.os.system("pdflatex yourlatexdoc.tex")
```
You can also do this instead:
```python
p = subprocess.Popen(["pdflatex", "yourlatexdoc.tex"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = p.communicate()
```
### Handling directory and file paths
You can also use `subprocess.os` to make your code OS (Linux, Windows, Mac) independent. For example to assign paths:
```python
subprocess.os.path.join('directory', 'subdirectory', 'file')
```
'directory/subdirectory/file'
The result would be appropriately different on Windows (with backslashes instead of forward slashes).
Note that in all cases you can "catch" the output of `subprocess` so that you can then use the output within your
python script. A simple example, where the output is a platform-dependent directory path, is:
```python
MyPath = subprocess.os.path.join('directory', 'subdirectory', 'file')
MyPath
```
'directory/subdirectory/file'
Explore what `subprocess` can do by tabbing
`subprocess.`, and also for submodules, e.g., type
`subprocess.os.` and then tab.
### Running `R`
R is likely an important part of your project's analysis and data visualization components in particular — for example for statistical analyses and pretty plotting (`ggplot2`!).
You can run `R` from Python easily. Try the following:
$\star$ Create an R script file called `TestR.R` in your `code` directory with the following content:
```r
print("Hello, this is R!")
```
Now, create a script `TestR.py` with the following content :
```python
import subprocess
subprocess.Popen("Rscript --verbose TestR.R > ../Results/TestR.Rout 2> ../Results/TestR_errFile.Rout", shell=True).wait()
```
2
Now run `TestR.py` (or use `%cpaste`) and check `TestR.Rout` and `TestR_errFile.Rout`.
Also check what happens if you run (type directly in `ipython` or `python` console):
```python
subprocess.Popen("Rscript --verbose NonExistScript.R > ../Results/outputFile.Rout 2> ../Results/errorFile.Rout", shell=True).wait()
```
2
It is possible that the location of `RScript` is different in your Ubuntu install. To locate it, try `find /usr -name 'Rscript'` in the linux terminal (not in `python`!). For example, you might need to specify the path to it using `/usr/lib/R/bin/Rscript`.
What do you see on the screen? Now check `outputFile.Rout` and `errorFile.Rout`.
## Practicals
As always, test, add, commit and push all your new code and data to your git repository.
### Using `os` problem 1
Open `using_os.py` and complete the tasks assigned (hint: you might want to look at `subprocess.os.walk()`)
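To get you started, here is a minimal illustration of what `subprocess.os.walk()` yields (this is not a solution to the problem, just a demonstration of the interface; the starting directory is arbitrary):

```python
import subprocess

for (dir_path, dir_names, file_names) in subprocess.os.walk(".."):
    print("Directory:", dir_path)
    print("  Subdirectories:", dir_names)
    print("  Files:", file_names)
```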
### Using `os` problem 2
Open `fmr.R` and work out what it does; check that you have `NagyEtAl1999.csv`. Now write python code called `run_fmr_R.py` that:

* Runs `fmr.R` to generate the desired result
* Prints to the python screen whether the run was successful, and the contents of the R console output

*`git add`, `commit` and `push` all your week's code and data to your git repository by next Wednesday.*
## Readings and Resources
* [The matplotlib website](http://matplotlib.org)
* For SciPy, the [official documentation is good](https://docs.scipy.org/doc/); Read about the scipy modules you think will be important to you.
* The "ecosystem" for Scientific computing in python: <http://www.scipy-lectures.org/>
* A Primer on Scientific Programming with Python <http://www.springer.com/us/book/9783642549595>; Multiple copies of this book are available from the central library and can be requested to Silwood from the IC library website. You can also find a pdf - google it
* Many great examples of applications in the [scipy cookbook](https://lagunita.stanford.edu/courses/DB/2014/SelfPaced/about)
* For regex: <https://docs.python.org/2/howto/regex.html>
* Google's short class on regex in python: <https://developers.google.com/edu/python/regular-expressions>, along with this exercise: <https://developers.google.com/edu/python/exercises/baby-names>
* <http://www.regular-expressions.info/> has a good intro, tips and a great array of canned solutions
* Use and abuse of regex: <https://blog.codinghorror.com/regex-use-vs-regex-abuse/>
| efdafea78efa0a689de37337e427182ad397dc84 | 215,747 | ipynb | Jupyter Notebook | notebooks/06-Python_II.ipynb | mathemage/TheMulQuaBio | 63a0ad6803e2aa1b808bc4517009c18a8c190b4c | [
"MIT"
]
| 1 | 2019-10-12T13:33:14.000Z | 2019-10-12T13:33:14.000Z | notebooks/06-Python_II.ipynb | OScott19/TheMulQuaBio | 197d710f76163469dfc7fa9d2d95ba3a739eccc7 | [
"MIT"
]
| null | null | null | notebooks/06-Python_II.ipynb | OScott19/TheMulQuaBio | 197d710f76163469dfc7fa9d2d95ba3a739eccc7 | [
"MIT"
]
| null | null | null | 53.402723 | 45,740 | 0.730703 | true | 19,152 | Qwen/Qwen-72B | 1. YES
2. YES | 0.689306 | 0.872347 | 0.601314 | __label__eng_Latn | 0.968052 | 0.235384 |
```python
from sympy import *
from math import factorial
```
# Discrete Random Variables
```python
"""
Definition:
The cumulative distribution function (CDF), F(·), of a random
variable, X, is defined by
F(x) := P(X ≤ x).
"""
```
```python
# Example: rolling a fair six-sided die
x = (1,2,3,4,5,6)
wp = 1/len(x)  # each outcome is equally likely
```
```python
"""
Definition:
A discrete random variable, X, has probability mass function (PMF),
p(·), if p(x) ≥ 0 and for all events A we have
P(X ∈ A) = SUM over x ∈ A of p(x).
"""
```
```python
# Probability of rolling at least 4, P(X >= 4)
px = 0
seq = ''
print('Probability of rolling at least 4, P(X >= 4)', x[3:])
for i in x[3:]:
    px = px + wp
    seq = seq + ' ' + str(i)
    print('probability: ', round(px, 2), ' values', seq)
```
    Probability of rolling at least 4, P(X >= 4) (4, 5, 6)
    probability:  0.17  values  4
    probability:  0.33  values  4 5
    probability:  0.5  values  4 5 6
```python
"""
Definition:
The expected value of a discrete random variable, X, is given by
E[X] := SUM xi p(xi).
"""
```
```python
# For the same die, the expected value is:
Ex = 0
for i in x:
    Ex = Ex + wp*i
print('The expected value E(X) is:', Ex)
```
    The expected value E(X) is: 3.5
```python
"""
Definition. The variance of any random variable, X, is defined as
Var(X) := E[(X − E[X])**2]
        = E[X**2] − (E[X])**2
"""
```
```python
# Obtaining the variance for the same example
varx = 0
Ex2 = 0
for i in x:
    Ex2 = Ex2 + wp*i**2
varx = Ex2 - Ex**2
print("The variance for a die roll is:", round(varx, 2))
```
    The variance for a die roll is: 2.92
# The Binomial Distribution
```python
"""
We say X has a binomial distribution, or X ∼ Bin(n, p), if
P(X = r) = (n r) p**r (1 − p)**(n−r)
For example, X might represent the number of heads in n independent coin
tosses, where p = P(head). The mean and variance of the binomial distribution
satisfy
E[X] = np
Var(X) = np(1 − p).
"""
```
```python
"""
(n r) = n!/(r!(n-r)!)
"""
```
### A Financial Application
```python
"""
Suppose a fund manager outperforms the market in a given year with
probability p and that she underperforms the market with probability 1 − p.
She has a track record of 10 years and has outperformed the market in 8 of
the 10 years.
Moreover, performance in any one year is independent of performance in
other years.
Question: How likely is a track record as good as this if the fund manager had no
skill so that p = 1/2?
Answer: Let X be the number of outperforming years. Since the fund manager
has no skill, X ∼ Bin(n = 10, p = 1/2) and
P(X ≥ 8) = SUM_{r=8}^{n} (n r) p**r (1 − p)**(n−r)
Question: Suppose there are M fund managers? How well should the best one do
over the 10-year period if none of them had any skill?
"""
```
```python
#Resolvendo a questão acima, temos:
n = 10
p = 1/2
# P(X >= 8) soma os termos r = 8, 9, 10
Px = sum((factorial(n)/(factorial(r)*factorial(n-r)))*p**r*(1-p)**(n-r) for r in range(8, n+1))
print('A probabilidade é de:', round(Px*100, 1), '%')
```
A probabilidade é de: 5.5 %
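The docstring's follow-up question (how well should the best of M managers look by luck alone?) can be sketched the same way: with M independent, skill-less managers, the chance that at least one shows 8 or more outperforming years is 1 − (1 − P(X ≥ 8))^M. The cell below is a small illustration added here, reusing `Px` from the cell above:

```python
# Sketch: probability that at least one of M skill-less managers outperforms in 8+ of 10 years.
for M in (10, 100, 1000):
    p_any = 1 - (1 - Px)**M
    print('M =', M, ' P(at least one with >= 8 good years) =', round(p_any, 3))
```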
# The Poisson Distribution
```python
"""
We say X has a Poisson(λ) distribution if
P(X = r) = λ**(r)*e**(-λ)/r!
E[X] = λ and Var(X) = λ
"""
```
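As a quick numerical illustration (added here, not part of the original notebook), the mean and variance of a Poisson(λ) can be checked by truncating the series; `lam` below is an arbitrary choice:

```python
# Sketch: check E[X] = Var(X) = λ for a Poisson(λ) by truncating its series.
from math import exp

lam = 2.5                    # arbitrary rate, just for the check
Ex = Ex2 = 0.0
p = exp(-lam)                # P(X = 0)
for r in range(100):
    Ex  += r * p
    Ex2 += r**2 * p
    p   *= lam / (r + 1)     # P(X = r+1) from P(X = r)
print('E[X] =', round(Ex, 6), ' Var(X) =', round(Ex2 - Ex**2, 6), ' λ =', lam)
```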
# Bayes’ Theorem
```python
"""
Let A and B be two events for which P(B) 6= 0. Then
P(A | B) = P(A ∩ B)/P(B)
= P(B | A)P(A)/P(B)
= P(B | A)P(A)/(SUM P(B | Aj)P(Aj))
where the Aj’s form a partition of the sample-space.
"""
```
```python
"""
Let Y1 and Y2 be the outcomes of tossing two fair dice independently of
one another.
Let X := Y1 + Y2. Question: What is P(Y1 ≥ 4 | X ≥ 8)?
"""
```
```python
#Resolvendo a questão acima, temos:
y1 = (1,2,3,4,5,6)
y2 = (1,2,3,4,5,6)
wp = 1/36  # probabilidade de cada par (Y1, Y2)
p_conj = sum(wp for a in y1 for b in y2 if a >= 4 and a + b >= 8)  # P(Y1>=4 e X>=8)
p_cond = p_conj / sum(wp for a in y1 for b in y2 if a + b >= 8)    # P(Y1>=4 | X>=8)
print('P(Y1>=4 | X>=8) =', round(p_cond, 4))
```
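As a sanity check (added here, not in the original), the same conditional probability can also be estimated by simulation; the exact value from the enumeration above is 12/15 = 0.8:

```python
# Sketch: Monte Carlo estimate of P(Y1 >= 4 | Y1 + Y2 >= 8).
import random

random.seed(0)
hits = total = 0
for _ in range(200000):
    a, b = random.randint(1, 6), random.randint(1, 6)
    if a + b >= 8:
        total += 1
        hits += (a >= 4)
print(round(hits / total, 3))
```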
# Continuous Random Variables
```python
"""
Definition. A continuous random variable, X, has probability density function
(PDF), f(·), if f(x) ≥ 0 and for all events A we have P(X ∈ A) = INTEGRAL_{A} f(x) dx.
"""
```
```python
```
# The Normal Distribution
```python
```
| 6dd4e5721ddb6e000f8e25e79019951687dd25ed | 8,703 | ipynb | Jupyter Notebook | Financial Engineering & Risk Management/Introduction to Financial Engineering and Risk Management/probability(I).ipynb | MaikeRM/FinancialEngineering | d5881995ff3097e77cb62633ab22d25625c81ee7 | [
"MIT"
]
| null | null | null | Financial Engineering & Risk Management/Introduction to Financial Engineering and Risk Management/probability(I).ipynb | MaikeRM/FinancialEngineering | d5881995ff3097e77cb62633ab22d25625c81ee7 | [
"MIT"
]
| null | null | null | Financial Engineering & Risk Management/Introduction to Financial Engineering and Risk Management/probability(I).ipynb | MaikeRM/FinancialEngineering | d5881995ff3097e77cb62633ab22d25625c81ee7 | [
"MIT"
]
| null | null | null | 21.330882 | 90 | 0.47363 | true | 1,325 | Qwen/Qwen-72B | 1. YES
2. YES | 0.923039 | 0.909907 | 0.83988 | __label__eng_Latn | 0.859196 | 0.789655 |
# Making a Binary Decision
## _Visualizing Binary Regression_
```
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats
from scipy.optimize import curve_fit
%matplotlib inline
```
A common approach when working with binary data (situations where the response $y \in \{0,1\}$) is to perform a logistic (or similar) regression on the data to estimate the probability of one class or the other based on a set of input variables, $X$. Once this regression has been performed, the scientist or engineer uses the estimated probabilities to make an informed decision as to which class an out-of-sample $X_i$ will belong. Ultimately the goal is an accurate prediction of the class to which a set of observations belongs, together with some measure of error.
This notebook will attempt to visualize different situations that can arise in both the response variables and the input variables.
First let us examine the binary regression model:
### Binary Regression Model
\begin{eqnarray}
y_i &\sim& Bernoulli(g^{-1}(\eta_i)) \\
\eta_i &=& x_i \beta \\
\end{eqnarray}
where $y_i \in \{0,1\}, i=1,\dots,n$ is a binary response variable for a collection of $n$ objects, with $p$ covariate measurements $x_i = (x_{i1}, \dots, x_{ip})$. $g(u)$ is a link function, $\eta_i$ denotes the linear predictor and $\beta$ represents a $(p\times 1)$ column vector of regression coefficients.
Under this model we can implement iteratively reweighted least squares (IRLS) to solve this equation under a variety of settings. If a Bayesian estimate were necessary, methods such as those by Albert and Chib (1993) or Holmes and Held (2006) for probit or logistic regression can be utilized. This notebook will focus on iteratively reweighted least squares for speed of computation.
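For reference (this summary is added here; notation matches the code in the next cell), the link used throughout is the logit, and each IRLS step is a Newton update built from the score $X^{\top}(y-\pi)$ and the information $X^{\top}WX$ with $W=\text{diag}(\pi_i(1-\pi_i))$:

\begin{eqnarray}
g(p) &=& \log\frac{p}{1-p}, \qquad g^{-1}(\eta) = \frac{e^{\eta}}{1+e^{\eta}} \\
\beta^{(k+1)} &=& \beta^{(k)} + \left(X^{\top} W X\right)^{-1} X^{\top}\left(y - \pi\right)
\end{eqnarray}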
### Iteratively Re-weighted Least Squares Function
```
def IRWLS(yi, X, link='logit', tol=1e-8, max_iter=100, verbose=False):
"""
Iteratively Re-Weighted Least Squares
"""
try:
nobs, npar = X.shape
except:
nobs = X.shape[0]
npar = 1
W = np.identity(nobs)
#Ordinary Least Squares as first Beta Guess
beta_start = betas = wls(X,W,yi)
lstart = lold = binLklihood(X, yi, beta_start, link)
delt = betaDelta(lold)
step = 0.0
while np.sum(np.abs(delt)) > tol and step < max_iter:
step += 1
delt = betaDelta(lold)
lnew = binLklihood(X, yi, betas + delt, link)
if lnew.likelihood < lold.likelihood:
delt = delt/2.
betas = betas + delt
lold = binLklihood(X,yi,betas,link)
else:
betas = betas + delt
lold = binLklihood(X,yi,betas,link)
        if verbose:
            print("""Step {0}: \nLikelihood: {1}""".format(step, lold.likelihood))
variance = lold.variance
return betas, variance
class binLklihood:
def __init__(self, X, y, betas, link='logit'):
self.X = X
self.y = y
self.betas = betas
self.link = link
if link == 'logit':
self.pi = invlogit(np.dot(X, betas))
elif link == 'probit':
self.pi = stats.norm.cdf(np.dot(X, betas))
self.W = np.diag(self.pi*(1 - self.pi))
self.likelihood = loglike(self.y, self.pi)
self.score = X.transpose().dot((y - self.pi))
self.information = X.transpose().dot(self.W).dot(X)
self.variance = np.linalg.pinv(self.information) / 4.0
def betaDelta(binlk):
"""
Change in Delta for a given binomial likelihood object
"""
return np.linalg.pinv(binlk.information).dot(binlk.score)
def invlogit(val):
"""
Inverse Logit Function
"""
return np.exp(val) / (np.exp(val) + 1)
def wls(X, W, yi):
"""
Weighted Least Squares
"""
XtWX = X.transpose().dot(W).dot(X)
XtWy = X.transpose().dot(W).dot(yi)
return np.linalg.pinv(XtWX).dot(XtWy)
def loglike(yi, pi):
"""
Binary Log-Likelihood
"""
vect_loglike = yi*np.log(pi) + (1-yi)*np.log(1-pi)
return np.sum(vect_loglike)
```
## Simulations of binary data
We will begin by showing a set of simulations for which there is only one predictor variable.
Assumptions:
$\begin{eqnarray}
y &\sim& Bernoulli(p) \\
X_i &\sim& N(\mu_i, \sigma^{2}_{i})
\end{eqnarray}$
_***Simulation 1 - Complete Separation of groups. Balanced Data***_
```
def makesamples(p_success, m1, s1, m2, s2, nsim):
    nsim = int(nsim)  # number of simulated observations
y = stats.bernoulli.rvs(p_success, size=nsim)
X = np.vstack((np.ones(nsim),np.array([stats.norm.rvs(m1,s1) if i == 1
else stats.norm.rvs(m2,s2) for i in y]))).transpose()
F = plt.figure()
plt.subplot(211)
plt.plot(X[:,1],y, marker='o', linestyle='')
plt.subplot(212)
F.set_size_inches(10,3)
plt.hist(X[:,1][y==1], bins=50, color='r', alpha=0.5)
plt.hist(X[:,1][y==0], bins=50, color='b', alpha=0.5)
plt.show()
return X, y
X, y = makesamples(0.5, 10, 1, 5, 1, 1000.)
```
```
def probability_curve(X,y):
beta, var = IRWLS(y, X)
testpts = np.arange(np.min(X),np.max(X),0.01)
testmat = np.vstack((np.ones(len(testpts)),testpts)).transpose()
probs = invlogit(np.dot(testmat, beta))
F = plt.figure()
plt.plot(X[:,1],y, marker='o', linestyle='')
plt.plot(testpts, probs)
plt.axhline(y=0.5, linestyle='--')
F.set_size_inches(10,3)
plt.show()
return beta, var
```
```
beta, var = probability_curve(X, y)
```
### Diagnosing the model
Read about ROC curves; the main idea is that a curve lying above the red dotted diagonal indicates a model that does better than chance on that type of dataset. If the ROC curve matches the red line exactly, then no matter what model you have fit using this method, it is only doing as well as random guessing.
http://en.wikipedia.org/wiki/Receiver_operating_characteristic
```
def calcRates(y, pred, checkpt, verbose=True):
true_positive = 0.
true_negative = 0.
false_positive = 0.
false_negative = 0.
totalP = len(y[y==1])
totalN = len(y[y==0])
for i in np.arange(len(y)):
if y[i] == 1 and pred[i] <= checkpt:
false_negative += 1.
if y[i] == 1 and pred[i] > checkpt:
true_positive += 1.
if y[i] == 0 and pred[i] >= checkpt:
false_positive += 1.
if y[i] == 0 and pred[i] < checkpt:
true_negative += 1.
TPR = true_positive / totalP
TNR = true_negative / totalN
    FPR = false_positive / totalN   # false positives come from the actual negatives
    FNR = false_negative / totalP   # false negatives come from the actual positives
    if verbose:
        print("""True Positive Rate = {0}
        True Negative Rate = {1}
        False Positive Rate = {2}
        False Negative Rate = {3}\n""".format(TPR, TNR, FPR, FNR))
return TPR, TNR, FPR, FNR
```
```
def plotROC(y, pred, verbose=False):
results = [1.,1.]
for i in np.arange(0,1.01,0.01):
TPR, TNR, FPR, FNR = calcRates(y,pred, i, verbose)
results = np.vstack((results,[FPR,TPR]))
results = np.vstack((results,[0.0,0.0]))
F = plt.figure()
plt.plot(results[:,0], results[:,1])
plt.plot(np.array([0,1]), np.array([0,1]),color='r',linestyle="--")
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.show()
```
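In addition to the visual check, the area under the ROC curve (AUC) gives a single-number summary; a value near 0.5 means the classifier is no better than random guessing. The helper below is a sketch added here (not part of the original notebook) and reuses `calcRates` from above:

```
# Sketch: integrate the ROC curve numerically to get an AUC-style summary.
def rocAUC(y, pred):
    pts = [(1.0, 1.0), (0.0, 0.0)]
    for c in np.arange(0.0, 1.01, 0.01):
        TPR, TNR, FPR, FNR = calcRates(y, pred, c, verbose=False)
        pts.append((FPR, TPR))
    pts.sort()                               # order by false positive rate
    xs = np.array([p[0] for p in pts])
    ys = np.array([p[1] for p in pts])
    return np.trapz(ys, xs)                  # trapezoidal rule
```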
```
plotROC(y, np.dot(X,beta))
```
_***Simulation 2 - Partial Overlap of groups. Balanced Data***_
```
X, y = makesamples(0.5, 6, 1, 5, 1, 1000)
```
```
beta, var = probability_curve(X, y)
```
```
plotROC(y, np.dot(X,beta))
```
_***Simulation 3 - Total Overlap of groups. Balanced Data***_
```
X, y = makesamples(0.5, 5,1,5,1, 1000)
```
```
beta, var = probability_curve(X, y)
```
```
plotROC(y, np.dot(X,beta))
```
_***Simulation 4 - No Overlap of groups. Unbalanced Data***_
```
x, y = makesamples(0.2, 5,0.5, 8,1, 10000)
beta, var = probability_curve(x,y)
plotROC(y, np.dot(x, beta))
```
_***Simulation 5 - Partial overlap of groups. Unbalanced Data***_
```
x, y = makesamples(0.2, 5,.5, 7,1, 10000)
beta, var = probability_curve(x,y)
plotROC(y, np.dot(x, beta))
```
_***Simulation 6 - Overlap of groups. Unbalanced Data***_
```
x, y = makesamples(0.2, 5,.5, 6,1, 10000)
beta, var = probability_curve(x,y)
plotROC(y, np.dot(x, beta))
```
## Moving onto more dimensions...
_***Simulation 7 - No overlap. Unbalanced Data. Two Dimensions***_
Below is a routine that samples the data. It assumes the X1 and X2 coordinates are independent of one another.
```
def makesamples2d(p_success, m_x1, m_y1, s1, m_x2, m_y2, s2, nsim):
    nsim = int(nsim)  # number of simulated observations
response = stats.bernoulli.rvs(p_success, size=nsim)
X = np.vstack((np.ones(nsim),
np.array([stats.norm.rvs(m_x1,s1) if i == 1
else stats.norm.rvs(m_x2,s2) for i in response]),
np.array([stats.norm.rvs(m_y1,s1) if i == 1
else stats.norm.rvs(m_y2,s2) for i in response]))).transpose()
F = plt.figure()
plt.plot(X[response==1][:,1], X[response==1][:,2], marker='.', linestyle='')
plt.plot(X[response==0][:,1], X[response==0][:,2], marker='.', color='r', linestyle='')
F.set_size_inches(6,6)
plt.show()
return X, response
X, y = makesamples2d(0.2, 7.2, 7.2, 0.5, 5,5, 1, 10000.)
```
```
def probability_curve(X,response):
beta, var = IRWLS(response, X)
testX1 = np.arange(np.min(X[:,1]),np.max(X[:,1]),0.01)
F = plt.figure()
plt.plot(X[response==1][:,1], X[response==1][:,2], marker='.', linestyle='')
plt.plot(X[response==0][:,1], X[response==0][:,2], marker='.', color='r', linestyle='')
plt.plot(testX1, makepline(testX1,beta,0.5), color='black', linestyle='dotted', marker='')
plt.plot(testX1, makepline(testX1,beta,0.25), color='red', linestyle='dashed', marker='')
plt.plot(testX1, makepline(testX1,beta,0.05), color='red', linestyle='-', marker='')
plt.plot(testX1, makepline(testX1,beta,0.75), color='blue', linestyle='dashed', marker='')
plt.plot(testX1, makepline(testX1,beta,0.95), color='blue', linestyle='-', marker='')
F.set_size_inches(6,6)
plt.show()
return beta, var
def makepline(X, beta, p):
beta3 = beta[2]
beta_star = beta[:-1]
X_star = np.vstack((np.ones(len(X)), X)).transpose()
x3 = (logiT(p) - np.dot(X_star, beta_star)) / beta3
return x3
def logiT(p):
return np.log(p/(1-p))
```
```
beta, var = probability_curve(X,y)
```
```
plotROC(y, np.dot(X,beta))
```
_***Simulation 8 - Partial overlap. Unbalanced Data. Two Dimensions***_
```
X, y = makesamples2d(0.2, 6.5, 6.5, .5, 5,5, 1, 10000.)
beta, var = probability_curve(X,y)
plotROC(y, np.dot(X,beta))
```
_***Simulation 9 - Complete overlap. Unbalanced Data. Two Dimensions***_
```
X, y = makesamples2d(0.2, 5, 5, .5, 5,5, 1, 10000.)
beta, var = probability_curve(X,y)
plotROC(y, np.dot(X,beta))
```
## Simulating Chandra Data
```
def makeChandraSamples2d(p_success, m_x1, m_y1, sx1, sy1, m_x2, m_y2, sx2, sy2, nsim):
response = stats.bernoulli.rvs(p_success, size=nsim)
X = np.vstack((np.ones(nsim),
np.array([stats.norm.rvs(m_x1,sx1) if i == 1
else stats.norm.rvs(m_x2,sx2) for i in response]),
np.array([stats.norm.rvs(m_y1,sy1) if i == 1
else stats.norm.rvs(m_y2,sy2) for i in response]))).transpose()
F = plt.figure()
plt.plot(X[response==0][:,1], X[response==0][:,2], marker='.', color='b', linestyle='')
plt.plot(X[response==1][:,1], X[response==1][:,2], marker='x', color='r', linestyle='')
plt.show()
return X, response
```
```
X, y = makeChandraSamples2d(.25, 9.7, 0.12, .5, .02, 8.5, 0.12, 1, .02, 8000)
beta, var = probability_curve(X,y)
plotROC(y, np.dot(X, beta), verbose=False)
```
_***Now let's play with some non-linear regression!***_
$\begin{equation}
f(X\beta) = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_1^2 + \beta_4 x_2^2 + \beta_5 x_1 x_2
\end{equation}$
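With the squared terms included, the fitted equal-probability contours are quadratic in $x_1$ and $x_2$, so the decision boundary can curve around one group instead of being a straight line, which is exactly what the contour plots below show.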
```
def makeNonLinear2d(p_success, m_x1, m_y1, sx1, sy1, m_x2, m_y2, sx2, sy2, nsim):
response = stats.bernoulli.rvs(p_success, size=nsim)
X_linear = np.vstack((np.ones(nsim),
np.array([stats.norm.rvs(m_x1,sx1) if i == 1
else stats.norm.rvs(m_x2,sx2) for i in response]),
np.array([stats.norm.rvs(m_y1,sy1) if i == 1
else stats.norm.rvs(m_y2,sy2) for i in response])))
X_nonlinear = np.vstack((X_linear, X_linear[1,:]**2, X_linear[2,:]**2)).transpose() #, X_linear[1,:]*X_linear[2,:]
    print(X_linear[1, :]**2)  # debug: squared x1 values that enter the nonlinear design
X_linear = X_linear.transpose()
F = plt.figure()
plt.plot(X_linear[response==0][:,1], X_linear[response==0][:,2], marker='.', color='b', linestyle='')
plt.plot(X_linear[response==1][:,1], X_linear[response==1][:,2], marker='x', color='r', linestyle='')
plt.show()
return X_linear, X_nonlinear, response
```
```
def probability_curve_nonlinear(X, Xnl, response):
beta_linear, var_linear = IRWLS(response, X)
beta_nonlinear, var_nonlinear = IRWLS(response, Xnl)
x1 = np.arange(-2, 12, 0.1)
x2 = np.arange(-2, 12, 0.1)
xx1, xx2 = np.meshgrid(x1, x2, sparse=True)
zz_nl = invlogit(beta_nonlinear[0] + beta_nonlinear[1]*xx1 + beta_nonlinear[2]*xx2 + beta_nonlinear[3]*xx1*xx1 + beta_nonlinear[4]*xx2*xx2) # + beta_nonlinear[5]*xx1*xx2
zz_l = invlogit(beta_linear[0] + beta_linear[1]*xx1 + beta_linear[2]*xx2)
F = plt.figure()
plt.plot(X[response==0][:,1], X[response==0][:,2], marker='.', linestyle='')
plt.plot(X[response==1][:,1], X[response==1][:,2], marker='x', color='r', linestyle='')
CS = plt.contour(x1, x2, zz_nl, colors='k')
plt.clabel(CS, inline=1, fontsize=10)
plt.show()
F = plt.figure()
plt.plot(X[response==0][:,1], X[response==0][:,2], marker='.', linestyle='')
plt.plot(X[response==1][:,1], X[response==1][:,2], marker='x', color='r', linestyle='')
CS = plt.contour(x1, x2, zz_l, colors='k')
plt.clabel(CS, inline=1, fontsize=10)
plt.show()
return beta_linear, var_linear, beta_nonlinear, var_nonlinear
def logiT(p):
return np.log(p/(1-p))
```
```
Xl, Xnl, y = makeNonLinear2d(.1, 3,3,1,1, 5,5,2,2, 1000)
b1, v1, b2, v2 = probability_curve_nonlinear(Xl, Xnl, y)
plotROC(y, np.dot(Xl,b1))
plotROC(y, np.dot(Xnl,b2))
```
| 67caa078db5631996e9c2686ccabc31b051b7526 | 656,447 | ipynb | Jupyter Notebook | bayes_analysis_vegetabile/tutorials/LogisticRegression.ipynb | sot/aca_stats | 0b0b393cd42d2e9162ce4925b468037f2b7c7a18 | [
"BSD-3-Clause"
]
| null | null | null | bayes_analysis_vegetabile/tutorials/LogisticRegression.ipynb | sot/aca_stats | 0b0b393cd42d2e9162ce4925b468037f2b7c7a18 | [
"BSD-3-Clause"
]
| 3 | 2017-05-22T20:16:17.000Z | 2018-11-27T12:44:03.000Z | bayes_analysis_vegetabile/tutorials/LogisticRegression.ipynb | sot/aca_stats | 0b0b393cd42d2e9162ce4925b468037f2b7c7a18 | [
"BSD-3-Clause"
]
| null | null | null | 647.383629 | 52,589 | 0.934173 | true | 4,529 | Qwen/Qwen-72B | 1. YES
2. YES | 0.887205 | 0.718594 | 0.63754 | __label__eng_Latn | 0.524044 | 0.31955 |
# Hybrid Monte Carlo
## Affine Short Rate Models
In this notebook we analyse yield curve modelling based on affine term structure models. We start with a classical CIR model. Then we analyse initial yield curve calibration via deterministic shift extension. Finally, we also analyse the impact of square root processes on volatility smile.
```python
import sys
sys.path.append('../') # make python find our modules
import numpy as np
import pandas as pd
import plotly.express as px
import plotly.graph_objects as go
import QuantLib as ql
```
## CIR Model Properties
As a first step we set up a CIR model and analyse modelled yield curves and volatilities.
```python
from hybmc.models.AffineShortRateModel import CoxIngersollRossModel, quadraticExponential, cirMoments
r0 = 0.02
chi_ = 0.07
theta_ = 0.05
sigma_ = 0.07
cirModel = CoxIngersollRossModel(r0,chi_,theta_,sigma_,quadraticExponential(1.5))
```
We have a look at the *initial* yield curve implied by the model.
```python
dT = 1.0/365.0
f = lambda t, T, rt : np.log(cirModel.zeroBondPrice(t,T,rt) / cirModel.zeroBondPrice(t,T+dT,rt)) / dT
```
```python
T = np.linspace(0.0,20.0,21)
X0 = cirModel.initialValues()
f_ = np.array([ f(0.0,T_,cirModel.r0) for T_ in T ])
curve = pd.DataFrame([ T, f_ ]).T
curve.columns = ['T', 'f(0,T)']
fig = px.line(curve, x='T', y='f(0,T)')
fig.show()
```
Next we check *future* model-implied curves.
```python
T0 = 5.0
shortRates = np.linspace(0.02, 0.04, 5)
for r in reversed(shortRates):
f_ = np.array([ f(T0,T0+T_,r) for T_ in T ])
fig.add_trace(go.Scatter(x=T0+T, y=f_, mode='lines', name='r=%6.4f'%r))
fig.show()
```
The humped shape looks much better compared to the Hull-White model.
We are also interested in the volatility of rates.
Zero bonds are given by $P(t,T,r) = \exp(-B_{CIR}(t,T)\, r(t) + A_{CIR}(t,T))$. Future zero rates from $T_0$ to $T_1$ are defined as
\begin{align}
F(t;T_0,T_1) &= \frac{1}{T_1-T_0} \log\left( \frac{P\left(t,T_0,r(t)\right)}{P\left(t,T_1,r(t)\right)} \right) \\
&= \frac{-\left[ B_{CIR}(t,T_0) - B_{CIR}(t,T_1)\right] r(t) + A_{CIR}(t,T_0) - A_{CIR}(t,T_1) }{T_1-T_0}.
\end{align}
In particular, we get for the variance of $F(T_0,T_0,T_1)$
$$
Var\left[ F(T_0,T_0,T_1) \right] = \left[ \frac{B_{CIR}(T_0,T_1)}{T_1-T_0} \right]^2 \cdot Var\left[ r(T_0) \right].
$$
This yields the proxy ATM swap rate volatility
$$
\sigma(T_0,T_1) = \underbrace{\frac{B_{CIR}(T_0,T_1)}{T_1-T_0}}_{\lambda(T_0,T_1)}
\underbrace{\sqrt{ \frac{Var\left[ r(T_0) \right]}{T_0} } }_{\sigma_{CIR}}
$$
```python
lambda_ = lambda T0,T1 : cirModel.ricattiAB(T0,T1,0.0,1.0)[1] / (T1 - T0)
expiryTimes = np.linspace(1.0, 10.0,10)
swapTerms = np.linspace(1.0, 10.0,10)
scalings = pd.DataFrame([ [T0, dT, lambda_(T0,T0+dT)] for T0 in expiryTimes for dT in swapTerms ],columns=['T0', 'dT', 'scaling'])
#fig = go.Figure(data=[go.Surface(x=scalings.T0,y=scalings.dT,z=scalings.scaling)])
fig = px.scatter_3d(scalings, x='T0', y='dT', z='scaling')
fig.show()
```
```python
sigma_CIR = lambda T0 : np.sqrt(cirMoments(cirModel.r0,T0,cirModel.chi(0.0),cirModel.theta(0.0),cirModel.sigma(0.0))[1] / T0)
vols = pd.DataFrame([ [T0, sigma_CIR(T0)] for T0 in expiryTimes], columns=['T0','sigma_CIR'])
fig = px.line(vols,x='T0',y='sigma_CIR')
fig.show()
```
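Combining the two pieces gives the proxy ATM volatility surface $\sigma(T_0,T_1)=\lambda(T_0,T_1)\,\sigma_{CIR}(T_0)$ from the formula above. The cell below is a sketch added here (not part of the original notebook) and simply reuses `lambda_`, `sigma_CIR`, `expiryTimes` and `swapTerms`:

```python
# Sketch: tabulate the proxy ATM volatility sigma(T0,T1) = lambda(T0,T1) * sigma_CIR(T0).
proxyVols = pd.DataFrame(
    [ [T0, dT, lambda_(T0, T0 + dT) * sigma_CIR(T0)] for T0 in expiryTimes for dT in swapTerms ],
    columns=['T0', 'dT', 'sigma'])
fig = px.scatter_3d(proxyVols, x='T0', y='dT', z='sigma')
fig.show()
```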
## Yield Curve Fit
```python
import QuantLib as ql
today = ql.Settings.instance().evaluationDate
curveData = pd.DataFrame([[ 0.0, 5.0, 10.0, 20.0 ],
[ 0.020, 0.028, 0.033, 0.035 ]]).T
curveData.columns = ['T', 'f']
curveData['Date'] = [ today + int(t*365) for t in curveData['T'] ]
yts = ql.ForwardCurve(curveData['Date'],curveData['f'],ql.Actual365Fixed())
```
```python
fMarket = lambda T : yts.forwardRate(T,T,ql.Continuous).rate()
curve['fM(0,T)'] = [ fMarket(T) for T in curve['T'] ]
fig = go.Figure()
fig.add_trace(go.Scatter(x=curve['T'], y=curve['f(0,T)'], mode='lines', name='f(0,T)'))
fig.add_trace(go.Scatter(x=curve['T'], y=curve['fM(0,T)'], mode='lines', name='fM(0,T)'))
fig.show()
```
```python
zeroRateCir = lambda T : -np.log(cirModel.zeroBondPrice(0.0,T,cirModel.r0))/T
zeroRateYts = lambda T : -np.log(yts.discount(T)) / T
zeros = pd.DataFrame(np.linspace(0.1,20,200),columns=['T'])
zeros['CIR'] = [ zeroRateCir(T) for T in zeros['T'] ]
zeros['Yts'] = [ zeroRateYts(T) for T in zeros['T'] ]
fig = go.Figure()
fig.add_trace(go.Scatter(x=zeros['T'], y=zeros['CIR'], mode='lines', name='CIR'))
fig.add_trace(go.Scatter(x=zeros['T'], y=zeros['Yts'], mode='lines', name='Yts'))
fig.show()
```
```python
from hybmc.models.ShiftedRatesModel import ShiftedRatesModel
shiModel = ShiftedRatesModel(yts,cirModel)
```
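For context (note added here): the deterministic shift extension referred to in the introduction writes the short rate as the CIR state plus a deterministic, time-dependent shift, $r(t) = x(t) + \phi(t)$, with $\phi$ chosen so that the shifted model reproduces the input curve `yts` exactly; in the usual construction $\phi(T) = f^{M}(0,T) - f^{CIR}(0,T)$, the gap between the two forward curves plotted above. The Monte Carlo comparison below checks precisely this initial-curve fit.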
```python
from hybmc.simulations.McSimulation import McSimulation
times = np.linspace(0.0,20.0,21)
nPaths = 2**13
seed = 3141
simCir = McSimulation(cirModel,times,nPaths,seed,showProgress=True)
simShi = McSimulation(shiModel,times,nPaths,seed,showProgress=True)
#
dT = 0.0
zcbCir = np.mean(np.array([
[ cirModel.zeroBond(times[t],times[t]+dT,simCir.X[p,t,:],None) / cirModel.numeraire(times[t],simCir.X[p,t,:]) for t in range(len(times)) ]
for p in range(nPaths) ]), axis=0)
zcbShi = np.mean(np.array([
[ shiModel.zeroBond(times[t],times[t]+dT,simShi.X[p,t,:],None) / shiModel.numeraire(times[t],simShi.X[p,t,:]) for t in range(len(times)) ]
for p in range(nPaths) ]), axis=0)
#
mcZeroCir = [ -np.log(df)/T for df,T in zip(zcbCir,times) ]
mcZeroShi = [ -np.log(df)/T for df,T in zip(zcbShi,times) ]
fig.add_trace(go.Scatter(x=times[1:], y=mcZeroCir[1:], mode='markers', name='CIR'))
fig.add_trace(go.Scatter(x=times[1:], y=mcZeroShi[1:], mode='markers', name='Yts'))
fig.show()
```
```python
```
| 07a176f32c974dac0befc96ffd0ab389cf56c437 | 9,238 | ipynb | Jupyter Notebook | doc/AffineShortRateModels.ipynb | sschlenkrich/HybridMonteCarlo | 72f54aa4bcd742430462b27b72d70369c01f9ac4 | [
"MIT"
]
| 3 | 2021-08-18T18:34:41.000Z | 2021-12-24T07:05:19.000Z | doc/AffineShortRateModels.ipynb | sschlenkrich/HybridMonteCarlo | 72f54aa4bcd742430462b27b72d70369c01f9ac4 | [
"MIT"
]
| null | null | null | doc/AffineShortRateModels.ipynb | sschlenkrich/HybridMonteCarlo | 72f54aa4bcd742430462b27b72d70369c01f9ac4 | [
"MIT"
]
| 3 | 2021-01-31T11:41:19.000Z | 2022-03-25T19:51:20.000Z | 31.52901 | 296 | 0.549361 | true | 2,044 | Qwen/Qwen-72B | 1. YES
2. YES | 0.919643 | 0.808067 | 0.743133 | __label__eng_Latn | 0.304767 | 0.564879 |
<a href="https://colab.research.google.com/github/julianovale/project_trains/blob/master/Exemplo_03.ipynb" target="_parent"></a>
```
from sympy import I, Matrix, symbols, Symbol, eye
from datetime import datetime
import numpy as np
import pandas as pd
```
```
# Rotas
R1 = Matrix([[0,"R1_p1",0],[0,0,"R1_v1"],[0,0,0]])
R2 = Matrix([[0,"R2_p1",0],[0,0,"R2_v1"],[0,0,0]])
```
```
# Seções (semáforos)
T1 = Matrix([[0, "p1"],["v1", 0]])
```
```
def kronSum(A,B):
m = np.size(A,1)
n = np.size(B,1)
A = np.kron(A,np.eye(n))
B = np.kron(np.eye(m),B)
return A + B
```
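A quick way to sanity-check the composition (sketch added here): the Kronecker sum of an m×m and an n×n matrix is (m·n)×(m·n), so the two 3-state routes give a 9×9 route algebra, and composing with the 2-state semaphore via the Kronecker product gives the 18×18 system seen below.

```
# Sketch: dimension check for the Kronecker compositions used below.
A = np.zeros((3, 3))
B = np.zeros((3, 3))
C = np.zeros((2, 2))
print(kronSum(A, B).shape)               # (9, 9)   -> route algebra
print(np.kron(kronSum(A, B), C).shape)   # (18, 18) -> full system
```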
```
momento_inicio = datetime.now()
'''
Algebra de rotas
'''
rotas = kronSum(R1,R2)
'''
Algebra de seções
secoes = kronSum(T1,T2)
secoes = kronSum(secoes,T3)
secoes = kronSum(secoes,T4)
secoes = kronSum(secoes,T5)
'''
'''
Algebra de sistema
'''
sistema = np.kron(rotas, T1) # lembrar de trocar para "secoes" se tiver vários semáforos
# calcula tempo de processamento
tempo_processamento = datetime.now() - momento_inicio
```
```
sistema = pd.DataFrame(data=sistema,index=list(range(1,np.size(sistema,0)+1)), columns=list(range(1,np.size(sistema,1)+1)))
```
```
sistema.shape
```
(18, 18)
```
print(tempo_processamento)
```
0:00:00.018771
```
sistema
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
<th>6</th>
<th>7</th>
<th>8</th>
<th>9</th>
<th>10</th>
<th>11</th>
<th>12</th>
<th>13</th>
<th>14</th>
<th>15</th>
<th>16</th>
<th>17</th>
<th>18</th>
</tr>
</thead>
<tbody>
<tr>
<th>1</th>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1.0*R2_p1*p1</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1.0*R1_p1*p1</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>2</th>
<td>0</td>
<td>0</td>
<td>1.0*R2_p1*v1</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1.0*R1_p1*v1</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>3</th>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1.0*R2_v1*p1</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1.0*R1_p1*p1</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>4</th>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1.0*R2_v1*v1</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1.0*R1_p1*v1</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>5</th>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1.0*R1_p1*p1</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>6</th>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1.0*R1_p1*v1</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>7</th>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1.0*R2_p1*p1</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1.0*R1_v1*p1</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>8</th>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1.0*R2_p1*v1</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1.0*R1_v1*v1</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>9</th>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1.0*R2_v1*p1</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1.0*R1_v1*p1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>10</th>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1.0*R2_v1*v1</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1.0*R1_v1*v1</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>11</th>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1.0*R1_v1*p1</td>
</tr>
<tr>
<th>12</th>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1.0*R1_v1*v1</td>
<td>0</td>
</tr>
<tr>
<th>13</th>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1.0*R2_p1*p1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>14</th>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1.0*R2_p1*v1</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>15</th>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1.0*R2_v1*p1</td>
</tr>
<tr>
<th>16</th>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1.0*R2_v1*v1</td>
<td>0</td>
</tr>
<tr>
<th>17</th>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>18</th>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
</tbody>
</table>
</div>
```
momento_inicio = datetime.now()
colunas = ['de_noh', 'para_noh', 'aresta']
grafo = pd.DataFrame(columns=colunas)
# walk every cell of the system matrix and record the non-zero transitions as edges
for r in range(1, np.size(sistema,0)+1):
    for c in range(1, np.size(sistema,1)+1):
        if sistema.loc[r,c] != 0:
            grafo.loc[len(grafo)+1] = (r, c, sistema.loc[r,c])
tempo_processamento = datetime.now() - momento_inicio
print(tempo_processamento)
```
0:00:00.081615
```
grafo['aresta'] = grafo['aresta'].astype('str')
grafo
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>de_noh</th>
<th>para_noh</th>
<th>aresta</th>
</tr>
</thead>
<tbody>
<tr>
<th>1</th>
<td>1</td>
<td>4</td>
<td>1.0*R2_p1*p1</td>
</tr>
<tr>
<th>2</th>
<td>1</td>
<td>8</td>
<td>1.0*R1_p1*p1</td>
</tr>
<tr>
<th>3</th>
<td>2</td>
<td>3</td>
<td>1.0*R2_p1*v1</td>
</tr>
<tr>
<th>4</th>
<td>2</td>
<td>7</td>
<td>1.0*R1_p1*v1</td>
</tr>
<tr>
<th>5</th>
<td>3</td>
<td>6</td>
<td>1.0*R2_v1*p1</td>
</tr>
<tr>
<th>6</th>
<td>3</td>
<td>10</td>
<td>1.0*R1_p1*p1</td>
</tr>
<tr>
<th>7</th>
<td>4</td>
<td>5</td>
<td>1.0*R2_v1*v1</td>
</tr>
<tr>
<th>8</th>
<td>4</td>
<td>9</td>
<td>1.0*R1_p1*v1</td>
</tr>
<tr>
<th>9</th>
<td>5</td>
<td>12</td>
<td>1.0*R1_p1*p1</td>
</tr>
<tr>
<th>10</th>
<td>6</td>
<td>11</td>
<td>1.0*R1_p1*v1</td>
</tr>
<tr>
<th>11</th>
<td>7</td>
<td>10</td>
<td>1.0*R2_p1*p1</td>
</tr>
<tr>
<th>12</th>
<td>7</td>
<td>14</td>
<td>1.0*R1_v1*p1</td>
</tr>
<tr>
<th>13</th>
<td>8</td>
<td>9</td>
<td>1.0*R2_p1*v1</td>
</tr>
<tr>
<th>14</th>
<td>8</td>
<td>13</td>
<td>1.0*R1_v1*v1</td>
</tr>
<tr>
<th>15</th>
<td>9</td>
<td>12</td>
<td>1.0*R2_v1*p1</td>
</tr>
<tr>
<th>16</th>
<td>9</td>
<td>16</td>
<td>1.0*R1_v1*p1</td>
</tr>
<tr>
<th>17</th>
<td>10</td>
<td>11</td>
<td>1.0*R2_v1*v1</td>
</tr>
<tr>
<th>18</th>
<td>10</td>
<td>15</td>
<td>1.0*R1_v1*v1</td>
</tr>
<tr>
<th>19</th>
<td>12</td>
<td>17</td>
<td>1.0*R1_v1*v1</td>
</tr>
<tr>
<th>20</th>
<td>13</td>
<td>16</td>
<td>1.0*R2_p1*p1</td>
</tr>
<tr>
<th>21</th>
<td>14</td>
<td>15</td>
<td>1.0*R2_p1*v1</td>
</tr>
<tr>
<th>22</th>
<td>16</td>
<td>17</td>
<td>1.0*R2_v1*v1</td>
</tr>
</tbody>
</table>
</div>
```
new = grafo["aresta"].str.split("*", n = -1, expand = True)
grafo["aresta"]=new[1]
grafo["semaforo_secao"]=new[2]
new = grafo["aresta"].str.split("_", n = -1, expand = True)
grafo["semaforo_trem"]=new[1]
grafo['coincide'] = np.where(grafo['semaforo_secao']==grafo['semaforo_trem'], True, False)
grafo
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>de_noh</th>
<th>para_noh</th>
<th>aresta</th>
<th>semaforo_secao</th>
<th>semaforo_trem</th>
<th>coincide</th>
</tr>
</thead>
<tbody>
<tr>
<th>1</th>
<td>1</td>
<td>4</td>
<td>R2_p1</td>
<td>p1</td>
<td>p1</td>
<td>True</td>
</tr>
<tr>
<th>2</th>
<td>1</td>
<td>8</td>
<td>R1_p1</td>
<td>p1</td>
<td>p1</td>
<td>True</td>
</tr>
<tr>
<th>3</th>
<td>2</td>
<td>3</td>
<td>R2_p1</td>
<td>v1</td>
<td>p1</td>
<td>False</td>
</tr>
<tr>
<th>4</th>
<td>2</td>
<td>7</td>
<td>R1_p1</td>
<td>v1</td>
<td>p1</td>
<td>False</td>
</tr>
<tr>
<th>5</th>
<td>3</td>
<td>6</td>
<td>R2_v1</td>
<td>p1</td>
<td>v1</td>
<td>False</td>
</tr>
<tr>
<th>6</th>
<td>3</td>
<td>10</td>
<td>R1_p1</td>
<td>p1</td>
<td>p1</td>
<td>True</td>
</tr>
<tr>
<th>7</th>
<td>4</td>
<td>5</td>
<td>R2_v1</td>
<td>v1</td>
<td>v1</td>
<td>True</td>
</tr>
<tr>
<th>8</th>
<td>4</td>
<td>9</td>
<td>R1_p1</td>
<td>v1</td>
<td>p1</td>
<td>False</td>
</tr>
<tr>
<th>9</th>
<td>5</td>
<td>12</td>
<td>R1_p1</td>
<td>p1</td>
<td>p1</td>
<td>True</td>
</tr>
<tr>
<th>10</th>
<td>6</td>
<td>11</td>
<td>R1_p1</td>
<td>v1</td>
<td>p1</td>
<td>False</td>
</tr>
<tr>
<th>11</th>
<td>7</td>
<td>10</td>
<td>R2_p1</td>
<td>p1</td>
<td>p1</td>
<td>True</td>
</tr>
<tr>
<th>12</th>
<td>7</td>
<td>14</td>
<td>R1_v1</td>
<td>p1</td>
<td>v1</td>
<td>False</td>
</tr>
<tr>
<th>13</th>
<td>8</td>
<td>9</td>
<td>R2_p1</td>
<td>v1</td>
<td>p1</td>
<td>False</td>
</tr>
<tr>
<th>14</th>
<td>8</td>
<td>13</td>
<td>R1_v1</td>
<td>v1</td>
<td>v1</td>
<td>True</td>
</tr>
<tr>
<th>15</th>
<td>9</td>
<td>12</td>
<td>R2_v1</td>
<td>p1</td>
<td>v1</td>
<td>False</td>
</tr>
<tr>
<th>16</th>
<td>9</td>
<td>16</td>
<td>R1_v1</td>
<td>p1</td>
<td>v1</td>
<td>False</td>
</tr>
<tr>
<th>17</th>
<td>10</td>
<td>11</td>
<td>R2_v1</td>
<td>v1</td>
<td>v1</td>
<td>True</td>
</tr>
<tr>
<th>18</th>
<td>10</td>
<td>15</td>
<td>R1_v1</td>
<td>v1</td>
<td>v1</td>
<td>True</td>
</tr>
<tr>
<th>19</th>
<td>12</td>
<td>17</td>
<td>R1_v1</td>
<td>v1</td>
<td>v1</td>
<td>True</td>
</tr>
<tr>
<th>20</th>
<td>13</td>
<td>16</td>
<td>R2_p1</td>
<td>p1</td>
<td>p1</td>
<td>True</td>
</tr>
<tr>
<th>21</th>
<td>14</td>
<td>15</td>
<td>R2_p1</td>
<td>v1</td>
<td>p1</td>
<td>False</td>
</tr>
<tr>
<th>22</th>
<td>16</td>
<td>17</td>
<td>R2_v1</td>
<td>v1</td>
<td>v1</td>
<td>True</td>
</tr>
</tbody>
</table>
</div>
```
grafo = pd.DataFrame(data=grafo)
```
```
# PASSO 1
alcancavel = [1]
N = np.size(grafo,0)
for i in range(N):
de = grafo.loc[i+1]['de_noh']
para = grafo.loc[i+1]['para_noh']
if de in alcancavel:
alcancavel.append(para)
else:
i += 1
alcancavel.sort()
```
```
grafo01 = grafo[grafo.de_noh.isin(alcancavel)]
```
```
grafo01
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>de_noh</th>
<th>para_noh</th>
<th>aresta</th>
<th>semaforo_secao</th>
<th>semaforo_trem</th>
<th>coincide</th>
</tr>
</thead>
<tbody>
<tr>
<th>1</th>
<td>1</td>
<td>4</td>
<td>R2_p1</td>
<td>p1</td>
<td>p1</td>
<td>True</td>
</tr>
<tr>
<th>2</th>
<td>1</td>
<td>8</td>
<td>R1_p1</td>
<td>p1</td>
<td>p1</td>
<td>True</td>
</tr>
<tr>
<th>7</th>
<td>4</td>
<td>5</td>
<td>R2_v1</td>
<td>v1</td>
<td>v1</td>
<td>True</td>
</tr>
<tr>
<th>8</th>
<td>4</td>
<td>9</td>
<td>R1_p1</td>
<td>v1</td>
<td>p1</td>
<td>False</td>
</tr>
<tr>
<th>9</th>
<td>5</td>
<td>12</td>
<td>R1_p1</td>
<td>p1</td>
<td>p1</td>
<td>True</td>
</tr>
<tr>
<th>13</th>
<td>8</td>
<td>9</td>
<td>R2_p1</td>
<td>v1</td>
<td>p1</td>
<td>False</td>
</tr>
<tr>
<th>14</th>
<td>8</td>
<td>13</td>
<td>R1_v1</td>
<td>v1</td>
<td>v1</td>
<td>True</td>
</tr>
<tr>
<th>15</th>
<td>9</td>
<td>12</td>
<td>R2_v1</td>
<td>p1</td>
<td>v1</td>
<td>False</td>
</tr>
<tr>
<th>16</th>
<td>9</td>
<td>16</td>
<td>R1_v1</td>
<td>p1</td>
<td>v1</td>
<td>False</td>
</tr>
<tr>
<th>19</th>
<td>12</td>
<td>17</td>
<td>R1_v1</td>
<td>v1</td>
<td>v1</td>
<td>True</td>
</tr>
<tr>
<th>20</th>
<td>13</td>
<td>16</td>
<td>R2_p1</td>
<td>p1</td>
<td>p1</td>
<td>True</td>
</tr>
<tr>
<th>22</th>
<td>16</td>
<td>17</td>
<td>R2_v1</td>
<td>v1</td>
<td>v1</td>
<td>True</td>
</tr>
</tbody>
</table>
</div>
```
grafo01.drop(grafo01[grafo01.coincide == False].index, inplace=True)
grafo01
```
/usr/local/lib/python3.6/dist-packages/pandas/core/frame.py:3997: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
errors=errors,
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>de_noh</th>
<th>para_noh</th>
<th>aresta</th>
<th>semaforo_secao</th>
<th>semaforo_trem</th>
<th>coincide</th>
</tr>
</thead>
<tbody>
<tr>
<th>1</th>
<td>1</td>
<td>4</td>
<td>R2_p1</td>
<td>p1</td>
<td>p1</td>
<td>True</td>
</tr>
<tr>
<th>2</th>
<td>1</td>
<td>8</td>
<td>R1_p1</td>
<td>p1</td>
<td>p1</td>
<td>True</td>
</tr>
<tr>
<th>7</th>
<td>4</td>
<td>5</td>
<td>R2_v1</td>
<td>v1</td>
<td>v1</td>
<td>True</td>
</tr>
<tr>
<th>9</th>
<td>5</td>
<td>12</td>
<td>R1_p1</td>
<td>p1</td>
<td>p1</td>
<td>True</td>
</tr>
<tr>
<th>14</th>
<td>8</td>
<td>13</td>
<td>R1_v1</td>
<td>v1</td>
<td>v1</td>
<td>True</td>
</tr>
<tr>
<th>19</th>
<td>12</td>
<td>17</td>
<td>R1_v1</td>
<td>v1</td>
<td>v1</td>
<td>True</td>
</tr>
<tr>
<th>20</th>
<td>13</td>
<td>16</td>
<td>R2_p1</td>
<td>p1</td>
<td>p1</td>
<td>True</td>
</tr>
<tr>
<th>22</th>
<td>16</td>
<td>17</td>
<td>R2_v1</td>
<td>v1</td>
<td>v1</td>
<td>True</td>
</tr>
</tbody>
</table>
</div>
| 04c7d06209ea67e2a37fb845b4a623938c2e55d7 | 55,764 | ipynb | Jupyter Notebook | Exemplo_03.ipynb | julianovale/project_trains | 73f698ab9618363b93777ab7337be813bf14d688 | [
"MIT"
]
| null | null | null | Exemplo_03.ipynb | julianovale/project_trains | 73f698ab9618363b93777ab7337be813bf14d688 | [
"MIT"
]
| null | null | null | Exemplo_03.ipynb | julianovale/project_trains | 73f698ab9618363b93777ab7337be813bf14d688 | [
"MIT"
]
| null | null | null | 35.00565 | 235 | 0.244172 | true | 8,857 | Qwen/Qwen-72B | 1. YES
2. YES | 0.76908 | 0.651355 | 0.500944 | __label__cym_Latn | 0.151307 | 0.00219 |
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Singular-Value-Decomposition-(SVD)" data-toc-modified-id="Singular-Value-Decomposition-(SVD)-1"><span class="toc-item-num">1 </span>Singular Value Decomposition (SVD)</a></span><ul class="toc-item"><li><span><a href="#Interpretation-of-SVD" data-toc-modified-id="Interpretation-of-SVD-1.1"><span class="toc-item-num">1.1 </span>Interpretation of SVD</a></span><ul class="toc-item"><li><span><a href="#Geometric-Interpretation" data-toc-modified-id="Geometric-Interpretation-1.1.1"><span class="toc-item-num">1.1.1 </span>Geometric Interpretation</a></span></li><li><span><a href="#Factor-Interpretation" data-toc-modified-id="Factor-Interpretation-1.1.2"><span class="toc-item-num">1.1.2 </span>Factor Interpretation</a></span></li></ul></li><li><span><a href="#Worked-Example-Full-SVD" data-toc-modified-id="Worked-Example-Full-SVD-1.2"><span class="toc-item-num">1.2 </span>Worked Example Full SVD</a></span></li><li><span><a href="#Relationships-with-PCA" data-toc-modified-id="Relationships-with-PCA-1.3"><span class="toc-item-num">1.3 </span>Relationships with PCA</a></span></li><li><span><a href="#Applications" data-toc-modified-id="Applications-1.4"><span class="toc-item-num">1.4 </span>Applications</a></span><ul class="toc-item"><li><span><a href="#Dimensionality-Reduction" data-toc-modified-id="Dimensionality-Reduction-1.4.1"><span class="toc-item-num">1.4.1 </span>Dimensionality Reduction</a></span></li><li><span><a href="#Information-Retrieval" data-toc-modified-id="Information-Retrieval-1.4.2"><span class="toc-item-num">1.4.2 </span>Information Retrieval</a></span></li><li><span><a href="#Collaborative-Filtering" data-toc-modified-id="Collaborative-Filtering-1.4.3"><span class="toc-item-num">1.4.3 </span>Collaborative Filtering</a></span></li></ul></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2 </span>Reference</a></span></li></ul></div>
```python
from jupyterthemes import get_themes
from jupyterthemes.stylefx import set_nb_theme
themes = get_themes()
set_nb_theme(themes[1])
```
}
div.output_subarea.output_text.output_stream.output_stdout,
div.output_subarea.output_text {
font-family: "Droid Sans Mono", monospace;
font-size: 10.0pt;
line-height: 150% !important;
background-color: #ffffff;
color: #303030;
border-top-right-radius: 0px;
border-top-left-radius: 0px;
border-bottom-left-radius: 2px;
border-bottom-right-radius: 2px;
}
div.output_area pre {
font-family: "Droid Sans Mono", monospace;
font-size: 10.0pt;
line-height: 150% !important;
color: #303030;
border-top-right-radius: 0px;
border-top-left-radius: 0px;
border-bottom-left-radius: 2px;
border-bottom-right-radius: 2px;
}
div.output_area {
display: -webkit-box;
}
div.output_html {
font-family: "Droid Sans Mono", monospace;
font-size: 10.0pt;
color: #353535;
background-color: #ffffff;
background: #ffffff;
}
div.output_subarea {
overflow-x: auto;
padding: .8em;
-webkit-box-flex: 1;
-moz-box-flex: 1;
box-flex: 1;
flex: 1;
max-width: 90%;
}
div.prompt.output_prompt {
font-family: "Fira Code", monospace;
font-size: 9.5pt;
background-color: #ffffff;
color: #ffffff;
border-bottom-left-radius: 2px;
border-top-right-radius: 0px;
border-top-left-radius: 0px;
border-bottom-right-radius: 0px;
min-width: 12ex;
width: 12ex;
}
div.out_prompt_overlay.prompt {
font-family: "Fira Code", monospace;
font-size: 9.5pt;
background-color: #ffffff;
border-bottom-left-radius: 2px;
border-top-right-radius: 0px;
border-top-left-radius: 0px;
border-bottom-right-radius: 0px;
min-width: 12ex;
width: 12ex;
}
div.out_prompt_overlay.prompt:hover {
background-color: #ffffff;
box-shadow: #e8e8e8 2px 1px 2px 2.5px inset;
border-bottom-left-radius: 2px;
-webkit-border-: 2px;
-moz-border-radius: 2px;
border-top-right-radius: 0px;
border-top-left-radius: 0px;
min-width: 12ex;
width: 12ex !important;
}
div.text_cell,
div.text_cell_render pre,
div.text_cell_render {
font-family: "Lora", serif;
font-size: 13pt;
line-height: 170% !important;
color: #353535;
background: #ffffff;
background-color: #ffffff;
border-radius: 2px;
}
div.cell.text_cell.rendered.selected {
font-family: "Lora", serif;
border: medium solid #126dce;
line-height: 170% !important;
background: #ffffff;
background-color: #ffffff;
border-radius: 2px;
}
div.cell.text_cell.unrendered.selected {
font-family: "Lora", serif;
line-height: 170% !important;
background: #ffffff;
background-color: #ffffff;
border: medium solid #126dce;
border-radius: 2px;
}
div.cell.text_cell.selected {
font-family: "Lora", serif;
line-height: 170% !important;
border: medium solid #126dce;
background: #ffffff;
background-color: #ffffff;
border-radius: 2px;
}
.edit_mode div.cell.text_cell.selected {
font-family: "Lora", serif;
line-height: 170% !important;
background: #ffffff;
background-color: #ffffff;
border: medium solid #87b0db;
border-radius: 2px;
}
div.text_cell.unrendered,
div.text_cell.unrendered.selected,
div.edit_mode div.text_cell.unrendered {
font-family: "Lora", serif;
line-height: 170% !important;
background: #ffffff;
background-color: #ffffff;
border-radius: 2px;
}
div.cell.text_cell.rendered .input_prompt {
font-family: "Fira Code", monospace;
font-size: 9.5pt;
font-weight: normal;
color: #aaaaaa;
text-align: left !important;
min-width: 0ex;
width: 0ex !important;
background-color: #ffffff;
border-right: 2px solid transparent;
}
div.cell.text_cell.unrendered .input_prompt {
font-family: "Fira Code", monospace;
font-size: 9.5pt;
font-weight: normal;
color: #aaaaaa;
text-align: left !important;
min-width: 0ex;
width: 0ex !important;
border-right: 2px solid transparent;
}
div.rendered_html code {
font-family: "Droid Sans Mono", monospace;
font-size: 11pt;
padding-top: 3px;
color: #303030;
background: #efefef;
background-color: #efefef;
}
pre,
code,
kbd,
samp {
white-space: pre-wrap;
}
code {
font-family: "Droid Sans Mono", monospace;
font-size: 11pt !important;
line-height: 170% !important;
color: #353535;
background: #efefef;
background-color: #efefef;
}
kbd {
padding: 4px;
font-size: 11pt;
color: #303030;
background-color: #efefef;
border: 0;
box-shadow: none;
}
pre {
display: block;
padding: 8.5px;
margin: 0 0 9px;
font-size: 12.0pt;
line-height: 1.42857143;
color: #303030;
background-color: #efefef;
border: 1px solid #e7e7e7;
border-radius: 2px;
}
div.rendered_html {
color: #353535;
}
div.rendered_html pre,
div.text_cell_render pre {
font-family: "Droid Sans Mono", monospace;
font-size: 11pt !important;
line-height: 170% !important;
color: #353535;
background: #efefef;
background-color: #efefef;
border: 2px #e7e7e7 solid;
max-width: 86%;
border-radius: 2px;
padding: 5px;
}
div.text_cell_render h1,
div.rendered_html h1,
div.text_cell_render h2,
div.rendered_html h2,
div.text_cell_render h3,
div.rendered_html h3,
div.text_cell_render h4,
div.rendered_html h4,
div.text_cell_render h5,
div.rendered_html h5 {
font-family: "Exo_2", sans-serif;
}
.rendered_html h1:first-child,
.rendered_html h2:first-child,
.rendered_html h3:first-child,
.rendered_html h4:first-child,
.rendered_html h5:first-child,
.rendered_html h6:first-child {
margin-top: 0.2em;
}
.rendered_html h1,
.text_cell_render h1 {
color: #126dce;
font-size: 220%;
text-align: center;
font-weight: lighter;
}
.rendered_html h2,
.text_cell_render h2 {
text-align: left;
font-size: 170%;
color: #126dce;
font-style: normal;
font-weight: lighter;
}
.rendered_html h3,
.text_cell_render h3 {
font-size: 150%;
color: #126dce;
font-weight: lighter;
text-decoration: italic;
font-style: normal;
}
.rendered_html h4,
.text_cell_render h4 {
font-size: 120%;
color: #126dce;
font-weight: underline;
font-style: normal;
}
.rendered_html h5,
.text_cell_render h5 {
font-size: 100%;
color: #2f2f2f;
font-weight: lighter;
text-decoration: underline;
}
.rendered_html table,
.rendered_html tr,
.rendered_html td {
font-family: "Fira Code", monospace;
font-size: 10.0pt !important;
line-height: 150% !important;
border: 1px solid #d6d6d6;
color: #353535;
background-color: #ffffff;
background: #ffffff;
}
table.dataframe,
.rendered_html tr,
.dataframe * {
font-family: "Fira Code", monospace;
font-size: 10.0pt !important;
border: 1px solid #d6d6d6;
}
.dataframe th,
.rendered_html th {
font-family: "Exo_2", sans-serif;
font-size: 11pt !important;
font-weight: bold;
border: 1px solid #c4c4c4;
background: #eeeeee;
}
.dataframe td,
.rendered_html td {
font-family: "Fira Code", monospace;
font-size: 10.0pt !important;
color: #353535;
background: #ffffff;
border: 1px solid #d6d6d6;
text-align: left;
min-width: 4em;
}
.dataframe-summary-row tr:last-child,
.dataframe-summary-col td:last-child {
font-family: "Exo_2", sans-serif;
font-size: 11pt !important;
font-weight: bold;
color: #353535;
border: 1px solid #d6d6d6;
background: #eeeeee;
}
div.widget-area {
background-color: #ffffff;
background: #ffffff;
color: #303030;
}
div.widget-area a {
font-family: "Exo_2", sans-serif;
font-size: 12.0pt;
font-weight: normal;
font-style: normal;
color: #303030;
text-shadow: none !important;
}
div.widget-area a:hover,
div.widget-area a:focus {
font-family: "Exo_2", sans-serif;
font-size: 12.0pt;
font-weight: normal;
font-style: normal;
color: #2f2f2f;
background: rgba(180,180,180,.14);
background-color: rgba(180,180,180,.14);
border-color: transparent;
background-image: none;
text-shadow: none !important;
}
div.widget_item.btn-group > button.btn.btn-default.widget-combo-btn,
div.widget_item.btn-group > button.btn.btn-default.widget-combo-btn:hover {
background: #eeeeee;
background-color: #eeeeee;
border: 2px solid #eeeeee !important;
font-size: inherit;
z-index: 0;
}
div.jupyter-widgets.widget-hprogress.widget-hbox,
div.widget-hbox,
.widget-hbox {
display: inline-table;
}
div.jupyter-widgets.widget-hprogress.widget-hbox .widget-label,
div.widget-hbox .widget-label,
.widget-hbox .widget-label {
font-size: 11pt;
min-width: 100%;
padding-top: 5px;
padding-right: 10px;
text-align: left;
vertical-align: text-top;
}
.progress {
overflow: hidden;
height: 20px;
margin-bottom: 10px;
padding-left: 10px;
background-color: #c6c6c6;
border-radius: 4px;
-webkit-box-shadow: none;
box-shadow: none;
}
.rendered_html :link {
font-family: "Exo_2", sans-serif;
font-size: 100%;
color: #2c85f7;
text-decoration: underline;
}
.rendered_html :visited,
.rendered_html :visited:active,
.rendered_html :visited:focus {
color: #2e6eb2;
}
.rendered_html :visited:hover,
.rendered_html :link:hover {
font-family: "Exo_2", sans-serif;
font-size: 100%;
color: #eb6a18;
}
a.anchor-link:link:hover {
font-size: inherit;
color: #eb6a18;
}
a.anchor-link:link {
font-size: inherit;
text-decoration: none;
padding: 0px 20px;
visibility: none;
color: #126dce;
}
div#nbextensions-configurator-container.container {
width: 980px;
margin-right: 0;
margin-left: 0;
}
div.nbext-selector > nav > .nav > li > a {
font-family: "Exo_2", sans-serif;
font-size: 12pt;
}
div.nbext-readme > .nbext-readme-contents > .rendered_html {
font-family: "Exo_2", sans-serif;
font-size: 12pt;
line-height: 145%;
padding: 1em 1em;
color: #353535;
background-color: #ffffff;
-webkit-box-shadow: none;
-moz-box-shadow: none;
box-shadow: none;
}
.nbext-icon,
.nbext-desc,
.nbext-compat-div,
.nbext-enable-btns,
.nbext-params {
margin-bottom: 8px;
font-size: 12pt;
}
div.nbext-readme > .nbext-readme-contents {
padding: 0;
overflow-y: hidden;
}
div.nbext-readme > .nbext-readme-contents:not(:empty) {
margin-top: 0.5em;
margin-bottom: 2em;
border: none;
border-top-color: rgba(180,180,180,.30);
}
.nbext-showhide-incompat {
padding-bottom: 0.5em;
color: #4a4a4a;
font-size: 12.0pt;
}
.shortcut_key,
span.shortcut_key {
display: inline-block;
width: 16ex;
text-align: right;
font-family: monospace;
}
mark,
.mark {
background-color: #ffffff;
color: #353535;
padding: .15em;
}
a.text-warning,
a.text-warning:hover {
color: #aaaaaa;
}
a.text-warning.bg-warning {
background-color: #ffffff;
}
span.bg-success.text-success {
background-color: transparent;
color: #009e07;
}
span.bg-danger.text-danger {
background-color: #ffffff;
color: #de143d;
}
.has-success .input-group-addon {
color: #009e07;
border-color: transparent;
background: inherit;
background-color: rgba(83,180,115,.10);
}
.has-success .form-control {
border-color: #009e07;
-webkit-box-shadow: inset 0 1px 1px rgba(0,0,0,0.025);
box-shadow: inset 0 1px 1px rgba(0,0,0,0.025);
}
.has-error .input-group-addon {
color: #de143d;
border-color: transparent;
background: inherit;
background-color: rgba(192,57,67,.10);
}
.has-error .form-control {
border-color: #de143d;
-webkit-box-shadow: inset 0 1px 1px rgba(0,0,0,0.025);
box-shadow: inset 0 1px 1px rgba(0,0,0,0.025);
}
.kse-input-group-pretty > kbd {
font-family: "Droid Sans Mono", monospace;
color: #303030;
font-weight: normal;
background: transparent;
}
.kse-input-group-pretty > kbd {
font-family: "Droid Sans Mono", monospace;
color: #303030;
font-weight: normal;
background: transparent;
}
div.nbext-enable-btns .btn[disabled],
div.nbext-enable-btns .btn[disabled]:hover,
.btn-default.disabled,
.btn-default[disabled] {
background: #e8e8e8;
background-color: #e8e8e8;
color: #282828;
}
label#Keyword-Filter {
display: none;
}
.nav-pills > li.active > a,
.nav-pills > li.active > a:hover,
.nav-pills > li.active > a:focus {
color: #ffffff;
background-color: #126dce;
}
.input-group .nbext-list-btn-add,
.input-group-btn:last-child > .btn-group > .btn {
background: #eeeeee;
background-color: #eeeeee;
border-color: #eeeeee;
}
.input-group .nbext-list-btn-add:hover,
.input-group-btn:last-child > .btn-group > .btn:hover {
background: #e9e9e9;
background-color: #e9e9e9;
border-color: #e9e9e9;
}
#notebook-container > div.cell.code_cell.rendered.selected > div.widget-area > div.widget-subarea > div > div.widget_item.btn-group > button.btn.btn-default.dropdown-toggle.widget-combo-carrot-btn {
background: #eeeeee;
background-color: #eeeeee;
border-color: #eeeeee;
}
#notebook-container > div.cell.code_cell.rendered.selected > div.widget-area > div.widget-subarea > div > div.widget_item.btn-group > button.btn.btn-default.dropdown-toggle.widget-combo-carrot-btn:hover {
background: #e9e9e9;
background-color: #e9e9e9;
border-color: #e9e9e9;
}
input.raw_input {
font-family: "Droid Sans Mono", monospace;
font-size: 11pt !important;
color: #303030;
background-color: #efefef;
border-color: #ececec;
background: #ececec;
width: auto;
vertical-align: baseline;
padding: 0em 0.25em;
margin: 0em 0.25em;
-webkit-box-shadow: none;
box-shadow: none;
}
audio,
video {
display: inline;
vertical-align: middle;
align-content: center;
margin-left: 20%;
}
.cmd-palette .modal-body {
padding: 0px;
margin: 0px;
}
.cmd-palette form {
background: #eeeeee;
background-color: #eeeeee;
}
.typeahead-field input:last-child,
.typeahead-hint {
background: #eeeeee;
background-color: #eeeeee;
z-index: 1;
}
.typeahead-field input {
font-family: "Exo_2", sans-serif;
color: #303030;
border: none;
font-size: 28pt;
display: inline-block;
line-height: inherit;
padding: 3px 10px;
height: 70px;
}
.typeahead-select {
background-color: #eeeeee;
}
body > div.modal.cmd-palette.typeahead-field {
display: table;
border-collapse: separate;
background-color: #f7f7f7;
}
.typeahead-container button {
font-family: "Exo_2", sans-serif;
font-size: 28pt;
background-color: #d0d0d0;
border: none;
display: inline-block;
line-height: inherit;
padding: 3px 10px;
height: 70px;
}
.typeahead-search-icon {
min-width: 40px;
min-height: 55px;
display: block;
vertical-align: middle;
text-align: center;
}
.typeahead-container button:focus,
.typeahead-container button:hover {
color: #2f2f2f;
background-color: #ff7823;
border-color: #ff7823;
}
.typeahead-list > li.typeahead-group.active > a,
.typeahead-list > li.typeahead-group > a,
.typeahead-list > li.typeahead-group > a:focus,
.typeahead-list > li.typeahead-group > a:hover {
display: none;
}
.typeahead-dropdown > li > a,
.typeahead-list > li > a {
color: #303030;
text-decoration: none;
}
.typeahead-dropdown,
.typeahead-list {
font-family: "Exo_2", sans-serif;
font-size: 13pt;
color: #303030;
background-color: #ffffff;
border: none;
background-clip: padding-box;
margin-top: 0px;
padding: 3px 2px 3px 0px;
line-height: 1.7;
}
.typeahead-dropdown > li.active > a,
.typeahead-dropdown > li > a:focus,
.typeahead-dropdown > li > a:hover,
.typeahead-list > li.active > a,
.typeahead-list > li > a:focus,
.typeahead-list > li > a:hover {
color: #2f2f2f;
background-color: #f7f7f7;
border-color: #f7f7f7;
}
.command-shortcut:before {
content: "(command)";
padding-right: 3px;
color: #aaaaaa;
}
.edit-shortcut:before {
content: "(edit)";
padding-right: 3px;
color: #aaaaaa;
}
ul.typeahead-list i {
margin-left: 1px;
width: 18px;
margin-right: 10px;
}
ul.typeahead-list {
max-height: 50vh;
overflow: auto;
}
.typeahead-list > li {
position: relative;
border: none;
}
div.input.typeahead-hint,
input.typeahead-hint,
body > div.modal.cmd-palette.in > div > div > div > form > div > div.typeahead-field > span.typeahead-query > input.typeahead-hint {
color: #aaaaaa !important;
background-color: transparent;
padding: 3px 10px;
}
.typeahead-dropdown > li > a,
.typeahead-list > li > a {
display: block;
padding: 5px;
clear: both;
font-weight: 400;
line-height: 1.7;
border: 1px solid #ffffff;
border-bottom-color: rgba(180,180,180,.30);
}
body > div.modal.cmd-palette.in > div {
min-width: 750px;
margin: 150px auto;
}
.typeahead-container strong {
font-weight: bolder;
color: #ff7823;
}
#find-and-replace #replace-preview .match,
#find-and-replace #replace-preview .insert {
color: #ffffff;
background-color: #ff7823;
border-color: #ff7823;
border-style: solid;
border-width: 1px;
border-radius: 0px;
}
#find-and-replace #replace-preview .replace .match {
background-color: #de143d;
border-color: #de143d;
border-radius: 0px;
}
#find-and-replace #replace-preview .replace .insert {
background-color: #009e07;
border-color: #009e07;
border-radius: 0px;
}
div.CodeMirror,
div.CodeMirror pre {
font-family: "Droid Sans Mono", monospace;
font-size: 11pt;
line-height: 170%;
color: #303030;
}
div.CodeMirror-lines {
padding-bottom: .6em;
padding-left: .5em;
padding-right: 1.5em;
padding-top: 4px;
}
span.ansiblack {
color: #dc4384;
}
span.ansiblue {
color: #009e07;
}
span.ansigray {
color: #ff7823;
}
span.ansigreen {
color: #333333;
}
span.ansipurple {
color: #653bc5;
}
span.ansicyan {
color: #055be0;
}
span.ansiyellow {
color: #ff7823;
}
span.ansired {
color: #de143d;
}
div.output-stderr {
background-color: #ebb5b7;
}
div.output-stderr pre {
color: #000000;
}
div.js-error {
color: #de143d;
}
.ipython_tooltip {
font-family: "Droid Sans Mono", monospace;
font-size: 11pt;
line-height: 170%;
border: 2px solid #dadada;
background: #eeeeee;
background-color: #eeeeee;
border-radius: 2px;
overflow-x: visible;
overflow-y: visible;
box-shadow: none;
position: absolute;
z-index: 1000;
}
.ipython_tooltip .tooltiptext pre {
font-family: "Droid Sans Mono", monospace;
font-size: 11pt;
line-height: 170%;
background: #eeeeee;
background-color: #eeeeee;
color: #303030;
overflow-x: visible;
overflow-y: visible;
max-width: 900px;
}
div#tooltip.ipython_tooltip {
overflow-x: wrap;
overflow-y: visible;
max-width: 800px;
}
div.tooltiptext.bigtooltip {
overflow-x: visible;
overflow-y: scroll;
height: 400px;
max-width: 800px;
}
.cm-s-ipython.CodeMirror {
font-family: "Droid Sans Mono", monospace;
font-size: 11pt;
background: #efefef;
color: #303030;
border-radius: 2px;
font-style: normal;
font-weight: normal;
}
.cm-s-ipython div.CodeMirror-selected {
background: #e0e1e3;
}
.cm-s-ipython .CodeMirror-gutters {
background: #e0e1e3;
border: none;
border-radius: 0px;
}
.cm-s-ipython .CodeMirror-linenumber {
color: #aaaaaa;
}
.cm-s-ipython .CodeMirror-cursor {
border-left: 2px solid #ff711a;
}
.cm-s-ipython span.cm-comment {
color: #8d8d8d;
font-style: italic;
}
.cm-s-ipython span.cm-atom {
color: #055be0;
}
.cm-s-ipython span.cm-number {
color: #ff8132;
}
.cm-s-ipython span.cm-property {
color: #e22978;
}
.cm-s-ipython span.cm-attribute {
color: #de143d;
}
.cm-s-ipython span.cm-keyword {
color: #713bc5;
font-weight: normal;
}
.cm-s-ipython span.cm-string {
color: #009e07;
}
.cm-s-ipython span.cm-meta {
color: #aa22ff;
}
.cm-s-ipython span.cm-operator {
color: #055be0;
}
.cm-s-ipython span.cm-builtin {
color: #e22978;
}
.cm-s-ipython span.cm-variable {
color: #303030;
}
.cm-s-ipython span.cm-variable-2 {
color: #de143d;
}
.cm-s-ipython span.cm-variable-3 {
color: #aa22ff;
}
.cm-s-ipython span.cm-def {
color: #e22978;
font-weight: normal;
}
.cm-s-ipython span.cm-error {
background: rgba(191,97,106,.40);
}
.cm-s-ipython span.cm-tag {
color: #e22978;
}
.cm-s-ipython span.cm-link {
color: #ff7823;
}
.cm-s-ipython span.cm-storage {
color: #055be0;
}
.cm-s-ipython span.cm-entity {
color: #e22978;
}
.cm-s-ipython span.cm-quote {
color: #009e07;
}
div.CodeMirror span.CodeMirror-matchingbracket {
color: #1c1c1c;
background-color: rgba(30,112,199,.30);
}
div.CodeMirror span.CodeMirror-nonmatchingbracket {
color: #1c1c1c;
background: rgba(191,97,106,.40) !important;
}
div.cell.text_cell .cm-s-default .cm-header {
color: #126dce;
}
div.cell.text_cell .cm-s-default span.cm-variable-2 {
color: #353535;
}
div.cell.text_cell .cm-s-default span.cm-variable-3 {
color: #aa22ff;
}
.cm-s-default span.cm-comment {
color: #8d8d8d;
}
.cm-s-default .cm-tag {
color: #009fb7;
}
.cm-s-default .cm-builtin {
color: #e22978;
}
.cm-s-default .cm-string {
color: #009e07;
}
.cm-s-default .cm-keyword {
color: #713bc5;
}
.cm-s-default .cm-number {
color: #ff8132;
}
.cm-s-default .cm-error {
color: #055be0;
}
.CodeMirror-cursor {
border-left: 2px solid #ff711a;
border-right: none;
width: 0;
}
.cm-s-default div.CodeMirror-selected {
background: #e0e1e3;
}
.cm-s-default .cm-selected {
background: #e0e1e3;
}
div#maintoolbar {
display: none !important;
}
#header-container {
display: none !important;
}
/**********************************
MathJax Settings and Style Script
**********************************/
.MathJax_Display,
.MathJax nobr>span.math>span {
border: 0 !important;
font-size: 110% !important;
text-align: center !important;
margin: 0em !important;
}
/* Prevents MathJax from jittering */
/* cell position when cell is selected */
.MathJax:focus, body :focus .MathJax {
display: inline-block !important;
}
</style>
```python
# 1. magic for inline plot
# 2. magic to print version
# 3. magic so that the notebook will reload external python modules
# 4. magic to enable retina (high resolution) plots
# https://gist.github.com/minrk/3301035
%matplotlib inline
%load_ext watermark
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.linalg import svd as scipy_svd
from sklearn.pipeline import Pipeline
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA, TruncatedSVD
from sklearn.preprocessing import normalize, StandardScaler
from sklearn.feature_extraction.text import TfidfVectorizer
%watermark -a 'Ethen' -d -t -v -p numpy,scipy,pandas,sklearn,matplotlib
```
Ethen 2017-11-18 21:13:40
CPython 3.5.2
IPython 6.2.1
numpy 1.13.3
scipy 0.19.1
pandas 0.20.3
sklearn 0.19.1
matplotlib 2.1.0
# Singular Value Decomposition (SVD)
When conducting a data analysis project, it's very common to encounter a dataset that contains some information useful to the task at hand, but also low-quality information that does not contribute much to the end goal. When facing this issue, there are numerous ways to isolate the signal from the noise. e.g. We can employ regularization methods such as [Lasso regression](http://nbviewer.jupyter.org/github/ethen8181/machine-learning/blob/master/regularization/regularization.ipynb) to perform constrained optimization, automatically dropping uninformative features from the model, or use [tree-based methods](https://github.com/ethen8181/machine-learning#trees--20161210) to identify the features that were most often used for constructing the tree.
Or use a variance-maximization method such as [Principal Component Analysis (PCA)](http://nbviewer.jupyter.org/github/ethen8181/machine-learning/blob/master/dim_reduct/PCA.ipynb), which aims to transform the data into a new set of orthogonal components, ensuring that the first component aligns with the direction of maximum variance in the dataset, the second component with the next largest variance, and so on. In other words, it makes a dataset more compact while preserving information. The conventional method for calculating PCA requires us to compute the full covariance matrix, which makes it memory-intensive and can be numerically unstable. It turns out that SVD is a method that can be used to compute PCA and obtain the principal components to transform our raw dataset.
**Singular Value Decomposition (SVD)** is a particular decomposition method that decomposes an arbitrary matrix $A$ with $m$ rows and $n$ columns (assuming this matrix also has a rank of $r$, i.e. $r$ columns of the matrix $A$ are linear independent) into a set of related matrices:
$$
\begin{align}
A = U \Sigma V^{T}
\end{align}
$$
where:
- $\Sigma$ (Sigma) is an $r \times r$ non-negative diagonal matrix with its diagonal entries in decreasing order. All elements not on the main diagonal are 0 and the elements of $\Sigma$ are called the singular values. Another common notation for this matrix is $S$; in the following post, we'll use these two symbols interchangeably.
- $U$ is an $m \times r$ orthonormal matrix and $V$ is an $n \times r$ orthonormal matrix.
- Orthogonal matrix: a square matrix whose columns are mutually perpendicular, i.e. the dot product between any two distinct columns is zero. Given an orthogonal matrix $Q$, $Q^T Q = Q Q^T = I$ and $Q^T = Q^{-1}$.
- Orthonormal matrix: an orthogonal matrix whose columns are unit vectors.
A classic pictorial representation of SVD.
## Interpretation of SVD
### Geometric Interpretation
We'll use a 2 dimensional dataset for the geometric interpretation for ease of visualization. Transformation of a matrix by $U \Sigma V^T$ can be visualized as a rotation and reflection, scaling, rotation and reflection. We'll see this as a step-by-step visualization.
Given a matrix $x = \begin{bmatrix} -10 & -10 & 20 & 20\\ -10 & 20 & 20 & -10 \end{bmatrix}$ and a transformation matrix $A = \begin{bmatrix} 1 & 0.3 \\ 0.45 & 1.2 \end{bmatrix}$.
- $V^T x$ We can see that multiplying by $V^T$ rotates and reflects the input matrix $x$. Notice the swap of colors red-blue and green-yellow indicating a reflection along the x-axis.
- $S V^T x$ Since $S$ only contains values on the diagonal, it scales the matrix. $V$ rotates the matrix to a position where the singular values now represent the scaling factor along the V-basis. In the picture below $V^Tx$ is dashed and $SV^Tx$ is solid.
- $U S V^T$ Finally, $U$ rotates and reflects the matrix back to the standard basis.
Putting all three steps into one picture below, the dashed square shows $x$ as the corners and the transformed matrix $Ax$ as the solid shape.
The most useful property of the SVD is that the axes in the new space, which represent new latent attributes, are orthogonal. Hence original attributes are expressed in terms of new attributes that are independent of each other.
```python
# we can confirm this with code
A = np.array([[1, 0.3], [0.45, 1.2]])
U, S, V = scipy_svd(A)
print('singular values:', S)
# the toy 2d matrix
x = np.array([[-10, -10, 20, 20], [-10, 20, 20, -10]]).T
x
```
singular values: [ 1.49065822 0.71444949]
array([[-10, -10],
[-10, 20],
[ 20, 20],
[ 20, -10]])
```python
# change default font size
plt.rcParams['font.size'] = 12
# the plot is not as pretty as the diagram above,
# but hopefully it gets the point across
fig, ax = plt.subplots(1, 4, figsize = (20, 4))
ax[0].scatter(x[:, 0], x[:, 1])
ax[0].set_title('Original matrix')
temp = x @ V.T
ax[1].scatter(temp[:, 0], temp[:, 1])
ax[1].set_title('$V^Tx$')
temp = temp @ np.diag(S)
ax[2].scatter(temp[:, 0], temp[:, 1])
ax[2].set_title('$SV^Tx$')
temp = temp @ U
ax[3].scatter(temp[:, 0], temp[:, 1])
ax[3].set_title('$USV^Tx$')
plt.tight_layout()
plt.show()
```
### Factor Interpretation
Here we are given a rank 3 matrix $A$, representing ratings of movies by users.
| Name | Matrix | Alien | Star Wars | Casablanca | Titanic |
| ----- | ------ | ----- | --------- | ---------- | ------- |
| Joe | 1 | 1 | 1 | 0 | 0 |
| Jim | 3 | 3 | 3 | 0 | 0 |
| John | 4 | 4 | 4 | 0 | 0 |
| Jack | 5 | 5 | 5 | 0 | 0 |
| Jill | 0 | 2 | 0 | 4 | 4 |
| Jenny | 0 | 0 | 0 | 5 | 5 |
| Jane | 0 | 1 | 0 | 2 | 2 |
Applying SVD to this matrix will give us the following decomposition:
The key to understanding what SVD offers is viewing the $r$ columns of $U$, $\Sigma$, and $V$ as representing concepts that are hidden in the original matrix. In our contrived example, we can imagine there are two concepts underlying the movies: science fiction and romance.
To be explicit:
- The matrix $U$ connects people to concepts. For example, look at Joe (the first row in the original matrix): the value 0.14 in the first row and first column of $U$ is smaller than some of the other entries in that column. The rationale is that while Joe watches only science fiction, he doesn't rate those movies highly.
- The matrix $V$ relates movies to concepts. The approximately 0.58 in each of the first three columns of the first row of $V^T$ indicates that the first three movies – The Matrix, Alien and Star Wars – are each of the science-fiction genre.
- Matrix $\Sigma$ gives the strength of each of the concepts. In our example, the strength of the science-fiction concept is 12.4, while the strength of the romance concept is 9.5. Intuitively, the science-fiction concept is stronger because the data provides more information about the movies of that genre and the people who like them.
- The third concept is a bit harder to interpret, but it doesn't matter that much, because its weight, given by the third nonzero diagonal entry in $\Sigma$ is relatively low compared to the first two concepts.
- Note that the matrix decomposition doesn't know the meaning of any column in the dataset, it discovers the underlying concept and it is up to us to interpret these latent factors.
## Worked Example Full SVD
Let's step through a worked example of "Full" SVD. In practice the full version is computationally expensive, since we must calculate the full matrices $U_{mr}$, $S_{rr}$, and $V_{nr}^{T}$. The "truncated" versions of SVD are usually preferred, where we can preselect the top $k < r$ dimensions of interest and calculate $U_{mk}$, $S_{kk}$ and $V_{nk}^{T}$. But the truncated version is a topic for another day.
```python
# matrix form of the table above
rank = 3
A = np.array([
[1, 1, 1, 0, 0],
[3, 3, 3, 0, 0],
[4, 4, 4, 0, 0],
[5, 5, 5, 0, 0],
[0, 2, 0, 4, 4],
[0, 0, 0, 5, 5],
[0, 1, 0, 2, 2]])
# we'll use a library to perform the svd
# so we can confirm our result with it
U, S, V = scipy_svd(A, full_matrices = False)
# we'll just print out S, a.k.a Sigma to show the numbers
# are identical to the results shown earlier
print(S)
```
[ 1.24810147e+01 9.50861406e+00 1.34555971e+00 3.04642685e-16
0.00000000e+00]
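For reference, we can also print the rounded singular vectors and inspect the entries discussed in the factor-interpretation section, such as the 0.14 for Joe in $U$ and the roughly 0.58 entries in the first row of $V^T$ (recall that scipy returns $V^T$ in the variable `V`):
```python
# print the (rounded) singular vectors computed in the previous cell;
# note scipy's `V` is actually V^T
print('U (people-to-concept):')
print(np.round(U, 2))
print('V^T (concept-to-movie):')
print(np.round(V, 2))
```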
```python
# the following cell verifies some properties of SVD
# Verify calculation of A=USV^T
print(np.allclose(A, U @ np.diag(S) @ V))
# orthonormal, columns are unit vectors (length = 1)
print(np.allclose(np.round(np.sum(U * U, axis = 0)), np.ones(S.size)))
# orthogonal, dot product of itself is equivalent to
# the identity matrix U^T U = I
print(np.allclose(U.T @ U, np.eye(S.size)))
```
True
True
True
The SVD of a matrix $A$ is strongly connected to the eigenvalues of the symmetric matrices $A^{T}A$ and $AA^{T}$. We'll start with the expression for SVD: $A = U \Sigma V^T$.
$$
\begin{align}
A^T
&= (U \Sigma V^T)^T \\
&= (V^T)^T \Sigma^T U^T \\
&= V \Sigma^T U^T \\
&= V \Sigma U^T
\end{align}
$$
In the second step we use the matrix property that $(BA)^T = A^T B^T$ and in the final step $\Sigma$ is a diagonal matrix, thus $\Sigma^T = \Sigma$. Next:
$$
\begin{align}
A^T A
&= (V \Sigma U^T)(U \Sigma V^T) \\
&= V \Sigma I \Sigma V^T \\
&= V \Sigma \Sigma V^T
\end{align}
$$
In the second step, we use the fact that $U$ is an orthonormal matrix, so $U^T U$ is an identity matrix of the appropriate size.
We now multiply both sides of this equation by $V$ to get:
$$
\begin{align}
A^T A V
&= V \Sigma^2 V^T V \\
&= V \Sigma^2 I \\
&= V \Sigma^2
\end{align}
$$
Here we use the fact that $V$ is also an orthonormal matrix, so $V^T V$ is an identity matrix of the appropriate size. Looking at the equation $A^T A V = V \Sigma^2$, we now see that the columns of $V$ are the eigenvectors of the matrix $A^T A$ and $\Sigma^2$ is the diagonal matrix whose entries are the corresponding eigenvalues. i.e. $V = eig(A^T A)$
```python
AtA = A.T @ A
eig_values, V1 = np.linalg.eig(AtA)
# eig does not sort its output, so order the eigenvectors by decreasing eigenvalue.
# Note that the possible non-uniqueness of the decomposition means
# that an axis can be flipped without changing anything fundamental,
# thus we compare whether the absolute values are relatively close
# instead of the raw values (recall that scipy's `V` is actually V^T)
order = np.argsort(eig_values)[::-1]
print(np.allclose(np.abs(V1[:, order[:rank]]), np.abs(V.T[:, :rank])))
```
True
Only $U$ remains to be computed, but it can be found in the same way we found $V$; this time, we start with $A A^T$:
$$
\begin{align}
A A^T U
&= (U \Sigma V^T)(V \Sigma U^T) U \\
&= U \Sigma I \Sigma U^T U \\
&= U \Sigma \Sigma U^T U \\
&= U \Sigma \Sigma I \\
&= U \Sigma^2
\end{align}
$$
In other words: $U = eig(A A^T)$
```python
AAt = A @ A.T
_, U1 = np.linalg.eig(AAt)
np.allclose(np.abs(U1[:, :rank]), np.abs(U[:, :rank]))
```
True
```python
# notice that since this is a rank 3 matrix
# only the first 3 values of S contain non-zero values
np.round(S, 0)
```
array([ 12., 10., 1., 0., 0.])
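Because the third concept is weak, keeping only the top two singular values reconstructs the ratings almost exactly; here is a minimal sketch of that truncated reconstruction:
```python
# rank-2 (truncated) reconstruction using only the two strongest concepts;
# the error is governed by the dropped third singular value (~1.35)
k = 2
A_approx = U[:, :k] @ np.diag(S[:k]) @ V[:k, :]
print(np.round(A_approx, 1))
print('reconstruction error:', np.linalg.norm(A - A_approx))
```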
To sum this section up:
- $U$ is an $m \times r$ orthonormal matrix of "left-singular" (eigen)vectors of $AA^{T}$.
- $V$ is an $n \times r$ orthonormal matrix of "right-singular" (eigen)vectors of $A^{T}A$.
- $\Sigma$ is an $r \times r$ non-negative diagonal matrix with its diagonal entries in decreasing order. All elements not on the main diagonal are 0 and the elements of $\Sigma$ are called the singular values, which are the square roots of the nonzero eigenvalues.
For those interested, the following link contains a detailed walkthrough of the computation by hand. [Notes: Singular Value Decomposition Tutorial](https://datajobs.com/data-science-repo/SVD-Tutorial-[Kirk-Baker].pdf)
## Relationships with PCA
This usage of SVD is very similar to Principal Components Analysis (PCA) and in fact several numerical software libraries actually use SVD under the hood for their PCA routines, for example `sklearn.decomposition.PCA` within scikit-learn. This is due to the fact that it is more numerically stable and it's also possible to perform a truncated SVD, which only needs us to calculate $U \Sigma V^T$ for the first $k<n$ features; this makes it far quicker to compute than the full covariance matrix as computed within PCA.
In the following section, we'll take a look at the relationship between these two methods, PCA and SVD. Recall from the documentation on [PCA](http://nbviewer.jupyter.org/github/ethen8181/machine-learning/blob/master/dim_reduct/PCA.ipynb), given the input matrix $\mathbf X$ the math behind the algorithm is to solve the eigendecomposition for the correlation matrix (assuming we standardized all features) $\mathbf C = \mathbf X^T \mathbf X / (n - 1)$. It turns out, we can represent $\mathbf C$ by a product of its eigenvectors $\mathbf W$ and diagonalized eigenvalues $\mathbf L$.
$$
\begin{align}
\mathbf C &= \mathbf W \mathbf L \mathbf W^T
\end{align}
$$
```python
# use some toy dataset
iris = load_iris()
X = iris['data']
# construct the pipeline
standardize = StandardScaler()
pca = PCA()
pipeline = Pipeline([
('standardize', standardize),
('pca', pca)
])
X_pca = pipeline.fit_transform(X)
```
```python
standardize = pipeline.named_steps['standardize']
X_std = standardize.transform(X)
# confirm the WLW^T
X_cov = np.cov(X_std.T)
eigen_values, eigen_vecs = np.linalg.eig(X_cov)
reconstructed_X = eigen_vecs @ np.diag(eigen_values) @ np.linalg.inv(eigen_vecs)
print(np.allclose(X_cov, reconstructed_X))
```
True
After obtaining the eigenvectors, i.e. the principal directions, we can project our raw data onto the principal axes via the operation $\mathbf{XW}$; the projections are called the principal component scores.
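As a quick sanity check, projecting the standardized data onto these eigenvectors (sorted by eigenvalue) reproduces the principal component scores from the pipeline, up to sign flips:
```python
# np.linalg.eig does not sort its output, so order the eigenvectors
# by decreasing eigenvalue before comparing against the PCA scores
order = np.argsort(eigen_values)[::-1]
scores_eig = X_std @ eigen_vecs[:, order]
print(np.allclose(np.abs(scores_eig), np.abs(X_pca)))
```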
As for singular value decomposition, $\mathbf X = \mathbf U \mathbf \Sigma \mathbf V^T$. We can write out the correlation matrix using this form:
$$
\begin{align}
\mathbf C
&= \mathbf V \mathbf \Sigma \mathbf U^T \mathbf U \mathbf \Sigma \mathbf V^T / (n - 1) \\
&= \mathbf V \frac{\mathbf \Sigma^2}{n - 1}\mathbf V^T
\end{align}
$$
Meaning the right singular vectors $\mathbf V$ are the principal directions and the singular values are related to the eigenvalues of the correlation matrix via $\mathbf L = \mathbf \Sigma^2 / (n - 1)$. The principal component scores can then be computed by: $\mathbf X \mathbf V = \mathbf U \mathbf \Sigma \mathbf V^T \mathbf V = \mathbf U \mathbf \Sigma$.
```python
# here we'll print out the eigenvectors
# learned from PCA and the V learned from svd
pca.components_
```
array([[ 0.52237162, -0.26335492, 0.58125401, 0.56561105],
[ 0.37231836, 0.92555649, 0.02109478, 0.06541577],
[-0.72101681, 0.24203288, 0.14089226, 0.6338014 ],
[-0.26199559, 0.12413481, 0.80115427, -0.52354627]])
```python
# we can do X @ V to obtain the principal component scores
# (note that scipy_svd returns V^T, whose rows match pca.components_)
U, S, V = scipy_svd(X_std)
V
```
array([[ 0.52237162, -0.26335492, 0.58125401, 0.56561105],
[-0.37231836, -0.92555649, -0.02109478, -0.06541577],
[ 0.72101681, -0.24203288, -0.14089226, -0.6338014 ],
[ 0.26199559, -0.12413481, -0.80115427, 0.52354627]])
Notice that some of the signs are flipped; this is normal due to the previously stated non-uniqueness of the decomposition. We'll now wrap up this section with a diagram of PCA versus SVD:
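As one more sanity check, the scores from the scikit-learn pipeline coincide with $\mathbf U \mathbf \Sigma = \mathbf X \mathbf V$ apart from these sign flips:
```python
# scipy's `V` is V^T, so X_std @ V.T computes X V = U Sigma;
# compare absolute values to ignore the sign ambiguity
scores_svd = X_std @ V.T
print(np.allclose(np.abs(scores_svd), np.abs(X_pca)))
```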
## Applications
This is by no means an exhaustive list of SVD's applications.
### Dimensionality Reduction
Due to its relationship with PCA, we can imagine that a very frequent use of SVD is feature reduction. By selecting only the top $k$ singular values, we have in effect compressed the original information and represented it using fewer features. Note that because SVD is a numerical algorithm, it is important to standardize the features to ensure the magnitudes of the entries are of a similar range.
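As a minimal sketch of this workflow, we can standardize the iris data from the previous section and keep only the top two components via `TruncatedSVD`:
```python
# standardize, then keep the top-2 components; on centered data this
# is essentially PCA computed through a truncated SVD
svd_pipeline = Pipeline([
    ('standardize', StandardScaler()),
    ('svd', TruncatedSVD(n_components=2))
])
X_reduced = svd_pipeline.fit_transform(X)
print(X_reduced.shape)
print('variance ratio explained:', svd_pipeline.named_steps['svd'].explained_variance_ratio_.sum())
```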
### Information Retrieval
SVD has also been used extensively in information retrieval, in this particular application, it is also known as Latent Semantic Analysis (LSA) or Latent Semantic Indexing (LSI). As we'll soon see, the idea is very similar to [topic modeling](http://nbviewer.jupyter.org/github/ethen8181/machine-learning/blob/master/clustering/topic_model/LDA.ipynb). The fundamental problem in information retrieval is: given some search terms, retrieve all of the documents that contain those search terms or, perhaps more usefully, return documents whose content is semantically related to the search terms. For example, if one of the search terms was "automobile" it might be appropriate to return also documents that contain the search term "car".
One approach to this problem is: given an information repository, we might convert the raw text to a document-term matrix with one row per document and one column per word, then convert the search term into a vector in the same space and retrieve the document vectors that are close to the search vector. There are several problems with vector-based retrieval.
- First, the space is very high dimensional. For example, a typical collection of documents can easily mention more than 100,000 words even if stemming is used (i.e., "skip", "skipping", "skipped" are all treated as the same word). This creates problems for distance measurement due to the curse of dimensionality.
- Second, it treats each word as independent, whereas in languages like English, the same word can mean two different things ("bear" a burden versus "bear" in the woods), and two different words can mean the same thing ("car" and "automobile").
By applying SVD, we can reduce the dimensionality to speed up the search, and words with similar meanings will get mapped to nearby locations in the truncated space. We'll take a look at this application in the following quick example:
```python
example = [
'Machine learning is super fun',
'Python is super, super cool',
'Statistics is cool, too',
'Data science is fun',
'Python is great for machine learning',
'I like football',
'Football is great to watch']
# a two-staged model pipeline,
# first convert raw words to a tfidf document-term matrix
# and apply svd decomposition after that
tfidf = TfidfVectorizer(stop_words = 'english')
svd = TruncatedSVD(n_components = 2)
pipeline = Pipeline([
('tfidf', tfidf),
('svd', svd)
])
X_lsa = pipeline.fit_transform(example)
X_lsa
```
array([[ 0.82714832, -0.20216821],
[ 0.64317518, -0.27989764],
[ 0.19952711, -0.19724375],
[ 0.24907097, -0.13828783],
[ 0.7392593 , 0.14892526],
[ 0.1162772 , 0.73645697],
[ 0.28427388, 0.79260792]])
```python
# mapping of words to latent factors/concepts,
# i.e. each concept is a linear combination of words
tfidf = pipeline.named_steps['tfidf']
vocab = tfidf.get_feature_names()
pd.DataFrame(svd.components_, index = ['concept1', 'concept2'], columns = vocab)
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>cool</th>
<th>data</th>
<th>football</th>
<th>fun</th>
<th>great</th>
<th>learning</th>
<th>like</th>
<th>machine</th>
<th>python</th>
<th>science</th>
<th>statistics</th>
<th>super</th>
<th>watch</th>
</tr>
</thead>
<tbody>
<tr>
<th>concept1</th>
<td>0.211903</td>
<td>0.082524</td>
<td>0.123490</td>
<td>0.293206</td>
<td>0.283966</td>
<td>0.425531</td>
<td>0.048611</td>
<td>0.425531</td>
<td>0.343490</td>
<td>0.082524</td>
<td>0.083414</td>
<td>0.510029</td>
<td>0.100157</td>
</tr>
<tr>
<th>concept2</th>
<td>-0.175362</td>
<td>-0.061554</td>
<td>0.654756</td>
<td>-0.124878</td>
<td>0.365768</td>
<td>-0.019431</td>
<td>0.413619</td>
<td>-0.019431</td>
<td>-0.029054</td>
<td>-0.061554</td>
<td>-0.110779</td>
<td>-0.240595</td>
<td>0.375162</td>
</tr>
</tbody>
</table>
</div>
```python
svd = pipeline.named_steps['svd']
print('total variance explained:', np.sum(svd.explained_variance_))
# mapping of document to latent factors/concepts,
# i.e. each document is a linear combination of the concepts
pd.DataFrame(X_lsa, index = example, columns = ['concept1', 'concept2'])
```
total variance explained: 0.252606886963
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>concept1</th>
<th>concept2</th>
</tr>
</thead>
<tbody>
<tr>
<th>Machine learning is super fun</th>
<td>0.827148</td>
<td>-0.202168</td>
</tr>
<tr>
<th>Python is super, super cool</th>
<td>0.643175</td>
<td>-0.279898</td>
</tr>
<tr>
<th>Statistics is cool, too</th>
<td>0.199527</td>
<td>-0.197244</td>
</tr>
<tr>
<th>Data science is fun</th>
<td>0.249071</td>
<td>-0.138288</td>
</tr>
<tr>
<th>Python is great for machine learning</th>
<td>0.739259</td>
<td>0.148925</td>
</tr>
<tr>
<th>I like football</th>
<td>0.116277</td>
<td>0.736457</td>
</tr>
<tr>
<th>Football is great to watch</th>
<td>0.284274</td>
<td>0.792608</td>
</tr>
</tbody>
</table>
</div>
After applying LSA, we can use the compressed features to see which documents are more similar to a particular document. The following code chunk shows the pairwise cosine similarity of all the documents.
```python
X_normed = normalize(X_lsa, axis = 1)
similarity = X_normed @ X_normed.T
pd.DataFrame(similarity, index = example, columns = example)
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Machine learning is super fun</th>
<th>Python is super, super cool</th>
<th>Statistics is cool, too</th>
<th>Data science is fun</th>
<th>Python is great for machine learning</th>
<th>I like football</th>
<th>Football is great to watch</th>
</tr>
</thead>
<tbody>
<tr>
<th>Machine learning is super fun</th>
<td>1.000000</td>
<td>0.985458</td>
<td>0.857746</td>
<td>0.964535</td>
<td>0.905386</td>
<td>-0.083026</td>
<td>0.104459</td>
</tr>
<tr>
<th>Python is super, super cool</th>
<td>0.985458</td>
<td>1.000000</td>
<td>0.932623</td>
<td>0.995359</td>
<td>0.820075</td>
<td>-0.251150</td>
<td>-0.066049</td>
</tr>
<tr>
<th>Statistics is cool, too</th>
<td>0.857746</td>
<td>0.932623</td>
<td>1.000000</td>
<td>0.963019</td>
<td>0.558322</td>
<td>-0.583514</td>
<td>-0.421662</td>
</tr>
<tr>
<th>Data science is fun</th>
<td>0.964535</td>
<td>0.995359</td>
<td>0.963019</td>
<td>1.000000</td>
<td>0.761204</td>
<td>-0.343126</td>
<td>-0.161758</td>
</tr>
<tr>
<th>Python is great for machine learning</th>
<td>0.905386</td>
<td>0.820075</td>
<td>0.558322</td>
<td>0.761204</td>
<td>1.000000</td>
<td>0.347952</td>
<td>0.516841</td>
</tr>
<tr>
<th>I like football</th>
<td>-0.083026</td>
<td>-0.251150</td>
<td>-0.583514</td>
<td>-0.343126</td>
<td>0.347952</td>
<td>1.000000</td>
<td>0.982423</td>
</tr>
<tr>
<th>Football is great to watch</th>
<td>0.104459</td>
<td>-0.066049</td>
<td>-0.421662</td>
<td>-0.161758</td>
<td>0.516841</td>
<td>0.982423</td>
<td>1.000000</td>
</tr>
</tbody>
</table>
</div>
### Collaborative Filtering
This post is getting quite long, so we'll just note here that, similar to the movie rating matrix a couple of sections back, SVD can be applied to implement recommendation systems, namely collaborative filtering. I personally haven't checked it yet, but the following post seems to contain a walkthrough of SVD applied to collaborative filtering for people who are interested in diving deeper. [Blog: Matrix Factorization for Movie Recommendations in Python](https://beckernick.github.io/matrix-factorization-recommender/)
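As a rough toy sketch only (it assumes we simply mean-impute the missing ratings before factorizing, which real recommenders improve upon), predictions for missing entries can be read off a low-rank reconstruction:
```python
# toy user-item matrix with missing ratings (np.nan); mean-impute per item,
# factorize, and use a rank-2 reconstruction as the predicted scores
ratings = np.array([
    [5.0, 4.0, np.nan, 1.0],
    [4.0, np.nan, 1.0, 1.0],
    [1.0, 1.0, 5.0, 4.0]])
filled = np.where(np.isnan(ratings), np.nanmean(ratings, axis=0), ratings)
U_r, S_r, V_r = scipy_svd(filled, full_matrices=False)
k = 2
predicted = U_r[:, :k] @ np.diag(S_r[:k]) @ V_r[:k]
print(np.round(predicted, 1))  # entries at the originally-missing positions are the predictions
```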
# Reference
- [Notes: Latent Semantic Analysis](http://www.datascienceassn.org/sites/default/files/users/user1/lsa_presentation_final.pdf)
- [Notes: Singular Value Decomposition Tutorial](https://datajobs.com/data-science-repo/SVD-Tutorial-[Kirk-Baker].pdf)
- [Blog: Feature Reduction using SVD](http://blog.applied.ai/feature-reduction-using-svd/)
- [Blog: Singular Value Decomposition Demystified](http://makeyourowntextminingtoolkit.blogspot.co.uk/2017/02/singular-value-decomposition-demystified.html)
- [Blog: Singular Value Decomposition (SVD) Visualisation](https://alyssaq.github.io/2015/singular-value-decomposition-visualisation/)
- [Blog: Reducing Dimensionality from Dimensionality Reduction Techniques](https://towardsdatascience.com/reducing-dimensionality-from-dimensionality-reduction-techniques-f658aec24dfe)
- [Online book: Mining Massive Dataset: Chapter 11 Dimensionality Reduction](http://infolab.stanford.edu/~ullman/mmds/ch11.pdf)
- [Online book: Understanding Complex Datasets - Data Mining with Matrix Decomposition Chapter 3: Singular Value Decomposition (SVD)](http://lnfm1.sai.msu.ru/~rastor/Books/Skillicorn-Understanding_complex_datasets_data_mining_with_matrix_decompositions.pdf)
- [StackExchange: Relationship between SVD and PCA. How to use SVD to perform PCA?](https://stats.stackexchange.com/questions/134282/relationship-between-svd-and-pca-how-to-use-svd-to-perform-pca)
| 7986d2aed38d485b26ae6d0b2c7fc19a1f8e3fa5 | 158,757 | ipynb | Jupyter Notebook | dim_reduct/svd.ipynb | certara-ShengnanHuang/machine-learning | d21dfbeabf2876ffe49fcef444ca4516c4d36df0 | [
"MIT"
]
| 2,104 | 2016-04-15T13:35:55.000Z | 2022-03-28T10:39:51.000Z | dim_reduct/svd.ipynb | certara-ShengnanHuang/machine-learning | d21dfbeabf2876ffe49fcef444ca4516c4d36df0 | [
"MIT"
]
| 10 | 2017-04-07T14:25:23.000Z | 2021-05-18T03:16:15.000Z | dim_reduct/svd.ipynb | certara-ShengnanHuang/machine-learning | d21dfbeabf2876ffe49fcef444ca4516c4d36df0 | [
"MIT"
]
| 539 | 2015-12-10T04:23:44.000Z | 2022-03-31T07:15:28.000Z | 49.970727 | 44,236 | 0.628533 | true | 21,016 | Qwen/Qwen-72B | 1. YES
2. YES | 0.715424 | 0.817574 | 0.584912 | __label__eng_Latn | 0.625341 | 0.197278 |
```python
%matplotlib inline
```
```python
# configure matplotlib
import matplotlib as mpl
mpl.rcParams['text.usetex'] = True
import collections
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from chmp.ds import reload
reload('chmp.ds')
from chmp.ds import (
colorize,
mpl_set,
pd_has_ordered_assign,
pgm,
plot_gaussian_contour,
)
```
```python
assert pd_has_ordered_assign(), "notebook requires .assign(...) to respect argument order"
```
# d-separation
Based on Daphne Koller and Nir Friedman "Probabilistic Graphical Models" (2009).
```python
def _plot(edge1_lr, edge2_lr, observed, blocked):
ec = 'r' if blocked else 'k'
res = (
pgm()
.node('x', '', 1, 1)
.node('z', '', 3, 1)
.node('y', '', 2, 1, observed=observed, edgecolor=ec)
)
res = res.edge('x', 'y') if edge1_lr else res.edge('y', 'x')
res = res.edge('y', 'z') if edge2_lr else res.edge('z', 'y')
return res
_, ((ax11, ax12, ax13), (ax21, ax22, ax23)) = plt.subplots(2, 3, figsize=(16, 2))
_plot(True, True, False, False).render(ax=ax11)
_plot(True, True, True, True).render(ax=ax21)
_plot(False, True, False, False).render(ax=ax12)
_plot(False, True, True, True).render(ax=ax22)
_plot(True, False, False, True).render(ax=ax13)
_plot(True, False, True, False).render(ax=ax23)
pass
```
Visualize the independence relations by approximating the mean absolute correlation (based on binning for $z$):
$$
MAC = \frac{\sum_z |\mathrm{corr}(x, y|z)|}{\sum_z}
$$
```python
def compute_macs(df, x, y, conditions):
result = collections.OrderedDict()
for condition in conditions:
if condition:
label = '| {}'.format(', '.join(condition))
else:
label = ''
key = r'$\mathrm{{MAC}}({}, {}{})$'.format(x, y, label)
mac = df.pipe(mean_absolute_correlation, x, y, condition=condition)
result[key] = mac
return pd.Series(result)
def mean_absolute_correlation(df, x, y, condition=()):
by = []
for var in condition:
by += [pd.qcut(df[var], 11)]
if by:
res = df.groupby(by)[[x, y]].corr().unstack()[x, y]
return abs(res).mean()
else:
return abs(df[[x, y]].corr().iloc[0, 1])
```
```python
base_graph = pgm().node('x', 1, 2).node('y', 2, 2).node('z', 1.5, 1)
```
```python
_, (ax1, ax2, ax3, ax4) = plt.subplots(1, 4, figsize=(16, 3.25))
base_graph.edges('z', 'xy').render(title='fork', ax=ax1)
df = pd.DataFrame().assign(
z=lambda df: np.random.normal(size=10_000),
x=lambda df: np.random.normal(df["z"]),
y=lambda df: np.random.normal(df["z"]),
)
df.plot.scatter("x", "y", marker=".", ax=ax2, alpha=0.3)
df.pipe(plot_gaussian_contour, "x", "y", ax=ax2)
for color, (_, group) in colorize(df.groupby(pd.qcut(df["z"], 11))):
group.sample(n=100).plot.scatter("x", "y", marker=".", ax=ax3, color=color, alpha=0.3)
group.pipe(plot_gaussian_contour, 'x', 'y', edgecolor=color, ax=ax3)
compute_macs(df, 'x', 'y', [[], ['z']]).plot.bar(ax=ax4, rot=0)
pass
```
```python
_, (ax1, ax2, ax3, ax4) = plt.subplots(1, 4, figsize=(16, 3.25))
base_graph.edges('xy', 'z').render(title='collider', ax=ax1)
df = pd.DataFrame().assign(
x=lambda df: np.random.normal(size=10_000),
y=lambda df: np.random.normal(size=10_000),
z=lambda df: np.random.normal(df["x"] + df["y"]),
)
df.plot.scatter("x", "y", marker=".", ax=ax2, alpha=0.3)
df.pipe(plot_gaussian_contour, "x", "y", ax=ax2)
for color, (_, group) in colorize(df.groupby(pd.qcut(df["z"], 11))):
group.sample(n=100).plot.scatter("x", "y", marker=".", ax=ax3, color=color, alpha=0.3)
group.pipe(plot_gaussian_contour, 'x', 'y', edgecolor=color, ax=ax3)
compute_macs(df, 'x', 'y', [[], ['z']]).plot.bar(ax=ax4, rot=0)
pass
```
```python
_, (ax1, ax2, ax3, ax4) = plt.subplots(1, 4, figsize=(16, 3.25))
(
pgm()
.node('x', 1, 2)
.node('y', 2, 2)
.node('z', 1.5, 1)
.node('w', 1.5, 0)
.edge('x', 'z')
.edge('y', 'z')
.edge('z', 'w')
.render(title='collider with one child', ax=ax1)
)
df = pd.DataFrame().assign(
x=lambda df: np.random.normal(size=10_000),
y=lambda df: np.random.normal(size=10_000),
z=lambda df: np.random.normal(df["x"] + df["y"]),
w=lambda df: np.random.normal(df["z"])
)
sns.heatmap(df.corr(), annot=True, fmt='.2f', ax=ax2)
mpl_set(ax=ax2, invert='y')
for color, (_, group) in colorize(df.pipe(lambda df: df.groupby(pd.qcut(df['z'], 11)))):
group.sample(n=150).plot.scatter('x', 'y', alpha=0.3, marker='.', color=color, ax=ax3)
group.pipe(plot_gaussian_contour, 'x', 'y', edgecolor=color, ax=ax3)
compute_macs(df, 'x', 'y', [[], ['z'], ['w'], ['z', 'w']]).plot.bar(ax=ax4, rot=30)
pass
```
# Causal calculus
Based on Judea Pearl "Causal diagrams for empirical research" (1995).
Define:
- $(\dots)_{\overline{X}}$: the expression $\dots$ evaluated in the graph with all edges that point into $X$ removed
- $(\dots)_{\underline{X}}$: the expression $\dots$ evaluated in the graph with all edges that point out of $X$ removed
- $\mathrm{pa}(X)$: the parents of node set $X$
- $\mathrm{an}(X)$: the ancestors of node set $X$
**Rule 1.** If removing the edges into $X$ makes $Y$ and $Z$ independent, then the conditional probabilities in the causal graph reflect this fact:
$$
\begin{align}
P(Y|\mathrm{do}(X), Z, W) &= P(Y|\mathrm{do}(X), W)
& &\text{if $(Y \perp Z|X, W)_\overline{X}$}
\end{align}
$$
**Rule 2.** If $Z$ only acts "forward", i.e., there is no backdoor path, then conditioning and intervening have the same effect:
$$
\begin{align}
P(Y|\mathrm{do}(X), \mathrm{do}(Z), W) &= P(Y|\mathrm{do}(X), Z, W)
& & \text{if $(Y \perp Z|X, W)_{\overline{X}, \underline{Z}}$}
\end{align}
$$
**Rule 3.**
$$
\begin{align}
P(Y|\mathrm{do}(X), \mathrm{do}(Z), W) &= P(Y|\mathrm{do}(X), W)
& & \text{if $(Y \perp Z|X, W)_{\overline{X}, \overline{Z - \mathrm{an}(W)}}$}
\end{align}
$$
Notable special cases:
$$
\begin{align}
P(Y|\mathrm{do}(Z)) &= P(Y)
& & \text{if $(Y \perp Z)_\overline{Z}$, (III)} \\
P(Y|\mathrm{do}(\mathrm{pa}(Y))) &= P(Y|\mathrm{pa}(Y))
& & \text{since $(Y \perp \mathrm{pa}(Y))_\underline{\mathrm{pa}(Y)}, (II)$}
\end{align}
$$
The goal is always to remove all instances of $\mathrm{do}(\dots)$ from the expressions using the above rules.
# Example I
```python
graph = (
pgm()
.node('X', 1.25, 1)
.node('Z', 2.0, 1.75)
.node('Y', 2.75, 1)
.edges('Z', 'XY')
.edges('X', 'Y')
)
_, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(12, 3.25))
graph.render(ax=ax1, title='Full graph $\mathcal{G}$')
graph.remove(outgoing='X').render(ax=ax2, title=r'Modified graph: $\mathcal{G}_{\underline{X}}$')
graph.remove(incoming='X').render(ax=ax3, title=r'Modified graph: $\mathcal{G}_{\overline{X}}$')
```
$$
\begin{align}
p(y|z, \mathrm{do}(x)) &= p(y|z, x)
&& \text{(II), since $(Y \perp X|Z)_\underline{X}$}
\\
p(z|\mathrm{do}(x)) &= p(z)
&& \text{(III), since $(Z \perp X)_\overline{X}$}
\\
p(y|\mathrm{do}(x)) &= \sum_z p(y|z, \mathrm{do}(x)) p(z|\mathrm{do}(x))
&&
\\
&= \sum_z p(y|z, x) p(z)
&&
\end{align}
$$
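To make this concrete, a small simulation with made-up structural equations (binary $Z \rightarrow X$, $Z \rightarrow Y$, $X \rightarrow Y$) checks the adjustment formula against the known interventional distribution and shows that naive conditioning is biased by the confounder:
```python
# synthetic binary data from the graph Z -> X, Z -> Y, X -> Y
rng = np.random.RandomState(42)
n = 200_000
z = rng.binomial(1, 0.5, size=n)
x = rng.binomial(1, 0.2 + 0.6 * z)            # treatment is confounded by z
y = rng.binomial(1, 0.1 + 0.3 * x + 0.4 * z)  # outcome depends on x and z
df = pd.DataFrame({'x': x, 'y': y, 'z': z})

# naive conditioning: p(y=1 | x=1), biased upwards by the confounder
naive = df.loc[df['x'] == 1, 'y'].mean()

# back-door adjustment: sum_z p(y=1 | x=1, z) p(z)
adjusted = sum(
    df.loc[(df['x'] == 1) & (df['z'] == v), 'y'].mean() * (df['z'] == v).mean()
    for v in (0, 1))

# ground truth: p(y=1 | do(x=1)) = E_z[0.1 + 0.3 + 0.4 * z] = 0.6
print('naive:', round(naive, 3), 'adjusted:', round(adjusted, 3), 'truth: 0.6')
```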
## Example II
```python
graph = (
pgm()
.node('E', 1, 2.5)
.node('X', 1.25, 1)
.node('Z', 2.0, 1.75)
.node('A', 3, 2.5)
.node('Y', 2.75, 1)
.edges('E', 'XZ')
.edges('Z', 'XY')
.edges('A', 'YZ')
.edges('X', 'Y')
)
_, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(16, 4))
graph.render(ax=ax1, title='Full graph $\mathcal{G}$')
graph.remove(incoming='X').render(ax=ax2, title=r'Modified graph: $\mathcal{G}_{\overline{X}}$')
graph.remove(outgoing='X').render(ax=ax3, title=r'Modified graph: $\mathcal{G}_{\underline{X}}$')
```
Application
$$
\begin{align}
P(Y|\mathrm{do}(X)) &= \sum_{E, Z} P(Y|E, Z, \mathrm{do}(X)) P(E, Z|\mathrm{do}(X))
&&
\\
\hline
\\
P(E, Z|\mathrm{do}(X)) &= P(E, Z)
&& \text{(III), since $(\{E, Z\} \perp X)_\overline{X}$}
\\
P(Y|E, Z, \mathrm{do}(X)) &= P(Y|E, Z, X)
&& \text{(II), since $(Y \perp X|E, Z)_\underline{X}$}
\\
\hline
\\
&= \sum_{E, Z} P(Y|E, Z, X) P(E, Z)
&&
\\
&= \sum_{E, Z} \frac{P(Y, E, Z, X)}{P(X|E, Z)}
\end{align}
$$
# Counterfactuals
Answers the question: given that I used $X = x$, followed the resulting path, and observed $Y = y$, what would $Y$ have been had I used $X = x^\prime$?
Such questions can be answered in three steps (Pearl, Glymour, Jewell, "Causal Inference in Statistics: A Primer"); a minimal sketch follows the list:
- **Abduction:** determine the distribution of the latents given the evidence: $P(U|E = e)$
- **Action:** modify the model, by replacing the equations for $X$ with $X = x^\prime$
- **Prediction:** use the modified model and $P(U|E=e)$ to predict the consequence of the counterfactual
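The same three steps can be written out for a toy structural causal model. The equations used below ($x = u_x$, $y = 2x + u_y$) are illustrative assumptions, not a model taken from the examples above; the point is only to show where abduction, action, and prediction enter.
```python
# Minimal sketch of the three counterfactual steps on a toy linear SCM.
# Assumed structural equations: x = u_x and y = slope * x + u_y.
def counterfactual_y(x_obs, y_obs, x_prime, slope=2.0):
    # Abduction: recover the latent u_y consistent with the evidence (x_obs, y_obs)
    u_y = y_obs - slope * x_obs
    # Action: replace the equation for X with X = x_prime
    # Prediction: evaluate Y in the modified model, keeping the same u_y
    return slope * x_prime + u_y

# Had X been 1 instead of the observed 0, Y would have been 2.5 rather than 0.5
print(counterfactual_y(x_obs=0.0, y_obs=0.5, x_prime=1.0))
```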
```python
```
| 6deda6e4d4fc0a9a4ec2a88a504e19e9ac8c84af | 354,203 | ipynb | Jupyter Notebook | 20180107-Causality/Notes.ipynb | chmp/misc-exp | 2edc2ed598eb59f4ccb426e7a5c1a23343a6974b | [
"MIT"
]
| 6 | 2017-10-31T20:54:37.000Z | 2020-10-23T19:03:00.000Z | 20180107-Causality/Notes.ipynb | chmp/misc-exp | 2edc2ed598eb59f4ccb426e7a5c1a23343a6974b | [
"MIT"
]
| 7 | 2020-03-24T16:14:34.000Z | 2021-03-18T20:51:37.000Z | 20180107-Causality/Notes.ipynb | chmp/misc-exp | 2edc2ed598eb59f4ccb426e7a5c1a23343a6974b | [
"MIT"
]
| 1 | 2019-07-29T07:55:49.000Z | 2019-07-29T07:55:49.000Z | 651.108456 | 91,380 | 0.946576 | true | 3,112 | Qwen/Qwen-72B | 1. YES
2. YES | 0.685949 | 0.79053 | 0.542264 | __label__eng_Latn | 0.385038 | 0.09819 |
## Example
Let's begin with an example. Suppose that you conduct an opinion survey amongst pilots. In the survey, you ask them basic demographics (gender, race, etc) and whether they agree/disagree with a statement on a scale.
You then have a set of categorical data with which you can compare responses to questions between demographics (or responses). Here we can compare pilot gender with the sense of discrimination within the field.
```R
library("gmodels")
gender <- structure(c(2L, 2L, 1L, 2L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 1L,
1L, 2L, 2L, 1L, 2L, 1L, 2L), .Label = c("Female", "Male"), class = "factor")
discrim <- structure(c(3L, 4L, 3L, 3L, 3L, 2L, 4L, 3L, 2L, 3L, 3L, 2L, 3L,
3L, 3L, 3L, 2L, 3L, 4L, 3L), .Label = c("A Lot", "No", "Not at all",
"Yes"), class = "factor")
CrossTable(gender, discrim, prop.c=FALSE, prop.r=FALSE, chisq=FALSE, prop.t=FALSE, prop.chisq=FALSE)
```
Cell Contents
|-------------------------|
| N |
|-------------------------|
Total Observations in Table: 20
| discrim
gender | No | Not at all | Yes | Row Total |
-------------|------------|------------|------------|------------|
Female | 2 | 4 | 2 | 8 |
-------------|------------|------------|------------|------------|
Male | 2 | 9 | 1 | 12 |
-------------|------------|------------|------------|------------|
Column Total | 4 | 13 | 3 | 20 |
-------------|------------|------------|------------|------------|
In this representation we see response counts for each combination of the categoricals. When we add the counts for all possible combinations, the sum equals the total number of responses (n=20). From the table we can see that there were two (2) counts of females who responded 'No' to feeling discrimination.
What we also see is row and column totals for each category. Following the rows, we can see that the total number of women is eight (8) and the total number of men is twelve (12). From the columns, there were a total of four (4) 'No,' thirteen (13) 'Not at all,' three (3) 'Yes,' and zero (0) 'A lot.'
It is here in the proportions of counts in rows versus columns that we determine if there is a relationship between the categoricals.
If we assume that the responses to the question of discrimination are independent of gender, then the number of females responding 'Yes' should equal the fraction of females multiplied by the number responding 'Yes.'
$$\begin{align}
n_{f,y} =& 3 \frac{8}{20}\\
=& 1.2
\end{align}$$
However, from the table we can see that there are actually more than 1.2 females who responded 'Yes.' The question then becomes: is this count more likely to have come from the same distribution (i.e., the sample is independent of gender) or from different distributions?
## Chi squared
The chi squared test of independence is calculated as follows:
$$\begin{align}
\chi^2_{cell} =& \frac{\left( observed - expected \right)^2}{expected} \\
\chi^2 =& \sum_{all} \chi^2_{cell}
\end{align}$$
This value is then compared against the chi squared distribution to determine how likely it is that the sample is independent of gender. The chi squared distribution depends on the number of degrees of freedom (dof) of the measurement. The number used in selecting the distribution for dof is:
$$\begin{align}
dof =& \left( \text{number of rows} - 1\right) \left( \text{number of columns} - 1 \right)
\end{align}$$
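As a quick hand check before running the full test, take the Female/'No' cell of the table above:
$$\begin{align}
\text{expected} =& \frac{8 \times 4}{20} = 1.6 \\
\chi^2_{cell} =& \frac{\left( 2 - 1.6 \right)^2}{1.6} = 0.1
\end{align}$$
These values match the 'Expected N' and 'Chi-square contribution' entries reported for that cell in the output below.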
For our sample, we may recalculate the table to include the expected values and the chi squared value of each cell as well as the likelihood that the sample is independent of m/f:
```R
CrossTable(gender, discrim, prop.c=FALSE, prop.r=FALSE, chisq=TRUE, prop.t=FALSE, prop.chisq=TRUE, expected=TRUE)
```
Warning message in chisq.test(t, correct = FALSE, ...):
“Chi-squared approximation may be incorrect”
Cell Contents
|-------------------------|
| N |
| Expected N |
| Chi-square contribution |
|-------------------------|
Total Observations in Table: 20
| discrim
gender | No | Not at all | Yes | Row Total |
-------------|------------|------------|------------|------------|
Female | 2 | 4 | 2 | 8 |
| 1.600 | 5.200 | 1.200 | |
| 0.100 | 0.277 | 0.533 | |
-------------|------------|------------|------------|------------|
Male | 2 | 9 | 1 | 12 |
| 2.400 | 7.800 | 1.800 | |
| 0.067 | 0.185 | 0.356 | |
-------------|------------|------------|------------|------------|
Column Total | 4 | 13 | 3 | 20 |
-------------|------------|------------|------------|------------|
Statistics for All Table Factors
Pearson's Chi-squared test
------------------------------------------------------------
Chi^2 = 1.517094 d.f. = 2 p = 0.4683464
For each cell we now have the count, expected value, and chi squared contribution. The test statistic gives a p-value of 0.468 for the hypothesis that the responses are independent of gender. In this case we cannot reject the null.
Note the warning provided by the test, "Chi-squared approximation may be incorrect." Given the number of degrees of freedom, this is a very small sample. A statistical power test would suggest that for a moderate effect size of 0.3, a sample of n=108 would be needed.
```R
library("pwr")
pwr.chisq.test(w=0.3,N=NULL,df=2,sig.level=0.05,power=0.8)
```
Chi squared power calculation
w = 0.3
N = 107.0521
df = 2
sig.level = 0.05
power = 0.8
NOTE: N is the number of observations
```R
```
| ed77731a54fba15875dfd98f63aa99755c645b61 | 8,669 | ipynb | Jupyter Notebook | Chi_squared.ipynb | rbnsnsd2/quantitative_stats | 620b2b0724fd3486f1d81eb0fb2241020781340b | [
"MIT"
]
| null | null | null | Chi_squared.ipynb | rbnsnsd2/quantitative_stats | 620b2b0724fd3486f1d81eb0fb2241020781340b | [
"MIT"
]
| null | null | null | Chi_squared.ipynb | rbnsnsd2/quantitative_stats | 620b2b0724fd3486f1d81eb0fb2241020781340b | [
"MIT"
]
| null | null | null | 38.02193 | 317 | 0.465106 | true | 1,608 | Qwen/Qwen-72B | 1. YES
2. YES | 0.94079 | 0.843895 | 0.793928 | __label__eng_Latn | 0.993665 | 0.682893 |
# The Efficient Frontier of Optimal Portfolio Transactions
### Introduction
[Almgren and Chriss](https://cims.nyu.edu/~almgren/papers/optliq.pdf) showed that for each value of risk aversion there is a unique optimal execution strategy. The optimal strategy is obtained by minimizing the **Utility Function** $U(x)$:
\begin{equation}
U(x) = E(x) + \lambda V(x)
\end{equation}
where $E(x)$ is the **Expected Shortfall**, $V(x)$ is the **Variance of the Shortfall**, and $\lambda$ corresponds to the trader’s risk aversion. The expected shortfall and variance of the optimal trading strategy are given by:
In this notebook, we will learn how to visualize and interpret these equations.
# The Expected Shortfall
As we saw in the previous notebook, even if we use the same trading list, we are not guaranteed to always get the same implementation shortfall due to the random fluctuations in the stock price. This is why we had to reframe the problem of finding the optimal strategy in terms of the average implementation shortfall and the variance of the implementation shortfall. We call the average implementation shortfall, the expected shortfall $E(x)$, and the variance of the implementation shortfall $V(x)$. So, whenever we talk about the expected shortfall we are really talking about the average implementation shortfall. Therefore, we can think of the expected shortfall as follows. Given a single trading list, the expected shortfall will be the value of the average implementation shortfall if we were to implement this trade list in the stock market many times.
To see this, in the code below we implement the same trade list on 50,000 trading simulations. We call each trading simulation an episode. Each episode will consist of different random fluctuations in stock price. For each episode we will compute the corresponding implementation shortfall. After all the trading simulations have been carried out, we calculate the average implementation shortfall and the variance of the implementation shortfalls. We can then compare these values with the values given by the equations for $E(x)$ and $V(x)$ from the Almgren and Chriss model.
```python
%matplotlib inline
import matplotlib.pyplot as plt
import utils
# Set the default figure size
plt.rcParams['figure.figsize'] = [17.0, 7.0]
# Set the liquidation time
l_time = 60
# Set the number of trades
n_trades = 60
# Set trader's risk aversion
t_risk = 1e-6
# Set the number of episodes to run the simulation
episodes = 10
utils.get_av_std(lq_time = l_time, nm_trades = n_trades, tr_risk = t_risk, trs = episodes)
# Get the AC Optimal strategy for the given parameters
ac_strategy = utils.get_optimal_vals(lq_time = l_time, nm_trades = n_trades, tr_risk = t_risk)
ac_strategy
```
# Extreme Trading Strategies
Because some investors may be willing to take more risk than others, when looking for the optimal strategy we have to consider a wide range of risk values, ranging from those traders that want to take zero risk to those who want to take as much risk as possible. Let's take a look at these two extreme cases. We will define the **Minimum Variance** strategy as the one followed by a trader who wants to take zero risk and the **Minimum Impact** strategy as the one followed by a trader who wants to take as much risk as possible. Let's take a look at the values of $E(x)$ and $V(x)$ for these extreme trading strategies. The `utils.get_min_param()` function uses the above equations for $E(x)$ and $V(x)$, along with the parameters from the trading environment, to calculate the expected shortfall and standard deviation (the square root of the variance) for these strategies. We'll start by looking at the Minimum Impact strategy.
```python
import utils
# Get the minimum impact and minimum variance strategies
minimum_impact, minimum_variance = utils.get_min_param()
```
### Minimum Impact Strategy
This trading strategy will be taken by a trader who has no regard for risk. In the Almgren and Chriss model this corresponds to setting the trader's risk aversion to $\lambda = 0$. In this case the trader will sell the shares at a constant rate over a long period of time. By doing so, he will minimize market impact, but will be at risk of losing a lot of money due to the large variance. Hence, this strategy will yield the lowest possible expected shortfall and the highest possible variance, for a given set of parameters. We can see that for the given parameters, this strategy yields an expected shortfall of \$197,000 but has a very large standard deviation of over 3 million dollars.
```python
minimum_impact
```
<table class="simpletable">
<caption>AC Optimal Strategy for Minimum Impact</caption>
<tr>
<th>Number of Days to Sell All the Shares:</th> <td>250</td> <th> Initial Portfolio Value:</th> <td>$50,000,000.00</td>
</tr>
<tr>
<th>Half-Life of The Trade:</th> <td>1,284,394.9</td> <th> Expected Shortfall:</th> <td>$197,000.00</td>
</tr>
<tr>
<th>Utility:</th> <td>$197,000.00</td> <th> Standard Deviation of Shortfall:</th> <td>$3,453,707.55</td>
</tr>
</table>
### Minimum Variance Strategy
This trading strategy will be taken by a trader who wants to take zero risk, regardless of transaction costs. In the Almgren and Chriss model this will correspond to having a variance of $V(x) = 0$. In this case, the trader would prefer to sell all of his shares immediately, causing a known price impact, rather than risk trading in small increments at successively adverse prices. This strategy will yield the smallest possible variance, $V(x) = 0$, and the highest possible expected shortfall, for a given set of parameters. We can see that for the given parameters, this strategy yields an expected shortfall of over 2.5 million dollars but has a standard deviation equal to zero.
```python
minimum_variance
```
<table class="simpletable">
<caption>AC Optimal Strategy for Minimum Variance</caption>
<tr>
<th>Number of Days to Sell All the Shares:</th> <td>1</td> <th> Initial Portfolio Value:</th> <td>$50,000,000.00</td>
</tr>
<tr>
<th>Half-Life of The Trade:</th> <td>0.2</td> <th> Expected Shortfall:</th> <td>$2,562,500.00</td>
</tr>
<tr>
<th>Utility:</th> <td>$2,562,500.00</td> <th> Standard Deviation of Shortfall:</th> <td>$0.00</td>
</tr>
</table>
# The Efficient Frontier
The goal of Almgren and Chriss was to find the optimal strategies that lie between these two extremes. In their paper, they showed how to compute the trade list that minimizes the expected shortfall for a wide range of risk values. In their model, Almgren and Chriss used the parameter $\lambda$ to measure a trader's risk-aversion. The value of $\lambda$ tells us how much a trader is willing to penalize the variance of the shortfall, $V(X)$, relative to expected shortfall, $E(X)$. They showed that for each value of $\lambda$ there is a uniquely determined optimal execution strategy. We define the **Efficient Frontier** to be the set of all these optimal trading strategies. That is, the efficient frontier is the set that contains the optimal trading strategy for each value of $\lambda$.
The efficient frontier is often visualized by plotting $(x,y)$ pairs for a wide range of $\lambda$ values, where the $x$-coordinate is given by the equation of the expected shortfall, $E(X)$, and the $y$-coordinate is given by the equation of the variance of the shortfall, $V(X)$. Therefore, for a given a set of parameters, the curve defined by the efficient frontier represents the set of optimal trading strategies that give the lowest expected shortfall for a defined level of risk.
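To make the construction concrete, here is a minimal sketch of how such a frontier curve is traced out. The two toy functions are stand-ins chosen only so that the expected shortfall rises and the variance falls as $\lambda$ increases; they are not the Almgren and Chriss expressions, which the `utils` module below evaluates for the actual trading environment.
```python
import numpy as np
import matplotlib.pyplot as plt

# Toy stand-ins for E(X) and V(X) as functions of the risk aversion lambda
def toy_expected_shortfall(lam):
    return 2.0e5 + 1.0e6 * np.sqrt(lam / 1.0e-4)   # grows with risk aversion

def toy_shortfall_variance(lam):
    return (3.0e6) ** 2 * (1.0e-7 / lam)           # shrinks with risk aversion

lams = np.logspace(-7, -4, 50)                     # range of risk aversions
E = toy_expected_shortfall(lams)
V = toy_shortfall_variance(lams)

# Each (E, V) pair corresponds to the optimal strategy for one value of lambda
plt.plot(E, V)
plt.xlabel('Expected Shortfall $E(X)$')
plt.ylabel('Variance of Shortfall $V(X)$')
plt.show()
```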
In the code below, we plot the efficient frontier for $\lambda$ values in the range $(10^{-7}, 10^{-4})$, using the default parameters in our trading environment. Each point of the frontier represents a distinct strategy for optimally liquidating the same number of stocks. A risk-averse trader, who wishes to sell quickly to reduce exposure to stock price volatility despite the trading costs incurred in doing so, will likely choose a value of $\lambda = 10^{-4}$. On the other hand, a risk-tolerant trader, who wishes to postpone selling, will likely choose a value of $\lambda = 10^{-7}$. In the code, you can choose a particular value of $\lambda$ to see the expected shortfall and level of variance corresponding to that particular value of trader's risk aversion.
```python
%matplotlib inline
import matplotlib.pyplot as plt
import utils
# Set the default figure size
plt.rcParams['figure.figsize'] = [17.0, 7.0]
# Plot the efficient frontier for the default values. The plot points out the expected shortfall and variance of the
# optimal strategy for the given the trader's risk aversion. Valid range for the trader's risk aversion (1e-7, 1e-4).
utils.plot_efficient_frontier(tr_risk = 1e-6)
```
| 08f89eb5fa88620a89b101caf33e7b1fa80714fd | 97,595 | ipynb | Jupyter Notebook | 11. Deep RL for Finance - 1/.ipynb_checkpoints/Efficient Frontier-checkpoint.ipynb | soheillll/reinforcement-learning-tutorials | 5ae57267ce3d806333cd0056ac96d591c8ef7123 | [
"MIT"
]
| 4 | 2019-05-27T12:05:16.000Z | 2020-06-08T11:06:34.000Z | 11. Deep RL for Finance - 1/.ipynb_checkpoints/Efficient Frontier-checkpoint.ipynb | soheillll/reinforcement-learning-tutorials | 5ae57267ce3d806333cd0056ac96d591c8ef7123 | [
"MIT"
]
| null | null | null | 11. Deep RL for Finance - 1/.ipynb_checkpoints/Efficient Frontier-checkpoint.ipynb | soheillll/reinforcement-learning-tutorials | 5ae57267ce3d806333cd0056ac96d591c8ef7123 | [
"MIT"
]
| 2 | 2020-06-30T15:25:29.000Z | 2020-07-23T02:47:08.000Z | 337.698962 | 44,716 | 0.919811 | true | 2,113 | Qwen/Qwen-72B | 1. YES
2. YES | 0.863392 | 0.812867 | 0.701823 | __label__eng_Latn | 0.997415 | 0.468901 |
```python
import sympy as sp
from sympy import *
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import interp1d
from scipy.integrate import quad
from scipy.optimize import fmin
import scipy.integrate as integrate
import scipy.special as special
import scipy.stats as st
import sys
font1 = {'size' : 20, 'family':'STIXGeneral'}
from platform import python_version
print(python_version())
```
3.7.7
```python
#LCDM fractions
mptkm = 3.086*10**(19)
H0 = 67.32/mptkm
Oc = 0.265
Ob = 0.0494
Om = Oc + Ob
Orad = 0.000093
ai = 0.000001
arad=0.0002264 #radiation -DM equality
acmb = 0.0009
Gnewton = 6.67*10**(-11)
def Hub(Om, Orad, a):
return H0*np.sqrt(Om/a**3 + Orad/a**4 + (1-Om-Orad))
def rhoc(a):
    return 3*Hub(Om, Orad, a)**2/(8*np.pi*Gnewton)  # critical density rho_c = 3 H^2 / (8 pi G)
def Omegac(a):
return Oc/a**3*(H0/Hub(Om,Orad,a))**2
def Omegarad(a):
return Orad/a**4*(H0/Hub(Om,Orad,a))**2
def Omegab(a):
return Ob/a**3*(H0/Hub(Om,Orad,a))**2
```
```python
fig0 = plt.figure()
plt.figure(figsize=(10,10))
#Load Omega_pbh data at early and late times
dat1 = np.loadtxt("data/8gev+9orad/peakm_5e-7.dat")
dat2 = np.loadtxt("data/8gev+9orad/peakm_2e11.dat")
dat3 = np.loadtxt("data/8gev+9orad/peakm_5e11.dat")
dat4 = np.loadtxt("data/8gev+9orad/peakm_5e13.dat")
dat5 = np.loadtxt("data/8gev+9orad/peakm_5e33.dat")
avals = np.logspace(-6, 0, num=1000)
ax = plt.subplot(2, 1, 1)
plt.xscale('log')
plt.plot(avals, Omegac(avals),linestyle='dashed', color='b', label= '$\Omega_{\\rm cdm}$',alpha=1.)
plt.plot(dat1[:,0], dat1[:,1], label='$M_{pk} = 5\\times 10^{-7}$kg', alpha=0.6)
plt.plot(dat2[:,0], dat2[:,1], label='$M_{pk} = 2\\times 10^{11}$kg', alpha=0.6)
plt.plot(dat3[:,0], dat3[:,1], label='$M_{pk} = 5\\times 10^{11}$kg',alpha=0.6)
plt.plot(dat4[:,0], dat4[:,1], label='$M_{pk} = 5\\times 10^{13}$kg',alpha=0.6)
plt.plot(dat5[:,0], dat5[:,1], label='$M_{pk} = 5\\times 10^{33}$kg',alpha=0.6)
plt.axhline(y=1., xmin=0., xmax=10,color='k',linestyle='dashed')
plt.axvline(acmb,0.,10, color='k', linestyle='dotted')
#plt.text(2e-6, 0.3 , '$T_{\\rm RH} = 10^{8}{\\rm GeV}$', **font1)
ax.tick_params(axis='both', which='major', labelsize=15)
plt.ylim(-0.,1.45)
plt.xlim(ai,1)
plt.ylabel('Density fraction of PBH ($\Omega_{\\rm PBH}$) ',**font1)
plt.xlabel('scale factor (a)', **font1)
plt.legend(loc='best',prop={'size': 16})
plt.tight_layout(pad=3.0)
ax = plt.subplot(2, 1, 2)
plt.xscale('log')
plt.plot(avals, Omegab(avals),linestyle='dashed', color='r', label= '$\Omega_b$',alpha=1.)
plt.plot(dat1[:,0], dat1[:,2], label='$\lambda = 3.4\\times 10^{96}$', alpha=0.6)
plt.plot(dat2[:,0], dat2[:,2], label='$\lambda = 1.9\\times 10^{98}$', alpha=0.6)
plt.plot(dat3[:,0], dat3[:,2], label='$\lambda = 4.8\\times 10^{98}$', alpha=0.6)
plt.plot(dat4[:,0], dat4[:,2], label='$\lambda = 4.7\\times 10^{100}$', alpha=0.6)
plt.plot(dat5[:,0], dat5[:,2], label='$\lambda = 3.5\\times 10^{120}$', alpha=0.6)
plt.axvline(acmb,0.,10, color='k', linestyle='dotted')
ax.tick_params(axis='both', which='major', labelsize=15)
plt.ylim(0,0.2)
plt.xlim(ai,1)
plt.ylabel('Density fraction of baryons ($\Omega_{\\rm b}$) ',**font1)
plt.xlabel('scale factor (a)', **font1)
plt.legend(loc='best',prop={'size': 16})
#plt.setp(plt.subplot(2,1,1).get_xticklabels(), visible=False)
plt.subplots_adjust(hspace=0.2)
plt.subplots_adjust(wspace=0.)
plt.savefig('plots/omega_all.png', format="png", bbox_inches = 'tight')
```
```python
# Plotting LCDM fractions
fig0 = plt.figure()
plt.figure(figsize=(10,5))
avals = np.logspace(-6, 0, num=1000)
ax = plt.subplot(1, 1, 1)
plt.xscale('log')
plt.plot(avals, Omegac(avals),linestyle='dashed', color='b', label= '$\Omega_c$',alpha=1.)
plt.plot(avals, Omegab(avals),linestyle='dashed', color='g', label= '$\Omega_b$',alpha=1.)
plt.plot(avals, Omegarad(avals),linestyle='dashed', color='r', label= '$\Omega_\gamma$',alpha=1.)
ax.axvspan(ai, 0.000215, alpha=0.5, color='orange')
plt.axhline(y=0.6856, xmin=0., xmax=10,color='k')
ax.tick_params(axis='both', which='major', labelsize=15)
plt.ylim(-0.1,1.5)
plt.xlim(ai,1)
plt.xlabel('scale factor (a)', **font1)
plt.ylabel('Density fraction ($\Omega$) ',**font1)
plt.legend(loc='best',prop={'size': 14})
plt.tight_layout(pad=3.0)
plt.savefig('plots/lcdm_epochs.png', format="png", bbox_inches = 'tight')
```
```python
fig0 = plt.figure()
plt.figure(figsize=(10,10))
#Load Omega_pbh data at early and late times
dat1 = np.loadtxt("data/8gev+9orad+rem/peakm_5e-7_rem.dat")
dat2 = np.loadtxt("data/8gev+9orad+rem/peakm_2e11_rem.dat")
dat4 = np.loadtxt("data/8gev+9orad+rem/peakm_5e13_rem.dat")
dat1a = np.loadtxt("data/8gev+9orad/peakm_5e-7.dat")
dat2b = np.loadtxt("data/8gev+9orad/peakm_2e11.dat")
dat4c = np.loadtxt("data/8gev+9orad/peakm_5e13.dat")
avals = np.logspace(-6, 0, num=1000)
ax = plt.subplot(2, 1, 1)
plt.xscale('log')
#plt.plot(avals, Omegac(avals),linestyle='dashed', color='b', label= '$\Omega_c$',alpha=1.)
plt.plot(dat1[:,0], dat1[:,1]/dat1a[:,1] , label='$M_{pk} = 5\\times 10^{-7}$kg', alpha=0.6)
plt.plot(dat2[:,0], dat2[:,1]/dat2b[:,1], label='$M_{pk} = 2\\times 10^{11}$kg', alpha=0.6)
#plt.plot(dat3[:,0], dat3[:,1], label='$M_{pk} = 5\\times 10^{11}$kg',alpha=0.6)
#plt.plot(dat4[:,0], dat4[:,1]/dat4c [:,1], label='$M_{pk} = 5\\times 10^{13}$kg',alpha=0.6)
#plt.plot(dat5[:,0], dat5[:,1], label='$M_{pk} = 5\\times 10^{33}$kg',alpha=0.6)
plt.axhline(y=1., xmin=0., xmax=10,color='k',linestyle='dashed')
plt.axvline(acmb,0.,10, color='k', linestyle='dotted')
plt.text(2e-6, 0.3 , '$T_{\\rm RH} = 10^{8}{\\rm GeV}$', **font1)
ax.tick_params(axis='both', which='major', labelsize=15)
plt.ylim(0.,2)
plt.xlim(ai,1)
plt.ylabel('Density fraction of PBH ($\Omega_{\\rm PBH}$) ',**font1)
plt.xlabel('scale factor (a)', **font1)
plt.legend(loc='best',prop={'size': 16})
plt.tight_layout(pad=3.0)
ax = plt.subplot(2, 1, 2)
plt.xscale('log')
#plt.plot(avals, Omegab(avals),linestyle='dashed', color='r', label= '$\Omega_b$',alpha=1.)
plt.plot(dat1[:,0], dat1[:,2]/dat1a[:,2], label='$\lambda = 3.4\\times 10^{96}$', alpha=0.6)
plt.plot(dat2[:,0], dat2[:,2]/dat2b[:,2], label='$\lambda = 1.9\\times 10^{98}$', alpha=0.6)
#plt.plot(dat3[:,0], dat3[:,2], label='$\lambda = 4.8\\times 10^{98}$', alpha=0.6)
#plt.plot(dat4[:,0], dat4[:,2]/dat4c[:,2], label='$\lambda = 4.7\\times 10^{100}$', alpha=0.6)
#plt.plot(dat5[:,0], dat5[:,2], label='$\lambda = 3.5\\times 10^{120}$', alpha=0.6)
plt.axvline(acmb,0.,10, color='k', linestyle='dotted')
ax.tick_params(axis='both', which='major', labelsize=15)
plt.ylim(0.6,1.4)
plt.xlim(ai,1)
plt.ylabel('Density fraction of baryons ($\Omega_{\\rm b}$) ',**font1)
plt.xlabel('scale factor (a)', **font1)
plt.legend(loc='best',prop={'size': 16})
#plt.setp(plt.subplot(2,1,1).get_xticklabels(), visible=False)
plt.subplots_adjust(hspace=0.2)
plt.subplots_adjust(wspace=0.)
plt.savefig('plots/remnants.png', format="png", bbox_inches = 'tight')
```
```python
```
| 8d0ff2bcf218131b23b4728ea99e3d2d22399750 | 231,854 | ipynb | Jupyter Notebook | plots.ipynb | nebblu/PBH | b896bb65d0f204603e26b3579265d4cd613c48e7 | [
"MIT"
]
| null | null | null | plots.ipynb | nebblu/PBH | b896bb65d0f204603e26b3579265d4cd613c48e7 | [
"MIT"
]
| null | null | null | plots.ipynb | nebblu/PBH | b896bb65d0f204603e26b3579265d4cd613c48e7 | [
"MIT"
]
| null | null | null | 677.935673 | 115,288 | 0.945034 | true | 2,669 | Qwen/Qwen-72B | 1. YES
2. YES | 0.942507 | 0.787931 | 0.74263 | __label__eng_Latn | 0.125516 | 0.563711 |
## Nonlinear Dimensionality Reduction
G. Richards (2016, 2018), based on materials from Ivezic, Connolly, Miller, Leighly, and VanderPlas.
Today we will talk about the concepts of
* manifold learning
* nonlinear dimensionality reduction
Specifically using the following algorithms
* local linear embedding (LLE)
* isometric mapping (IsoMap)
* t-distributed Stochastic Neighbor Embedding (t-SNE)
Let's start by echoing the brief note of caution given in Adam Miller's notebook: "astronomers will often try to derive physical insight from PCA eigenspectra or eigentimeseries, but this is not advisable as there is no physical reason for the data to be linearly and orthogonally separable". Moreover, physical components are (generally) positive definite. So, PCA is great for dimensional reduction, but for doing physics there are generally better choices.
While NMF "solves" the issue of negative components, it is still a linear process. For data with non-linear correlations, an entire field, known as [Manifold Learning](http://scikit-learn.org/stable/modules/manifold.html) and [nonlinear dimensionality reduction]( https://en.wikipedia.org/wiki/Nonlinear_dimensionality_reduction), has been developed, with several algorithms available via the [`sklearn.manifold`](http://scikit-learn.org/stable/modules/classes.html#module-sklearn.manifold) module.
For example, if your data set looks like this:
Then PCA is going to give you something like this.
Clearly not very helpful!
What you were really hoping for is something more like the results below. For more examples see
[Vanderplas & Connolly 2009](http://iopscience.iop.org/article/10.1088/0004-6256/138/5/1365/meta;jsessionid=48A569862A424ECCAEECE2A900D9837B.c3.iopscience.cld.iop.org)
## Local Linear Embedding
[Local Linear Embedding](http://scikit-learn.org/stable/modules/generated/sklearn.manifold.LocallyLinearEmbedding.html#sklearn.manifold.LocallyLinearEmbedding) attempts to embed high-$D$ data in a lower-$D$ space. Crucially it also seeks to preserve the geometry of the local "neighborhoods" around each point. In the case of the "S" curve, it seeks to unroll the data. The steps are
Step 1: define local geometry
- local neighborhoods determined from $k$ nearest neighbors.
- for each point calculate weights that reconstruct a point from its $k$ nearest
neighbors via
$$
\begin{equation}
\mathcal{E}_1(W) = \left|X - WX\right|^2,
\end{equation}
$$
where $X$ is an $N\times K$ matrix and $W$ is an $N\times N$ matrix that minimizes the reconstruction error.
Essentially this is finding the hyperplane that describes the local surface at each point within the data set. So, imagine that you have a bunch of square tiles and you are trying to tile the surface with them.
Step 2: embed within a lower dimensional space
- set all $W_{ij}=0$ except when point $j$ is one of the $k$ nearest neighbors of point $i$.
- $W$ becomes very sparse for $k \ll N$ (only $Nk$ entries in $W$ are non-zero).
- minimize
>$\begin{equation}
\mathcal{E}_2(Y) = \left|Y - W Y\right|^2,
\end{equation}
$
with $W$ fixed to find an $N$ by $d$ matrix ($d$ is the new dimensionality).
Step 1 requires a nearest-neighbor search.
Step 2 requires an
eigenvalue decomposition of the matrix $C_W \equiv (I-W)^T(I-W)$.
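As a rough illustration of Step 2 (and assuming the sparse weight matrix $W$ from Step 1 is already in hand), the embedding coordinates are the bottom eigenvectors of $C_W$, with the trivial constant eigenvector discarded. This is only a sketch of the idea; Scikit-Learn's implementation uses sparse solvers and additional regularization.

```python
import numpy as np

def lle_embed(W, d):
    """Sketch of LLE Step 2: embed using the bottom eigenvectors of (I-W)^T (I-W)."""
    N = W.shape[0]
    C = (np.eye(N) - W).T @ (np.eye(N) - W)
    eigvals, eigvecs = np.linalg.eigh(C)   # eigenvalues in ascending order
    # skip the first (constant) eigenvector and keep the next d as coordinates
    return eigvecs[:, 1:d + 1]
```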
LLE has been applied to data as diverse as galaxy spectra, stellar spectra, and photometric light curves. It was introduced by [Roweis & Saul (2000)](https://www.ncbi.nlm.nih.gov/pubmed/11125150).
Skikit-Learn's call to LLE is as follows, with a more detailed example already being given above.
```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
X = np.random.normal(size=(1000,2)) # 1000 points in 2D
R = np.random.random((2,10)) # projection matrix
X = np.dot(X,R) # now a 2D linear manifold in 10D space
k = 5 # Number of neighbors to use in fit
n = 2 # Number of dimensions to fit
lle = LocallyLinearEmbedding(k,n)
lle.fit(X)
proj = lle.transform(X)  # 1000x2 projection of the data
```
See what LLE does for the digits data, using the 7 nearest neighbors and 2 components.
```python
# Execute this cell to load the digits sample
%matplotlib inline
import numpy as np
from sklearn.datasets import load_digits
from matplotlib import pyplot as plt
digits = load_digits()
grid_data = np.reshape(digits.data[0], (8,8)) #reshape to 8x8
plt.imshow(grid_data, interpolation = "nearest", cmap = "bone_r")
print(grid_data)
X = digits.data
y = digits.target
```
```python
#LLE
from sklearn.manifold import LocallyLinearEmbedding
k = 7 # Number of neighbors to use in fit
n = 2 # Number of dimensions to fit
lle = LocallyLinearEmbedding(k,n)
lle.fit(X)
X_reduced = lle.transform(X)
plt.scatter(X_reduced[:,0], X_reduced[:,1], c=y, cmap="nipy_spectral", edgecolor="None")
plt.colorbar()
```
## Isometric Mapping
Isometric mapping (IsoMap) is based on the multi-dimensional scaling (MDS) framework. It was introduced in the same volume of *Science* as the LLE article above. See [Tenenbaum, de Silva, & Langford (2000)](https://www.ncbi.nlm.nih.gov/pubmed/?term=A+Global+Geometric+Framework+for+Nonlinear+Dimensionality+Reduction).
Geodestic curves are used to recover non-linear structure.
In Scikit-Learn [IsoMap](http://scikit-learn.org/stable/modules/generated/sklearn.manifold.Isomap.html) is implemented as follows:
```python
# Execute this cell
import numpy as np
from sklearn.manifold import Isomap
XX = np.random.normal(size=(1000,2)) # 1000 points in 2D
R = np.random.random((2,10)) # projection matrix
XX = np.dot(XX,R) # X is a 2D manifold in 10D space
k = 5 # number of neighbors
n = 2 # number of dimensions
iso = Isomap(k,n)
iso.fit(XX)
proj = iso.transform(XX) # 1000x2 projection of the data
```
Try 7 neighbors and 2 dimensions on the digits data.
```python
# IsoMap
from sklearn.manifold import Isomap
k = 7 # Number of neighbors to use in fit
n = 2 # Number of dimensions to fit
iso = Isomap(k,n)
iso.fit(X)
X_reduced = iso.transform(X)
plt.scatter(X_reduced[:,0], X_reduced[:,1], c=y, cmap="nipy_spectral", edgecolor="None")
plt.colorbar()
```
## t-SNE
While [t-distributed Stochastic Neighbor Embedding (t-SNE)](https://en.wikipedia.org/wiki/T-distributed_stochastic_neighbor_embedding) is not discussed in the book, Scikit-Learn does have a [t-SNE implementation](http://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.html), and it is well worth mentioning this manifold learning algorithm too. SNE itself was developed by [Hinton & Roweis](http://www.cs.toronto.edu/~fritz/absps/sne.pdf), with the "$t$" part being added by [van der Maaten & Hinton](http://jmlr.org/papers/volume9/vandermaaten08a/vandermaaten08a.pdf). It works like the other manifold learning algorithms.
Try it on the digits data. You'll need to import `TSNE` from `sklearn.manifold`, instantiate it with 2 components, then do a `fit_transform` on the original data.
```python
# t-SNE
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2)
X_reduced = tsne.fit_transform(X)
plt.scatter(X_reduced[:,0], X_reduced[:,1], c=y, cmap="nipy_spectral", edgecolor="None")
plt.colorbar()
```
You'll know if you have done it right if you understand Adam Miller's comment "Holy freakin' smokes. That is magic. (It's possible we just solved science)."
Personally, I think that some exclamation points may be needed in there!
What's even more illuminating is to make the plot using the actual digits to plot the points. Then you can see why certain digits are alike or split into multiple regions. Can you explain the patterns you see here?
```python
# Execute this cell
from matplotlib import offsetbox
#----------------------------------------------------------------------
# Scale and visualize the embedding vectors
def plot_embedding(X):
x_min, x_max = np.min(X, 0), np.max(X, 0)
X = (X - x_min) / (x_max - x_min)
plt.figure()
ax = plt.subplot(111)
for i in range(X.shape[0]):
#plt.text(X[i, 0], X[i, 1], str(digits.target[i]), color=plt.cm.Set1(y[i] / 10.), fontdict={'weight': 'bold', 'size': 9})
plt.text(X[i, 0], X[i, 1], str(digits.target[i]), color=plt.cm.nipy_spectral(y[i]/9.))
shown_images = np.array([[1., 1.]]) # just something big
for i in range(digits.data.shape[0]):
dist = np.sum((X[i] - shown_images) ** 2, 1)
if np.min(dist) < 4e-3:
# don't show points that are too close
continue
shown_images = np.r_[shown_images, [X[i]]]
imagebox = offsetbox.AnnotationBbox(offsetbox.OffsetImage(digits.images[i], cmap=plt.cm.gray_r), X[i])
ax.add_artist(imagebox)
plt.xticks([]), plt.yticks([])
plot_embedding(X_reduced)
plt.show()
```
With the remainder of time in class today, play with the arguments of the algorithms that we have discussed this week and/or try running them on a different data set. For example the iris data set or one of the other samples of data that are included with Scikit-Learn. Or maybe have a look through some of these public data repositories:
- [https://github.com/caesar0301/awesome-public-datasets?utm_content=buffer4245d&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer](https://github.com/caesar0301/awesome-public-datasets?utm_content=buffer4245d&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer)
- [http://www.datasciencecentral.com/m/blogpost?id=6448529%3ABlogPost%3A318739](http://www.datasciencecentral.com/m/blogpost?id=6448529%3ABlogPost%3A318739)
- [http://www.kdnuggets.com/2015/04/awesome-public-datasets-github.html](http://www.kdnuggets.com/2015/04/awesome-public-datasets-github.html)
| 2c2edccf760174bd0d9fd53a94be4e2c419b3864 | 169,730 | ipynb | Jupyter Notebook | notebooks/NonlinearDimensionReduction.ipynb | pranphy/PHYST580-F18 | 52e375765c4b24886b14ce3b08a9c5526da85ff5 | [
"MIT"
]
| null | null | null | notebooks/NonlinearDimensionReduction.ipynb | pranphy/PHYST580-F18 | 52e375765c4b24886b14ce3b08a9c5526da85ff5 | [
"MIT"
]
| null | null | null | notebooks/NonlinearDimensionReduction.ipynb | pranphy/PHYST580-F18 | 52e375765c4b24886b14ce3b08a9c5526da85ff5 | [
"MIT"
]
| null | null | null | 322.680608 | 54,432 | 0.927644 | true | 2,594 | Qwen/Qwen-72B | 1. YES
2. YES | 0.815232 | 0.815232 | 0.664604 | __label__eng_Latn | 0.959201 | 0.382429 |
```python
import numpy as np
```
```python
%%markdown
# Recursion - Examples
## Factorial
\begin{align}
n! &= n \cdot (n-1)! \\
0! &= 1
\end{align}
```
# Recursion - Examples
## Factorial
\begin{align}
n! &= n \cdot (n-1)! \\
0! &= 1
\end{align}
```python
def factorial(n):
    # recursive definition: n! = n * (n-1)!, with 0! = 1 as the base case
    if (n >= 1):
        return factorial(n-1)*n
    else:
        return 1
```
```python
n = 4
print ("Factorial of ", n, factorial(n))
```
    Factorial of  4 24
```python
%%markdown
## Determinant of a matrix
* [Laplace's formula](http://en.wikipedia.org/wiki/Determinant#Laplace's_formula_and_the_adjugate_matrix)
\begin{align}
\det(A)=\sum _{j=1}^{n}(-1)^{i+j}a_{i,j}M_{i,j} \text{ for a fixed } i
\end{align}
* With $i = 1$
\begin{align}
\det(A)=\sum _{j=1}^{n}(-1)^{j+1}a_{1,j}M_{1,j}
\end{align}
```
## Determinant of a matrix
* [Laplace's formula](http://en.wikipedia.org/wiki/Determinant#Laplace's_formula_and_the_adjugate_matrix)
\begin{align}
\det(A)=\sum _{j=1}^{n}(-1)^{i+j}a_{i,j}M_{i,j} \text{ for a fixed } i
\end{align}
* With $i = 1$
\begin{align}
\det(A)=\sum _{j=1}^{n}(-1)^{j+1}a_{1,j}M_{1,j}
\end{align}
```python
%%markdown
A = \begin{bmatrix}
-2 & 2 & -3 \\
-1 & 1 & 3 \\
2 & 0 & -1
\end{bmatrix}
along the second column ($j = 2$ and the sum runs over $i$) is given by,
\begin{align}
det(A) &= (-1)^{1+2} \cdot 2 \cdot \begin{vmatrix}-1&3 \\ 2&-1\end{vmatrix} + (-1)^{2+2} \cdot 1 \cdot \begin{vmatrix} -2&-3 \\ 2&-1 \end{vmatrix} + (-1)^{3+2} \cdot 0 \cdot \begin{vmatrix} -2&-3\\-1&3\end{vmatrix} \\
&= (-2) \cdot ((-1) \cdot (-1) - 2 \cdot 3) + 1 \cdot ((-2) \cdot (-1)-2 \cdot (-3)) \\
&= (-2) \cdot (-5)+8=18.
\end{align}
along the first column ($j = 1$ and the sum runs over $i$) is given by,
\begin{align}
det(A) &= (-1)^{1+1} \cdot -2 \cdot \begin{vmatrix} 1&3 \\ 0&-1\end{vmatrix} + (-1)^{2+1} \cdot -1 \cdot \begin{vmatrix} 2&-3 \\ 0&-1 \end{vmatrix} + (-1)^{3+1} \cdot 2 \cdot \begin{vmatrix} 2&-3\\1&3\end{vmatrix} \\
&= -2 \cdot (1 \cdot (-1)) + 1 \cdot (2 \cdot (-1)) + 2 \cdot (2 \cdot 3 - 1 \cdot (-3)) \\
&= -2 \cdot (-1) + 1 \cdot (-2 ) + 2 \cdot (6 + 3) \\
&= 2 -2 + 2 \cdot 9 = 18.
\end{align}
```
A = \begin{bmatrix}
-2 & 2 & -3 \\
-1 & 1 & 3 \\
2 & 0 & -1
\end{bmatrix}
along the second column ($j = 2$ and the sum runs over $i$) is given by,
\begin{align}
det(A) &= (-1)^{1+2} \cdot 2 \cdot \begin{vmatrix}-1&3 \\ 2&-1\end{vmatrix} + (-1)^{2+2} \cdot 1 \cdot \begin{vmatrix} -2&-3 \\ 2&-1 \end{vmatrix} + (-1)^{3+2} \cdot 0 \cdot \begin{vmatrix} -2&-3\\-1&3\end{vmatrix} \\
&= (-2) \cdot ((-1) \cdot (-1) - 2 \cdot 3) + 1 \cdot ((-2) \cdot (-1)-2 \cdot (-3)) \\
&= (-2) \cdot (-5)+8=18.
\end{align}
along the first column ($j = 1$ and the sum runs over $i$) is given by,
\begin{align}
det(A) &= (-1)^{1+1} \cdot -2 \cdot \begin{vmatrix} 1&3 \\ 0&-1\end{vmatrix} + (-1)^{2+1} \cdot -1 \cdot \begin{vmatrix} 2&-3 \\ 0&-1 \end{vmatrix} + (-1)^{3+1} \cdot 2 \cdot \begin{vmatrix} 2&-3\\1&3\end{vmatrix} \\
&= -2 \cdot (1 \cdot (-1)) + 1 \cdot (2 \cdot (-1)) + 2 \cdot (2 \cdot 3 - 1 \cdot (-3)) \\
&= -2 \cdot (-1) + 1 \cdot (-2 ) + 2 \cdot (6 + 3) \\
&= 2 -2 + 2 \cdot 9 = 18.
\end{align}
```python
def extractMij(A, i, verbose = False):
"""Extract the minor M_ij of A, where we take j=0"""
n = len(A)
M0 = A[:, np.r_[1:n]]
Mi0 = None
if (i == 0):
Mi0 = M0[list(range(1, n)), :]
else:
Mi0 = M0[list(range(0, i)) + list(range(i+1, n)), :]
return Mi0
```
```python
def determinant(A, verbose = False):
"""Calculate the determinant of the A matrix, according to the Lagrange algorithm.
Note that the matrix indices in classical litterature (including in the
reference document, ie Wikipedia) are usually ranging between 1 and n,
where as the indices in NumPy are ranging between 0 and n-1.
Hence, the variable is here called jm1 (standing for j minus -1).
We select the first column (jm1=0), and go through the rows."""
n = len(A)
det = 0
if (verbose == True):
print("A (size = %s): \n" % n, str(A))
print("Determinant of A: ", determinant(A))
if (n == 2):
det = (A[0, 0] * A[1, 1] - A[1, 0] * A[0, 1])
else:
for im1 in range(0, n):
Mij = extractMij(A, im1)
aij = A[im1, 0]
det += (-1)**im1 * aij * determinant(Mij)
if (verbose == True):
print("(-1)**(i+j), aij, det(Mij) for (i, j) = (%s, 1): " % i, (-1)**i, aij, determinant(Mij))
print("Mij for (i, j) = (%s, 1): \n" % i, str(Mij))
return det
```
```python
A = np.matrix([[-2, 2, -3], [-1, 1, 3], [2, 0, -1]])
determinant(A, verbose=True)
```
A (size = 3):
[[-2 2 -3]
[-1 1 3]
[ 2 0 -1]]
Determinant of A: 18
    (-1)**(i+j), aij, det(Mij) for (i, j) = (1, 1):  1 -2 -1
Mij for (i, j) = (2, 1):
[[ 1 3]
[ 0 -1]]
    (-1)**(i+j), aij, det(Mij) for (i, j) = (2, 1):  -1 -1 -2
Mij for (i, j) = (2, 1):
[[ 2 -3]
[ 0 -1]]
    (-1)**(i+j), aij, det(Mij) for (i, j) = (3, 1):  1 2 9
Mij for (i, j) = (2, 1):
[[ 2 -3]
[ 1 3]]
18
```python
A = np.random.randint(1, 10, size=(5, 5))
determinant(A)
```
5469
```python
```
| 725c755d1e15fed5af00ec3354310b3a6cc5b148 | 9,408 | ipynb | Jupyter Notebook | general/recursion-examples.ipynb | machine-learning-helpers/induction-books-python | d26816f92d4f6a64e8c4c2ed6c7c8343c77cd3ad | [
"RSA-MD"
]
| 3 | 2018-02-11T12:34:19.000Z | 2021-09-22T18:06:01.000Z | general/recursion-examples.ipynb | machine-learning-helpers/induction-books-python | d26816f92d4f6a64e8c4c2ed6c7c8343c77cd3ad | [
"RSA-MD"
]
| 17 | 2019-11-22T00:48:20.000Z | 2022-01-16T11:00:50.000Z | general/recursion-examples.ipynb | machine-learning-helpers/induction-python | 631a735a155f0feb7012472fbca13efbc273dfb0 | [
"RSA-MD"
]
| null | null | null | 29.037037 | 249 | 0.426552 | true | 2,220 | Qwen/Qwen-72B | 1. YES
2. YES | 0.661923 | 0.817574 | 0.541171 | __label__eng_Latn | 0.332163 | 0.095652 |
```python
import sympy as sp
x = sp.Symbol('x')
t = sp.Symbol('t')
y = (5*t)*((0.2969*x**0.5)-(0.1260*x)-(0.3516*x**2)+(0.2843*x**3)-(0.1015*x**4))
dy = sp.diff(y,x)
print (dy)
```
5*t*(0.14845*x**(-0.5) - 0.406*x**3 + 0.8529*x**2 - 0.7032*x - 0.126)
```python
```
| 4873ebf19f39af0c4c020fc26ef53801e4269815 | 1,079 | ipynb | Jupyter Notebook | Airfoil Lab/Untitled.ipynb | pantartas/Lab-report | ca1b6722150070f6ecf126a86820418b316b0b7d | [
"MIT"
]
| null | null | null | Airfoil Lab/Untitled.ipynb | pantartas/Lab-report | ca1b6722150070f6ecf126a86820418b316b0b7d | [
"MIT"
]
| null | null | null | Airfoil Lab/Untitled.ipynb | pantartas/Lab-report | ca1b6722150070f6ecf126a86820418b316b0b7d | [
"MIT"
]
| 1 | 2021-12-16T06:32:34.000Z | 2021-12-16T06:32:34.000Z | 18.929825 | 89 | 0.468026 | true | 134 | Qwen/Qwen-72B | 1. YES
2. YES
| 0.956634 | 0.833325 | 0.797187 | __label__yue_Hant | 0.411363 | 0.690465 |
# Input Driven HMM
This notebook is a simple example of an HMM with exogenous inputs. The inputs modulate the probability of discrete state transitions via a multiclass logistic regression. Let $z_t \in \{1, \ldots, K\}$ denote the discrete latent state at time $t$ and $u_t \in \mathbb{R}^U$ be the exogenous input at time $t$. The transition probability is given by,
$$
\begin{align}
\Pr(z_t = k \mid z_{t-1} = j, u_t) =
\frac{\exp\{\log P_{j,k} + w_k^\mathsf{T} u_t\}}
{\sum_{k'=1}^K \exp\{\log P_{j,k'} + w_{k'}^\mathsf{T} u_t\}}.
\end{align}
$$
The parameters of the transition model are $P \in \mathbb{R}_+^{K \times K}$, a baseline set of (unnormalized) transition weights, and $W \in \mathbb{R}^{K \times U}$, a set of input weights.
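The transition rule above is just a row-wise softmax, so it can be written out directly in NumPy. The helper below is a sketch of that equation, not the internals of the `ssm` package.
```python
import numpy as np

def transition_matrix(log_P, W, u_t):
    """Input-dependent transition matrix: softmax over log P_{j,k} + w_k^T u_t."""
    scores = log_P + (W @ u_t)[None, :]            # add w_k^T u_t to column k
    scores -= scores.max(axis=1, keepdims=True)    # subtract row max for numerical stability
    P = np.exp(scores)
    return P / P.sum(axis=1, keepdims=True)        # each row sums to one
```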
## 1. Setup
The line `import ssm` imports the package for use. Here, we have also imported a few other packages for plotting.
```python
import numpy as np
import numpy.random as npr
import matplotlib.pyplot as plt
import ssm
import seaborn as sns
from ssm.util import one_hot, find_permutation
%matplotlib inline
npr.seed(0)
sns.set(palette="colorblind")
```
## 2. Create an Input Driven HMM
SSM is designed to be modular, so that the user can easily mix and match different types of transitions and observations.
We create an input-driven HMM with the following line:
```python
true_hmm = ssm.HMM(num_states, obs_dim, input_dim,
observations="categorical", observation_kwargs=dict(C=num_categories),
transitions="inputdriven")
```
Let's look at what each of the arguments do. The first three arguments specify the number of states, and the dimensionality of the observations and inputs.
**Setting the observation model**
For this example, we have set `observations="categorical"`, which means each observation will take on one of a discrete set of values, i.e. $y_t \in \{1, \ldots, C \}$.
For categorical observations, the observations are drawn from a multinomial distribution, with parameters depending on the current state. Assuming $z_t = k$, the observations are a vector $y \in \mathbb{R}^D$, where $y_i \sim \text{mult} (\lambda_{k,i})$, where $\lambda_{k,i}$ is the multinomial parameter associated with coordinate $i$ of the observations in state $k$. Note that each observation variable is independent from the others.
For categorical observations, we also specify the number of discrete observations possible (in this case 3). We do this by creating a dictionary where the keys are the keyword arguments which we want to pass to the observation model. For categorical observations, there is just one keyword argument, `C`, which specifies the number of categories. This is set using `observation_kwargs=dict(C=num_categories)`.
The observations keyword argument should be one of: `"gaussian", "poisson", "studentst", "exponential", "bernoulli", "autoregressive", "robust_autoregressive"`.
**NOTE:**
Setting the observations as "autoregressive" means that each observation will be dependent on the prior observation, as well as on the input (if the input is nonzero). By constrast, the standard "inputdriven" transitions are not affected by previous observations or directly by the inputs.
**Setting the transition model**
In order to create an HMM with exogenous inputs, we set ```transitions="inputdriven"```. This means that the baseline transition matrix $P$ is modified according to a Generalized Linear Model, as described at the top of the page.
SSM support many transition models, set by keyword argument to the constructor of the class. The keyword argument should be one of: `"standard", "sticky", "inputdriven", "recurrent", "recurrent_only", "rbf_recurrent", "nn_recurrent".` We're working on creating standalone documentation to describe these in more detail. For most users, the stationary and input driven transition classes should suffice.
**Creating inputs and sampling**
After creating our HMM object, we create an input array called `inpt` which is simply a jittered sine wave. We also increase the transition weights so that it will be clear (for demonstration purposes) that the input is changing the transition probabilities. In this case, we will actually increase the weights such that the transitions appear almost deterministic.
```python
# Set the parameters of the HMM
time_bins = 1000 # number of time bins
num_states = 2 # number of discrete states
obs_dim = 1 # data dimension
input_dim = 1 # input dimension
num_categories = 3 # number of output types/categories
# Make an HMM
true_hmm = ssm.HMM(num_states, obs_dim, input_dim,
observations="categorical", observation_kwargs=dict(C=num_categories),
transitions="inputdriven")
# Optionally, turn up the input weights to exaggerate the effect
true_hmm.transitions.Ws *= 3
# Create an exogenous input
inpt = np.sin(2 * np.pi * np.arange(time_bins) / 50)[:, None] + 1e-1 * npr.randn(time_bins, input_dim)
# Sample some data from the HMM
true_states, obs = true_hmm.sample(time_bins, input=inpt)
# Compute the true log probability of the data, summing out the discrete states
true_lp = true_hmm.log_probability(obs, inputs=inpt)
# By default, SSM returns categorical observations as a list of lists.
# We convert to a 1D array for plotting.
obs_flat = np.array([x[0] for x in obs])
```
```python
np.dot(inpt[1:], true_hmm.transitions.Ws.T)
```
array([[ 0.77037179, -0.40312928],
[ 1.64201046, -0.85925069],
[ 2.2494287 , -1.17710771],
...,
[-1.92855912, 1.00919927],
[-1.95378084, 1.02239759],
[ 0.23567837, -0.12332857]])
```python
true_hmm.log_Ps
```
```python
true_hmm.transitions.Ws.T.shape
```
(1, 2)
```python
# Plot the data
plt.figure(figsize=(8, 5))
plt.subplot(311)
plt.plot(inpt)
plt.xticks([])
plt.xlim(0, time_bins)
plt.ylabel("input")
plt.subplot(312)
plt.imshow(true_states[None, :], aspect="auto")
plt.xticks([])
plt.xlim(0, time_bins)
plt.ylabel("discrete\nstate")
plt.yticks([])
# Create Cmap for visualizing categorical observations
plt.subplot(313)
plt.imshow(obs_flat[None,:], aspect="auto", )
plt.xlim(0, time_bins)
plt.ylabel("observation")
plt.grid(b=None)
plt.show()
```
### 2.1 Exercise: EM for the input-driven HMM
There are a few good references that derive the EM algorithm for the case of a vanilla HMM (e.g., *Pattern Recognition and Machine Learning* by Chris Bishop, and [this tutorial](https://www.ece.ucsb.edu/Faculty/Rabiner/ece259/Reprints/tutorial%20on%20hmm%20and%20applications.pdf) by Lawrence Rabiner). How should the EM updates change for the case of input-driven HMMs?
## 3. Fit an input-driven HMM to data
Below, we'll show to fit an input-driven HMM from data. We'll treat the samples generated above as a dataset, and try to learn the appropriate HMM parameters from this dataset.
We create a new HMM object here, with the same parameters as the HMM in Section 1:
```python
hmm = ssm.HMM(num_states, obs_dim, input_dim,
observations="categorical", observation_kwargs=dict(C=num_categories),
transitions="inputdriven")
```
We fit the dataset simply by calling the `fit` method:
```python
hmm_lps = hmm.fit(obs, inputs=inpt, method="em", num_iters=N_iters)
```
Here, the variable `hmm_lps` will be set to a list of log-probabilities at each step of the EM-algorithm, which we'll use to check convergence.
```python
# Now create a new HMM and fit it to the data with EM
N_iters = 100
hmm = ssm.HMM(num_states, obs_dim, input_dim,
observations="categorical", observation_kwargs=dict(C=num_categories),
transitions="inputdriven")
# Fit
hmm_lps = hmm.fit(obs, inputs=inpt, method="em", num_iters=N_iters)
```
0%| | 0/100 [00:00<?, ?it/s]
### 3.1 Permute the latent states, check convergence
As in the vanilla-HMM notebook, we need to find a permutation of the latent states from our new hmm such that they match the states from the true HMM above. SSM accomplishes this with two function calls: first, we call `find_permutation(true_states, inferred_states)` which returns a list of indexes into states.
Then, we call `hmm.permute(permuation)` with the results of our first function call. Finally, we set `inferred_states` to be the underlying states we predict given the data.
Below, we plot the results of the `fit` function in order to check convergence of the EM algorithm. We see that the log-probability from the EM algorithm approaches the true log-probability of the data (which we have stored as `true_lp`).
```python
# Find a permutation of the states that best matches the true and inferred states
hmm.permute(find_permutation(true_states, hmm.most_likely_states(obs, input=inpt)))
inferred_states = hmm.most_likely_states(obs, input=inpt)
```
```python
# Plot the log probabilities of the true and fit models
plt.plot(hmm_lps, label="EM")
plt.plot([0, N_iters], true_lp * np.ones(2), ':k', label="True")
plt.legend(loc="lower right")
plt.xlabel("EM Iteration")
plt.xlim(0, N_iters)
plt.ylabel("Log Probability")
plt.show()
```
### 3.3 Exercise: Change the Fitting Method
As an experiment, try fitting the same dataset using another fitting method. The two other fitting methods supported for HMMs are "sgd" and "adam", which you can set by passing `method="sgd"` and `method="adam"` respectively. For these methods, you'll probably need to increase the number of iterations to around 1000 or so.
After fitting with a different method, re-run the two cells above to generate a plot. How does the convergence of these other methods compare to EM?
```python
# Plot the true and inferred states
plt.figure(figsize=(8, 3.5))
plt.subplot(211)
plt.imshow(true_states[None, :], aspect="auto")
plt.xticks([])
plt.xlim(0, time_bins)
plt.ylabel("true\nstate")
plt.yticks([])
plt.subplot(212)
plt.imshow(inferred_states[None, :], aspect="auto")
plt.xlim(0, time_bins)
plt.ylabel("inferred\nstate")
plt.yticks([])
plt.show()
```
## 4. Visualize the Learned Parameters
After calling `fit`, our new HMM object will have parameters updated according to the dataset. We can get a sense of whether we successfully learned these parameters by comparing them to the _true_ parameters which generated the data.
Below, we plot the baseline log transition probabilities (the log of the state-transition matrix) as well as the input weights $w$.
```python
# Plot the true and inferred input effects
plt.figure(figsize=(8, 4))
vlim = max(abs(true_hmm.transitions.log_Ps).max(),
abs(true_hmm.transitions.Ws).max(),
abs(hmm.transitions.log_Ps).max(),
abs(hmm.transitions.Ws).max())
plt.subplot(141)
plt.imshow(true_hmm.transitions.log_Ps, vmin=-vlim, vmax=vlim, cmap="RdBu", aspect=1)
plt.xticks(np.arange(num_states))
plt.yticks(np.arange(num_states))
plt.title("True\nBaseline Weights")
plt.grid(b=None)
plt.subplot(142)
plt.imshow(true_hmm.transitions.Ws, vmin=-vlim, vmax=vlim, cmap="RdBu", aspect=num_states/input_dim)
plt.xticks(np.arange(input_dim))
plt.yticks(np.arange(num_states))
plt.title("True\nInput Weights")
plt.grid(b=None)
plt.subplot(143)
plt.imshow(hmm.transitions.log_Ps, vmin=-vlim, vmax=vlim, cmap="RdBu", aspect=1)
plt.xticks(np.arange(num_states))
plt.yticks(np.arange(num_states))
plt.title("Inferred\nBaseline Weights")
plt.grid(b=None)
plt.subplot(144)
plt.imshow(hmm.transitions.Ws, vmin=-vlim, vmax=vlim, cmap="RdBu", aspect=num_states/input_dim)
plt.xticks(np.arange(input_dim))
plt.yticks(np.arange(num_states))
plt.title("Inferred\nInput Weights")
plt.grid(b=None)
plt.colorbar()
plt.show()
```
```python
np.exp(true_hmm.transitions.log_Ps)
```
array([[0.96470641, 0.03529359],
[0.02991731, 0.97008269]])
```python
true_hmm.transitions.Ws
```
array([[ 5.60267397],
[-2.93183364]])
```python
hmm.transitions.Ws
```
array([[ 3.40151154],
[-5.75812834]])
```python
```
| 8f623118b5bbd8c80f29ddcd4c8dfaa5ffc95e17 | 106,383 | ipynb | Jupyter Notebook | notebooks/2 Input Driven HMM.ipynb | nhat-le/ssm | 2f386c04bf7540b0075f40b5d0ae3923296d8bfd | [
"MIT"
]
| null | null | null | notebooks/2 Input Driven HMM.ipynb | nhat-le/ssm | 2f386c04bf7540b0075f40b5d0ae3923296d8bfd | [
"MIT"
]
| null | null | null | notebooks/2 Input Driven HMM.ipynb | nhat-le/ssm | 2f386c04bf7540b0075f40b5d0ae3923296d8bfd | [
"MIT"
]
| null | null | null | 195.198165 | 44,476 | 0.898668 | true | 3,101 | Qwen/Qwen-72B | 1. YES
2. YES | 0.934395 | 0.795658 | 0.743459 | __label__eng_Latn | 0.969825 | 0.565636 |
<center>
</center>
# Non Linear Regression Analysis
Estimated time needed: **20** minutes
## Objectives
After completing this lab you will be able to:
- Differentiate between Linear and non-linear regression
- Use Non-linear regression model in Python
If the data shows a curvy trend, then linear regression will not produce very accurate results when compared to a non-linear regression because, as the name implies, linear regression presumes that the data is linear.
Let's learn about non linear regressions and apply an example in Python. In this notebook, we fit a non-linear model to the datapoints corresponding to China's GDP from 1960 to 2014.
<h2 id="importing_libraries">Importing required libraries</h2>
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
Though linear regression is very good for solving many problems, it cannot be used for all datasets. First, recall how linear regression models a dataset: it models a linear relation between a dependent variable $y$ and an independent variable $x$, using a simple equation of degree 1, for example $y = 2x + 3$.
```python
x = np.arange(-5.0, 5.0, 0.1)
##You can adjust the slope and intercept to verify the changes in the graph
y = 2*(x) + 3
y_noise = 1 * np.random.normal(size=x.size)
ydata = y + y_noise
#plt.figure(figsize=(8,6))
plt.plot(x, ydata, 'bo')
plt.plot(x,y, 'r')
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
```
Non-linear regressions are a relationship between independent variables $x$ and a dependent variable $y$ which result in a non-linear function modeled data. Essentially any relationship that is not linear can be termed as non-linear, and is usually represented by the polynomial of $k$ degrees (maximum power of $x$).
$$ \ y = a x^3 + b x^2 + c x + d \ $$
Non-linear functions can have elements like exponentials, logarithms, fractions, and others. For example: $$ y = \log(x)$$
Or even, more complicated such as :
$$ y = \log(a x^3 + b x^2 + c x + d)$$
Let's take a look at a cubic function's graph.
```python
x = np.arange(-5.0, 5.0, 0.1)
##You can adjust the slope and intercept to verify the changes in the graph
y = 1*(x**3) + 1*(x**2) + 1*x + 3
y_noise = 10* np.random.normal(size=x.size)
ydata = y + y_noise
plt.plot(x, ydata, 'bo')
plt.plot(x,y, 'r')
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
```
As you can see, this function has $x^3$ and $x^2$ as independent variables. Also, the graphic of this function is not a straight line over the 2D plane. So this is a non-linear function.
Some other types of non-linear functions are:
### Quadratic
$$ Y = X^2 $$
```python
x = np.arange(-5.0, 5.0, 0.1)
##You can adjust the slope and intercept to verify the changes in the graph
y = np.power(x,2)
y_noise = 2 * np.random.normal(size=x.size)
ydata = y + y_noise
plt.plot(x, ydata, 'bo')
plt.plot(x,y, 'r')
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
```
### Exponential
An exponential function with base $c$ is defined by $$ Y = a + b c^X$$ where $b \ne 0$, $c > 0$, $c \ne 1$, and $x$ is any real number. The base, $c$, is constant and the exponent, $x$, is a variable.
```python
X = np.arange(-5.0, 5.0, 0.1)
##You can adjust the slope and intercept to verify the changes in the graph
Y= np.exp(X)
plt.plot(X,Y)
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
```
### Logarithmic
The response $y$ is the result of applying a logarithmic map from the input $x$ to the output variable $y$. In its simplest form: $$ y = \log(x)$$
Note that instead of $x$, we can use $X$, which can be a polynomial representation of the $x$'s. In general form it would be written as
\begin{equation}
y = \log(X)
\end{equation}
```python
X = np.arange(-5.0, 5.0, 0.1)
Y = np.log(X)
plt.plot(X,Y)
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
```
### Sigmoidal/Logistic
$$ Y = a + \frac{b}{1+ c^{(X-d)}}$$
```python
X = np.arange(-5.0, 5.0, 0.1)
Y = 1-4/(1+np.power(3, X-2))
plt.plot(X,Y)
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
```
<a id="ref2"></a>
# Non-Linear Regression example
For an example, we're going to try and fit a non-linear model to the datapoints corresponding to China's GDP from 1960 to 2014. We download a dataset with two columns, the first, a year between 1960 and 2014, the second, China's corresponding annual gross domestic income in US dollars for that year.
```python
import numpy as np
import pandas as pd
#downloading dataset
!wget -nv -O china_gdp.csv https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%202/data/china_gdp.csv
df = pd.read_csv("china_gdp.csv")
df.head(10)
```
2020-12-02 12:33:48 URL:https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%202/data/china_gdp.csv [1218/1218] -> "china_gdp.csv" [1]
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Year</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>1960</td>
<td>5.918412e+10</td>
</tr>
<tr>
<th>1</th>
<td>1961</td>
<td>4.955705e+10</td>
</tr>
<tr>
<th>2</th>
<td>1962</td>
<td>4.668518e+10</td>
</tr>
<tr>
<th>3</th>
<td>1963</td>
<td>5.009730e+10</td>
</tr>
<tr>
<th>4</th>
<td>1964</td>
<td>5.906225e+10</td>
</tr>
<tr>
<th>5</th>
<td>1965</td>
<td>6.970915e+10</td>
</tr>
<tr>
<th>6</th>
<td>1966</td>
<td>7.587943e+10</td>
</tr>
<tr>
<th>7</th>
<td>1967</td>
<td>7.205703e+10</td>
</tr>
<tr>
<th>8</th>
<td>1968</td>
<td>6.999350e+10</td>
</tr>
<tr>
<th>9</th>
<td>1969</td>
<td>7.871882e+10</td>
</tr>
</tbody>
</table>
</div>
**Did you know?** When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 Tb of IBM Cloud Object Storage: [Sign up now for free](http://cocl.us/ML0101EN-IBM-Offer-CC)
### Plotting the Dataset
This is what the datapoints look like. The curve resembles either a logistic or an exponential function: the growth starts off slow, then from 2005 onward it becomes very significant, and finally it decelerates slightly in the 2010s.
```python
plt.figure(figsize=(8,5))
x_data, y_data = (df["Year"].values, df["Value"].values)
plt.plot(x_data, y_data, 'ro')
plt.ylabel('GDP')
plt.xlabel('Year')
plt.show()
```
### Choosing a model
From an initial look at the plot, we determine that the logistic function could be a good approximation,
since it has the property of starting with a slow growth, increasing growth in the middle, and then decreasing again at the end; as illustrated below:
```python
X = np.arange(-5.0, 5.0, 0.1)
Y = 1.0 / (1.0 + np.exp(-X))
plt.plot(X,Y)
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
```
The formula for the logistic function is the following:
$$ \hat{Y} = \frac{1}{1+e^{-\beta_1(X-\beta_2)}}$$
$\beta_1$: Controls the curve's steepness,
$\beta_2$: Slides the curve on the x-axis.
### Building The Model
Now, let's build our regression model and initialize its parameters.
```python
def sigmoid(x, Beta_1, Beta_2):
y = 1 / (1 + np.exp(-Beta_1*(x-Beta_2)))
return y
```
Let's look at a sample sigmoid line that might fit the data:
```python
beta_1 = 0.10
beta_2 = 1990.0
#logistic function
Y_pred = sigmoid(x_data, beta_1 , beta_2)
#plot initial prediction against datapoints
plt.plot(x_data, Y_pred*15000000000000.)
plt.plot(x_data, y_data, 'ro')
```
Our task here is to find the best parameters for our model. Let's first normalize our x and y:
```python
# Lets normalize our data
xdata =x_data/max(x_data)
ydata =y_data/max(y_data)
```
#### How do we find the best parameters for our fit line?
We can use **curve_fit**, which uses non-linear least squares to fit our sigmoid function to the data. It finds the values of the parameters for which the sum of the squared residuals of sigmoid(xdata, *popt) - ydata is minimized.
popt holds our optimized parameters.
```python
from scipy.optimize import curve_fit
popt, pcov = curve_fit(sigmoid, xdata, ydata)
#print the final parameters
print(" beta_1 = %f, beta_2 = %f" % (popt[0], popt[1]))
```
beta_1 = 690.447527, beta_2 = 0.997207
Now we plot our resulting regression model.
```python
x = np.linspace(1960, 2015, 55)
x = x/max(x)
plt.figure(figsize=(8,5))
y = sigmoid(x, *popt)
plt.plot(xdata, ydata, 'ro', label='data')
plt.plot(x,y, linewidth=3.0, label='fit')
plt.legend(loc='best')
plt.ylabel('GDP')
plt.xlabel('Year')
plt.show()
```
## Practice
Can you calculate the accuracy of our model?
```python
# split data into train/test
msk = np.random.rand(len(df)) < 0.8
train_x = xdata[msk]
test_x = xdata[~msk]
train_y = ydata[msk]
test_y = ydata[~msk]
# build the model using train set
popt, pcov = curve_fit(sigmoid, train_x, train_y)
# predict using test set
y_hat = sigmoid(test_x, *popt)
# evaluation
print("Mean absolute error: %.2f" % np.mean(np.absolute(y_hat - test_y)))
print("Residual sum of squares (MSE): %.2f" % np.mean((y_hat - test_y) ** 2))
from sklearn.metrics import r2_score
print("R2-score: %.2f" % r2_score(y_hat , test_y) )
```
Mean absolute error: 0.03
Residual sum of squares (MSE): 0.00
R2-score: 0.98
Double-click **here** for the solution.
<!-- Your answer is below:
# split data into train/test
msk = np.random.rand(len(df)) < 0.8
train_x = xdata[msk]
test_x = xdata[~msk]
train_y = ydata[msk]
test_y = ydata[~msk]
# build the model using train set
popt, pcov = curve_fit(sigmoid, train_x, train_y)
# predict using test set
y_hat = sigmoid(test_x, *popt)
# evaluation
print("Mean absolute error: %.2f" % np.mean(np.absolute(y_hat - test_y)))
print("Residual sum of squares (MSE): %.2f" % np.mean((y_hat - test_y) ** 2))
from sklearn.metrics import r2_score
print("R2-score: %.2f" % r2_score(y_hat , test_y) )
-->
<h2>Want to learn more?</h2>
IBM SPSS Modeler is a comprehensive analytics platform that has many machine learning algorithms. It has been designed to bring predictive intelligence to decisions made by individuals, by groups, by systems – by your enterprise as a whole. A free trial is available through this course, available here: <a href="https://www.ibm.com/analytics/spss-statistics-software">SPSS Modeler</a>
Also, you can use Watson Studio to run these notebooks faster with bigger datasets. Watson Studio is IBM's leading cloud solution for data scientists, built by data scientists. With Jupyter notebooks, RStudio, Apache Spark and popular libraries pre-packaged in the cloud, Watson Studio enables data scientists to collaborate on their projects without having to install anything. Join the fast-growing community of Watson Studio users today with a free account at <a href="https://www.ibm.com/cloud/watson-studio">Watson Studio</a>
### Thank you for completing this lab!
## Author
Saeed Aghabozorgi
### Other Contributors
<a href="https://www.linkedin.com/in/joseph-s-50398b136/" target="_blank">Joseph Santarcangelo</a>
## Change Log
| Date (YYYY-MM-DD) | Version | Changed By | Change Description |
| ----------------- | ------- | ---------- | ---------------------------------- |
| 2020-11-03 | 2.1 | Lakshmi | Made changes in URL |
| 2020-08-27 | 2.0 | Lavanya | Moved lab to course repo in GitLab |
| | | | |
| | | | |
<h3 align="center">© IBM Corporation 2020. All rights reserved.</h3>
| 14e4b591323540368f1d6b7edbe3027e34c02774 | 172,829 | ipynb | Jupyter Notebook | ML0101EN-Reg-NoneLinearRegression-py-v1.ipynb | naha7789/ML-IBM-Exercise- | f2897bcaa28fa5c8786147416bf1b0e0078a79a5 | [
"BSD-4-Clause-UC"
]
| 3 | 2020-12-03T09:19:16.000Z | 2020-12-04T18:02:24.000Z | ML0101EN-Reg-NoneLinearRegression-py-v1.ipynb | naha7789/ML-IBM-Exercise | f2897bcaa28fa5c8786147416bf1b0e0078a79a5 | [
"BSD-4-Clause-UC"
]
| null | null | null | ML0101EN-Reg-NoneLinearRegression-py-v1.ipynb | naha7789/ML-IBM-Exercise | f2897bcaa28fa5c8786147416bf1b0e0078a79a5 | [
"BSD-4-Clause-UC"
]
| null | null | null | 199.571594 | 18,572 | 0.904651 | true | 3,628 | Qwen/Qwen-72B | 1. YES
2. YES | 0.847968 | 0.805632 | 0.68315 | __label__eng_Latn | 0.956549 | 0.425518 |
# Explicit Methods for the Model Hyperbolic PDE
The one-dimensional wave equation or linear convection equation is given by the following partial differential equation.
$$
\frac{\partial u}{\partial t} + c \frac{\partial u}{\partial x} = 0
$$
When we solve this PDE numerically, we divide the spatial and temporal domains into a series of mesh points and time points. Assume our problem domain is of length $L$, and that we want to compute the solution of $u(x,t)$ from time equal to zero up to some final time, $t_f$, for all values of $x$ between zero and $L$. The first step in solving our PDE numerically is to divide the domain, $0 \leq x \leq L$, into a discrete number of mesh points or nodes, $N_x$. Likewise, we also divide the time domain, $0 \leq t \leq t_f$, into a discrete number of time steps, $N_t$. If we do so uniformly in time and space, then the distance between each mesh point is $\Delta x$, and the distance between each time step is $\Delta t$.
In other words, for a uniform mesh size and a uniform time step, the value of $x$ for the i-th node is
$$
x_i = i \Delta x
$$
and the time at each time step, $n$, is
$$
t_n = n \Delta t
$$
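As a concrete illustration, here is a minimal NumPy sketch of this discretization (the values of $L$, $t_f$, $N_x$, and $N_t$ below are arbitrary choices for the example, not values from the text):
```python
import numpy as np

# Assumed example values for the domain length, final time, and resolution
L, t_f = 1.0, 0.5
Nx, Nt = 41, 101

dx = L / (Nx - 1)    # uniform mesh spacing, Delta x
dt = t_f / (Nt - 1)  # uniform time step, Delta t

x = np.array([i * dx for i in range(Nx)])  # x_i = i * dx
t = np.array([n * dt for n in range(Nt)])  # t_n = n * dt
# Equivalently: x = np.linspace(0.0, L, Nx); t = np.linspace(0.0, t_f, Nt)
```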
The goal is to approximate this PDE as a difference equation, meaning that we want to represent the PDE using only a discrete number of mesh points, $N_x$, and time points, $N_t$. To do this requires using a Taylor series expansion about each point in the mesh, both in time and space, to approximate the time and space derivatives of $u$ in terms of a difference formula. Since in our Taylor series expansion we choose only a finite number of terms, these difference formulas are known as finite-difference approximations.
Since there are many different finite-difference formulas, let use define a common nomenclature. Let the symbol $\mathcal{D}$ represent a difference approximation. The subscript of $\mathcal{D}$ shall represent the direction of the finite-difference, forward (+), backward (-), or central (0). Last, let the denominator represent the domain over which the finite-difference formula is used. For the Cartesian domain, let us use the $\Delta x$ to represent a finite-difference approximation in the $x$-direction. Likewise, $\Delta y$ would represent a finite-difference approximation in the $y$-direction.
## First-Derivative Approximations
Using the nomenclature described above, let us define a set of finite-difference approximations for the first derivative of the variable $\phi$ with respect to $x$. In other words, what are the available finite-difference formulae for
$$
\frac{ \partial \phi }{ \partial x} = \textrm{Finite-difference Approximation} + \textrm{Truncation Error}
$$
These expressions can be derived using a Taylor series expansion. By keeping track of the order of the higher-order terms neglected or truncated from the Taylor series expansion, we can also provide an estimate of the truncation error. We say that a finite-difference approximation is first-order accurate if the truncation error is of order $\mathcal{O} (\Delta x)$. It is second-order accurate if the truncation error is of order $\mathcal{O} (\Delta x^2)$.
**Forward difference**, first-order accurate, $\mathcal{O} (\Delta x)$
- Equally spaced
$$
\frac{\mathcal{D}_{+}\cdot}{\Delta x} \phi_i = \frac{\phi_{i+1} - \phi_i}{\Delta x}
$$
- Nonequally spaced
$$
\frac{\mathcal{D}_{+}\cdot}{\Delta x} \phi_i = \frac{\phi_{i+1} - \phi_i}{x_{i+1} - x_i}
$$
**Backward difference**, first-order accurate, $\mathcal{O} (\Delta x)$
- Equally spaced
$$
\frac{\mathcal{D}_{-}\cdot}{\Delta x} \phi_i = \frac{\phi_{i} - \phi_{i-1}}{\Delta x}
$$
- Nonequally spaced
$$
\frac{\mathcal{D}_{-}\cdot}{\Delta x} \phi_i = \frac{\phi_{i} - \phi_{i-1}}{x_{i} - x_{i-1}}
$$
**Central difference**, second-order accurate, $\mathcal{O} (\Delta x^2)$
- Equally spaced
$$
\frac{\mathcal{D}_{0}\cdot}{\Delta x} \phi_i = \frac{\phi_{i+1} - \phi_{i-1}}{2\Delta x}
$$
- Nonequally spaced
$$
\frac{\mathcal{D}_{0}\cdot}{\Delta x} \phi_i = \frac{\phi_{i+1} - \phi_{i-1}}{x_{i+1} - x_{i-1}}
$$
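As a quick numerical check of the orders of accuracy quoted above (a sketch added for illustration; the test function and evaluation point are arbitrary), the forward-difference error should shrink roughly linearly with $\Delta x$, while the central-difference error shrinks quadratically:
```python
import numpy as np

def forward_diff(f, x, dx):
    return (f(x + dx) - f(x)) / dx

def central_diff(f, x, dx):
    return (f(x + dx) - f(x - dx)) / (2.0 * dx)

x0 = 0.7
exact = np.cos(x0)  # d/dx sin(x) = cos(x)
for dx in [0.1, 0.05, 0.025]:
    e_fwd = abs(forward_diff(np.sin, x0, dx) - exact)  # ~ O(dx)
    e_cen = abs(central_diff(np.sin, x0, dx) - exact)  # ~ O(dx^2)
    print(f"dx = {dx:6.3f}   forward error = {e_fwd:.2e}   central error = {e_cen:.2e}")
```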
## Difference Equations for the Model Hyperbolic PDE
Let us use the above finite-difference approximations to derive several difference equations for the model hyperbolic PDE. We have free rein in this choice, and most importantly, we are certainly not limited to those provided above. It is possible to derive third- and fourth-order finite-difference approximations, although these higher-order methods can become increasingly complex. Note, however, that while we can freely choose the difference approximation, we still need to check that the resulting difference equation is not only consistent but also stable. Different combinations of time and spatial difference approximations result in different numerical stability.
### Forward-Time, Backward Space
Using a forward-time, backward-space difference approximation, the partial differential equation,
$$
\frac{\partial u}{\partial t} + c \frac{\partial u}{\partial x} = 0
$$
is transformed into the following difference equation,
$$
\frac{\mathcal{D}_{+}}{\Delta t} \Big( u^n_i \Big) + c \frac{\mathcal{D}_{-}}{\Delta x} \Big( u^n_i \Big) = 0,
$$
which we can then expand as
$$
\frac{ u^{n+1}_i - u^n_i }{\Delta t} + c \frac{ u^n_i - u^n_{i-1} }{\Delta x} = 0.
$$
From the truncation error of the difference approximations, we can say that this method is first-order accurate in time and space; in other words, the truncation error is of the order $\mathcal{O}(\Delta t, \Delta x)$. This numerical method can be represented by a *stencil* diagram, which visually shows the mesh points and time points used in the difference equation.
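A minimal NumPy sketch of one way to implement this FTBS update (the initial profile, parameter values, and simple fixed left boundary are assumptions made purely for illustration):
```python
import numpy as np

c, L, t_f = 1.0, 1.0, 0.25
Nx = 101
dx = L / (Nx - 1)
dt = 0.8 * dx / abs(c)   # time step kept small relative to dx (see the stability discussion below)
Nt = int(t_f / dt)

x = np.linspace(0.0, L, Nx)
u = np.exp(-200.0 * (x - 0.3)**2)  # assumed Gaussian initial profile

for n in range(Nt):
    u_new = u.copy()
    # u_i^{n+1} = u_i^n - c*dt/dx * (u_i^n - u_{i-1}^n) at the interior points
    u_new[1:] = u[1:] - c * dt / dx * (u[1:] - u[:-1])
    u = u_new  # the left boundary value is simply held fixed here
```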
### Forward-Time, Forward Space
Using a forward-time, forward-space difference approximation, the partial differential equation,
$$
\frac{\partial u}{\partial t} + c \frac{\partial u}{\partial x} = 0
$$
is transformed into the following difference equation,
$$
\frac{\mathcal{D}_{+}}{\Delta t} \Big( u^n_i \Big) + c \frac{\mathcal{D}_{+}}{\Delta x} \Big( u^n_i \Big) = 0,
$$
which we can then expand as
$$
\frac{ u^{n+1}_i - u^n_i }{\Delta t} + c \frac{ u^n_{i+1} - u^n_{i} }{\Delta x} = 0.
$$
From the truncation error of the difference approximations, we can say that this method is first-order accurate in time and space; in other words, the truncation error is of the order $\mathcal{O}(\Delta t, \Delta x)$. This numerical method can be represented by a *stencil* diagram, which visually shows the mesh points and time points used in the difference equation.
### Forward-Time, Central Space
Using a forward-time, central-space difference approximation, the partial differential equation,
$$
\frac{\partial u}{\partial t} + c \frac{\partial u}{\partial x} = 0
$$
is transformed into the following difference equation,
$$
\frac{\mathcal{D}_{+}}{\Delta t} \Big( u^n_i \Big) + c \frac{\mathcal{D}_{0}}{\Delta x} \Big( u^n_i \Big) = 0,
$$
which we can then expand as
$$
\frac{ u^{n+1}_i - u^n_i }{\Delta t} + c \frac{ u^n_{i+1} - u^n_{i-1} }{2 \Delta x} = 0.
$$
From the truncation error of the difference approximations, we can say that this method is first-order accurate in time and second-order accurate in space; in other words, the truncation error is of the order $\mathcal{O}(\Delta t, \Delta x^2)$. This numerical method can be represented by a *stencil* diagram, which visually shows the mesh points and time points used in the difference equation.
## Are all these methods stable?
***We need to apply von Neumann stability analysis to each of these difference equations to determine under what conditions, i.e., for what values of $\Delta t$ and $\Delta x$, the numerical method is stable.***
# Summary of Numerical Methods
## Explicit, Forward-Time, Backward-Space (FTBS)
$$
\begin{align}
\textrm{Method :}\quad & u^{n+1}_i = u^n_i - c \Delta t \frac{\mathcal{D}_{-} \cdot}{\Delta x} u_i^n \\
\textrm{Stability Criteria :}\quad & c > 0, \quad \Delta t \le \frac{\Delta x}{|c|} \\
\textrm{Order of Accuracy :}\quad & \mathcal{O}\left(\Delta t, \Delta x\right)
\end{align}
$$
## Explicit, Forward-Time, Forward-Space (FTFS)
$$
\begin{align}
\textrm{Method :}\quad & u^{n+1}_i = u^n_i - c \Delta t \frac{\mathcal{D}_{+} \cdot}{\Delta x} u_i^n \\
\textrm{Stability Criteria :}\quad & c < 0, \quad \Delta t \le \frac{\Delta x}{|c|} \\
\textrm{Order of Accuracy :}\quad & \mathcal{O}\left(\Delta t, \Delta x\right)
\end{align}
$$
Note that this method is only stable for $c < 0$.
## Explicit, Forward-Time, Central-Space (FTCS)
$$
\begin{align}
\textrm{Method :}\quad & u^{n+1}_i = u^n_i - c \Delta t \frac{\mathcal{D}_{0} \cdot}{\Delta x} u_i^n \\
\textrm{Stability Criteria :}\quad & \textrm{Always unstable} \\
\textrm{Order of Accuracy :}\quad & \mathcal{O}\left(\Delta t, \Delta x^2\right)
\end{align}
$$
This method is *always* unstable.
## Lax
$$
\begin{align}
\textrm{Method :}\quad & u^{n+1}_i = \frac{u^n_{i-1} + u^n_{i+1}}{2} - c \Delta t \frac{\mathcal{D}_{0} \cdot}{\Delta x} u_i^n \\
\textrm{Stability Criteria :}\quad & \Delta t \le \frac{\Delta x}{|c|} \\
\textrm{Order of Accuracy :}\quad & \mathcal{O}\left(\Delta t, \frac{\Delta x^2 } { \Delta t}, \Delta x^2\right)
\end{align}
$$
The Lax algorithm is an *inconsistent* difference equation because the truncation error is not guaranteed to go to zero. It only does so if $\Delta x^2$ goes to zero faster than $\Delta t$.
## Lax-Wendroff
$$
\begin{align}
\textrm{Method :}\quad & u^{n+1}_i = u^n_i - c \Delta t \frac{\mathcal{D}_{0} \cdot}{\Delta x} u_i^n + \frac{1}{2}c^2 \Delta t^2 \frac{\mathcal{D}_{+} \cdot}{\Delta x} \frac{\mathcal{D}_{-} \cdot}{\Delta x} u_i^n \\
\textrm{Stability Criteria :}\quad & \Delta t \le \frac{\Delta x}{|c|} \\
\textrm{Order of Accuracy :}\quad & \mathcal{O}\left(\Delta t^2, \Delta x^2\right)
\end{align}
$$
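For reference, a sketch of how the Lax-Wendroff update might be written on a periodic mesh (the periodic boundary treatment via `np.roll` is an assumption; the notes above do not specify boundary conditions):
```python
import numpy as np

def lax_wendroff_step(u, c, dt, dx):
    """One Lax-Wendroff update of u for u_t + c u_x = 0 on a periodic mesh."""
    um = np.roll(u, 1)    # u_{i-1}
    up = np.roll(u, -1)   # u_{i+1}
    sigma = c * dt / dx
    # u_i^{n+1} = u_i^n - (sigma/2)(u_{i+1}-u_{i-1}) + (sigma^2/2)(u_{i+1}-2u_i+u_{i-1})
    return u - 0.5 * sigma * (up - um) + 0.5 * sigma**2 * (up - 2.0 * u + um)
```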
## MacCormack
$$
\begin{align}
\textrm{Method :}\quad & u^{\overline{n+1}}_i = u^n_i - c \Delta t \frac{\mathcal{D}_{+} \cdot}{\Delta x} u_i^n \\
& u^{n+1}_i = \frac{1}{2} \left[u^n_i + u^{\overline{n+1}}_i - c \Delta t \frac{\mathcal{D}_{-} \cdot}{\Delta x} u^{\overline{n+1}}_i \right] \\
\textrm{Stability Criteria :}\quad & \Delta t \le \frac{\Delta x}{|c|} \\
\textrm{Order of Accuracy :}\quad & \mathcal{O}\left(\Delta t^2, \Delta x^2\right)
\end{align}
$$
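Similarly, a sketch of the MacCormack predictor-corrector step under the same assumed periodic boundary treatment:
```python
import numpy as np

def maccormack_step(u, c, dt, dx):
    """One MacCormack predictor-corrector update of u on a periodic mesh."""
    sigma = c * dt / dx
    # Predictor: forward difference in space
    u_bar = u - sigma * (np.roll(u, -1) - u)
    # Corrector: backward difference applied to the predicted values
    return 0.5 * (u + u_bar - sigma * (u_bar - np.roll(u_bar, 1)))
```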
## Jameson
$$
\begin{align}
\textrm{Method :}\quad & u^{(0)} = u^n_i \\
& u^{(k)} = u^n_i - \alpha_k c \Delta t \frac{\mathcal{D}_{0} \cdot}{\Delta x} u_i^{(k-1)} \\
& \qquad \textrm{where} \,\, \alpha_k = \frac{1}{5 - k}, \,\, k = 1, 2, 3, 4 \\
& u^{n+1}_i = u^{(4)}_i \\
\textrm{Stability Criteria :}\quad & \Delta t \le \frac{2 \sqrt{2} \Delta x}{|c|} \\
\textrm{Order of Accuracy :}\quad & \mathcal{O}\left(\Delta t^4, \Delta x^2\right)
\end{align}
$$
## Warming-Beam
$$
\begin{align}
\textrm{Method :}\quad & u^{n + 1/2}_i = u^n_i - \frac{ c \Delta t }{2} \frac{\mathcal{D}_{-} \cdot}{\Delta x} u_i^n \\
& u^{n+1}_i = u^n_i - c \Delta t \frac{\mathcal{D}_{-} \cdot}{\Delta x}
\left[ u^{n+1/2}_i + \frac{\Delta x}{2} \frac{\mathcal{D}_{-} \cdot}{\Delta x} u^n_i \right] \\
\textrm{Stability Criteria :}\quad & \Delta t \le \frac{2 \Delta x}{|c|} \\
\textrm{Order of Accuracy :}\quad & \mathcal{O}\left(\Delta t^2, \Delta x^2\right)
\end{align}
$$
## More difference equations
Let us look more closely at the Lax-Wendroff algorithm, in which we see a term like the following
$$
\frac{\mathcal{D}_{+} \cdot}{\Delta x} \frac{\mathcal{D}_{-} \cdot}{\Delta x} u_i^n
$$
What does this mean? How do we apply two difference approximations? *These operators can be combined linearly,* so we just need to apply the first difference approximation, and then apply the second difference approximation to each of the remaining terms. Here is what that looks like for the above expression.
$$
\frac{\mathcal{D}_{+} \cdot}{\Delta x} \left[ \frac{\mathcal{D}_{-} \cdot}{\Delta x} u_i^n \right] =
\frac{\mathcal{D}_{+} \cdot}{\Delta x} \left[ \frac{u^n_i - u^n_{i-1}}{\Delta x} \right] =
\frac{1}{\Delta x} \left[ \frac{\mathcal{D}_{+} \cdot}{\Delta x}\Big( u^n_i \Big) - \frac{\mathcal{D}_{+} \cdot}{\Delta x} \Big( u^n_{i-1} \Big) \right]
$$
Now applying the second difference operator, results in
$$
\frac{\mathcal{D}_{+} \cdot}{\Delta x} \frac{\mathcal{D}_{-} \cdot}{\Delta x} u_i^n =
\frac{1}{\Delta x} \left[ \frac{\mathcal{D}_{+} \cdot}{\Delta x}\Big( u^n_i \Big) - \frac{\mathcal{D}_{+} \cdot}{\Delta x} \Big( u^n_{i-1} \Big) \right] =
\frac{1}{\Delta x} \left[ \frac{u^n_{i+1} - u^n_i}{\Delta x} - \frac{u^n_{i} - u^n_{i-1}}{\Delta x} \right] =
\frac{u^n_{i+1} - 2 u^n_i + u^n_{i-1} }{\Delta x^2}
$$
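A small sketch confirming numerically that this composite operator reproduces the standard three-point second-difference formula (the test function and spacing are arbitrary choices):
```python
import numpy as np

dx = 0.1
x = np.arange(0.0, 1.0 + dx, dx)
u = np.sin(2.0 * np.pi * x)

# Backward difference D-(u), attached to nodes i = 1..N-1
dm = (u[1:] - u[:-1]) / dx
# Forward difference D+ of that quantity, attached to nodes i = 1..N-2
composite = (dm[1:] - dm[:-1]) / dx

# Standard three-point formula at the same interior nodes
three_point = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
print(np.allclose(composite, three_point))  # True
```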
| 0d3b42f5c5a58015e1efdbc00bf907fca0530b29 | 17,656 | ipynb | Jupyter Notebook | Notebooks/LinearConvection/5-LinearConvection-ExplicitMethods.ipynb | jcschulz/ae269 | 5c467a6e70808bb00e27ffdb8bb0495e0c820ca0 | [
"MIT"
]
| null | null | null | Notebooks/LinearConvection/5-LinearConvection-ExplicitMethods.ipynb | jcschulz/ae269 | 5c467a6e70808bb00e27ffdb8bb0495e0c820ca0 | [
"MIT"
]
| null | null | null | Notebooks/LinearConvection/5-LinearConvection-ExplicitMethods.ipynb | jcschulz/ae269 | 5c467a6e70808bb00e27ffdb8bb0495e0c820ca0 | [
"MIT"
]
| null | null | null | 44.250627 | 743 | 0.555392 | true | 4,058 | Qwen/Qwen-72B | 1. YES
2. YES | 0.930458 | 0.912436 | 0.848984 | __label__eng_Latn | 0.965907 | 0.810807 |
# Model of the potassium A-type (transient) current
*equations taken from Sterratt et al. book*
\begin{equation}
C_m\frac{dV}{dt} = -\bar{g}_{Na}m^3h(V-E_{Na}) - \bar{g}_{K}n^4(V-E_K) - g_{lk}(V-E_{lk}) - I_A
\end{equation}
\begin{equation}
I_A = \bar{g}_Aa^3b(V-E_A);
\end{equation}
\begin{equation}
a_\infty = \left(\frac{0.0761 \exp (\frac{V+99.22}{31.84}) }{1 + \exp(\frac{V+6.17}{28.93})} \right)^{1/3}\quad
\tau_a = 0.3632 + \frac{1.158}{1 + \exp(\frac{V+60.96}{20.12})}\;
\end{equation}
\begin{equation}
b_\infty = \frac{1}{\left(1 + \exp(\frac{V+58.3}{14.54})\right)^4}\;
\tau_b = 1.24 + \frac{2.678}{1 + \exp(\frac{V-55}{16.027})}
\end{equation}
```python
from __future__ import division
```
```python
from PyDSTool import *
import PyDSTool as dst
```
```python
%pylab inline
style.use('ggplot')
```
Populating the interactive namespace from numpy and matplotlib
WARNING: pylab import has clobbered these variables: ['spy', 'gamma', 'six', 'copy', 'who', 'find', 'diff', 'info']
`%matplotlib` prevents importing * from pylab and numpy
```python
import bokeh
import bokeh.plotting as bkp
from ipywidgets import interact
bkp.output_notebook()
```
<div>
<a href="http://bokeh.pydata.org" target="_blank" class="bk-logo bk-logo-small bk-logo-notebook"></a>
<span>BokehJS successfully loaded.</span>
</div>
```python
```
## Setting the model up
```python
Acurr_params = dict(
Iap = 0,
Ena = 50,
Ek = -77,
Ea = -80,
Elk = -22,
gnabar = 120.,# mS/cm^2
gkbar = 20.0, # mS/cm^2
gabar = 47.7, # mS/cm^2
glk = 0.3
)
Vtest = linspace(-100,50, 1000)
```
```python
def ainf(Vm,vshift=5):
return (0.0761*exp((Vm+vshift+94.22)/31.84)/(1+exp((Vm+vshift+1.17)/28.93)))**(1/3)
def atau(Vm, vshift=5):
return 0.3632 + 1.158/(1 + exp((Vm+vshift + 55.96)/20.12))
```
```python
def binf(Vm, vshift=5):
return 1/(1 + exp((Vm+vshift + 53.3)/14.54))**4
def btau(Vm, vshift=5):
return 1.24 + 2.678/(1 + exp((Vm-vshift+50)/16.027))
```
```python
figure(figsize=(10,5))
subplot(121)
plot(Vtest, clip(ainf(Vtest),0,1), color="orange", label='ainf')
plot(Vtest, binf(Vtest), color="steelblue", label='binf')
legend(loc=0)
subplot(122)
plot(Vtest, atau(Vtest), color="orange", label='atau')
plot(Vtest, btau(Vtest), color="steelblue", label='btau')
legend()
```
```python
q10 = 3.13
Q = q10**((18-6.3)/10) # = 3.8
def vtrap(x,y):
return where(np.abs(x/y) < 1e-6, y*(1-0.5*x/y), x/(exp(x/y)-1))
def nalpha(Vm, nshift=0.7):
return Q*0.5*0.01 * vtrap(-(Vm + 50 + nshift), 10)
def nalpha2(Vm, nshift=0.7):
return Q*0.5*(-0.01) * (Vm+50.7)/(exp(-0.1*(Vm+50.7))-1)
def nbeta (Vm, nshift=0.7):
return Q*0.5*0.125 * exp(-(Vm + 60 + nshift)/80)
```
```python
plot(Vtest, nalpha(Vtest), Vtest, nbeta(Vtest))#, Vtest, nalpha2(Vtest))
```
```python
mshift = -0.3
hshift = -7
def malpha(Vm):
return Q* 0.1 * vtrap(-(Vm + 35 + mshift), 10)
def mbeta(Vm):
return Q* 4 * exp(-(Vm + 60 + mshift)/18)
def halpha(Vm):
return Q* 0.07*exp(-(Vm + 60 + hshift)/20)
def hbeta(Vm):
return Q* 1/(exp(-(Vm + 30 + hshift)/10) + 1)
def minf (V):
return 1 / (1 + mbeta(V)/malpha(V))
def hinf(V):
return 1/(1 + hbeta(V)/halpha(V))
def ninf(V):
return 1/(1 + nbeta(V)/nalpha(V))
subplot(121)
plot(Vtest, malpha(Vtest), Vtest,mbeta(Vtest))
title('Activation')
subplot(122)
plot(Vtest, halpha(Vtest), Vtest,hbeta(Vtest))
title('Inactivation')
```
```python
plot(Vtest, minf(Vtest), Vtest,hinf(Vtest))
```
```python
DSargs = dst.args(name="Acurrent",
pars = Acurr_params,
vars = ['V', 'm', 'h', 'n', 'a', 'b'],
tdomain=[0.0,1000.0])
DSargs.ics = dict(V = -75,
m=0.1,
h=0.1,
n = 0.1,
a = 0.1,
b = 0.1) # Initial conditions
DSargs.fnspecs = dict(ainf = (['V'], 'pow(0.0761*exp((V+99.22)/31.84)/(1 + exp((V+6.17)/28.93)),1./3)'),
atau = (['V'], '0.3632 + 1.158/(1 + exp((V+60.96)/20.12))'),
#
binf = (['V'], '1/(1 + exp((V+58.3)/14.54))**4'),
btau = (['V'], '1.24 + 2.678/(1 + exp((V-55)/16.027))'),
#
malpha = (['V'], '3.8 * (-0.1*(V+34.7))/(exp(-(V + 34.7)/10)-1)'),
mbeta = (['V'], '3.8 * 4 * exp(-(V+59.7)/18)'),
#
halpha = (['V'], '3.8*0.07*exp(-(V+53)/20)'),
hbeta = (['V'], '3.8/(1 + exp(-(V+23)/10))'),
#
nalpha = (['V'], '0.5*3.8*(-0.01*(V+50.7))/(exp(-(V + 50.7)/10)-1)'),
nbeta = (['V'], '0.5*3.8*0.125*exp(-(V+60.7)/80)'),
#
rpulse = (['tx','width'], '0.5*(1 + tanh(100*(tx)) * tanh(100*(-tx+width)))'),
)
DSargs.varspecs = dict(
V = 'Iap*rpulse((t-100), 800) -\
(gnabar*m*m*m*h*(V-Ena) + gkbar*n*n*n*n*(V-Ek) + gabar*a*a*a*b*(V-Ea) + glk*(V-Elk))',
m = 'malpha(V)*(1-m) - mbeta(V)*m',
h = 'halpha(V)*(1-h) - hbeta(V)*h',
n = 'nalpha(V)*(1-n) - nbeta(V)*n',
a = '(ainf(V)-a)/atau(V)',
b = '(binf(V)-b)/btau(V)')
```
```python
#ode = dst.Generator.Radau_ODEsystem(DSargs)
#ode = dst.Generator.Dopri_ODEsystem(DSargs)
ode = dst.Generator.Vode_ODEsystem(DSargs)
traj0 = ode.compute('init')
pts0 = traj0.sample(dt=0.1)
ode.set(ics = pts0[-1])
```
```python
```
```python
ode.set(pars = dict(Iap=8.3))
#ode.set(algparams=dict(max_pts=50000))
%time traj1 = ode.compute('test')
pts1 = traj1.sample(dt=0.1)
```
CPU times: user 4.15 s, sys: 16 ms, total: 4.17 s
Wall time: 4.12 s
```python
plot(pts1['t'], pts1['V'])
```
```python
```
| 1c1383724116ba207accc70f1660e14d42913877 | 539,568 | ipynb | Jupyter Notebook | A-type current model.ipynb | abrazhe/nbpc | 8465cf8e8db1583b609ce7d8894a840ca728c90d | [
"CC0-1.0"
]
| null | null | null | A-type current model.ipynb | abrazhe/nbpc | 8465cf8e8db1583b609ce7d8894a840ca728c90d | [
"CC0-1.0"
]
| null | null | null | A-type current model.ipynb | abrazhe/nbpc | 8465cf8e8db1583b609ce7d8894a840ca728c90d | [
"CC0-1.0"
]
| null | null | null | 741.164835 | 413,089 | 0.822586 | true | 2,305 | Qwen/Qwen-72B | 1. YES
2. YES | 0.865224 | 0.785309 | 0.679468 | __label__yue_Hant | 0.143338 | 0.416963 |
```python
from sympy import *
init_printing()
from IPython.display import display
import matplotlib.pyplot as plt
import mpl_toolkits.mplot3d as a3
import matplotlib.animation as animation
%matplotlib notebook
# specify a second time as a workaround to get interactive plots working
%matplotlib notebook
#reload(gaussian_orbitals)
import gaussian_orbitals
import read_qmcpack
#reload(read_qmcpack)
import numpy as np
import scipy.optimize
from ipywidgets import interact
import ipywidgets
```
### Cusp Correction for Gaussian Orbitals
From "Scheme for adding electron-nucleus cusps to Gaussian orbitals" A. Ma, D. Towler, N. D. Drummond, R. J. Needs, Journal of Chemical Physics 122, 224322(2005) https://doi.org/10.1063/1.1940588
```python
phi = Symbol('phi')
phi_t = Symbol('phitilde')
eta = Symbol('eta')
psi = Symbol('psi')
psi_t = Symbol('psitilde')
phi, phi_t, eta, psi, psi_t
```
Each orbital can be divided into two parts - the s-type functions on the current center, and everything else.
* $\phi$ The s-type functions on the current center (original functions, no cusp correction)
* $\eta$ All non-s-type functions on the current center, and all functions from other centers (no need for cusp correction)
* $\psi$ Total uncorrected orbital ($ = \phi + \eta$)
* $\tilde{\phi}$ Cusp-corrected s-type functions on the current center
* $\tilde{\psi}$ Total cusp-corrected orbital ($ = \tilde{\phi} + \eta$)
Inside some cutoff radius ($r_c$) the s-type part of the orbital is replaced with (Eqn 7 in the paper):
```python
C = Symbol('C')
p = Symbol('p')
p_sym = p
r = Symbol('r',real=True,nonnegative=True)
R = Symbol('R')
eq_phi1 = Eq(phi_t, C + sign(phi_t(0))*exp(p(r)))
eq_R = Eq(R(r), sign(phi_t(0))*exp(p(r)))
eq_phi2 = Eq(phi_t, C + R(r))
display(eq_phi1)
display(eq_R)
display(eq_phi2)
```
```python
alpha = IndexedBase('alpha',shape=(5,))
```
Where $p$ is a polynomial with the $\alpha$'s as coefficients
```python
p = alpha[0] + alpha[1]*r + alpha[2]*r**2 + alpha[3]*r**3 + alpha[4]*r**4
Eq(p_sym, p)
```
```python
rc = Symbol('r_c')
```
```python
R_def = exp(p)
R_def
```
### Solve for polynomial coefficients
Now to express the $\alpha$'s in terms of various constraints on the wavefunction (the values of the wavefunction and its derivatives at the constraint points are the $X$'s)
```python
X1,X2,X3,X4,X5 = symbols('X_1 X_2 X_3 X_4 X_5')
# Constraints
# Value of phi tilde matches orbital at r_c
eq1 = Eq(p.subs(r,rc), X1)
eq1
```
```python
# derivative of phi tilde matches orbital at r_c
eq2 = Eq(diff(p,r).subs(r,rc), X2)
eq2
```
```python
# 2nd derivative of phi tilde matches orbital at r_c
eq3 = Eq((diff(p,r,2)+diff(p,r)**2).subs(r,rc),X3)
eq3
```
```python
# Cusp condition - derivative at zero
eq4 = Eq(diff(p,r).subs(r,0),X4)
eq4
```
```python
# Value of phi tilde at 0
eq5 = Eq(p.subs(r,0),X5)
eq5
```
Solve for the polynomial coefficients ($\alpha$'s) in terms of the wavefunction and derivative values ($X$'s). These should match Eqn 14 in the paper.
```python
sln = solve([eq1, eq2, eq3, eq4, eq5],[alpha[0], alpha[1], alpha[2], alpha[3], alpha[4]])[0]
sln
```
```python
Eq(alpha[2],simplify(sln[2]))
```
```python
Eq(alpha[3],expand(sln[3]))
```
```python
Eq(alpha[4],expand(sln[4]))
```
```python
# Expand in terms of X's
p_X = p.subs({alpha[i]:sln[i] for i in range(5)})
display(p_X)
c_p_X = expand(p_X)
for sym in [X5, X4, X3, X2, X1]:
c_p_X = collect(c_p_X, sym)
display(c_p_X)
```
### Effective local energy
Fit this to an 'ideal local energy' to get the final parameter
```python
def del_spherical(e, r):
"""Compute Laplacian for expression e with respect to symbol r.
Currently works only with radial dependence"""
t1 = r*r*diff(e, r)
t2 = diff(t1, r)/(r*r)
return simplify(t2)
```
```python
# Effective one-electron local energy
p_sym = Symbol('p')
phi_tilde = exp(p_sym(r))
Zeff = Symbol('Z_eff')
El = -S.Half * del_spherical(phi_tilde, r)/phi_tilde - Zeff/r
#print R_def
#print del_spherical(R_def, r)
display(El)
El_sym = El.subs(p_sym(r), p).doit()
El_sym
```
```python
def eval_local_energy(gto, alpha_vals, r_val, Zeff_val):
slist = {alpha[0]:alpha_vals[0], alpha[1]:alpha_vals[1], alpha[2]: alpha_vals[2], alpha[3]:alpha_vals[3],
alpha[4]:alpha_vals[4], Zeff:Zeff_val, r:r_val}
return El_sym.subs(slist).evalf()
```
```python
def get_current_local_energy(gto, xs, rc_val, alpha_vals, Zeff_val):
EL_curr = []
EL_at_rc = eval_local_energy(gto, alpha_vals, rc_val, Zeff_val)
dE = -EL_at_rc
#print 'dE = ',dE
for x in xs:
if x < rc_val:
el = eval_local_energy(gto, alpha_vals, x, Zeff_val)
EL_curr.append(el + dE)
else:
val, grad, lap = [g[0] for g in gto.eval_vgl(x, 0.0, 0.0)]
real_el = -.5*lap / val - Zeff_val/x
EL_curr.append(real_el + dE)
return EL_curr
```
### Evaluate for He orbital
```python
basis_set, he_MO = read_qmcpack.parse_qmc_wf('he_sto3g.wfj.xml',['He'])
he_gto = gaussian_orbitals.GTO(basis_set['He'])
rc_val = 0.1
he_Z_val = 2.0
```
MO coeff size = 1
```python
xvals = np.linspace(start=-2.0, stop=2.0, num=40)
yvals = np.array([he_gto.eval_v(x, 0.0, 0.0)[0] for x in xvals])
he_gto.eval_v(1.1, 0.0, 0.0)
```
```python
plt.plot(xvals, yvals)
```
```python
def compute_EL(X_vals, rc_val, Zeff_val):
    # Substitute the constraint values (X's) to obtain the alpha coefficients,
    # then substitute those into the symbolic local energy El_sym.
    xslist = {X1: X_vals[0], X2: X_vals[1], X3: X_vals[2], X4: X_vals[3],
              X5: X_vals[4], rc: rc_val}
    alpha_vals = [s.subs(xslist) for s in sln]
    aslist = {alpha[0]: alpha_vals[0], alpha[1]: alpha_vals[1], alpha[2]: alpha_vals[2],
              alpha[3]: alpha_vals[3], alpha[4]: alpha_vals[4], Zeff: Zeff_val}
    Elof_r = El_sym.subs(aslist)
    return Elof_r
```
```python
xs = np.linspace(start=0.012, stop=1.2*rc_val, num=10)
xs
```
array([0.012, 0.024, 0.036, 0.048, 0.06 , 0.072, 0.084, 0.096, 0.108,
0.12 ])
Coefficients from the paper to fit an 'ideal' effective one-electron local energy
```python
beta0 = Symbol('beta_0')
beta_vals = [beta0, 3.25819, -15.0126, 33.7308, -42.8705, 31.2276, -12.1316, 1.94692]
```
```python
El_terms = [beta_vals[n]*r**(n+1) for n in range(1,8)]
EL_ideal_sym = beta0 + sum(El_terms)
EL_ideal_sym
```
```python
# Compute ideal local energy at a point
def compute_ideal_EL(r_val, Z_val, beta0_val=0.0):
Z = Symbol('Z')
slist = {beta0: beta0_val, Z:Z_val, r:r_val}
return (Z*Z*EL_ideal_sym).subs(slist).evalf()
```
```python
# Choose beta_0
El_orig_at_rc = compute_ideal_EL(rc_val, he_Z_val)
Z_val = he_Z_val
print 'EL orig at r_c',El_orig_at_rc
beta0_val = -(El_orig_at_rc)/Z_val/Z_val
beta0_val
```
```python
EL_ideal = [compute_ideal_EL(rval,he_Z_val, beta0_val) for rval in xs]
EL_ideal
```
```python
# Evaluate values of X's
def evalX(phi_func, rc_val, C_val, Z_val, phi_at_zero, eta_at_zero=0.0):
X = [0.0]*5
phi_at_rc, grad_at_rc, lapl_at_rc = phi_func(rc_val)
X[0] = log(abs(phi_at_rc - C_val))
X[1] = grad_at_rc[0] / (phi_at_rc - C_val)
X[2] = (lapl_at_rc - 2.0*grad_at_rc[0]/rc_val)/(phi_at_rc - C_val)
X[3] = -Z_val * (phi_at_zero + eta_at_zero) / (phi_at_zero - C_val)
X[4] = log(abs(phi_at_zero - C_val))
return X
```
```python
def create_phi_func(gto):
def phi_func(r_val):
val,grad,lap = gto.eval_vgl(r_val, 0.0, 0.0)
return val[0], grad[0], lap[0]
return phi_func
```
```python
Xvals = [0.0]*5
C_val = 0.0
he_Z_val = 2.0
he_phi = create_phi_func(he_gto)
evalX(he_phi, rc_val, C_val, he_Z_val, he_phi(0.0)[0])
```
```python
def solve_for_alpha(Xvals):
xslist = {X1:Xvals[0], X2:Xvals[1], X3:Xvals[2], X4:Xvals[3], X5:Xvals[4], rc:rc_val}
alpha_vals = [s.subs(xslist) for s in sln]
return alpha_vals
```
```python
he_alpha_vals = solve_for_alpha(Xvals)
```
```python
print rc_val
```
0.1
```python
EL_curr = get_current_local_energy(he_gto, xs, rc_val, he_alpha_vals, he_Z_val)
EL_curr
```
```python
plt.plot(xs, EL_ideal, xs, EL_curr)
```
```python
def compute_chi2(EL_ideal, EL_curr):
return sum([(e1-e2)**2 for e1,e2 in zip(EL_ideal, EL_curr)])
```
```python
compute_chi2(EL_ideal, EL_curr)
```
```python
def compute_one_cycle(phi_func, gto, rc_val, Z_val, phi_at_zero, eta_at_zero=0.0):
C_val = 0.0
X = evalX(phi_func, rc_val, C_val, Z_val, phi_at_zero, eta_at_zero)
alpha_vals = solve_for_alpha(X)
EL_curr = get_current_local_energy(he_gto, xs, rc_val, alpha_vals, Z_val)
chi2 = compute_chi2(EL_ideal, EL_curr)
return chi2, alpha_vals, EL_curr
```
```python
phi_at_zero = he_phi(0.0)[0]
EL_curr = []
for ioffset in range(10):
chi2, alpha_vals, EL_curr = compute_one_cycle(he_phi, he_gto, rc_val, he_Z_val, phi_at_zero+.01*ioffset)
print chi2
```
25854.2846426019
21111.8921728781
16918.8592218111
13256.3864130293
10106.4199357371
7451.61627411234
5275.30889319427
3561.47675650037
2294.71455959869
1460.20457213250
```python
# See the local energy and ideal local energy change as phi(0) changes
fig, ax = plt.subplots(1,2)
plt.subplots_adjust(wspace = 0.5)
chi2, alpha_vals, EL_curr = compute_one_cycle(he_phi, he_gto, rc_val, he_Z_val, phi_at_zero)
chi2 = float(chi2)
ax[0].plot(xs, EL_ideal,label="Ideal local energy")
ax[0].set_ylabel("Energy")
ax[0].set_xlabel("r")
line, = ax[0].plot(xs, EL_curr, label="Local energy")
ax[0].legend()
chis = [chi2]
offsets = [0.0]
ax[1].set_xlim(-0.01, 20*0.01)
ax[1].set_ylim(0.0, chi2)
ax[1].set_ylabel("$\chi^2$")
ax[1].set_xlabel("$\phi(0)$")
line_chi, = ax[1].plot(offsets, chis, 'bo')
def animate_chi2(ioffset):
offset = ioffset*0.01
chi2, alpha_vals, EL_curr = compute_one_cycle(he_phi, he_gto, rc_val, he_Z_val, phi_at_zero + offset)
print chi2, offset
line.set_ydata(EL_curr)
offsets.append(offset)
chis.append(chi2)
line_chi.set_xdata(offsets)
line_chi.set_ydata(chis)
#line_chi.plot(offsets, chis)
return line,
# Uncomment the following to see the animation
#ani = animation.FuncAnimation(fig, animate_chi2, np.arange(1,20), interval=100, blit=True, repeat=False)
#plt.show()
```
<IPython.core.display.Javascript object>
```python
# Interactive plot with r_c and phi(0) adjustable
phi_slider = ipywidgets.FloatSlider(value=phi_at_zero, min=phi_at_zero/2.0, max=phi_at_zero*2.0)
rc_slider = ipywidgets.FloatSlider(value=rc_val,min=rc_val/1.5,max=rc_val*1.5)
print rc_val
#plt.plot(xs, EL_curr, xs, EL_ideal)
fig2 = plt.figure()
ax2 = fig2.add_subplot(1,1,1)
ax2.set_xlabel("r")
ax2.set_ylabel("Local energy")
line2, = ax2.plot(xs,EL_ideal)
line3, = ax2.plot(xs, EL_curr)
def update(phi0=1.0, rc_new=0.1):
chi2, alpha_vals, EL_curr = compute_one_cycle(he_phi, he_gto, rc_new, he_Z_val, phi0)
line3.set_ydata(EL_curr)
fig2.canvas.draw()
# Uncomment to activate the interactive version
#interact(update, phi0 = phi_slider, rc_new=rc_slider)
```
```python
def chi2_opt(x):
phi_at_zero = x[0]
rc_val = x[1]
chi2, alpha_vals, EL_curr = compute_one_cycle(he_phi, he_gto, rc_val, he_Z_val, phi_at_zero)
return float(chi2)
phi_at_zero = float(he_phi(0.0)[0])
print 'starting phi(0) = ',phi_at_zero
# Optimize phi_0 and rc simultaneously
# This optimization to find the minimum chi2 can take a while.
scipy.optimize.minimize(chi2_opt,[phi_at_zero, rc_val])
```
starting phi(0) = 0.999603733514
fun: 7.740763089368863
hess_inv: array([[ 4.37122586e-05, -3.53898845e-05],
[-3.53898845e-05, 2.86520112e-05]])
jac: array([-15512.01509583, -19156.76275462])
message: 'Desired error not necessarily achieved due to precision loss.'
nfev: 1128
nit: 127
njev: 279
status: 2
success: False
x: array([0.66675309, 0.48046938])
### Dividing the wavefunction
Into $\phi$ and $\eta$ pieces. This is done in QMCPACK by writing zeros to the coefficient matrix.
```python
# For Neon with DEF2-SVP
ne_basis_set, ne_MO_matrix = read_qmcpack.parse_qmc_wf('ne_def2_svp.wfnoj.xml',['Ne'])
#for cg in ne_basis_set:
# print cg
print ne_MO_matrix.shape
#ne_MO_matrix
ne_basis_set['Ne']
```
```python
c_phi = ne_MO_matrix.copy()
c_eta = ne_MO_matrix.copy()
basis_by_index = gaussian_orbitals.get_ijk_inverse_index(ne_basis_set['Ne'])
# Loop over MO
for mo_idx in range(ne_MO_matrix.shape[0]):
for ao_idx in range(ne_MO_matrix.shape[1]):
# Loop over centers (for Ne atom, there is only one)
# If s-type
basis_set, angular_info = basis_by_index[mo_idx]
if basis_set.orbtype == 0:
# s-type, part of phi but not eta
c_eta[mo_idx, ao_idx] = 0.0
else:
# not s-type, part of eta but not phi
c_phi[mo_idx, ao_idx] = 0.0
```
```python
c_phi
```
array([[ 0.990133, -0.031233, 0.009072, 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. ],
[ 0.322026, 0.643777, 0.460567, -0. , -0. , -0. ,
-0. , -0. , -0. , -0. , -0. , -0. ,
-0. , -0. , -0. ],
[-0. , -0. , -0. , 0.697249, -0. , -0. ,
-0.454527, -0. , -0. , -0. , -0. , -0. ,
-0. , -0. , -0. ],
[ 0. , 0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. ],
[ 0. , 0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. ],
[ 0. , 0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. ],
[ 0. , 0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. ],
[ 0. , 0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. ],
[ 0. , 0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. ],
[ 0. , 0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. ],
[ 0. , 0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. ],
[ 0. , 0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. ],
[ 0. , 0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. ],
[ 0. , 0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. ]])
```python
ne_phi_mo = gaussian_orbitals.MO(gaussian_orbitals.GTO(ne_basis_set['Ne']), c_phi)
ne_phi_mo.eval_v(0.0, 0.0, 0.0)
```
array([-16.11819129, -4.48709323, 0. , 0. ,
0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. ,
0. , 0. ])
```python
ne_eta_mo = gaussian_orbitals.MO(gaussian_orbitals.GTO(ne_basis_set['Ne']), c_eta)
ne_eta_mo.eval_v(0.0, 0.0, 0.0)
```
array([ 0. , 0. , 0. , 0. , 0. ,
-4.47043866, 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. ])
```python
mo_idx = 0
gto1 = gaussian_orbitals.GTO(ne_basis_set['Ne'][mo_idx:mo_idx+1])
ne_phi_mo1 = gaussian_orbitals.MO(gaussian_orbitals.GTO(ne_basis_set['Ne']), c_phi[mo_idx:mo_idx+1,:])
print ne_phi_mo1.eval_v(0.0, 0.0, 0.0)
ne_eta_mo1 = gaussian_orbitals.MO(gaussian_orbitals.GTO(ne_basis_set['Ne']), c_eta[mo_idx:mo_idx+1,:])
print ne_eta_mo1.eval_v(0.0, 0.0, 0.0)
```
[-16.11819129]
[0.]
```python
Xvals = [0.0]*5
C_val = 0.0
ne_Z_val = 10.0
ne_phi = create_phi_func(ne_phi_mo1)
ne_eta_at_zero = ne_eta_mo1.eval_v(0.0,0.0,0.0)[0]
evalX(ne_phi, rc_val, C_val, ne_Z_val, ne_phi(0.0)[0], ne_eta_at_zero)
```
```python
```
```python
```
| c291a93f52283399dd9b07d49dea89203d9cecd3 | 282,716 | ipynb | Jupyter Notebook | Wavefunctions/CuspCorrection.ipynb | QMCPACK/qmc_algorithms | 015fd1973e94f98662149418adc6b06dcd78946d | [
"MIT"
]
| 3 | 2018-02-06T06:15:19.000Z | 2019-11-26T23:54:53.000Z | Wavefunctions/CuspCorrection.ipynb | chrinide/qmc_algorithms | 015fd1973e94f98662149418adc6b06dcd78946d | [
"MIT"
]
| null | null | null | Wavefunctions/CuspCorrection.ipynb | chrinide/qmc_algorithms | 015fd1973e94f98662149418adc6b06dcd78946d | [
"MIT"
]
| 4 | 2017-11-14T20:25:00.000Z | 2022-02-28T06:02:01.000Z | 118.539203 | 34,035 | 0.80739 | true | 6,157 | Qwen/Qwen-72B | 1. YES
2. YES | 0.896251 | 0.812867 | 0.728533 | __label__eng_Latn | 0.35066 | 0.530959 |
# Perron-Frobenius matrix completion
The DGP atom library has several functions of positive matrices, including the trace, (matrix) product, sum, Perron-Frobenius eigenvalue, and $(I - X)^{-1}$ (eye-minus-inverse). In this notebook, we use some of these atoms to formulate and solve an interesting matrix completion problem.
In this problem, we are given some entries of an elementwise positive matrix $A$, and the goal is to choose the missing entries so as to minimize the Perron-Frobenius eigenvalue or spectral
radius. Letting $\Omega$ denote the set of indices $(i, j)$ for which $A_{ij}$ is known, the optimization problem is
$$
\begin{equation}
\begin{array}{ll}
\mbox{minimize} & \lambda_{\text{pf}}(X) \\
\mbox{subject to} & \prod_{(i, j) \not\in \Omega} X_{ij} = 1 \\
& X_{ij} = A_{ij}, \, (i, j) \in \Omega,
\end{array}
\end{equation}
$$
which is a log-log convex program. Below is an implementation of this problem, with specific problem data
$$
A = \begin{bmatrix}
1.0 & ? & 1.9 \\
? & 0.8 & ? \\
3.2 & 5.9& ?
\end{bmatrix},
$$
where the question marks denote the missing entries.
```python
import cvxpy as cp
n = 3
known_value_indices = tuple(zip(*[[0, 0], [0, 2], [1, 1], [2, 0], [2, 1]]))
known_values = [1.0, 1.9, 0.8, 3.2, 5.9]
X = cp.Variable((n, n), pos=True)
objective_fn = cp.pf_eigenvalue(X)
constraints = [
X[known_value_indices] == known_values,
X[0, 1] * X[1, 0] * X[1, 2] * X[2, 2] == 1.0,
]
problem = cp.Problem(cp.Minimize(objective_fn), constraints)
problem.solve(gp=True)
print("Optimal value: ", problem.value)
print("X:\n", X.value)
```
Optimal value: 4.702374203221372
X:
[[1. 4.63616907 1.9 ]
[0.49991744 0.8 0.37774148]
[3.2 5.9 1.14221476]]
| ca2f028039c54fac897fb1173ab73872575b2b2e | 2,907 | ipynb | Jupyter Notebook | examples/notebooks/dgp/pf_matrix_completion.ipynb | jasondark/cvxpy | 56aaa01b0e9d98ae5a91a923708129a7b37a6f18 | [
"ECL-2.0",
"Apache-2.0"
]
| 3,285 | 2015-01-03T04:02:29.000Z | 2021-04-19T14:51:29.000Z | examples/notebooks/dgp/pf_matrix_completion.ipynb | h-vetinari/cvxpy | 86307f271819bb78fcdf64a9c3a424773e8269fa | [
"ECL-2.0",
"Apache-2.0"
]
| 1,138 | 2015-01-01T19:40:14.000Z | 2021-04-18T23:37:31.000Z | examples/notebooks/dgp/pf_matrix_completion.ipynb | h-vetinari/cvxpy | 86307f271819bb78fcdf64a9c3a424773e8269fa | [
"ECL-2.0",
"Apache-2.0"
]
| 765 | 2015-01-02T19:29:39.000Z | 2021-04-20T00:50:43.000Z | 30.6 | 296 | 0.528724 | true | 603 | Qwen/Qwen-72B | 1. YES
2. YES | 0.946597 | 0.857768 | 0.81196 | __label__eng_Latn | 0.940356 | 0.724789 |
We have a vector field
$$
\vec{F} = y \mathbf{i} + x \mathbf{j} + z \mathbf{k}
$$
Is the vector field $\vec{F}$ conservative?
We consider the line integral
$$
\int_{C_i} \vec{F} \cdot d\vec{r}
$$
between the points $A=(0, 0, 0)$ and $B=(1,1,2)$, along two different paths given by
$$
C_1 : \begin{cases}
x(t) = t \\
y(t) = t \\
z(t) = 2t^2
\end{cases},
\quad \quad
C_2 : \begin{cases}
x(t) = t \\
y(t) = t^2 \\
z(t) = 2t
\end{cases},
\quad \quad t\in [0, 1].
$$
This corresponds to the position vectors
$$
\vec{r_1} = t \mathbf{i} + t \mathbf{j} + 2t^2 \mathbf{k}, \quad \quad \vec{r_2} = t \mathbf{i} + t^2 \mathbf{j} + 2t \mathbf{k}, \quad \quad t\in [0, 1].
$$
If $\int_{C_i} \vec{F} \cdot d\vec{r}$ is independent of the path, then the vector field $\vec{F}$ is conservative.
```python
import numpy as np
import sympy as sp
from sympy.vector import CoordSys3D
t = sp.Symbol('t', real=True)
N = CoordSys3D('N')
```
```python
r1 = t*N.i + t*N.j + 2*t**2*N.k
r2 = t*N.i + t**2*N.j + 2*t*N.k
r1
```
$\displaystyle (t)\mathbf{\hat{i}_{N}} + (t)\mathbf{\hat{j}_{N}} + (2 t^{2})\mathbf{\hat{k}_{N}}$
```python
dr1dt = r1.diff(t, 1)
dr2dt = r2.diff(t, 1)
F = lambda r: r.dot(N.j)*N.i + r.dot(N.i)*N.j + r.dot(N.k)*N.k
```
```python
sp.Integral(F(r2).dot(dr2dt), (t, 1, 0)).doit()
```
$\displaystyle -3$
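As a quick check (a sketch added here, not part of the original computation), the field is conservative exactly when its curl vanishes. Writing $\vec{F}$ directly in terms of the base scalars of `N` (under the new name `F_field` to avoid clobbering the `F` lambda above) and using `sympy.vector.curl`:
```python
from sympy.vector import curl

# F = y i + x j + z k written with the base scalars of N
F_field = N.y*N.i + N.x*N.j + N.z*N.k
print(curl(F_field))  # the zero vector, so F is conservative
# A potential is phi = x*y + z**2/2, giving
# phi(1, 1, 2) - phi(0, 0, 0) = 1 + 2 = 3 for the integral from A to B,
# consistent with the value -3 obtained above when integrating from t = 1 to t = 0.
```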
```python
%matplotlib notebook
from mpl_toolkits import mplot3d
import matplotlib.pyplot as plt
ax = plt.axes(projection='3d')
ti = np.linspace(0, 1, 100)
def plot(r, ti):
ax.plot3D(sp.lambdify(t, r.dot(N.i))(ti),
sp.lambdify(t, r.dot(N.j))(ti),
sp.lambdify(t, r.dot(N.k))(ti))
plot(r1, ti)
plot(r2, ti)
```
<IPython.core.display.Javascript object>
```python
```
| 48a399a1ad475c65f5805cd1909bc427215948f5 | 167,572 | ipynb | Jupyter Notebook | notebooks/Konservativt vektorfelt.ipynb | mikaem/MEK1100-22 | cddd990347d14983ffc61305182a1810f8af9367 | [
"BSD-2-Clause"
]
| 2 | 2022-01-19T23:27:44.000Z | 2022-02-07T12:59:47.000Z | notebooks/Konservativt vektorfelt.ipynb | mikaem/MEK1100-22 | cddd990347d14983ffc61305182a1810f8af9367 | [
"BSD-2-Clause"
]
| null | null | null | notebooks/Konservativt vektorfelt.ipynb | mikaem/MEK1100-22 | cddd990347d14983ffc61305182a1810f8af9367 | [
"BSD-2-Clause"
]
| null | null | null | 172.221994 | 128,495 | 0.853406 | true | 744 | Qwen/Qwen-72B | 1. YES
2. YES | 0.91848 | 0.872347 | 0.801234 | __label__nob_Latn | 0.185847 | 0.699867 |
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content-dl/blob/main/tutorials/W1D2_LinearDeepLearning/student/W1D2_Tutorial1.ipynb" target="_parent"></a>
# Tutorial 1: Gradient Descent and AutoGrad
**Week 1, Day 2: Linear Deep Learning**
**By Neuromatch Academy**
__Content creators:__ Saeed Salehi, Vladimir Haltakov, Andrew Saxe
__Content reviewers:__ Polina Turishcheva, Antoine De Comite, Kelson Shilling-Scrivo
__Content editors:__ Anoop Kulkarni, Spiros Chavlis
__Production editors:__ Khalid Almubarak, Spiros Chavlis
**Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs**
---
#Tutorial Objectives
Day 2 Tutorial 1 will continue building your PyTorch skill set and motivate its core functionality, Autograd. In this notebook, we will cover the key concepts and ideas of:
* Gradient descent
* PyTorch Autograd
* PyTorch nn module
```python
# @title Tutorial slides
# @markdown These are the slides for the videos in this tutorial
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/3qevp/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
```
---
# Setup
This a GPU-Free tutorial!
```python
# @title Install dependencies
!pip install git+https://github.com/NeuromatchAcademy/evaltools --quiet
from evaltools.airtable import AirtableForm
# init airtable form
atform = AirtableForm('appn7VdPRseSoMXEG','W1D2_T1','https://portal.neuromatchacademy.org/api/redirect/to/9c55f6cb-cdf9-4429-ac1c-ec44fe64c303')
```
```python
# Imports
import torch
import numpy as np
from torch import nn
from math import pi
import matplotlib.pyplot as plt
```
```python
# @title Figure settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/content-creation/main/nma.mplstyle")
```
```python
# @title Plotting functions
from mpl_toolkits.axes_grid1 import make_axes_locatable
def ex3_plot(model, x, y, ep, lss):
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
ax1.set_title("Regression")
ax1.plot(x, model(x).detach().numpy(), color='r', label='prediction')
ax1.scatter(x, y, c='c', label='targets')
ax1.set_xlabel('x')
ax1.set_ylabel('y')
ax1.legend()
ax2.set_title("Training loss")
ax2.plot(np.linspace(1, epochs, epochs), losses, color='y')
ax2.set_xlabel("Epoch")
ax2.set_ylabel("MSE")
plt.show()
def ex1_plot(fun_z, fun_dz):
"""Plots the function and gradient vectors
"""
x, y = np.arange(-3, 3.01, 0.02), np.arange(-3, 3.01, 0.02)
xx, yy = np.meshgrid(x, y, sparse=True)
zz = fun_z(xx, yy)
xg, yg = np.arange(-2.5, 2.6, 0.5), np.arange(-2.5, 2.6, 0.5)
xxg, yyg = np.meshgrid(xg, yg, sparse=True)
zxg, zyg = fun_dz(xxg, yyg)
plt.figure(figsize=(8, 7))
plt.title("Gradient vectors point towards steepest ascent")
contplt = plt.contourf(x, y, zz, levels=20)
plt.quiver(xxg, yyg, zxg, zyg, scale=50, color='r', )
plt.xlabel('$x$')
plt.ylabel('$y$')
ax = plt.gca()
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
cbar = plt.colorbar(contplt, cax=cax)
cbar.set_label('$z = h(x, y)$')
plt.show()
```
```python
# @title Set random seed
# @markdown Executing `set_seed(seed=seed)` you are setting the seed
# for DL its critical to set the random seed so that students can have a
# baseline to compare their results to expected results.
# Read more here: https://pytorch.org/docs/stable/notes/randomness.html
# Call `set_seed` function in the exercises to ensure reproducibility.
import random
import torch
def set_seed(seed=None, seed_torch=True):
if seed is None:
seed = np.random.choice(2 ** 32)
random.seed(seed)
np.random.seed(seed)
if seed_torch:
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
print(f'Random seed {seed} has been set.')
# In case that `DataLoader` is used
def seed_worker(worker_id):
worker_seed = torch.initial_seed() % 2**32
np.random.seed(worker_seed)
random.seed(worker_seed)
```
```python
# @title Set device (GPU or CPU). Execute `set_device()`
# especially if torch modules used.
# inform the user if the notebook uses GPU or CPU.
def set_device():
device = "cuda" if torch.cuda.is_available() else "cpu"
if device != "cuda":
print("GPU is not enabled in this notebook. \n"
"If you want to enable it, in the menu under `Runtime` -> \n"
"`Hardware accelerator.` and select `GPU` from the dropdown menu")
else:
print("GPU is enabled in this notebook. \n"
"If you want to disable it, in the menu under `Runtime` -> \n"
"`Hardware accelerator.` and select `None` from the dropdown menu")
return device
```
```python
SEED = 2021
set_seed(seed=SEED)
DEVICE = set_device()
```
Random seed 2021 has been set.
GPU is enabled in this notebook.
If you want to disable it, in the menu under `Runtime` ->
`Hardware accelerator.` and select `None` from the dropdown menu
---
# Section 0: Introduction
Today, we will go through three tutorials, starting with gradient descent, the workhorse of deep learning algorithms, in this tutorial. The second tutorial will help us build a better intuition about neural networks and basic hyperparameters. Finally, in Tutorial 3, we learn about learning dynamics: what a (good) deep network is learning, and why deep networks may sometimes perform poorly.
```python
# @title Video 0: Introduction
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Qf4y1578t", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"i7djAv2jnzY", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event('Video 0:Introduction')
display(out)
```
Tab(children=(Output(), Output()), _titles={'0': 'Youtube', '1': 'Bilibili'})
---
# Section 1: Gradient Descent Algorithm
*Time estimate: ~30-45 mins*
Since the goal of most learning algorithms is **minimizing the risk (also known as the cost or loss) function**, optimization is often the core of most machine learning techniques! The gradient descent algorithm, along with its variations such as stochastic gradient descent, is one of the most powerful and popular optimization methods used for deep learning. Today we will introduce the basics, but you will learn much more about Optimization in the coming days (Week 1 Day 4).
## Section 1.1: Gradients & Steepest Ascent
```python
# @title Video 1: Gradient Descent
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Pq4y1p7em", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"UwgA_SgG0TM", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event('Video 1: Gradient Descent')
display(out)
```
Tab(children=(Output(), Output()), _titles={'0': 'Youtube', '1': 'Bilibili'})
Before introducing the gradient descent algorithm, let's review a very important property of gradients. The gradient of a function always points in the direction of the steepest ascent. The following exercise will help clarify this.
### Analytical Exercise 1.1: Gradient vector (Optional)
Given the following function:
\begin{equation}
z = h(x, y) = \sin(x^2 + y^2)
\end{equation}
find the gradient vector:
\begin{equation}
\begin{bmatrix}
\dfrac{\partial z}{\partial x} \\ \\ \dfrac{\partial z}{\partial y}
\end{bmatrix}
\end{equation}
*hint: use the chain rule!*
**Chain rule**: For a composite function $F(x) = g(h(x)) \equiv (g \circ h)(x)$:
\begin{equation}
F'(x) = g'(h(x)) \cdot h'(x)
\end{equation}
or differently denoted:
\begin{equation}
\frac{dF}{dx} = \frac{dg}{dh} ~ \frac{dh}{dx}
\end{equation}
---
#### Solution:
We can rewrite the function as a composite function:
\begin{equation}
z = f\left( g(x,y) \right), ~~ f(u) = \sin(u), ~~ g(x, y) = x^2 + y^2
\end{equation}
Using chain rule:
\begin{align}
\dfrac{\partial z}{\partial x} &= \dfrac{\partial f}{\partial g} \dfrac{\partial g}{\partial x} = \cos(g(x,y)) ~ (2x) = \cos(x^2 + y^2) \cdot 2x \\ \\
\dfrac{\partial z}{\partial y} &= \dfrac{\partial f}{\partial g} \dfrac{\partial g}{\partial y} = \cos(g(x,y)) ~ (2y) = \cos(x^2 + y^2) \cdot 2y
\end{align}
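If you want to double-check a derivation like this, a computer algebra system can confirm it. Below is a minimal, optional sketch using SymPy (SymPy is not otherwise used in this tutorial, so treat it as an aside):

```python
# Optional symbolic check of the solution above (assumes SymPy is installed).
import sympy as sp

x_s, y_s = sp.symbols('x y')
z_s = sp.sin(x_s**2 + y_s**2)

print(sp.diff(z_s, x_s))  # 2*x*cos(x**2 + y**2)
print(sp.diff(z_s, y_s))  # 2*y*cos(x**2 + y**2)
```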
### Coding Exercise 1.1: Gradient Vector
Implement (complete) the function which returns the gradient vector for $z=\sin(x^2 + y^2)$.
```python
def fun_z(x, y):
"""Function sin(x^2 + y^2)
Args:
x (float, np.ndarray): variable x
y (float, np.ndarray): variable y
Return:
z (float, np.ndarray): sin(x^2 + y^2)
"""
z = np.sin(x**2 + y**2)
return z
def fun_dz(x, y):
"""Function sin(x^2 + y^2)
Args:
x (float, np.ndarray): variable x
y (float, np.ndarray): variable y
Return:
(tuple): gradient vector for sin(x^2 + y^2)
"""
#################################################
## Implement the function which returns gradient vector
## Complete the partial derivatives dz_dx and dz_dy
# Complete the function and remove or comment the line below
#raise NotImplementedError("Gradient function `fun_dz`")
#################################################
dz_dx = 2 * x * np.cos(x**2 + y**2)
dz_dy = 2 * y * np.cos(x**2 + y**2)
return (dz_dx, dz_dy)
#add event to airtable
atform.add_event('Coding Exercise 1.1: Gradient Vector')
## Uncomment to run
ex1_plot(fun_z, fun_dz)
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D2_LinearDeepLearning/solutions/W1D2_Tutorial1_Solution_0c8e3872.py)
*Example output:*
We can see from the plot that for any given $x_0$ and $y_0$, the gradient vector $\left[ \dfrac{\partial z}{\partial x}, \dfrac{\partial z}{\partial y}\right]^{\top}_{(x_0, y_0)}$ points in the direction of $x$ and $y$ for which $z$ increases the most. It is important to note that gradient vectors only see their local values, not the whole landscape! Also, the length (size) of each vector, which indicates the local steepness of the function, can be very small near flat regions such as plateaus, minima, and maxima.
Thus, by repeatedly stepping in the direction opposite to the gradient, we can move toward a local minimum.
In 1847, Augustin-Louis Cauchy used the **negative of gradients** to develop the Gradient Descent algorithm as an **iterative** method to **minimize** a **continuous** and (ideally) **differentiable function** of **many variables**.
```python
# @title Video 2: Gradient Descent - Discussion
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Rf4y157bw", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"8s22ffAfGwI", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event('Video 2: Gradient Descent ')
display(out)
```
Tab(children=(Output(), Output()), _titles={'0': 'Youtube', '1': 'Bilibili'})
## Section 1.2: Gradient Descent Algorithm
Let $f(\mathbf{w}): \mathbb{R}^d \rightarrow \mathbb{R}$ be a differentiable function. Gradient Descent is an iterative algorithm for minimizing the function $f$, starting with an initial value for variables $\mathbf{w}$, taking steps of size $\eta$ (learning rate) in the direction of the negative gradient at the current point to update the variables $\mathbf{w}$.
\begin{equation}
\mathbf{w}^{(t+1)} = \mathbf{w}^{(t)} - \eta \nabla f (\mathbf{w}^{(t)})
\end{equation}
where $\eta > 0$ and $\nabla f (\mathbf{w})= \left( \frac{\partial f(\mathbf{w})}{\partial w_1}, ..., \frac{\partial f(\mathbf{w})}{\partial w_d} \right)$. Since negative gradients always point locally in the direction of steepest descent, the algorithm makes small steps at each point **towards** the minimum.
<br/>
**Vanilla Algorithm**
---
> **inputs**: initial guess $\mathbf{w}^{(0)}$, step size $\eta > 0$, number of steps $T$
> *For* $t = 0, 1, \dots , T-1$ *do* \
$\qquad$ $\mathbf{w}^{(t+1)} = \mathbf{w}^{(t)} - \eta \nabla f (\mathbf{w}^{(t)})$\
*end*
> *return*: $\mathbf{w}^{(t+1)}$
---
<br/>
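As a concrete, framework-free illustration of the boxed algorithm, here is a minimal NumPy sketch; the loss function, starting point, learning rate, and number of steps are all arbitrary choices made for this example:

```python
import numpy as np

def gradient_descent(grad_f, w_init, eta=0.1, n_steps=100):
  """Vanilla gradient descent: repeatedly step against the gradient."""
  w = np.asarray(w_init, dtype=float)
  for _ in range(n_steps):
    w = w - eta * grad_f(w)  # w^(t+1) = w^(t) - eta * grad f(w^(t))
  return w

# Toy example: f(w) = ||w - w*||^2 with minimum at w* = (3, -1)
grad_f = lambda w: 2 * (w - np.array([3.0, -1.0]))
print(gradient_descent(grad_f, w_init=[0.0, 0.0]))  # converges to ~[3., -1.]
```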
Hence, all we need is to calculate the gradient of the loss function with respect to the learnable parameters (i.e. weights):
\begin{equation}
\dfrac{\partial Loss}{\partial \mathbf{w}} = \left[ \dfrac{\partial Loss}{\partial w_1}, \dfrac{\partial Loss}{\partial w_2} , ..., \dfrac{\partial Loss}{\partial w_d} \right]^{\top}
\end{equation}
### Analytical Exercise 1.2: Gradients
Given $f(x, y, z) = \tanh \left( \ln \left[1 + z \frac{2x}{\sin(y)} \right] \right)$, how easy is it to derive $\dfrac{\partial f}{\partial x}$, $\dfrac{\partial f}{\partial y}$ and $\dfrac{\partial f}{\partial z}$? (*hint: you don't have to actually calculate them!*)
## Section 1.3: Computational Graphs and Backprop
```python
# @title Video 3: Computational Graph
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1c64y1B7ZG", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"2z1YX5PonV4", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event('Video 3: Computational Graph ')
display(out)
```
Tab(children=(Output(), Output()), _titles={'0': 'Youtube', '1': 'Bilibili'})
*Exercise 1.2* is an example of how overwhelming the derivation of gradients can get, as the number of variables and nested functions increases. This function is still extraordinarily simple compared to the loss functions of modern neural networks. So how can we (as well as PyTorch and similar frameworks) approach such beasts?
Let’s look at the function again:
\begin{equation}
f(x, y, z) = \tanh \left(\ln \left[1 + z \frac{2x}{\sin(y)} \right] \right)
\end{equation}
We can build a so-called computational graph (shown below) to break the original function into smaller and more approachable expressions.
<center></center>
Starting from $x$, $y$, and $z$ and following the arrows and expressions, you would see that our graph returns the same function as $f$. It does so by calculating intermediate variables $a,b,c,d,$ and $e$. This is called the **forward pass**.
Now, let’s start from $f$, and work our way against the arrows while calculating the gradient of each expression as we go. This is called the **backward pass**, from which the **backpropagation of errors** algorithm gets its name.
<center></center>
By breaking the computation into simple operations on intermediate variables, we can use the chain rule to calculate any gradient:
\begin{equation}
\dfrac{\partial f}{\partial x} = \dfrac{\partial f}{\partial e}~\dfrac{\partial e}{\partial d}~\dfrac{\partial d}{\partial c}~\dfrac{\partial c}{\partial a}~\dfrac{\partial a}{\partial x} = \left( 1-\tanh^2(e) \right) \cdot \frac{1}{d+1}\cdot z \cdot \frac{1}{b} \cdot 2
\end{equation}
Conveniently, the values for $e$, $b$, and $d$ are available to us from when we did the forward pass through the graph. That is, the partial derivatives have simple expressions in terms of the intermediate variables $a,b,c,d,e$ that we calculated and stored during the forward pass.
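To make this concrete, here is a minimal NumPy sketch of the forward and backward passes through this graph. The intermediate names follow the figure, and the particular input values are arbitrary:

```python
import numpy as np

def forward_backward(x, y, z):
  # Forward pass: compute and store every intermediate value
  a = 2 * x
  b = np.sin(y)
  c = a / b
  d = z * c
  e = np.log(1 + d)
  f = np.tanh(e)

  # Backward pass: local derivatives, reusing the stored intermediates
  df_de = 1 - np.tanh(e)**2
  de_dd = 1 / (1 + d)
  dd_dc = z
  dc_da = 1 / b
  da_dx = 2

  df_dx = df_de * de_dd * dd_dc * dc_da * da_dx
  return f, df_dx

f_val, df_dx = forward_backward(x=1.0, y=0.5, z=0.2)
print(f_val, df_dx)
```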
### Analytical Exercise 1.3: Chain Rule (Optional)
For the function above, calculate the $\dfrac{\partial f}{\partial y}$ using the computational graph and chain rule.
---
#### Solution:
\begin{equation}
\dfrac{\partial f}{\partial y} = \dfrac{\partial f}{\partial e}~\dfrac{\partial e}{\partial d}~\dfrac{\partial d}{\partial c}~\dfrac{\partial c}{\partial b}~\dfrac{\partial b}{\partial y} = \left( 1-\tanh^2(e) \right) \cdot \frac{1}{d+1}\cdot z \cdot \frac{-a}{b^2} \cdot \cos(y)
\end{equation}
For more: [Calculus on Computational Graphs: Backpropagation](https://colah.github.io/posts/2015-08-Backprop/)
---
# Section 2: PyTorch AutoGrad
*Time estimate: ~30-45 mins*
```python
# @title Video 4: Auto-Differentiation
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1UP4y1s7gv", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"IBYFCNyBcF8", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event('Video 4: Auto-Differentiation ')
display(out)
```
Tab(children=(Output(), Output()), _titles={'0': 'Youtube', '1': 'Bilibili'})
Deep learning frameworks such as PyTorch, JAX, and TensorFlow come with a very efficient and sophisticated set of algorithms, commonly known as Automatic Differentiation. AutoGrad is PyTorch's automatic differentiation engine. Here we start by covering the essentials of AutoGrad, and you will learn more in the coming days.
## Section 2.1: Forward Propagation
Everything starts with the forward propagation (pass). PyTorch tracks all the instructions as we declare the variables and operations, building the computational graph on the fly; when we call `.backward()`, it traverses this graph to compute the gradients. PyTorch rebuilds the graph on every forward pass (simply put, PyTorch uses a dynamic graph).
For gradient descent, we only need the gradients of the cost function with respect to the variables we wish to learn. These variables are often called "learnable / trainable parameters", or simply "parameters", in PyTorch. In neural nets, the weights and biases are typically the learnable parameters.
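As a minimal illustration (the numeric values are arbitrary), marking a tensor with `requires_grad=True` is enough for every operation involving it to be tracked in the graph:

```python
w = torch.tensor([0.3], requires_grad=True)  # learnable parameter
x = torch.tensor([2.0])                      # plain data tensor (not tracked)
y = torch.tanh(w * x)                        # tracked, because it depends on w
print(y.requires_grad)  # True
```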
### Coding Exercise 2.1: Building a Computational Graph
In PyTorch, to indicate that a certain tensor contains learnable parameters, we can set the optional argument `requires_grad` to `True`. PyTorch will then track every operation using this tensor while configuring the computational graph. For this exercise, use the provided tensors to build the following graph, which implements a single neuron with scalar input and output.
<br/>
<center></center>
```python
#add event to airtable
atform.add_event('Coding Exercise 2.1: Computational Graph ')
class SimpleGraph:
def __init__(self, w, b):
"""Initializing the SimpleGraph
Args:
w (float): initial value for weight
b (float): initial value for bias
"""
assert isinstance(w, float)
assert isinstance(b, float)
self.w = torch.tensor([w], requires_grad=True)
self.b = torch.tensor([b], requires_grad=True)
def forward(self, x):
"""Forward pass
Args:
x (torch.Tensor): 1D tensor of features
Returns:
torch.Tensor: model predictions
"""
assert isinstance(x, torch.Tensor)
#################################################
## Implement the forward pass to calculate the prediction
## Note that prediction is not the loss, but the value after `tanh`
# Complete the function and remove or comment the line below
#raise NotImplementedError("Forward Pass `forward`")
#################################################
prediction = torch.tanh(self.w * x + self.b)
return prediction
def sq_loss(y_true, y_prediction):
"""L2 loss function
Args:
y_true (torch.Tensor): 1D tensor of target labels
y_prediction (torch.Tensor): 1D tensor of predictions
Returns:
torch.Tensor: L2-loss (squared error)
"""
assert isinstance(y_true, torch.Tensor)
assert isinstance(y_prediction, torch.Tensor)
#################################################
## Implement the L2-loss (squared error) given the true label and prediction
# Complete the function and remove or comment the line below
#raise NotImplementedError("Loss function `sq_loss`")
#################################################
loss = (y_true - y_prediction)**2
return loss
feature = torch.tensor([1]) # input tensor
target = torch.tensor([7]) # target tensor
## Uncomment to run
simple_graph = SimpleGraph(-0.5, 0.5)
print(f"initial weight = {simple_graph.w.item()}, "
f"\ninitial bias = {simple_graph.b.item()}")
prediction = simple_graph.forward(feature)
square_loss = sq_loss(target, prediction)
print(f"for x={feature.item()} and y={target.item()}, "
f"prediction={prediction.item()}, and L2 Loss = {square_loss.item()}")
```
initial weight = -0.5,
initial bias = 0.5
for x=1 and y=7, prediction=0.0, and L2 Loss = 49.0
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D2_LinearDeepLearning/solutions/W1D2_Tutorial1_Solution_6668feea.py)
It is important to appreciate the fact that PyTorch can follow our operations as we arbitrarily go through classes and functions.
## Section 2.2: Backward Propagation
Here is where all the magic lies. In PyTorch, `Tensor` and `Function` are interconnected and build up an acyclic graph that encodes a complete history of the computation. Each variable has a `grad_fn` attribute that references the function that created the Tensor (except for Tensors created by the user - these have `None` as their `grad_fn`). The example below shows that the tensor `c = a + b` is created by the `Add` operation and the gradient function is the object `<AddBackward...>`. Replace `+` with other single operations (e.g., `c = a * b` or `c = torch.sin(a)`) and examine the results.
```python
a = torch.tensor([1.0], requires_grad=True)
b = torch.tensor([-1.0], requires_grad=True)
c = a + b
print(f'Gradient function = {c.grad_fn}')
```
Gradient function = <AddBackward0 object at 0x7fe2c07f8430>
For more complex functions, printing the `grad_fn` would only show the last operation, even though the object tracks all the operations up to that point:
```python
print(f'Gradient function for prediction = {prediction.grad_fn}')
print(f'Gradient function for loss = {square_loss.grad_fn}')
```
Gradient function for prediction = <TanhBackward object at 0x7fe2c07f8820>
Gradient function for loss = <PowBackward0 object at 0x7fe2c07f89a0>
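If you are curious, you can peek one step further back through the graph via the `next_functions` attribute of a `grad_fn` (an internal attribute of autograd graph nodes, rarely needed in practice); a minimal sketch:

```python
# Walk one step backwards from the loss node (PowBackward0)
print(square_loss.grad_fn)                 # last operation: the power (**2)
print(square_loss.grad_fn.next_functions)  # grad_fns of its inputs (e.g., the subtraction)
```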
Now let's kick off the backward pass to calculate the gradients by calling `.backward()` on the tensor we wish to initiate the backpropagation from. Often, `.backward()` is called on the loss, which is the last node on the graph. Before doing that, let's calculate the loss gradients by hand:
$$\frac{\partial{loss}}{\partial{w}} = - 2 x (y_t - y_p)(1 - y_p^2)$$
$$\frac{\partial{loss}}{\partial{b}} = - 2 (y_t - y_p)(1 - y_p^2)$$
Where $y_t$ is the target (true label), and $y_p$ is the prediction (model output). We can then compare it to PyTorch gradients, which can be obtained by calling `.grad` on the relevant tensors.
**Important Notes**
* Learnable parameters (i.e., `requires_grad` tensors) are "contagious". Let's look at a simple example: `Y = W @ X`, where `X` is the feature tensor and `W` is the weight tensor (a learnable parameter with `requires_grad=True`). The newly generated output tensor `Y` will also have `requires_grad=True`, so any operation applied to `Y` will be part of the computational graph. Therefore, if we need to plot or store a tensor that requires grad, we must first detach it from the graph by calling the `.detach()` method on it.
* `.backward()` accumulates gradients in the leaf nodes (i.e., the input nodes to the node of interest). We can call `.zero_grad()` on the optimizer (or the model) to zero out all `.grad` attributes (see [autograd.backward](https://pytorch.org/docs/stable/autograd.html#torch.autograd.backward) for more). Both of these behaviors are illustrated in the short sketch after these notes.
* Recall that in python we can access variables and associated methods with `.method_name`. You can use the command `dir(my_object)` to observe all variables and associated methods to your object, e.g., `dir(simple_graph.w)`.
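Here is a minimal sketch of the first two notes (the tensor and its values are arbitrary): gradients accumulate across `.backward()` calls until they are zeroed, and `.detach()` returns a graph-free copy that is safe to store or plot:

```python
p = torch.tensor([1.0], requires_grad=True)

loss = (3 * p)**2
loss.backward()
print(p.grad)  # tensor([18.]): d/dp (3p)^2 = 18p = 18 at p=1

loss = (3 * p)**2
loss.backward()
print(p.grad)  # tensor([36.]): the new gradient was added to the old one

p.grad.zero_()  # reset the accumulated gradient by hand
print(p.detach().requires_grad)  # False: the detached copy is outside the graph
```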
```python
# analytical gradients (remember detaching)
ana_dloss_dw = - 2 * feature * (target - prediction.detach())*(1 - prediction.detach()**2)
ana_dloss_db = - 2 * (target - prediction.detach())*(1 - prediction.detach()**2)
square_loss.backward() # first we should call the backward to build the graph and calculate the derivative w.r.t. weights and bias
autograd_dloss_dw = simple_graph.w.grad # we access the derivative w.r.t weights
autograd_dloss_db = simple_graph.b.grad # we access the derivative w.r.t bias
print(ana_dloss_dw == autograd_dloss_dw)
print(ana_dloss_db == autograd_dloss_db)
```
References and more:
* [A GENTLE INTRODUCTION TO TORCH.AUTOGRAD](https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html)
* [AUTOMATIC DIFFERENTIATION PACKAGE - TORCH.AUTOGRAD](https://pytorch.org/docs/stable/autograd.html)
* [AUTOGRAD MECHANICS](https://pytorch.org/docs/stable/notes/autograd.html)
* [AUTOMATIC DIFFERENTIATION WITH TORCH.AUTOGRAD](https://pytorch.org/tutorials/beginner/basics/autogradqs_tutorial.html)
---
# Section 3: PyTorch's Neural Net module (`nn.Module`)
*Time estimate: ~30 mins*
```python
# @title Video 5: PyTorch `nn` module
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1MU4y1H7WH", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"jzTbQACq7KE", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event('Video 5: PyTorch `nn` module')
display(out)
```
Tab(children=(Output(), Output()), _titles={'0': 'Youtube', '1': 'Bilibili'})
PyTorch provides us with ready-to-use neural network building blocks, such as layers (e.g. linear, recurrent, ...), different activation and loss functions, and much more, packed in the [`torch.nn`](https://pytorch.org/docs/stable/nn.html) module. If we build a neural network using `torch.nn` layers, the weights and biases are already in `requires_grad` mode and will be registered as model parameters.
For training, we need three things:
* **Model parameters** - Model parameters refer to all the learnable parameters of the model, which are accessible by calling `.parameters()` on the model. Please note that NOT all the `requires_grad` tensors are seen as model parameters. To create a custom model parameter, we can use [`nn.Parameter`](https://pytorch.org/docs/stable/generated/torch.nn.parameter.Parameter.html) (*A kind of Tensor that is to be considered a module parameter*).
* **Loss function** - The loss that we are going to optimize, which is often combined with regularization terms (coming up in a few days).
* **Optimizer** - PyTorch provides us with many optimization methods (different versions of gradient descent). The optimizer holds references to the model parameters (and its own state) and, when we call the `step()` method, updates the parameters based on the computed gradients.
You will learn more details about choosing the right model architecture, loss function, and optimizer later in the course.
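The following minimal sketch (the module and attribute names are made up for illustration) shows the difference between a registered parameter and a plain `requires_grad` tensor:

```python
class TinyModel(nn.Module):
  def __init__(self):
    super().__init__()
    self.linear = nn.Linear(1, 1)                 # weights/bias registered automatically
    self.scale = nn.Parameter(torch.ones(1))      # registered explicitly as a parameter
    self.not_a_param = torch.ones(1, requires_grad=True)  # tracked, but NOT a parameter

tiny_model = TinyModel()
for name, param in tiny_model.named_parameters():
  print(name, tuple(param.shape))
# prints linear.weight, linear.bias and scale, but not `not_a_param`
```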
## Section 3.1: Training loop in PyTorch
We use a regression problem to study the training loop in PyTorch.
The task is to train a wide nonlinear neural net (using the $\tanh$ activation function) on a simple $\sin$ regression problem. Wide neural networks are thought to generalize quite well.
```python
# @markdown #### Generate the sample dataset
set_seed(seed=SEED)
n_samples = 32
inputs = torch.linspace(-1.0, 1.0, n_samples).reshape(n_samples, 1)
noise = torch.randn(n_samples, 1) / 4
targets = torch.sin(pi * inputs) + noise
plt.figure(figsize=(8, 5))
plt.scatter(inputs, targets, c='c')
plt.xlabel('x (inputs)')
plt.ylabel('y (targets)')
plt.show()
```
Let's define a very wide (512 neurons) neural net with one hidden layer and `Tanh` activation function.
```python
## A Wide neural network with a single hidden layer
class WideNet(nn.Module):
def __init__(self):
"""Initializing the WideNet
"""
n_cells = 512
super().__init__()
self.layers = nn.Sequential(
nn.Linear(1, n_cells),
nn.Tanh(),
nn.Linear(n_cells, 1),
)
def forward(self, x):
"""Forward pass
Args:
x (torch.Tensor): 2D tensor of features
Returns:
torch.Tensor: model predictions
"""
return self.layers(x)
```
We can now create an instance of our neural net and print its parameters.
```python
# creating an instance
set_seed(seed=SEED)
wide_net = WideNet()
print(wide_net)
```
Random seed 2021 has been set.
WideNet(
(layers): Sequential(
(0): Linear(in_features=1, out_features=512, bias=True)
(1): Tanh()
(2): Linear(in_features=512, out_features=1, bias=True)
)
)
```python
# Create a mse loss function
loss_function = nn.MSELoss()
# Stochastic Gradient Descent optimizer (you will learn about momentum soon)
lr = 0.003 # learning rate
sgd_optimizer = torch.optim.SGD(wide_net.parameters(), lr=lr, momentum=0.9)
```
The training process in PyTorch is interactive - you can perform training iterations as you wish and inspect the results after each iteration.
Let's perform one training iteration. You can run the cell multiple times and see how the parameters are being updated and the loss is reducing. This code block is the core of everything to come: please make sure you go line-by-line through all the commands and discuss their purpose with the pod.
```python
# Reset all gradients to zero
sgd_optimizer.zero_grad()
# Forward pass (Compute the output of the model on the features (inputs))
prediction = wide_net(inputs)
# Compute the loss
loss = loss_function(prediction, targets)
print(f'Loss: {loss.item()}')
# Perform backpropagation to build the graph and compute the gradients
loss.backward()
# Optimizer takes a tiny step in the steepest direction (negative of gradient)
# and "updates" the weights and biases of the network
sgd_optimizer.step()
```
Loss: 0.888942301273346
### Coding Exercise 3.1: Training Loop
Using everything we've learned so far, we ask you to complete the `train` function below.
```python
def train(features, labels, model, loss_fun, optimizer, n_epochs):
"""Training function
Args:
features (torch.Tensor): features (input) with shape torch.Size([n_samples, 1])
labels (torch.Tensor): labels (targets) with shape torch.Size([n_samples, 1])
model (torch nn.Module): the neural network
loss_fun (function): loss function
optimizer(function): optimizer
n_epochs (int): number of training iterations
Returns:
list: record (evolution) of training losses
"""
loss_record = [] # keeping recods of loss
for i in range(n_epochs):
#################################################
## Implement the missing parts of the training loop
# Complete the function and remove or comment the line below
#raise NotImplementedError("Training loop `train`")
#################################################
optimizer.zero_grad() # set gradients to 0
predictions = model(features) # Compute model prediction (output)
loss = loss_fun(predictions, labels) # Compute the loss
loss.backward() # Compute gradients (backward pass)
optimizer.step() # update parameters (optimizer takes a step)
loss_record.append(loss.item())
return loss_record
#add event to airtable
atform.add_event('Coding Exercise 3.1: Training Loop')
set_seed(seed=2021)
epochs = 1847 # Cauchy, Exercices d'analyse et de physique mathematique (1847)
## Uncomment to run
losses = train(inputs, targets, wide_net, loss_function, sgd_optimizer, epochs)
ex3_plot(wide_net, inputs, targets, epochs, losses)
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D2_LinearDeepLearning/solutions/W1D2_Tutorial1_Solution_5204c053.py)
*Example output:*
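After training, predictions for plotting or further analysis are typically computed without building a graph; a minimal sketch (the evaluation grid below is arbitrary):

```python
with torch.no_grad():  # no computational graph is built inside this block
  x_grid = torch.linspace(-1.0, 1.0, 100).reshape(-1, 1)
  y_hat = wide_net(x_grid)  # plain tensor, safe to plot or store directly
print(y_hat.requires_grad)  # False
```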
---
# Summary
In this tutorial, we covered some of the most basic concepts of deep learning: the computational graph and how a network learns via gradient descent and the backpropagation algorithm. We implemented all of these using PyTorch modules and compared the analytical solutions with the ones computed by PyTorch's autograd.
```python
# @title Video 6: Tutorial 1 Wrap-up
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Pg41177VU", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"TvZURbcnXc4", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event('Video 6: Tutorial 1 Wrap-up')
display(out)
```
Tab(children=(Output(), Output()), _titles={'0': 'Youtube', '1': 'Bilibili'})
```python
# @title Airtable Submission Link
from IPython import display as IPyDisplay
IPyDisplay.HTML(
f"""
<div>
<a href= "{atform.url()}" target="_blank">
</a>
</div>""" )
```
```python
```
| b8c94ef8370266877a64269a2a6163021685156f | 444,190 | ipynb | Jupyter Notebook | tutorials/W1D2_LinearDeepLearning/student/ED_W1D2_Tutorial1.ipynb | eduardojdiniz/course-content-dl | 8d66641683651bce7b0179b6d890aef5a048a8b9 | [
"CC-BY-4.0",
"BSD-3-Clause"
]
| null | null | null | tutorials/W1D2_LinearDeepLearning/student/ED_W1D2_Tutorial1.ipynb | eduardojdiniz/course-content-dl | 8d66641683651bce7b0179b6d890aef5a048a8b9 | [
"CC-BY-4.0",
"BSD-3-Clause"
]
| null | null | null | tutorials/W1D2_LinearDeepLearning/student/ED_W1D2_Tutorial1.ipynb | eduardojdiniz/course-content-dl | 8d66641683651bce7b0179b6d890aef5a048a8b9 | [
"CC-BY-4.0",
"BSD-3-Clause"
]
| null | null | null | 252.524161 | 256,748 | 0.913251 | true | 11,954 | Qwen/Qwen-72B | 1. YES
2. YES | 0.752013 | 0.689306 | 0.518367 | __label__eng_Latn | 0.875465 | 0.042668 |