## Basics of Differential Equations

(This presentation assumes knowledge of differential and integral calculus of a single variable. For those who have studied diffEQs before, this material covers first order, non-homogeneous, ordinary diffEQs. If you don't know what that means, no worries. Prior exposure to diffEQs is neither assumed nor necessary. If viewing this as a Jupyter notebook, select Cell > Run All so all the libraries are loaded before re-running any of the cells.)

The study of differential equations began with real world problems. Consider how a relatively hot object, like a cup of coffee, cools to the ambient room temperature when left on a table for a long enough period of time. How do we model this mathematically?

For some object, assume the temperature _T_ at some time _t_ decreases to some ambient temperature (we'll assume 0 for simplicity). When taking the temperature at time 0, the value might be 180 degrees. At time 10, it might be 144 degrees. At time 20, it might be 116 degrees. At time 30, it might be 92 degrees, and so on. In looking at the data, you might notice that for each time interval of 10, the temperature is 20% lower than it was at the last measurement. Continuing the trend and graphing, we have this:

```python
# the examples in this notebook are designed to be evaluated in sequence from beginning to end

# uncomment these to filter Matplotlib deprecation warnings for sympy
import warnings
warnings.filterwarnings('ignore')

%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np

t = [0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120]
T = [180, 144, 116, 92, 74, 59, 47, 37, 29, 23, 19, 15, 12]

plt.scatter(t,T)
plt.xlabel("t",labelpad=10)
plt.ylabel("T",labelpad=10)
plt.axis('square');
plt.show()
```

If we imagine a continuous function that connects these points, then at any of the points the temperature function satisfies this relation:

$ \hspace{0.5in} T(t) = 0.8\,T(t - 10) $

Understand what this means. At any time _t_, the current temperature is 80% of the temperature taken 10 time units ago. If we change this interval to 1 time unit and assume a linear interpolation of the _T_ values (this is just an approximation, because the function clearly isn't piecewise-linear), just take -20% and divide it by 10, and the change is -2% per time unit.

Taking this incremental approximation and somewhat cavalierly converting it to an equation involving the time rate of change of _T_ (or the derivative of _T_ with respect to _t_), we have this:

$ \hspace{0.5in} T'(t) = -0.02T(t) $

or, equivalently:

$ \hspace{0.5in}(1)\hspace{0.25in}T'(t) + 0.02T(t) = 0,\ T(0) = 180 $

The question is, what is $T(t)$, such that this equation is true? (In other words, what is the approximate function fitting the data shown above?)

The previous equation is called an unforced linear differential equation, and it has this general form:

$ \hspace{0.5in}(2)\hspace{0.25in}y'(t) + ry(t) = 0 $

This form is foundational in the study of differential equations, and it models many phenomena. This is the solution for equation (2), where $s = y(0)$, and _s_ can be considered a starting value when _t_ represents time:

$ \hspace{0.5in}(3)\hspace{0.25in}y(t) = se^{-rt} $

(Note: Equations are frequently referred to simply by "(N)", where N is an integer.)

Given a value _r_, to solve (2), a $y(t)$ is needed that satisfies (2) for all _t_, and (3) is such a solution.
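Before the symbolic check in the next cells, here is a quick numerical spot check (a sketch, not part of the original notebook): evaluate (3) on a grid, approximate the derivative with finite differences, and confirm that $y' + ry$ is numerically close to zero.

```python
# Sketch: numerically verify that y(t) = s*exp(-r*t) satisfies y' + r*y = 0.
# Sample values for r and s below are arbitrary; any choice should give a near-zero residual.
import numpy as np

r, s = 0.02, 180
tt = np.linspace(0, 120, 1201)
yy = s * np.exp(-r * tt)

residual = np.gradient(yy, tt) + r * yy   # should be ~0 everywhere
print(np.abs(residual).max())             # small, limited only by finite-difference error
```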
To show this, assume this is a solution:

$ \hspace{0.5in}(4)\hspace{0.25in}g(t) = ke^{zt} $

Now substitute (4) into (2) and do the differentiation:

$ \hspace{0.5in} kze^{zt} + rke^{zt} = 0 $

Solving for _z_, we get $z = -r$. By substituting _-r_ for _z_ in (4) and letting $k = s$, (3) is confirmed as the solution. Here is a method for doing this in Python:

```python
import sympy as sym

r, t = sym.symbols('r t')
g = sym.Function('g')

ode = sym.Eq(sym.diff(g(t), t), -r*g(t))
sol = sym.dsolve(ode, g(t))
sol
```

$\displaystyle g{\left(t \right)} = C_{1} e^{- r t}$

This result is equivalent to (4) or (3). Now we verify (3) produces 0 if substituted into (2):

```python
r, s, t = sym.symbols('r s t')
y = s*sym.exp(-r*t)
print(sym.diff(y, t) + r*y)
```

0

So (3) is a solution of (2). This information can be used to solve our original temperature cooling equation:

$ \hspace{0.5in}(1)\hspace{0.25in}T'(t) + 0.02T(t) = 0,\ T(0) = 180 $

We now know the general form of the solution is $T(t) = se^{-0.02t}$. With $t = 0$, from our original data we have $180 = se^{-0.02*0}$; so $s = 180$. The solution is:

$\hspace{0.5in}(5)\hspace{0.25in}T(t) = 180e^{-0.02t}$

Here is a graph of the solution versus the original data:

```python
# legend font size adjustment
SMALL_SIZE=12
plt.rc('legend', fontsize=SMALL_SIZE)

# domain
t = np.linspace(-1,120,120)

# the solution to the approximate equation
T = 180*np.exp(-0.02*t)

# attributes of the graph axes
fig, ax = plt.subplots()
ax.spines['left'].set_position('zero')
ax.spines['right'].set_color('none')
ax.spines['bottom'].set_position('zero')
ax.spines['top'].set_color('none')

# plot the function
plt.plot(t,T, 'y', label="T = $180e^{-0.02t}$")
plt.axis('square');
plt.xlabel("t",labelpad=10)
plt.legend(loc='upper right')

# plot the dots
t = [0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120]
T = [180, 144, 116, 92, 74, 59, 47, 37, 29, 23, 19, 15, 12]
plt.scatter(t,T);
```

Just for a sanity check against the original data, let's calculate _T(50)_ and _T(100)_. For the original data set, these values are 59 and 19, respectively.

```python
print("{0:.1f}".format(180*np.exp(-0.02*50)))
print("{0:.1f}".format(180*np.exp(-0.02*100)))
```

66.2
24.4

As we said initially, equation (1) is an approximation, and the solution falls on the high side of the data. Nevertheless, this example is a useful beginning for exploring this type of differential equation.

The preceding example represents something called exponential decay. It relates to things like temperature cooling, capacitor discharge, and half-life of a radioactive element, among other natural phenomena.

If $r < 0$, then the differential equation would be this:

$ \hspace{0.5in} T'(t) - 0.02T(t) = 0 $

and the solution would be:

$ \hspace{0.5in} T(t) = 180e^{0.02t} $

with the following graph:

```python
t = np.linspace(-1,120,120)
T = 180*np.exp(0.02*t)

fig, ax = plt.subplots()
ax.spines['left'].set_position('zero')
ax.spines['right'].set_color('none')
ax.spines['bottom'].set_position('zero')
ax.spines['top'].set_color('none')

plt.plot(t,T, 'y', label="T = $180e^{0.02t}$")
plt.xlabel("t",labelpad=10)
plt.legend(loc='lower right')
plt.show()
```

This is called exponential growth. It relates to things like population growth, compound interest, and atomic collisions during fission.
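As a side check (a sketch that is not part of the original notebook), the closed-form solutions above can be compared against SciPy's general-purpose ODE integrator. Here is the decaying case, reusing $r = 0.02$ and the starting value 180:

```python
# Sketch: numerically integrate T' = -r*T with SciPy and compare to the closed form 180*exp(-r*t).
# The tolerances below are illustrative choices, not values from the original notebook.
import numpy as np
from scipy.integrate import solve_ivp

r = 0.02
t_eval = np.linspace(0, 120, 121)
sol = solve_ivp(lambda t, T: -r * T, (0, 120), [180.0], t_eval=t_eval, rtol=1e-8, atol=1e-8)

exact = 180.0 * np.exp(-r * t_eval)
print(np.abs(sol.y[0] - exact).max())   # agreement to roughly the solver tolerance
```

Flipping the sign of $r$ in the lambda reproduces the growth case the same way.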
Going back to our equation (1), if $s < 0$:

$ \hspace{0.5in} T'(t) + 0.02T(t) = 0,\ T(0) = -180 $

and the solution would be:

$ \hspace{0.5in} T(t) = -180e^{-0.02t} $

with the following graph:

```python
T = -180*np.exp(-0.02*t)

fig, ax = plt.subplots()
ax.spines['left'].set_position('zero')
ax.spines['right'].set_color('none')
ax.spines['bottom'].set_visible(False)
plt.rcParams['xtick.top'] = plt.rcParams['xtick.labeltop'] = True

plt.plot(t,T, 'y', label="T = $-180e^{-0.02t}$")
plt.legend(loc='lower center')
plt.show()
```

This is another type of exponential growth--one that has an upper bound. This is a reflection of (5) across the x-axis. This graph could represent the warming of an object to an ambient temperature.

Previously, we saw this differential equation:

$ \hspace{0.5in}(2)\hspace{0.25in}y'(t) + ry(t) = 0 $

has solutions of this form (equation (4) with $z = -r$):

$ \hspace{0.5in} g(t) = ke^{-rt} $

where $g(t)$ can be substituted for $y(t)$. When $r > 0$, $g(t)$ decreases and conversely, when $r < 0$, $g(t)$ increases.

If we change (2) to this:

$ \hspace{0.5in}(6)\hspace{0.25in} y'(t) + ry(t) = f(t) $

with $y(0) = k$, we have what is called a forced exponential differential equation. Such equations have this solution:

$ \hspace{0.5in}(7)\hspace{0.25in} y(t) = ke^{-rt} + e^{-rt}\int\limits_0^t e^{rs}f(s)\,\mathrm{d} s $

if the integral can be evaluated (which frequently, it can't). Let's demonstrate this in Python.

```python
r, s = sym.symbols('r s')
y = sym.Function('y')(s)
f = sym.Function('f')(s)
y_ = sym.Derivative(y, s)

# y' + ry - f = 0
sol = sym.dsolve(y_ + r*y - f, y)
sol
# sym.pprint(sol)
```

$\displaystyle y{\left(s \right)} = \left(C_{1} + \int f{\left(s \right)} e^{r s}\, ds\right) e^{- r s}$

This has the same form as (7) with $C_1 = k$ but without the bounds of integration. Using (7), we can construct the left side of (6) and evaluate (6) at $t = 0$:

```python
k, s, t = sym.symbols('k s t')
f = sym.Function('f')(t)

# formulate the left side of equation (6)
I = sym.Integral(sym.exp(r*s)*f,(s,0,t))
I_eval = I.doit()
y = sym.expand(sym.exp(-r*t)*(k + I_eval))
# pprint(y)
y_prime = sym.diff(y,t)
s = y_prime + r*y
# pprint(simplify(s))

def eval_s(t_):
    return s.subs([(t, t_)])

print(eval_s(0))
```

f(0)

As we can see, $y'(0) + ry(0) = f(0)$ when $y(t) = ke^{-rt} + e^{-rt}\int\limits_0^t e^{rs}f(s) \, \mathrm{d} s\,$.

Why does $y(t) = ke^{-rt} + e^{-rt}\int\limits_0^t e^{rs}f(s) \, \mathrm{d} s$ work? If we start with $y'(s) + ry(s) = f(s)$ and multiply through by $e^{rs}$, we get this:

$\hspace{0.5in} e^{rs}y'(s) + e^{rs}ry(s) = e^{rs}f(s)$

which is the same as this:

$\hspace{0.5in} \frac{\mathrm{d}\,e^{rs}y(s)}{\mathrm{d}s} = e^{rs}f(s)$

Integrating both sides with respect to $s$ on the interval from 0 to $t$, we get $y(t) = ke^{-rt} + e^{-rt}\int\limits_0^t e^{rs}f(s) \, \mathrm{d} s\,$, which we said is the solution to a forced exponential differential equation.

While this is fine on paper, in general, the integral may not be doable analytically. But let's look at some that are doable.
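Before the worked examples, here is a quick symbolic confirmation of the product-rule identity used in the argument above (a small sketch in the same sympy conventions; not part of the original notebook):

```python
# Sketch: confirm with sympy that d/ds [ e^{r s} y(s) ] = e^{r s} ( y'(s) + r y(s) ),
# the product-rule step behind the integrating-factor argument above.
import sympy as sym

r, s = sym.symbols('r s')
y = sym.Function('y')(s)

lhs = sym.diff(sym.exp(r*s) * y, s)
rhs = sym.exp(r*s) * (sym.diff(y, s) + r*y)
print(sym.simplify(lhs - rhs))   # 0
```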
Here is an example forced equation with analytic and graphical solutions:

$\hspace{0.5in} y'(t) + 2y(t) = 3e^{-0.3t}$ for $y(0) = 5$

```python
sym.init_printing()
from sympy.plotting import plot

t = sym.symbols('t')
y = sym.Function('y')
y1 = sym.Derivative(y(t), t)
eqdiff = y1 + 2*y(t) - 3*sym.exp(-0.3*t)
sol1 = sym.dsolve(eqdiff, y(t), ics={y(0): '5'})
sol1
```

```python
# sym.plot(f.rhs); # this works

# %% Plot
xtol = 1e-3
p1 = plot(sol1.rhs, show=False, xlim=[-0.5,10], ylim=[0,10], ylabel='y(t)')
p1.show()
```

Another example:

$\hspace{0.5in} y'(t) - 1.2y(t) = 0.8t$ for $y(0) = 0$

```python
# be sure to run previous cells before running the following examples that depend on sympy!
t = sym.symbols('t')
y = sym.Function('y')
y1 = sym.Derivative(y(t), t)
eqdiff = y1 - 1.2*y(t) - 0.8*t
sol2 = sym.dsolve(eqdiff, y(t), ics={y(0): '0'})
sol2
```

```python
xtol = 1e-3
p2 = plot(sol2.rhs, show=False, xlim=[-0.1,8], ylim=[0,2000], ylabel='y(t)')
p2.show()
```

Another example:

$\hspace{0.5in} y'(t) + 0.5y(t) = \sin(2t)$ for $y(0) = -2$

```python
t = sym.symbols('t')
y = sym.Function('y')
y1 = sym.Derivative(y(t), t)
eqdiff = y1 + 0.5*y(t) - sym.sin(2*t)
sol3 = sym.dsolve(eqdiff, y(t), ics={y(0): '-2'})
sol3
```

```python
xtol = 1e-3
p3 = plot(sol3.rhs, show=False, xlim=[-0.1,10], ylim=[-3.5,2], ylabel='y(t)')
p3.show()
```

This is an example of an equation that yields an integral that cannot be evaluated analytically:

$\hspace{0.5in} y'(t) + 0.5y(t) = 3.7e^{\sin(t)}$ for $y(0) = -3$

```python
t = sym.symbols('t')
y = sym.Function('y')
y1 = sym.Derivative(y(t), t)
eqdiff = y1 + 0.5*y(t) - 3.7*sym.exp(sym.sin(t))
sol4 = sym.dsolve(eqdiff, y(t), ics={y(0): '-3'})
sol4
```

$\int e^{0.5t}e^{\sin(t)}\,\mathrm{d}t$ cannot be evaluated in closed form. Most forced exponential differential equations result in integrals that have no closed-form solution, so numerical approximation methods (think Taylor or Maclaurin series) are used instead of analytical ones.

Another notable behavior: if $r > 0$ and $y'(t) + ry(t) = f(t)$, then all solutions settle into the same steady state. If $r < 0$, this is unpredictable. Here is an example of the $r > 0$ case.

$\hspace{0.5in} y'(t) + ry(t) = 4\sin(3t)$

We will set $r$ to a random value and assign three random values to $y(0)$. (Run it multiple times.)

```python
import random

r = round(random.uniform(0.1,10),1)
y0_1 = round(random.uniform(-10,0),1)
y0_2 = round(random.uniform(0.1,10),1)
y0_3 = round(random.uniform(-10,10),1)

t = sym.symbols('t')
y = sym.Function('y')
y1 = sym.Derivative(y(t), t)
eqdiff = y1 + r*y(t) - 4*sym.sin(3*t)

sol5 = sym.dsolve(eqdiff, y(t), ics={y(0): y0_1})
p5 = plot(sol5.rhs, show=False, xlim=[-0.1,6], ylim=[-10,10], line_color='red', ylabel='y(t)')

sol6 = sym.dsolve(eqdiff, y(t), ics={y(0): y0_2})
p6 = plot(sol6.rhs, show=False, ylim=[-10,6], line_color='green')

sol7 = sym.dsolve(eqdiff, y(t), ics={y(0): y0_3})
p7 = plot(sol7.rhs, show=False, ylim=[-10,6], line_color='black')

p5.append(p6[0])
p5.append(p7[0])
p5.show()
```

Here is another example. In this case, _p_ is another randomized variable.
$\hspace{0.5in} y'(t) + ry(t) = 6e^{-t} + pt$

```python
r = round(random.uniform(0.1,10),1)
p = round(random.uniform(-10,10),1)
y0_1 = round(random.uniform(-10,0),1)
y0_2 = round(random.uniform(0.1,10),1)
y0_3 = round(random.uniform(-10,10),1)

t = sym.symbols('t')
y = sym.Function('y')
y1 = sym.Derivative(y(t), t)
eqdiff = y1 + r*y(t) - 6*sym.exp(-t) - p*t

sol8 = sym.dsolve(eqdiff, y(t), ics={y(0): y0_1})
p8 = plot(sol8.rhs, show=False, xlim=[-0.1,6], ylim=[-10,10], line_color='red', ylabel='y(t)')

sol9 = sym.dsolve(eqdiff, y(t), ics={y(0): y0_2})
p9 = plot(sol9.rhs, show=False, xlim=[-0.1,6], ylim=[-10,10], line_color='green')

sol10 = sym.dsolve(eqdiff, y(t), ics={y(0): y0_3})
p10 = plot(sol10.rhs, show=False, xlim=[-0.1,6], ylim=[-10,10], line_color='black')

p8.append(p9[0])
p8.append(p10[0])
p8.show()
```

So why do these cases settle out to common steady-state values? Consider $y(t) = ke^{-rt} + e^{-rt}\int\limits_0^t e^{rs}f(s) \, \mathrm{d} s\,$ as $t$ gets arbitrarily large. In that case, $ke^{-rt}$ tends to 0, leaving just the second term, which determines the steady-state behavior. More specifically, $f(t)$ in the forced equation $y'(t) + ry(t) = f(t)$ determines it.

But if $r < 0$, this is not the case. Consider this equation:

$\hspace{0.5in} y'(t) - 0.1y(t) = \sin(2t)$

(Run several times.)

```python
r = -0.1
y0_1 = round(random.uniform(-5,0),1)
y0_2 = round(random.uniform(0.1,5),1)
y0_3 = round(random.uniform(-5,5),1)

t = sym.symbols('t')
y = sym.Function('y')
y1 = sym.Derivative(y(t), t)
eqdiff = y1 + r*y(t) - sym.sin(2*t)

sol11 = sym.dsolve(eqdiff, y(t), ics={y(0): y0_1})
p11 = plot(sol11.rhs, show=False, xlim=[-0.1,6], ylim=[-10,10], line_color='red', ylabel='y(t)')

sol12 = sym.dsolve(eqdiff, y(t), ics={y(0): y0_2})
p12 = plot(sol12.rhs, show=False, xlim=[-0.1,6], ylim=[-10,10], line_color='green')

sol13 = sym.dsolve(eqdiff, y(t), ics={y(0): y0_3})
p13 = plot(sol13.rhs, show=False, xlim=[-0.1,6], ylim=[-10,10], line_color='black')

p11.append(p12[0])
p11.append(p13[0])
p11.show()
```

These solutions do not converge to a steady state. With $r < 0$, the $ke^{-rt}$ term of the solution equation does not tend to 0 but gets arbitrarily large. This can cause the solutions to diverge.

### Application of First Order Linear Differential Equations

Compound interest is a context in which differential equations apply. Assume you have \\$1000 that is earning 4% interest. How much money would you have if the interest were compounded continuously for 10 years? Here is the equation where $y(t)$ is the balance at time $t$.

$\hspace{0.5in}(8)\hspace{0.25in}y(t) = 1000e^{0.04t}$

```python
t=10
y_t = 1000*np.exp(0.04*t)
print("y(10) =",round(y_t,2))
```

y(10) = 1491.82

See it graphically:

```python
t = np.arange(0, 10, 0.1)
y = 1000*np.exp(0.04*t)
plt.plot(t, y, 'y', label="y = $1000e^{0.04t}$")
plt.legend(loc='upper left');
```

Differentiation of both sides of (8) yields this:

$\hspace{0.5in}y'(t) = 0.04\times1000e^{0.04t}$

which is equivalent to this:

$\hspace{0.5in}y'(t) = 0.04y(t)$

which is the same as this unforced differential equation with $y(0) = 1000$:

$\hspace{0.5in}y'(t) - 0.04y(t) = 0$

Understand what this means. The rate of change of the balance with respect to time equals 0.04 times the current balance.

Now suppose you want to add \\$200 to the account each year. Our equation then becomes this:

$\hspace{0.5in}y'(t) - 0.04y(t) = 200,\ y(0) = 1000$

which we recognize as a forced differential equation.
The solution will be of this form (equation (7) with $r = -0.04$, $f(s) = 200$ and $k = 1000$):

$\hspace{0.5in}y(t) = 1000e^{0.04t} + e^{0.04t}\int_0^{t}\,e^{-0.04s} \times 200\,\mathrm{d}s$

The symbolic solution in Python is as follows:

```python
t = sym.symbols('t')
y = sym.Function('y')
y1 = sym.Derivative(y(t), t)
eqdiff = y1 - 0.04*y(t) - 200
sol1 = sym.dsolve(eqdiff, y(t), ics={y(0): '1000'})
sol1
```

and with $t = 10$, we have this:

```python
sol1.subs(t, 10)
```

See it graphically:

```python
t = np.arange(0, 10, 0.1)
y = 6000*np.exp(0.04*t) - 5000
plt.plot(t, y, 'y', label="y = $6000e^{0.04t} - 5000$")
plt.legend(loc='upper left');
```

Using the same growth rate and initial value, how much money would you need to contribute on an annual basis to have \\$10000 in 10 years? This means we want to solve for d:

$\hspace{0.5in}y'(t) - 0.04y(t) = d,\ y(0) = 1000$

so that $y(10) = 10000$. Here is a solution in Python:

```python
y = sym.Function('y')
t, d = sym.symbols('t d')
y1 = sym.Derivative(y(t), t)
eqdiff = y1 - 0.04*y(t) - d

sol2 = sym.dsolve(eqdiff, y(t), ics={y(0): '1000'})
dval = sym.solve(sol2.subs(t, 10).subs(y(10), 10000), d)
print("d =", dval[0].n(5))

sol3 = sol2.subs(d, dval[0].n(5))
# print(sol3.rhs)
p14 = plot(sol3.rhs, show=False, xlim=[0,10], ylim=[1000,10000], ylabel='y(t)')
p14.show()
```

Written by Dan Liddell. October, 2021.

These sources were consulted in preparing this content and provided ideas, examples, and source code for this material:

[https://personal.math.ubc.ca/~pwalls/math-python/](https://personal.math.ubc.ca/~pwalls/math-python/)
[https://www.scipy.org/docs.html](https://www.scipy.org/docs.html)
[https://stackexchange.com/](https://stackexchange.com/)
[https://stackoverflow.com/](https://stackoverflow.com/)

A generous amount of credit goes to the following:

Davis, Bill and Jerry Uhl. Differential Equations&Mathematica [sic]. version 6.0. Math Everywhere, Inc., 2007. Published as a Mathematica notebook.

Math 285 -- Introduction to Differential Equations course at University of Illinois at Urbana-Champaign.
# Formal Language Definition

```python
import pandas as pd
import lux
from lux.vis.VisList import VisList
from lux.vis.Vis import Vis
```

```python
# Collecting basic usage statistics for Lux (For more information, see: https://tinyurl.com/logging-consent)
lux.logger = True # Remove this line if you do not want your interactions recorded
```

The Lux intent specification can be defined as a context-free grammar (CFG). Here, we introduce a formal definition of the intent language in Lux for interested readers.

```python
df = pd.read_csv('https://github.com/lux-org/lux-datasets/blob/master/data/cars.csv?raw=true')
df["Year"] = pd.to_datetime(df["Year"], format='%Y')
```

## Composing a Lux *Intent* with `Clause` objects

An *intent* in Lux corresponds to the Kleene star of `Clause` objects, i.e., it can have either zero, one, or multiple `Clause`s.

\begin{equation}
\langle Intent\rangle \rightarrow \langle Clause\rangle^* \\
\end{equation}

```python
spec1 = lux.Clause("MilesPerGal")
spec2 = lux.Clause("Horsepower")
spec3 = lux.Clause("Origin=USA")
intent = [spec1, spec2, spec3]
```

Here is an example of how we can formulate an intent as a list of `Clause`s and generate a visualization. In this tutorial, we will discuss how the `Clause` breaks down into different production rules.

```python
Vis(intent,df)
```

<Vis  (x: MilesPerGal, y: Horsepower -- [Origin=USA]) mark: scatter, score: 0.0 >

A `Clause` can either be an `Axis` specification or a `Filter` specification. Note that it is not possible for a `Clause` to be both an `Axis` and a `Filter`, but they can be specified as separate `Clause`s in the intent.

\begin{equation}
\begin{split}
\langle Clause\rangle &\rightarrow \langle Axis \rangle \\
&\rightarrow \langle Filter \rangle
\end{split}
\end{equation}

```python
axisSpec = lux.Clause(attribute="MilesPerGal")
# Equivalent, easier-to-specify Clause syntax : lux.Clause("MilesPerGal")
axisSpec
```

```python
filterSpec = lux.Clause(attribute="Origin",filter_op="=",value="USA")
# Equivalent, easier-to-specify Clause syntax : lux.Clause("Origin=USA")
filterSpec
```

## `Axis` specification

An `Axis` requires an `attribute` specification, and an optional `channel`, `aggregation`, or `bin_size` specification.

\begin{equation}
\langle Axis \rangle \rightarrow \langle attribute \rangle \langle channel \rangle \langle aggregation \rangle \langle bin\_size \rangle
\end{equation}

An `attribute` can either be a single column in the dataset, a list of columns, or a `wildcard`.
\begin{equation}
\begin{split}
\langle attribute \rangle &\rightarrow \textrm{attribute} \\
&\rightarrow \textrm{attribute} \cup \langle attribute \rangle\\
&\rightarrow \langle wildcard \rangle
\end{split}
\end{equation}

```python
# user is interested in "Origin"
attribute = lux.Clause("Origin")

# user is interested in "MilesPerGal", "Horsepower", or "Weight"
attribute = lux.Clause(["MilesPerGal","Horsepower","Weight"])

# user is interested in any attribute
attribute = lux.Clause("?")
```

Optional specifications of the `Axis` include:

\begin{equation}
\begin{aligned}
&\langle channel\rangle \rightarrow (\textrm{x } |\textrm{ y }|\textrm{ color }|\textrm{ auto})\\
&\langle aggregation\rangle \rightarrow (\textrm{mean }| \textrm{ sum } | \textrm{ count } | \textrm{ min } | \textrm{ max } | \textrm{ any numpy aggregation function }| \textrm{ auto})\\
&\langle bin\_size \rangle \rightarrow ( \textrm{any integer } | \textrm{ auto})\\
\end{aligned}
\end{equation}

```python
# Ensure that "MilesPerGal" is placed on the x axis
axisSpec = lux.Clause("MilesPerGal",channel="x")

# Apply sum on "MilesPerGal"
axisSpec = lux.Clause("MilesPerGal",aggregation="sum")

# Divide "MilesPerGal" into 50 bins
axisSpec = lux.Clause("MilesPerGal",bin_size=50)
```

#### Example: Effects of optional attribute specification parameters

By default, if we specify only an attribute, the system automatically infers the appropriate `channel`, `aggregation`, or `bin_size`.

```python
axisSpec = "MilesPerGal"
Vis([axisSpec],df)
```

We can increase the `bin_size` as an optional parameter:

```python
bin50MPG = lux.Clause("MilesPerGal",bin_size=50)
Vis([bin50MPG],df)
```

For bar charts, Lux uses a default aggregation of `mean` and displays a horizontal bar chart with `Origin` on the y axis.

```python
axisSpec1 = "MilesPerGal"
axisSpec2 = "Origin"
Vis([axisSpec1,axisSpec2],df)
```

We can change the `mean` to a `sum` aggregation:

```python
axisSpec1 = lux.Clause("MilesPerGal",aggregation="sum")
axisSpec2 = "Origin"
Vis([axisSpec1,axisSpec2],df)
```

Or we can set the `Origin` on the x-axis to get a vertical bar chart instead:

```python
axisSpec1 = "MilesPerGal"
axisSpec2 = lux.Clause("Origin",channel="x")
Vis([axisSpec1,axisSpec2],df)
```

### `Wildcard` attribute specifier

The `wildcard` consists of an "any" specifier (?) with an optional `constraint` clause that constrains the set of attributes that Lux enumerates over.

\begin{equation}
\langle wildcard \rangle \rightarrow \textrm{( ? )} \langle constraint\rangle\\
\end{equation}

\begin{equation}
\langle constraint \rangle \rightarrow \langle data\_model\rangle \langle data\_type\rangle\\
\end{equation}

\begin{equation}
\begin{aligned}
&\langle data\_type\rangle \rightarrow (\textrm{quantitative }| \textrm{ nominal } | \textrm{ ordinal } | \textrm{ temporal } | \textrm{ auto})\\
&\langle data\_model\rangle \rightarrow (\textrm{dimension }|\textrm{ measure }|\textrm{ auto})\\
\end{aligned}
\end{equation}

```python
# user is interested in any temporal attribute
wildcard = lux.Clause("?",data_type="temporal")

# user is interested in any measure attribute
wildcard = lux.Clause("?",data_model="measure")
```

#### Example: `Origin` with respect to other measure variables

```python
origin = lux.Clause("Origin")
anyMeasure = lux.Clause("?",data_model="measure")
VisList([origin, anyMeasure],df)
```

## `Filter` specification

\begin{equation}
\langle Filter \rangle \rightarrow \langle attribute\rangle (=|>|<|\leq|\geq|\neq) \langle value\rangle\\
\end{equation}

\begin{equation}
\begin{split}
\langle value \rangle &\rightarrow \textrm{value} \\
&\rightarrow \textrm{value} \cup \langle value \rangle\\
&\rightarrow (\textrm{?})
\end{split}
\end{equation}

```python
# user is interested in only Ford cars
value = "ford"
filterSpec = lux.Clause(attribute="Brand", filter_op="=",value=value)

# user is interested in cars that are either Ford, Chevrolet, or Toyota
value = ["ford","chevrolet","toyota"]
filterSpec = lux.Clause(attribute="Brand", filter_op="=",value=value)

# user is interested in cars that are of any Brand
value = "?"
filterSpec = lux.Clause(attribute="Brand", filter_op="=",value=value)
```

#### Example: Distribution of `Horsepower` for different Brands

```python
horsepower = lux.Clause("Horsepower")
anyBrand = lux.Clause(attribute="Brand", filter_op="=",value="?")
VisList([horsepower, anyBrand],df)
```
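As a brief recap (a sketch that only reuses constructs introduced above; it is not part of the original tutorial), the pieces of the grammar can be mixed freely, for example a wildcard measure `Axis` together with a concrete `Filter` clause:

```python
# Sketch: combine a wildcard Axis constrained to measures with a concrete Filter clause,
# then enumerate the matching visualizations over the cars dataframe loaded earlier.
anyMeasure = lux.Clause("?", data_model="measure")
usaOnly = lux.Clause(attribute="Origin", filter_op="=", value="USA")
VisList([anyMeasure, usaOnly], df)
```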
# A Python Tour of Data Science: Data Exploitation

[Michaël Defferrard](http://deff.ch), *PhD student*, [EPFL](http://epfl.ch) [LTS2](http://lts2.epfl.ch)

The data `X.npy` and `y.npy` can be obtained by running the [data acquisition and exploration demo](01_demo_acquisition_exploration.ipynb).

```python
# Cross-platform (Windows / Mac / Linux) paths.
import os.path
folder = os.path.join('..', 'data', 'credit_card_defaults')

import numpy as np
X = np.load(os.path.join(folder, 'X.npy'))
y = np.load(os.path.join(folder, 'y.npy'))
n, d = X.shape
print('The data is a {} with {} samples of dimensionality {}.'.format(type(X), n, d))
```

## 1 Pre-Processing

Back to [NumPy](http://www.numpy.org/), the fundamental package for scientific computing with Python. It provides multi-dimensional arrays, data types and linear algebra routines. Note that [scikit-learn](http://scikit-learn.org) provides many helpers for those tasks.

Pre-processing usually consists of:
1. Data type transformations. The data does not necessarily have the format the chosen learning algorithm expects. This was done in the previous notebook before doing statistics with `statsmodels`.
1. Data normalization. Some algorithms expect data to be centered and scaled. Some will train faster.
1. Data randomization. If the samples are presented in sequence, training will be faster if they are not correlated.
1. Train / test splitting. You may have to be careful here, e.g. not including future events in the training set.

```python
# Center and scale.
# Note: on a serious project, should be done after train / test split.
X = X.astype(float)
X -= X.mean(axis=0)
X /= X.std(axis=0)
```

```python
# Training and testing sets.
test_size = 10000
print('Split: {} testing and {} training samples'.format(test_size, y.size - test_size))

perm = np.random.permutation(y.size)
X_test = X[perm[:test_size]]
X_train = X[perm[test_size:]]
y_test = y[perm[:test_size]]
y_train = y[perm[test_size:]]
```

## 2 A first Predictive Model

The ingredients of a Machine Learning (ML) model are:
1. A predictive function, e.g. the linear transformation $f(x) = x^Tw + b$.
1. An error function, e.g. the least squares $E = \sum_{i=1}^n \left( f(x_i) - y_i \right)^2 = \| f(X) - y \|_2^2$.
1. An optional regularization, e.g. the Tikhonov regularization $R = \|w\|_2^2$.
1. Which makes up the loss / objective function $L = E + \alpha R$.

Our model has a sole hyper-parameter, $\alpha \geq 0$, which controls the shrinkage.

A Machine Learning (ML) problem can often be cast as a (convex or smooth) optimization problem whose objective is to find the parameters (here $w$ and $b$) that minimize the loss, e.g.
$$\hat{w}, \hat{b} = \operatorname*{arg min}_{w,b} L = \operatorname*{arg min}_{w,b} \| Xw + b - y \|_2^2 + \alpha \|w\|_2^2.$$

If the problem is convex and smooth, one can compute the gradients
$$\frac{\partial L}{\partial{w}} = 2 X^T (Xw+b-y) + 2\alpha w,$$
$$\frac{\partial L}{\partial{b}} = 2 \sum_{i=1}^n (x_i^Tw+b-y_i) = 2 \sum_{i=1}^n (x_i^Tw-y_i) + 2n \cdot b,$$
which can be used in a [gradient descent](https://en.wikipedia.org/wiki/Gradient_descent) scheme or to form closed-form solutions:
$$\frac{\partial L}{\partial{w}} = 0 \ \rightarrow \ 2 X^T X\hat{w} + 2\alpha \hat{w} = 2 X^T y - 2 X^T b \ \rightarrow \ \hat{w} = (X^T X + \alpha I)^{-1} X^T (y-b),$$
$$\frac{\partial L}{\partial{b}} = 0 \ \rightarrow \ 2n\hat{b} = 2\sum_{i=1}^n (y_i) - \underbrace{2\sum_{i=1}^n (x_i^Tw)}_{=0 \text{ if centered}} \ \rightarrow \ \hat{b} = \frac1n I^T y = \operatorname{mean}(y).$$

What if the resulting problem is non-smooth? See the [PyUNLocBoX](http://pyunlocbox.readthedocs.io), a convex optimization toolbox which implements [proximal splitting methods](https://en.wikipedia.org/wiki/Proximal_gradient_method).

### 2.1 Take a *symbolic* Derivative

Let's verify our manually derived gradients! [SymPy](http://www.sympy.org/) is our computer algebra system (CAS) of choice (like [Mathematica](https://www.wolfram.com/mathematica) or [Maple](https://www.maplesoft.com/products/Maple)).

```python
import sympy as sp
sp.init_printing()

X, y, w, b, a = sp.symbols('x y w b a')
L = (X*w + b - y)**2 + a*w**2
dLdw = sp.diff(L, w)
dLdb = sp.diff(L, b)

from IPython.display import display
display(L)
display(dLdw)
display(dLdb)
```

### 2.2 Build the Classifier

Relying on the derived equations, we can implement our model relying only on the [NumPy](http://www.numpy.org/) linear algebra capabilities (really wrappers to [BLAS](http://www.netlib.org/blas) / [LAPACK](http://www.netlib.org/lapack) implementations such as [ATLAS](http://math-atlas.sourceforge.net), [OpenBLAS](http://www.openblas.net) or [MKL](https://software.intel.com/intel-mkl)).

A ML model is best represented as a class, with hyper-parameters and parameters stored as attributes, and is composed of two essential methods:
1. `y_pred = model.predict(X_test)`: return the predictions $y$ given the features $X$.
1. `model.fit(X_train, y_train)`: learn the model parameters such as to predict $y$ given $X$.

```python
class RidgeRegression(object):
    """Our ML model."""

    def __init__(self, alpha=0):
        "The class' constructor. Initialize the hyper-parameters."
        self.a = alpha

    def predict(self, X):
        """Return the predicted class given the features."""
        return np.sign(X.dot(self.w) + self.b)

    def fit(self, X, y):
        """Learn the model's parameters given the training data, the closed-form way."""
        n, d = X.shape
        self.b = np.mean(y)
        Ainv = np.linalg.inv(X.T.dot(X) + self.a * np.identity(d))
        self.w = Ainv.dot(X.T).dot(y - self.b)

    def loss(self, X, y, w=None, b=None):
        """Return the current loss. This method is not strictly necessary, but it
        provides information on the convergence of the learning process."""
        w = self.w if w is None else w  # The ternary conditional operator
        b = self.b if b is None else b  # makes those tests concise.
        import autograd.numpy as np  # See below for autograd.
        return np.linalg.norm(np.dot(X, w) + b - y)**2 + self.a * np.linalg.norm(w, 2)**2
```

Now that our model can learn its parameters and predict targets, it's time to evaluate it. Our metric for binary classification is the accuracy, which gives the percentage of correctly classified test samples.
Depending on the application, the time spent for inference or training might also be important.

```python
def accuracy(y_pred, y_true):
    """Our evaluation metric, the classification accuracy."""
    return np.sum(y_pred == y_true) / y_true.size

def evaluate(model):
    """Helper function to instantiate, train and evaluate the model.
    It returns the classification accuracy, the loss and the execution time."""
    import time
    t = time.process_time()
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    acc = accuracy(y_pred, y_test)
    loss = model.loss(X_test, y_test)
    t = time.process_time() - t
    print('accuracy: {:.2f}%, loss: {:.2f}, time: {:.2f}ms'.format(acc*100, loss, t*1000))
    return model

alpha = 1e-2*n
model = RidgeRegression(alpha)
evaluate(model)

models = []
models.append(model)
```

Okay, we got around 80% accuracy with such a simple model! Inference and training time look good.

For those of you who don't know about numerical mathematics: solving a linear system of equations by inverting a matrix can be numerically unstable. Let's do it the proper way and use a proper solver.

```python
def fit_lapack(self, X, y):
    """Better way (numerical stability): solve the linear system with LAPACK."""
    n, d = X.shape
    self.b = np.mean(y)
    A = X.T.dot(X) + self.a * np.identity(d)
    b = X.T.dot(y - self.b)
    self.w = np.linalg.solve(A, b)

# Let's monkey patch our object (Python is a dynamic language).
RidgeRegression.fit = fit_lapack

# Yeah just to be sure.
models.append(evaluate(RidgeRegression(alpha)))
assert np.allclose(models[-1].w, models[0].w)
```

### 2.3 Learning as Gradient Descent

Descending the gradient of our objective will lead us to a local minimum. If the objective is convex, that minimum will be global. Let's implement the gradient computed above and a simple gradient descent algorithm
$$w^{(t+1)} = w^{(t)} - \gamma \frac{\partial L}{\partial w}$$
where $\gamma$ is the learning rate, another hyper-parameter.

```python
class RidgeRegressionGradient(RidgeRegression):
    """This model inherits from `RidgeRegression`. We overload the constructor, add a gradient function
    and replace the learning algorithm, but don't touch the prediction and loss functions."""

    def __init__(self, alpha=0, rate=0.1, niter=1000):
        """Here are new hyper-parameters: the learning rate and the number of iterations."""
        super().__init__(alpha)
        self.rate = rate
        self.niter = niter

    def grad(self, X, y, w):
        A = X.dot(w) + self.b - y
        return 2 * X.T.dot(A) + 2 * self.a * w

    def fit(self, X, y):
        n, d = X.shape
        self.b = np.mean(y)
        self.w = np.random.normal(size=d)
        for i in range(self.niter):
            self.w -= self.rate * self.grad(X, y, self.w)
            # Show convergence.
            if i % (self.niter//10) == 0:
                print('loss at iteration {}: {:.2f}'.format(i, self.loss(X, y)))

models.append(evaluate(RidgeRegressionGradient(alpha, 1e-6)))
```

Tired of deriving gradients by hand? Welcome [autograd](https://github.com/HIPS/autograd/), our tool of choice for [automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation). Alternatives are [Theano](http://deeplearning.net/software/theano/) and [TensorFlow](https://www.tensorflow.org/).
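Before wiring autograd into the model class below, here is a minimal, self-contained illustration of its API (a sketch with an arbitrary function; it is not part of the original notebook):

```python
# Sketch: autograd differentiates ordinary numpy-style code.
# Wrap numpy via autograd.numpy, then ask autograd for the gradient of a scalar-valued function.
import autograd.numpy as anp
from autograd import grad

def square_norm(w):
    return anp.sum(w**2)

grad_fn = grad(square_norm)                    # d/dw sum(w^2) = 2w
print(grad_fn(anp.array([1.0, 2.0, 3.0])))     # [2. 4. 6.]
```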
```python
class RidgeRegressionAutograd(RidgeRegressionGradient):
    """Here we derive the gradient during construction and update the gradient function."""

    def __init__(self, *args):
        super().__init__(*args)
        from autograd import grad
        self.grad = grad(self.loss, argnum=2)

models.append(evaluate(RidgeRegressionAutograd(alpha, 1e-6)))
```

### 2.4 Learning as generic Optimization

Sometimes we don't want to implement the optimization by hand and would prefer a generic optimization algorithm. Let's make use of [SciPy](https://www.scipy.org/), which provides high-level algorithms for, e.g. [optimization](http://docs.scipy.org/doc/scipy/reference/optimize.html), [statistics](http://docs.scipy.org/doc/scipy/reference/stats.html), [interpolation](http://docs.scipy.org/doc/scipy/reference/tutorial/interpolate.html), [signal processing](http://docs.scipy.org/doc/scipy/reference/tutorial/signal.html), [sparse matrices](http://docs.scipy.org/doc/scipy/reference/sparse.html), [advanced linear algebra](http://docs.scipy.org/doc/scipy/reference/tutorial/linalg.html).

```python
class RidgeRegressionOptimize(RidgeRegressionGradient):

    def __init__(self, alpha=0, method=None):
        """Here's a new hyper-parameter: the optimization algorithm."""
        super().__init__(alpha)
        self.method = method

    def fit(self, X, y):
        """Fitted with a general purpose optimization algorithm."""
        n, d = X.shape
        self.b = np.mean(y)

        # Objective and gradient w.r.t. the variable to be optimized.
        f = lambda w: self.loss(X, y, w)
        jac = lambda w: self.grad(X, y, w)

        # Solve the problem !
        from scipy.optimize import minimize
        w0 = np.random.normal(size=d)
        res = minimize(f, w0, method=self.method, jac=jac)
        self.w = res.x

models.append(evaluate(RidgeRegressionOptimize(alpha, method='Nelder-Mead')))
models.append(evaluate(RidgeRegressionOptimize(alpha, method='BFGS')))
```

Accuracy may be lower (depending on the random initialization) as the optimization may not have converged to the global minimum. Training time is however much longer! Especially for gradient-less optimizers such as Nelder-Mead.

## 3 More interactivity

Interlude: the interactivity of Jupyter notebooks can be pushed further with [IPython widgets](https://ipywidgets.readthedocs.io). Below, we construct a slider for the model hyper-parameter $\alpha$, which will train the model and print its performance at each change of the value. Handy when exploring the effects of hyper-parameters! Although it's less useful if the required computations are long.

```python
import ipywidgets
from IPython.display import clear_output

slider = ipywidgets.widgets.FloatSlider(
    value=-2,
    min=-4,
    max=2,
    step=1,
    description='log(alpha) / n',
)

def handle(change):
    """Handler for value change: fit model and print performance."""
    value = change['new']
    alpha = np.power(10, value) * n
    clear_output()
    print('alpha = {:.2e}'.format(alpha))
    evaluate(RidgeRegression(alpha))

slider.observe(handle, names='value')
display(slider)

slider.value = 1  # As if someone moved the slider.
```

## 4 Machine Learning made easier

Tired of writing algorithms? Try [scikit-learn](http://scikit-learn.org), which provides many ML algorithms and related tools, e.g. metrics, cross-validation, model selection, feature extraction, pre-processing, for [predictive modeling](https://en.wikipedia.org/wiki/Predictive_modelling).

```python
from sklearn import linear_model, metrics

# The previously developed model: Ridge Regression.
model = linear_model.RidgeClassifier(alpha)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
models.append(model)

# Evaluate the predictions with a metric: the classification accuracy.
acc = metrics.accuracy_score(y_test, y_pred)
print('accuracy: {:.2f}%'.format(acc*100))

# It does indeed learn the same parameters.
assert np.allclose(models[-1].coef_, models[0].w, rtol=1e-1)
```

```python
# Let's try another model !
models.append(linear_model.LogisticRegression())
models[-1].fit(X_train, y_train)
acc = models[-1].score(X_test, y_test)
print('accuracy: {:.2f}%'.format(acc*100))
```

## 5 Deep Learning (DL)

Of course! We got two low-level Python libraries: (1) [TensorFlow](https://www.tensorflow.org/) and (2) [Theano](http://deeplearning.net/software/theano/). Both of them treat data as tensors and construct a computational graph ([dataflow paradigm](https://en.wikipedia.org/wiki/Dataflow_programming)), composed of any mathematical expressions, that gets evaluated on CPUs or GPUs. Theano is the pioneer and features an optimizing compiler which will turn the computational graph into efficient code. TensorFlow has a cleaner API (no need to define expressions as strings) and does not require a compilation step (which is painful when developing models).

While you'll only use Theano / TensorFlow to develop DL models, these are the higher-level libraries you'll use to define and test DL architectures on your problem:
* [Keras](https://keras.io/): TensorFlow & Theano backends
* [Lasagne](http://lasagne.readthedocs.io): Theano backend
* [nolearn](https://github.com/dnouri/nolearn): sklearn-like abstraction of Lasagne
* [Blocks](http://blocks.readthedocs.io): Theano backend
* [TFLearn](http://tflearn.org): TensorFlow backend

```python
import os
os.environ['KERAS_BACKEND'] = 'theano'
import keras

class NeuralNet(object):

    def __init__(self):
        """Define Neural Network architecture."""
        self.model = keras.models.Sequential()
        self.model.add(keras.layers.Dense(output_dim=46, input_dim=23, activation='relu'))
        self.model.add(keras.layers.Dense(output_dim=1, activation='sigmoid'))
        self.model.compile(loss='binary_crossentropy', optimizer='sgd', metrics=['accuracy'])

    def fit(self, X, y):
        y = y / 2 + 0.5  # [-1,1] -> [0,1]
        self.model.fit(X, y, nb_epoch=5, batch_size=32)

    def predict(self, X):
        classes = self.model.predict_classes(X, batch_size=32)
        return classes[:,0] * 2 - 1

models.append(NeuralNet())
models[-1].fit(X_train, y_train)
loss_acc = models[-1].model.evaluate(X_test, y_test/2+0.5, batch_size=32)
print('\n\nTesting set: {}'.format(loss_acc))
```

## 6 Evaluation

Now that we tried several predictive models, it is time to evaluate them with our chosen metrics and choose the one best suited to our particular problem.

Let's plot the *classification accuracy* and the *prediction time* for each classifier with [matplotlib](http://matplotlib.org), the go-to 2D plotting library for scientific Python. Its API is similar to MATLAB's.

Result: The NeuralNet gives the best accuracy, by a small margin over the much simpler logistic regression, but is the slowest method. Which to choose? Again, it depends on your priorities.

```python
from matplotlib import pyplot as plt
plt.style.use('ggplot')
%matplotlib inline  # Or notebook for interaction.

names, acc, times = [], [], []
for model in models:
    import time
    t = time.process_time()
    y_pred = model.predict(X_test)
    times.append((time.process_time()-t) * 1000)
    acc.append(accuracy(y_pred, y_test) * 100)
    names.append(type(model).__name__)

plt.figure(figsize=(15,5))

plt.subplot(121)
plt.plot(acc, '.', markersize=20)
plt.title('Accuracy [%]')
plt.xticks(range(len(names)), names, rotation=90)

plt.subplot(122)
plt.plot(times, '.', markersize=20)
plt.title('Prediction time [ms]')
plt.xticks(range(len(names)), names, rotation=90)

plt.show()
```
# Videos and Exercises for Session 11: Regression and Regularization

In this combined teaching module and exercise set, you will learn about linear regression models in a machine learning perspective. We will see how overfitting can arise and how we can tackle it with a modification of the linear regression model.

The structure of this notebook is as follows:
1. Linear Regression Mechanics
2. Overfitting and Underfitting in Linear Regression
    - Exploring Overfitting in Linear Regression
    - A Cure for Overfitting in Linear Regression
3. Modelling Houseprices (Exercise)

## Packages

First, we need to import our standard stuff. Notice that we are not interested in seeing the convergence warnings from scikit-learn, so we suppress them for now.

```python
import warnings
from sklearn.exceptions import ConvergenceWarning
warnings.filterwarnings(action='ignore', category=ConvergenceWarning)

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
%matplotlib inline
```

# Part 1: Linear Regression Mechanics

## Implementing and evaluating the gradient descent

Normally we use OLS to estimate linear regression models, but this is only one way of solving the least squares problem (minimizing the sum of squared errors). In the video below we show how to implement gradient descent and compare it along with other approximate solutions to OLS. You may find PML pp. 310-312, 319-324 useful as background reading.

```python
from IPython.display import YouTubeVideo
YouTubeVideo('pl3ep6qRaZw', width=640, height=360)
```

Before continuing, we want to provide some brief intuition for how taking steps in the opposite direction of the gradient of a loss function helps us find the weights that minimize the loss. In the video, we encountered the following derivation of how weights are updated with gradient descent in a regression-type problem:

\begin{align}\frac{\partial SSE}{\partial\hat{\textbf{w}}} & =-\textbf{X}^{T}\textbf{e}\qquad\text{(the gradient)}\\
\Delta\hat{\textbf{w}} & =-\eta\cdot\frac{\partial SSE}{\partial\hat{\textbf{w}}}\qquad\text{(gradient descent)}\\
& =\eta\cdot\textbf{X}^{T}\textbf{e}\\
& =\eta\cdot\textbf{X}^{T}(\textbf{y}-\textbf{X}\hat{\textbf{w}})
\end{align}

You may ask: Why do we take steps in the opposite direction of the gradient? Consider the illustration of a one-dimensional problem below. Here $w$ is our weight, and $J(w)$ is the loss function that we want to minimize by choosing an appropriate weight. In the example, we started out by guessing a too _high_ value for the weight. At this point, the loss function is _increasing_ in the size of the weight. So what do we do? We take a step in the opposite direction of the gradient and _decrease_ the size of the weight. If the gradient is steep, we take a big step (we know that we are probably relatively far away from the solution), and if the gradient is relatively flat, we take a small step (we might be close to the solution).

<center></center>

We continue straight to an exercise where you are to implement a new estimator that we code up from scratch. We solve the numerical optimization using the gradient descent algorithm. This will be very similar to what we just saw in the video, but we will pay a bit more attention to each step in the process. Using our algorithm, we will fit it to some data, and compare our own solution to the standard solution from `sklearn`.

> **Ex. 11.1.0**: Import the dataset `tips` from `seaborn`.
*Hint*: use the `load_dataset` method in seaborn

```python
### BEGIN SOLUTION
tips = sns.load_dataset("tips")
### END SOLUTION
```

> **Ex. 11.1.1**: Convert non-numeric variables to dummy variables for each category (remember to leave one column out for each categorical variable, so you have a reference). Restructure the data so we get a dataset `y` containing the variable tip, and a dataset `X` containing the features.

> *Hint*: You might want to use the `get_dummies` method in pandas, with the `drop_first = True` parameter.

```python
### BEGIN SOLUTION
tips_num = pd.get_dummies(tips, drop_first=True)
X = tips_num.drop('tip', axis = 1)
y = tips_num['tip']
### END SOLUTION
```

> **Ex. 11.1.2**: Divide the features and target into test and train data. Make the split 50 pct. of each. The split data should be called `X_train`, `X_test`, `y_train`, `y_test`.

> *Hint*: You may use `train_test_split` in `sklearn.model_selection`.

```python
### BEGIN SOLUTION
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,y, test_size=.5, random_state=161193)
### END SOLUTION
```

> **Ex. 11.1.3**: Normalize your features by converting to zero mean and one std. deviation.

> *Hint*: Take a look at `StandardScaler` in `sklearn.preprocessing`. If in doubt about which distribution to scale, you may read [this post](https://stats.stackexchange.com/questions/174823/how-to-apply-standardization-normalization-to-train-and-testset-if-prediction-i).

```python
### BEGIN SOLUTION
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
### END SOLUTION
```

> **Ex. 11.1.4**: Make a function called `compute_error` to compute the prediction errors given input target `y_`, input features `X_` and input weights `w_`. You should use matrix multiplication.
>
> *Hint:* You can use the net-input fct. from yesterday.

```python
### BEGIN SOLUTION
def net_input(X_, w_):
    '''Computes the matrix product between X and w. Note that X is assumed
    not to contain a bias/intercept column.'''
    return np.dot(X_, w_[1:]) + w_[0]  # We have to add w_[0] separately because this is the constant term.
                                       # We could also have added a constant column of 1's to X_ and multiplied it by all of w_.

def compute_error(y_, X_, w_):
    return y_ - net_input(X_, w_)
### END SOLUTION
```

> **Ex. 11.1.5**: Make a function to update the weights given input target `y_`, input features `X_` and input weights `w_` as well as learning rate, $\eta$, i.e. greek `eta`. You should use matrix multiplication.

```python
# INCLUDED IN ASSIGNMENT 2
```

> **Ex. 11.1.6**: Use the code below to initialize weights `w` at zero given feature set `X`. Notice how we include an extra weight that includes the bias term. Set the learning rate `eta` to 0.001. Make a loop with 50 iterations where you iteratively apply your weight updating function.

>```python
w = np.zeros(1+X_train.shape[1])
```

```python
# INCLUDED IN ASSIGNMENT 2
```

> **Ex. 11.1.7**: Make a function to compute the mean squared error. Alter the loop so it makes 100 iterations and computes the MSE for test and train after each iteration, plot these in one figure.
> Hint: You can use the following code to check that your model works:
>```python
from sklearn.linear_model import LinearRegression
reg = LinearRegression()
reg.fit(X_train, y_train)
assert((w[1:] - reg.coef_).sum() < 0.01)
```

```python
### BEGIN SOLUTION
def MSE(y_,X_,w_):
    return (compute_error(y_, X_, w_)**2).mean()

w = np.zeros(1+X_train.shape[1])

MSE_train = [MSE(y_train, X_train, w)]
MSE_test = [MSE(y_test, X_test, w)]

for i in range(100):
    w = update_weights(y_train, X_train, w)
    MSE_train.append(MSE(y_train, X_train, w))
    MSE_test.append(MSE(y_test, X_test, w))
### END SOLUTION

pd.Series(MSE_train).plot()
pd.Series(MSE_test).plot()
```

The following bonus exercises are for those who have completed all other exercises until now and have a deep motivation for learning more.

> **Ex. 11.1.8 (BONUS)**: Implement your linear regression model as a class.

> A solution is found on p. 320 in Python for Machine Learning.

# Part 2: Overfitting and Underfitting in Linear Regression

## Exploring Overfitting in Linear Regression

How does overfitting manifest itself in linear regression? In the video below we simulate what happens as we make a better and better Taylor approximation, i.e. we estimate a polynomial of higher and higher order. Two issues arise simultaneously - one is related to the number of parameters and the other to the size of the parameters. You may find PML pp. 334-339 useful as background reading.

```python
YouTubeVideo('HbeTpK-2oeU', width=640, height=360)
```

## A Cure for Overfitting in Linear Regression

How do we fix the two issues of excessively large weights/coefficients and too many spurious solutions? The video below provides a solution by directly incorporating these issues into the optimization problem. You may find PML pp. 73-76, 123-136, 332-334 useful as background reading.

```python
YouTubeVideo('r6a8WFm9jAI', width=640, height=360)
```

Above we tackled overfitting, but what about ***underfitting***? The video below shows how to address underfitting and also zooms in on some important details about regularization. You may find PML pp. 73-76, 123-136, 332-334 useful as background reading.

```python
YouTubeVideo('IWBtYT1KI_Q', width=640, height=360)
```

> **Ex. 11.2.1 (BONUS)**: Is it possible to add a penalty to our linear model above and solve this Lasso model with gradient descent? Is there a simple fix?
>
> *Hint:* Gradient descent essentially relies on a differentiable loss function (read more [here](https://stats.stackexchange.com/questions/177800/why-proximal-gradient-descent-instead-of-plain-subgradient-methods-for-lasso))

```python
### BEGIN SOLUTION
ANSWER: No, we cannot exactly solve for the Lasso with gradient descent. However, we can make an
approximate solution which is pretty close and quite intuitive - see good explanation here:
https://stats.stackexchange.com/questions/177800/why-proximal-gradient-descent-instead-of-plain-subgradient-methods-for-lasso.
### END SOLUTION
```

# Part 3: Modelling Houseprices

In this example, we will try to predict house prices using a lot of variables (or features as they are called in Machine Learning). We are going to work with Kaggle's dataset on house prices, see information [here](https://www.kaggle.com/c/house-prices-advanced-regression-techniques). Kaggle is an organization that hosts competitions in building predictive models.

> **Ex. 11.3.0:** Load the california housing data with scikit-learn using the code below. Now:
> 1. Inspect *cal_house*. How are the data stored?
> 2. Create a pandas DataFrame called *X*, using `data`. Name the columns using `feature_names`.
> 3. Create a pandas Series called *y* using `target`.
> 4. Make a train test split of equal size.

```python
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
cal_house = fetch_california_housing()

### BEGIN SOLUTION
X = pd.DataFrame(data=cal_house['data'], columns=cal_house['feature_names'])\
        .iloc[:,:-2]
y = cal_house['target']

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.5, random_state=1)
X_train.describe()
### END SOLUTION
```

> **Ex.11.3.1**: Generate interactions between all features to third degree (make sure you **exclude** the bias/intercept term). How many variables are there? Will OLS fail? After making interactions, rescale the features to have zero mean, unit std. deviation. Should you use the distribution of the training data to rescale the test data?

> *Hint 1*: Try importing `PolynomialFeatures` from `sklearn.preprocessing`

> *Hint 2*: If in doubt about which distribution to scale, you may read [this post](https://stats.stackexchange.com/questions/174823/how-to-apply-standardization-normalization-to-train-and-testset-if-prediction-i).

```python
# INCLUDED IN ASSIGNMENT 2
```

> **Ex.11.3.2**: Estimate the Lasso model on the rescaled train data set, using values of $\lambda$ in the range from $10^{-4}$ to $10^4$. For each $\lambda$ calculate and save the Root Mean Squared Error (RMSE) for the rescaled test and train data. Take a look at the fitted coefficients for different sizes of $\lambda$. What happens when $\lambda$ increases? Why?

> *Hint 1*: use `logspace` in numpy to create the range.

> *Hint 2*: read about the `coef_` feature [here](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Lasso.html#sklearn.linear_model.Lasso).

```python
# INCLUDED IN ASSIGNMENT 2
```

> **Ex.11.3.3**: Make a plot with $\lambda$ on the x-axis and the RMSE measures on the y-axis. What happens to RMSE for train and test data as $\lambda$ increases? The x-axis should be log scaled. Which one are we interested in minimizing?

> Bonus: Can you find the lambda that gives the lowest test RMSE score?

```python
# INCLUDED IN ASSIGNMENT 2
```
# Exercise 1 2021/NJC/Functions/Q2 H2 Mathematics

For some unknown constants $a$ and $k$, the functions $f$, $g$ and $h$ are defined by
$$ f :x\mapsto\left(2-x\right)\left(4+x\right),\,x<2,$$
$$ g :x\mapsto\left(2-x\right)\left(4+x\right),\,x\leq k,$$
$$ h :x\mapsto2^{x^{3}},\,x\in\left(-\infty,a\right).$$

- (i) Find the range of $f$, and show that $f^{-1}$ does not exist.
- (ii) Find the greatest value of $k$ such that $g^{-1}$ exists. Using this greatest value of $k$, define $g^{-1}$ in a similar form.
- (iii) Find the range of values of $a$ such that the composite function $fh$ exists.

## Answer

- (i) The graph of $f$ looks like <center> </center> The maximum turning point is at $(-1,9)$. Thus, $R_f=(-\infty,9]$. The horizontal line $y=1$ passes through the graph twice; thus, the function cannot be one-to-one and consequently cannot have an inverse.
- (ii) From the graph drawn earlier, the greatest value of $k$ for $g^{-1}$ to exist is $-1$. With this value of $k$, let $y=(2-x)(4+x)$; we make $x$ the subject.

```python
from sympy import *
from h2_math import *

# Denote that x is a variable in the expression
x = symbols('x')

# Defining a function with name f
f = make_fn('(2-x)*(4+x)')

print_inverse(f)
```

$\displaystyle x=- \sqrt{9 - y} - 1$

$\displaystyle x=\sqrt{9 - y} - 1$

Since $x\leq k=-1$, $g^{-1}:x\mapsto -1-\sqrt{9-x},\,x\leq 9$.

- (iii) For $fh$ to exist, we need $R_h\subseteq D_f$. Since $h$ is increasing on $(-\infty,a)$, $R_h=\left(0,2^{a^{3}}\right)$, and $D_f=(-\infty,2)$, so we require $2^{a^{3}}\leq2$, i.e. $a^{3}\leq 1$, giving $a\leq 1$.

# Exercise 2 2021/NJC/Functions/Q2 H2 Mathematics

The functions $f$ and $g$ are defined by
$$ f:x\mapsto e^{\left|3-x\right|},\,x\in\mathbb{R},$$
$$ g:x\mapsto\left(x-1\right)^{2}+a,\,x\in\mathbb{R}\,\text{and }a\,\text{is a positive constant.}$$

- (i) Sketch the graph of $y = f(x)$, and show that $f^{-1}$ does not exist.
- (ii) The function $f$ has an inverse if its domain is restricted to $x\geq b$. State the smallest possible value of $b$ and define, in similar form, the inverse function $f^{-1}$ corresponding to this domain for $f$.
- (iii) Using the value of $b$ in (ii), find the smallest possible value of $a$ such that the composite function $f^{-1}g$ exists. State the range of $f^{-1}g$ for this value of $a$.

## Answer

(i) We have that <center> </center> The horizontal line $y=2$ passes through the graph twice; thus, the function cannot be one-to-one and consequently cannot have an inverse.

(ii) From the graph, we see that the smallest possible value of $b$ is $3$ (the $x$-coordinate of the minimum point of the graph).

```python
from sympy import *
from h2_math import *

# Denote that x is a variable in the expression
x = symbols('x')

# Defining a function with name f
f = make_fn('(exp(1))**(x-3)')

print_inverse(f)
```

$\displaystyle x=\log{\left(y \right)} + 3$

Thus, $f^{-1}(x)=\ln(x)+3$, with $D_{f^{-1}}=R_f=[1,\infty)$.

- (iii) $f^{-1}g$ exists if $[a,\infty)=R_g \subseteq D_{f^{-1}}=R_f=[1,\infty)$. Therefore, the smallest possible value of $a$ is 1. If $a=1$, $R_g=[1,\infty)$ and therefore, $R_{f^{-1}g}=[3,\infty)$ based on the graph drawn in part (i).

# Exercise 3 2021/NJC/Functions/Q3 H2 Mathematics

It is given that $\lambda$ is an unknown constant. The functions $f$ and $g$ are defined as follows:
$$ f:x\mapsto\frac{5-x}{1-x},\,x\in\mathbb{R},x\neq1,$$
$$ g:x\mapsto2x^{2}+4x+\lambda,\,x\in\mathbb{R},x>-2.$$

- (a) (i) Explain why $f^{-1}$ exists, and show that $f^{-1}(x)=f(x)$.
- (a) (ii) Hence, or otherwise, evaluate $f^{51}(4)$, where $f^{n}(x)$ denotes $$ \underset{n\,\text{times}}{\underbrace{fff\cdots f}}(x).$$
- (b) (i) Find the range of values of $\lambda$ such that $fg$ exists.
- (b) (ii) Given that $fg$ exists, find the range of $fg$ in terms of $\lambda$.

## Answer

- (a) (i) We observe that the graph of $f(x)$ is a one-to-one function, as any horizontal line $y=k$, $k\in \mathbb{R}$, cuts the graph at most once. <center> </center> As such, the inverse function exists. Let $y=\frac{5-x}{1-x}$; we make $x$ the subject.

```python
from sympy import *
from h2_math import *

# Denote that x is a variable in the expression
x = symbols('x')

# Defining a function with name f
f = make_fn('(5-x)/(1-x)')

print_inverse(f)
```

$\displaystyle x=\frac{y - 5}{y - 1}$

Thus, $f^{-1}(x)=\frac{x-5}{x-1}=f(x)$.

- (a) (ii) From part (i), we see that $f^{2}(x)=x$, which further implies that $f^{51}(x) =f^{50}(f(x))=f(x)=\frac{5-x}{1-x}$. As such,
$$f^{51}(4)=\frac{5-4}{1-4}=-\frac{1}{3}.$$

We verify our solution by bruteforcing our way to get the composite function.

```python
from sympy import *
from h2_math import *

x = symbols('x')

# Defining a function with name f
f = make_fn('(5-x)/(1-x)')

print('Checking the composition of function')
display(Math(f'f^{{51}}(x)={ latex(self_compose(f,51))}'))

print('Evaluating at x=4')
# We use .subs() method to do the substitution and evaluate them
display(Math(f'f^{{51}}(4)={ latex(self_compose(f,51).subs(x,4))}'))
```

Checking the composition of function

$\displaystyle f^{51}(x)=\frac{x - 5}{x - 1}$

Evaluating at x=4

$\displaystyle f^{51}(4)=- \frac{1}{3}$

- (b) (i) $fg$ exists when $R_g \subseteq D_f$. First, we note that by completing the square, $g(x)=2\left(x+1\right)^{2}+\left(\lambda-2\right)$, with minimum turning point at $x=-1>-2$. <center> </center> As such, $R_g=[\lambda -2,\infty)$. Thus, for $R_g$ to be a subset of $D_f$, we would then require
\begin{align*}
\lambda - 2 &> 1\\
\lambda &>3.
\end{align*}

- (b) (ii) We note that for $x>1$, if $x_0<x_1$ then $f(x_0)<f(x_1)$, and $f(x)<1$, since $y=1$ is a horizontal asymptote which $f$ approaches from below. Therefore,
\begin{align*}
R_{fg} &= [f(\lambda - 2),1)\\
&= \left[\frac{\lambda - 7}{\lambda -3},1\right).
\end{align*}

# Exercise 4 2021/NJC/Functions/Q4 H2 Mathematics

The function $f$ is defined by
$$f:x\mapsto\frac{ax}{bx-a},\,x\in\mathbb{R},x\neq\frac{a}{b},$$
where $a$ and $b$ are non-zero constants.

- (a) (i) Determine if $f^{-1}$ exists. If yes, find $f^{-1}(x)$.
- (a) (ii) Hence, or otherwise, find the rule of the composite function $f^2(x)$ and state the range of $f^2$.
- (a) (iii) Solve the equation $f^{-1}(x)=x$.
- (b) The function $g$ is defined by $$ g:x\mapsto\frac{1}{x},\text{ for all real non-zero }x.$$ State whether the composite function $fg$ exists, justifying your answer.

## Answer

- (a) (i) We observe that the graph of $f(x)$ is a one-to-one function, as any horizontal line $y=k$, $k\in \mathbb{R}$, cuts the graph at most once. <center> </center> As such, the inverse function exists. Let $y=\frac{ax}{bx-a}$; we make $x$ the subject.

```python
# Modules for algebraic manipulation and pretty printing
from sympy import *
from h2_math import *

# Denote that x is a variable in the expression
x = symbols('x')

# Since we have more symbols beyond x in our expression,
# we also need to represent the real numbers a and b symbolically
a, b = symbols('a b', real=True)

# Defining a function with name f
class f(Function):
    @classmethod
    def eval(cls, x):
        # Return the rule of the function
        return (a*x)/(b*x-a)

print_inverse(f)
```

$\displaystyle x=\frac{a y}{- a + b y}$

Thus, $f^{-1}(x)=\frac{a x}{- a + b x}$.

From the graph, we see that $R_f = \mathbb{R}\backslash\left\{\frac{a}{b}\right\}$ (the horizontal asymptote is $y=\frac{a}{b}$).
Therefore, $D_{f^{-1}} = R_f = \mathbb{R}\backslash\left\{\frac{a}{b}\right\}$ as well.

- (a) (ii) The smart way is to notice that $f(x)=f^{-1}(x)$ and as such,
\begin{align*}
f(x) &= f^{-1}(x)\\
f(f(x)) &= f(f^{-1}(x))\\
f^2(x) &= x.
\end{align*}

Alternatively, we can brute force it by evaluating the rule of the composite function via the definition given.

```python
# Modules for algebraic manipulation and pretty printing
from sympy import *
from h2_math import *

# Denote that x is a variable in the expression
x = symbols('x')

# We also need to represent the real numbers a and b symbolically
a, b = symbols('a b', real=True)

# Defining a function with name f
class f(Function):
    @classmethod
    def eval(cls, x):
        # Return the rule of the function
        return (a*x)/(b*x-a)

print('Checking the composition of function')
display(Math(f'f^2(x)={ latex(self_compose(f,2,simp=False))}'))

print('After simplification')
display(Math(f'f^2(x)={ latex(self_compose(f,2))}'))
```

Checking the composition of function

$\displaystyle f^2(x)=\frac{a^{2} x}{\left(- a + b x\right) \left(\frac{a b x}{- a + b x} - a\right)}$

After simplification

$\displaystyle f^2(x)=x$

Since $f^2(x)=x$ and $D_{f^{2}}=D_f=\mathbb{R}\backslash\left\{ \frac{a}{b}\right\}$, we have $R_{f^{2}}=\mathbb{R}\backslash\left\{ \frac{a}{b}\right\}$ as well.

- (a) (iii) We have that

```python
from sympy import *
from h2_math import *

# Denote that x is a variable in the expression
x = symbols('x')

# We also need to represent the real numbers a and b symbolically
a, b = symbols('a b', real=True)

# Defining a function with name f
class f(Function):
    @classmethod
    def eval(cls, x):
        # Return the rule of the function
        return (a*x)/(b*x-a)

for x_vals in solve(f(x)-x,x):
    display(Math(f'x={ latex(x_vals) }.'))
```

$\displaystyle x=0.$

$\displaystyle x=\frac{2 a}{b}.$

- (b) Observe that the graph of $g$ looks like <center> </center> As such, $R_g=\mathbb{R}\backslash\left\{0\right\}$. Since $D_f=\mathbb{R}\backslash\left\{ \frac{a}{b}\right\}$, we have that $R_g \nsubseteq D_f$, as $\frac{a}{b}\in R_g$ but $\frac{a}{b} \notin D_f$. Thus, the function $fg$ does not exist.

# Exercise 5 2021/NJC/Functions/Q4 H2 Mathematics

It is given that
$$f\left(x\right)=\left|\frac{x-2a}{2}\right|\,\text{for}\,0\leq x<4a,\,\text{where }a\,\text{is a positive constant,}$$
and that $f(x)=f(x+4a)$ for all real values of $x$.

- **(i)** Find the values of $f(-5a)$ and $f(8a)$ in terms of $a$.
- **(ii)** Sketch the graph of $y=f(x)$ for $-6a\leq x\leq 10a$.
- **(iii)** Hence or otherwise, find the exact value of ${\displaystyle \int_{-6a}^{10a}f\left(x\right)\,dx}$ in terms of $a$.

### Answer

- (i) Since $f(x) = f(x+4a)$ and $a$ is a positive constant, we have that
$$\begin{align*}
f(-5a) &= f(-a) \\
&= f(3a) \\
&= \left|\frac{3a-2a}{2}\right| \\
&=\left|\frac{a}{2}\right| \\
&=\frac{a}{2} \\
f(8a) &= f(4a) \\
&= f(0) \\
&= \left|\frac{0-2a}{2}\right| \\
&=\left|\frac{-2a}{2}\right| \\
&=\left|a\right| \\
&=a
\end{align*}$$

- (ii) The graph looks like <center> </center>
- (iii) From the graph above, we can easily see that ${\displaystyle \int_{-6a}^{10a}f\left(x\right)\,dx}$ is
$$\begin{align}
4\left(\frac{1}{2}\left(2a-\left(-2a\right)\right)\left(a\right)\right) = 8a^{2}.
\end{align}$$

# Exercise 6 2021/NJC/Functions/Q4 H2 Mathematics

The function $f$ is defined as
$$ f:x\mapsto x^2-\lambda x+3, x\in \mathbb{R},$$
where $\lambda$ is a non-zero constant.

- **(i)** Find the range of $f$, giving your answer in terms of $\lambda$.

The function $f$ has an inverse function if its domain is restricted to $x\leq k$.
- **(ii)** State the greatest value of $k$ in terms of $\lambda$.

Using the result in **(ii)** for the case $\lambda=4$,

- **(iii)** sketch, on the same diagram, the graphs of $y=f(x)$ and $y=f^{-1}(x)$, illustrating clearly the relationship between the two graphs, and labelling the axial intercept(s), if any.
- **(iv)** Find the exact solution of $f(x)=f^{-1}(x)$.

- (i) By completing the square, we see that
$$ \begin{align*}
x^2-\lambda x+3 &= \left(x-\frac{\lambda}{2}\right)^{2}+3-\left(\frac{\lambda}{2}\right)^{2} \\
&\geq 3-\frac{\lambda^2}{4}.
\end{align*}$$
Therefore, $R_f=[3-\frac{\lambda^2}{4},\infty)$.
- (ii) Since $f$ is a quadratic function with only one minimum turning point in the domain $\mathbb{R}$, at $x=\frac{\lambda}{2}$, we have $k=\frac{\lambda}{2}$.
- (iii) The graphs are given below. <center> </center>
- (iv) We see that the solution to $f\left(x\right)=f^{-1}(x)$ is also a solution of $f(x)=x$ (observe the graph). Thus, we have that

```python
from sympy import *
from h2_math import *

# Denote that x is a variable in the expression
x = symbols('x')

# Defining a function with name f
f = make_fn('x**2-4*x+3')

for x_vals in solve(f(x)-x,x):
    # The next line is for use in Markdown
    # print(latex(x_vals))
    display(Math(f'x={ latex(x_vals) }.'))
```

$\displaystyle x=\frac{5}{2} - \frac{\sqrt{13}}{2}.$

$\displaystyle x=\frac{\sqrt{13}}{2} + \frac{5}{2}.$

Since $x \leq \frac{\lambda}{2}=\frac{4}{2}=2$, we have that $x= \frac{5}{2} - \frac{\sqrt{13}}{2}$.

# Exercise 7 2021/NJC/Functions/Q7 H2 Mathematics

The function $f$ is defined by
$$f:x\mapsto (7+x)(1-x)-15, x\in [-2,\infty).$$

- (i) Show that $f^{-1}$ exists, and find its domain.
- (ii) Sketch, on the same diagram, the graphs of $$y =f(x), y=f^{-1}(x) \text{ and } y=f^{-1}f(x),$$ showing clearly the relationship between the graphs.
- (iii) Hence find the exact solution of $f^{-1}f(x)\leq f(x)$.

### Solution

- (i) From the following sketch of the graph of $y=f(x)$, any horizontal line intersects the graph of $y=f(x)$ at most once. Thus the function $f$ is one-one. It follows that $f^{-1}$ exists. Note that $D_{f^{-1}}=R_f=(-\infty,0]$. <center> </center>
- (ii) We have <center> </center>
- (iii) From the graph in (ii), we observe that the solution to $f^{-1}f(x)\leq f(x)$ is in the form $-2\leq x\leq k$, where $k$ is the $x$-coordinate of the point of intersection of the three curves $y=f(x)$, $y=f^{-1}(x)$ and $y=f^{-1}f(x)=x$. Solving $f(x)=x$ for $k$:

```python
from sympy import *
from h2_math import *

# Denote that x is a variable in the expression
x = symbols('x')

# Defining a function with name f
f = make_fn('(7+x)*(1-x)-15')

for x_vals in solve(f(x)-x,x):
    # The next line is for use in Markdown
    # print(latex(x_vals))
    display(Math(f'x={ latex(x_vals) }.'))
```

$\displaystyle x=- \frac{7}{2} - \frac{\sqrt{17}}{2}.$

$\displaystyle x=- \frac{7}{2} + \frac{\sqrt{17}}{2}.$

Since $-2\leq x$, $k=- \frac{7}{2} + \frac{\sqrt{17}}{2}$. As such, $-2\leq x \leq - \frac{7}{2} + \frac{\sqrt{17}}{2}$.

# Exercise 8 2021/NJC/Functions/Q8 H2 Mathematics

The functions $f$ and $g$ are defined by
$$\begin{align*}
f :x &\mapsto\frac{3x}{x+2}, &x>-2\\
g :x &\mapsto x^{2}+1, &x\leq5
\end{align*}$$

Sketch, on separate diagrams, the graphs of $y=f(x)$ and $y=g(x)$. Determine whether each of the following composite functions exists:

- (a) $fg$
- (b) $gf$

If any of the functions exists, then find its rule, domain and range.

### Solution

We have the graphs <center> </center>

- (a) From the graphs above, we see that $R_g=[1,\infty)$ and $D_f=(-2,\infty)$.
As $R_g \subseteq D_f$, $fg$ exists. The composition is given by

```python
from sympy import *
from h2_math import *

# Defining the symbol for the parameters/arguments/input of the functions f,g
x = symbols('x')

# Defining functions f and g
f = make_fn('(3*x)/(x+2)')
g = make_fn('x**2+1')

print('The rule of fg')
# Compose f(g(x))
display(Math(f'fg(x)={ latex(f(g(x)))}'))
# print(latex(f(g(x))))  # For Markdown use
```

The rule of fg

$\displaystyle fg(x)=\frac{3 x^{2} + 3}{x^{2} + 3}$

By definition, $D_{fg}=D_g=(-\infty,5]$. To find the range, observe that $y=f(x)$ is an increasing function, i.e. if $x_1\leq x_2$, then $f(x_1)\leq f(x_2)$. As such, since $R_g=[1,\infty)$, we have $R_{fg}=[1,3)$.

- (b) Similarly, from the graphs above, we see that $R_f=(-\infty,3)$ and $D_g=(-\infty,5]$. As $R_f \subseteq D_g$, $gf$ exists. The composition is given by

```python
from sympy import *
from h2_math import *

# Defining the symbol for the parameters/arguments/input of the functions f,g
x = symbols('x')

# Defining functions f and g
f = make_fn('(3*x)/(x+2)')
g = make_fn('x**2+1')

print('The rule of gf')
# Compose g(f(x))
display(Math(f'gf(x)={ latex(g(f(x)))}'))
# print(latex(g(f(x))))  # For Markdown use
```

The rule of gf

$\displaystyle gf(x)=\frac{9 x^{2}}{\left(x + 2\right)^{2}} + 1$

By definition, $D_{gf}=D_f=(-2,\infty)$. Unlike part (a), we cannot use the same argument, since $g$ isn't an increasing function (it has a local minimum turning point). As such, to find the range, we can just sketch the graph of $gf$ with the domain $D_{gf}$ mentioned above. Observe that from the graph, $R_{gf}=[1,\infty)$. <center> </center>

# Exercise 9 2021/NJC/Functions/Q9 H2 Mathematics

The functions $f$ and $g$ are defined by
$$\begin{align*}
f :x &\mapsto\frac{2}{x^{2}+1},\,x\leq k\,\text{ and }k\,\text{ is an unknown constant}, \\
g :x &\mapsto\frac{1}{x-1},\,x\in[2,\infty)
\end{align*}$$

It is given that $f^{-1}$ exists. State the largest value of $k$, and use this value to

- (i) define $f^{-1}$ and
- (ii) show that the composite function $f^{-1}g$ exists. Also, find its range.

### Solution

We see that the graph of $y=\frac{2}{x^2+1}$ on the whole real number line looks like <center> </center>

As such, if we want $f^{-1}$ to exist with $x\leq k$, then the largest value of $k$ is $0$.

- (i)

```python
from sympy import *
from h2_math import *

# Denote that x is a variable in the expression
x = symbols('x')

# Defining a function with name f
f = make_fn('2/(x**2+1)')

print_inverse(f)
```

$\displaystyle x=\sqrt{\frac{2 - y}{y}}$

$\displaystyle x=- \sqrt{- \frac{y - 2}{y}}$

As $x\leq k = 0$, we have that $f^{-1}(x)=-\sqrt{-\frac{x-2}{x}}$, with $D_{f^{-1}}=R_f=(0,2]$ (the restricted domain of $f$ itself is $D_f=(-\infty,0]$).

- (ii) We see that the graph of $y=g(x)$ is given below <center> </center>

We have that $R_g=(0,1] \subseteq D_{f^{-1}}=R_f=(0,2]$. As such, $f^{-1}g$ exists. To find the range of $f^{-1}g$, we plot the graph of $y=f^{-1}g(x)$ with $D_{f^{-1}g}=D_g=[2,\infty)$.

```python
from sympy import *
from h2_math import *

# Defining the symbol for the parameters/arguments/input of the functions f,g
x = symbols('x')

# Defining functions f_inv and g
f_inv = make_fn('-sqrt((2-x)/x)')
g = make_fn('1/(x-1)')

print('The rule of f^{-1}g')
# Compose f_inv(g(x))
display(Math(f'f^{{-1}}g(x)={ latex(simplify(f_inv(g(x))))}'))
# print(latex(simplify(f_inv(g(x)))))  # For Markdown use
```

The rule of f^{-1}g

$\displaystyle f^{-1}g(x)=- \sqrt{2 x - 3}$

The graph looks like <center> </center>

As such, $R_{f^{-1}g}=(-\infty,-1]$.
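As an added cross-check (this cell is not part of the original solution), we can evaluate the rule $f^{-1}g(x)=-\sqrt{2x-3}$ found above on a finite slice of its domain $[2,\infty)$ and confirm numerically that the largest value is $-1$, attained at $x=2$:

```python
import numpy as np
import matplotlib.pyplot as plt

x_vals = np.linspace(2, 50, 500)      # a finite slice of the domain [2, oo)
y_vals = -np.sqrt(2*x_vals - 3)       # rule of f^{-1}g found by sympy above

print(y_vals.max())                   # -1.0, attained at the endpoint x = 2
print(y_vals.min())                   # decreases without bound as x grows

plt.plot(x_vals, y_vals)
plt.xlabel('$x$')
plt.ylabel('$f^{-1}g(x)$')
plt.show()
```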
# Exercise 10 2021/NJC/Functions/Q10 H2 Mathematics

The function $f$ is defined by
$$f:x\mapsto x^2-4x+1, x\leq a \text{ and }a \text{ is an unknown constant}.$$

- (i) Find the largest value of $a$ for which $f^{-1}$ exists, and define $f^{-1}$ for this case.
- (ii) Using the value of $a$ in (i), sketch, on the same diagram, the graphs of $$y=f(x), y=f^{-1}(x) \text{ and } y=ff^{-1}(x).$$ Hence solve $f(x)=f^{-1}(x)$, leaving your answer in exact form.

### Solution

- (i) By completing the square, we see that
$$ \begin{align*}
x^2- 4x+1 &= \left(x-2\right)^{2}+1-\left(2\right)^{2} \\
&= \left(x-2\right)^{2}-3 \\
&\geq -3.
\end{align*}$$
Since $f$ is a quadratic function with only one minimum turning point in $\mathbb{R}$, at $x=2$, we have $a=2$. Next, we find the rule of $f^{-1}$.

```python
from sympy import *
from h2_math import *

# Denote that x is a variable in the expression
x = symbols('x')

# Defining a function with name f
f = make_fn('x**2-4*x+1')

print_inverse(f)
```

$\displaystyle x=2 - \sqrt{y + 3}$

$\displaystyle x=\sqrt{y + 3} + 2$

As $x\leq 2$, $f^{-1}(x)=2-\sqrt{x+3}$ with $D_{f^{-1}}=R_f=[-3,\infty)$.

- (ii) The graphs are given below <center> </center>

We see that the solution to $f\left(x\right)=f^{-1}(x)$ is also a solution of $f(x)=x$ (observe the graph). Thus, we have that

```python
from sympy import *
from h2_math import *

# Denote that x is a variable in the expression
x = symbols('x')

# Defining a function with name f
f = make_fn('x**2-4*x+1')

for x_vals in solve(f(x)-x,x):
    # The next line is for use in Markdown
    display(Math(f'x={ latex(x_vals) }.'))
    # print(latex(x_vals))
```

$\displaystyle x=\frac{5}{2} - \frac{\sqrt{21}}{2}.$

$\displaystyle x=\frac{\sqrt{21}}{2} + \frac{5}{2}.$

Since $x\leq 2$, $x= \frac{5}{2} - \frac{\sqrt{21}}{2}$.

# Exercise 11 2021/NJC/Functions/Q11 H2 Mathematics

The functions $f$ and $g$ are defined by
$$\begin{align*}
f&:x\mapsto\frac{1}{2}\left(x^{3}-3x+2\right),x\in\mathbb{R}\\
g&:x\mapsto\sqrt{x+1},x\in[-1,\infty)
\end{align*}$$

- (i) Sketch the graphs of $y = f(x)$ and $y = g(x)$, labelling clearly the coordinates of stationary points and axial intercepts, if any.
- (ii) Determine if $f^{-1}$ exists, justifying your answer.
- (iii) The composite function $gf$ exists if the domain of $f$ is restricted to $x\geq k$. Given that the range of $gf$ is $[1,\infty)$, find the range of possible values of $k$.

### Solution

- (i) We see that the graphs look like <center> </center>
- (ii) The horizontal line $y=1$ intersects the graph of $y=f(x)$ more than once. Hence the function $f$ is not one-one. It follows that $f^{-1}$ does not exist.
- (iii) Observe from the graph of $y = g(x)$ that in order to obtain $R_{gf}=[1,\infty)$, we require $R_f=[0,\infty)$. Therefore, from the graph of $y = f(x)$, the required answer is $-2\leq k \leq 1$.

# Exercise 12 2021/NJC/Functions/Q12 H2 Mathematics

The function $f$ is defined by
$$f:x\mapsto \frac{e^x-1}{e-1} \text{ for }x\in \mathbb{R}.$$
Sketch the graph of $y=f(x)$ and state the range of $f$.

Another function $h$ is defined by
$$h:x\mapsto\begin{cases}
\left(x-1\right)^{2}+1 & ,\,x\le1\\
1-\frac{\left|1-x\right|}{2} & ,\,1<x\leq4
\end{cases}$$
Sketch the graph of $y=h(x)$ for $x\leq 4$ and explain why the composite function $f^{-1}h$ exists. Hence find the exact value of $(f^{-1}h)^{-1}(3)$.

### Solution

The graph of $f$ looks like <center> </center> From the graph, we see that $R_f=\left(\frac{1}{1-e},\infty\right)$.

From the graph of $h$ (shown below), we see that $R_h=[-\frac{1}{2},\infty)\subseteq (\frac{1}{1-e},\infty)=R_f=D_{f^{-1}}$. So, $f^{-1}h$ exists.
<center> </center>

Let the value that we are asked to find be $a$. Thus, we have
$$\begin{align*}
\left(f^{-1}h\right)^{-1}\left(3\right) &=a \\
f^{-1}h\left(\left(f^{-1}h\right)^{-1}\left(3\right)\right) &=f^{-1}h\left(a\right) \\
3 &=f^{-1}h\left(a\right) \\
f\left(3\right) &=f\left(f^{-1}h\left(a\right)\right) \\
\frac{e^{3}-1}{e-1} &=h\left(a\right) \\
1+e+e^{2} & =h\left(a\right)
\end{align*}$$

Since $1+e+e^2 > 1$, from the graph of $h$ we conclude that the piece of the piecewise function $h$ that can give rise to this value is $h(x)=(x-1)^2+1$, for $x\leq1$. As such,
$$1+e+e^2 = (a-1)^2+1.$$

```python
from sympy import *
from h2_math import *

# Denote that a is a variable in the expression
a = symbols('a')

# Defining a function with name h
h = make_fn('(x-1)**2+1')

for a_vals in solve(h(a)-(1+exp(1)+exp(2)),a):
    # The next line is for use in Markdown
    display(Math(f'a={ latex(a_vals) }.'))
    # print(latex(a_vals))
```

$\displaystyle a=1 + \sqrt{1 + e} e^{\frac{1}{2}}.$

$\displaystyle a=- \sqrt{1 + e} e^{\frac{1}{2}} + 1.$

As $a\leq 1$, we have that $(f^{-1}h)^{-1}(3)=- e^{\frac{1}{2}}\sqrt{1 + e} + 1 = 1-\sqrt{e^2+e}.$

# Exercise 13 2018/NYJC/JC1 MYE/Q6 H2 Mathematics

The functions $f$ and $g$ are defined by
$$\begin{align*}
f :x &\mapsto\sin x+\cos x,\,-\frac{\pi}{4}\leq x\le\frac{5\pi}{4}, \\
g :x &\mapsto x^{2}+1,\,x>-2
\end{align*}$$

- (i) By writing $f(x)=R\sin (x+\alpha)$ where $R>0$ and $0<\alpha <\frac{\pi}{2}$, show that $gf$ exists and find the exact range of $gf$.
- (ii) Explain why $f$ does not have an inverse.
- (iii) The function $h$ is such that $h(x)=f(x)$ and the domain of $h$ is of the form $(a,\frac{5\pi}{4})$. State the exact value of the minimum value of $a$ for $h^{-1}$ to exist.
- (iv) On a single diagram, sketch the graphs of $y=h(x)$ and $y=h^{-1}(x)$. Your diagram should indicate the coordinates of the endpoints and the relationship between the two graphs.

### Solution

- (i) By the R-formula, $f(x)= \sqrt{2} \sin(x+\frac{\pi}{4})$. As such, $R_f=[-\sqrt{2},\sqrt{2}]$. Since $R_f \subseteq (-2,\infty)=D_g$, the composite function $gf$ exists. Next, we find the rule of $gf$.

```python
from sympy import *
from h2_math import *

# Defining the symbol for the parameters/arguments/input of the functions f,g
x = symbols('x')

# We define f,g
f = make_fn('sqrt(2)*sin(x+pi/4)')
g = make_fn('x**2+1')

print('The rule of gf')
# Compose g(f(x))
display(Math(f'gf(x)={ latex(simplify(g(f(x))))}'))
# print(latex(g(f(x))))  # For Markdown use
```

The rule of gf

$\displaystyle gf(x)=\sin{\left(2 x \right)} + 2$

The graph of $gf$ looks like <center> </center> And from the graph, we see that $R_{gf}=[1,3]$.

- (ii) We see that the graph of $f$ looks like <center> </center> The line $y=0$ cuts the graph at 2 points, $x=-\frac{\pi}{4}$ and $x=\frac{3\pi}{4}$. As such, $f$ isn't a one-one function and consequently, $f$ doesn't have an inverse.
- (iii) From the graph, if we are to restrict the domain of $h$ such that $h^{-1}$ exists, the value of $a$ must be the $x$-coordinate of the maximum turning point of the graph in the domain. This happens when
$$\begin{align*}
f\left(x\right) &=\sqrt{2} \\
\sqrt{2}\sin\left(x+\frac{\pi}{4}\right) &=\sqrt{2}
\end{align*}$$
Solving gives $x=\frac{\pi}{4}$. As such, $a=\frac{\pi}{4}$.
- (iv) The graph looks like <center> </center>
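As a quick numerical cross-check of part (i) above (an added cell, not part of the original solution), we can verify on a grid that $\sin x+\cos x=\sqrt{2}\sin(x+\frac{\pi}{4})$, that $gf(x)$ simplifies to $\sin(2x)+2$, and that its values on the given domain stay within $[1,3]$:

```python
import numpy as np

# Grid over the stated domain of f: [-pi/4, 5*pi/4]
x = np.linspace(-np.pi/4, 5*np.pi/4, 10001)

# R-formula identity: sin(x) + cos(x) == sqrt(2)*sin(x + pi/4)
assert np.allclose(np.sin(x) + np.cos(x), np.sqrt(2)*np.sin(x + np.pi/4))

# gf(x) = (sin x + cos x)**2 + 1 simplifies to sin(2x) + 2
gf = (np.sin(x) + np.cos(x))**2 + 1
assert np.allclose(gf, np.sin(2*x) + 2)

# Extreme values over the domain: approximately 1 and 3
print(gf.min(), gf.max())
```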
# SIAM CSE Poster: Param Distribution Part

**Requires:**

* `pymc3` for the hierarchical Bayes modeling

```python
import pymc3 as pm
import numpy as np
import scipy.stats as sps
from scipy.integrate import quad
import matplotlib.pyplot as plt

%matplotlib inline
```

Generate data from a `double_tent` pdf.

```python
class double_tent:
    '''
    double_tent defines a pdf class object on [0,1] with two triangular peaks

    loc: defines the location of two peaks
    weight: defines the height of peaks. The two heights must sum to 1.

    Requires: scipy.stats.trapz
    '''
    def __init__(self,loc,weight):
        # check that locations and weights are appropriate
        if loc[0]<0 or loc[0]>1:
            raise ValueError('Peaks must be in [0,1]')
        elif loc[1]<0 or loc[1]>1:
            raise ValueError('Peaks must be in [0,1]')
        if weight[0]+weight[1]!=1:
            raise ValueError('Weights must sum to 1')

        self.loc = loc        # defines two peaks of pdf
        self.weight = weight  # defines two heights of pdf

        # defines pdf function
        self.pdf_fun = lambda x: weight[0]*sps.trapz.pdf(x,loc[0],loc[0],loc=0,scale=0.385)+ \
                                 weight[1]*sps.trapz.pdf(x,loc[1],loc[1],loc=0.8,scale=0.2)

    def pdf(self,x):
        return self.pdf_fun(x)

    def rvs(self,N):
        '''Generates N samples from the pdf'''
        # accept-reject for a sample from the pdf
        sample = np.ones(N)*np.NaN
        i=0  # index for while loop
        while i<N:
            test = np.random.uniform(0,1)  # test value
            # acceptance criteria
            if np.random.uniform(0,1)<self.pdf(test)/4:
                sample[i]=test
                i+=1
        return sample
```

```python
##### FIXED PARAMETERS - DEFINE YOUR EXPERIMENT #####
start_time = 1
end_time = 3

sigma2 = 1E-3
sigma = np.sqrt(sigma2)  # fixed noise level in the data

data_n = 100
sample_size = 800
####
```

### Generate the Data

We generate data by drawing the parameter $\lambda$ from the double-tent distribution.

```python
# # define target distribution and noise distribution
# target_alpha = 2
# target_beta = 5

# define the target distribution
lam_dist = double_tent(loc=[0.6,0.2],weight=[0.75,0.25])
noise_dist = sps.norm(0,sigma)
```

```python
x = np.linspace(-0.1,1.1,100)
plt.plot(x,lam_dist.pdf(x))
```

The $Q$ map is:
\begin{align}
Q(\lambda,\delta)=0.5\cdot \exp(-\lambda t)+\delta
\end{align}
where $t$ is a fixed constant ($t=2$ in the code below) and $\delta$ is a random variable representing a noise parameter.
```python # defines the map from parameters to data def data_map(param,t=np.array([2])): q_map = 0.5*np.exp(-t*param) noise = noise_dist.rvs(size=q_map.shape) q_out = q_map+noise return q_out, (q_map,noise) ``` ```python # gets a sample of lambda for the observed data lam_sample = lam_dist.rvs(data_n) data_sample, (this_q,this_noise) = data_map(lam_sample) ``` ```python fig = plt.figure(figsize=(10,8)) ax = fig.add_subplot(2,2,1) ax.scatter(lam_sample,this_q) ax = fig.add_subplot(2,2,2) ax.set_title('$\lambda$ Dist: Doubletent') ax.hist(lam_sample,edgecolor='k') ax = fig.add_subplot(2,2,3) ax.scatter(lam_sample,data_sample) ax = fig.add_subplot(2,2,4) ax.set_title('Data Distribution') ax.hist(data_sample,edgecolor='k') ``` # Infer Using Hierarchical Parameteric Bayes Here we use the following parametric Bayesian model: \begin{align} \alpha,\beta\ &\sim \chi^2(df=1) \\ \lambda\ \mid\ \alpha,\beta &\sim \text{beta_distr}(\alpha,\beta) \\ d\ \mid \ \lambda, (\alpha,\beta) &\sim N(Q(\lambda),\sigma^2) \end{align} which leads to: \begin{align} \pi^{post}(\lambda\mid \{d_1,\ldots,d_n\})&\propto \int_{\Omega}\pi^{prior}(\lambda\mid \alpha,\beta)\cdot\pi^{prior}(\alpha,\beta)\cdot \pi^{likelihood}(\{d_1,\ldots,d_n\}\mid \lambda,(\alpha,\beta))\ d\Omega \end{align} where $(\alpha,\beta)\in\Omega$ is the $[0,\infty]\times[0,\infty]$. ```python with pm.Model() as model: ab = pm.ChiSquared('hyper',nu=1,shape=(2,))#pm.Uniform('hyper',lower=0,upper=10,shape=(2,)) lam = pm.Beta('lambda',alpha=ab[0],beta=ab[1], shape=data_n) Q_map = pm.Deterministic('Q',0.5*pm.math.exp(-2*lam)) dat = pm.Normal('data',mu=Q_map,sigma=sigma,observed=data_sample) trace = pm.sample(500,tune=500) ``` Auto-assigning NUTS sampler... Initializing NUTS using jitter+adapt_diag... Multiprocess sampling (4 chains in 4 jobs) NUTS: [lambda, hyper] Sampling 4 chains: 100%|██████████| 4000/4000 [00:04<00:00, 891.48draws/s] ```python # saves the posterior bayes sample as single array # and computes the posterior-predictive sample bayes_sample = trace['lambda'].reshape(-1,) ppc = pm.sample_posterior_predictive(trace, samples=500, model=model) pp_sample = ppc['data'].reshape(-1,) ``` 100%|██████████| 500/500 [00:00<00:00, 1491.02it/s] ```python plt.scatter(trace['hyper'][:,0],trace['hyper'][:,1]) ``` ```python # plots the original target sample versus bayes sample plt.hist(lam_sample,density=True,edgecolor='k',label='true sample') plt.hist(bayes_sample,density=True,edgecolor='k',alpha=0.4,label='bayes sample') plt.legend() ``` ```python # plots posterior predictive sample vs. 
data sample plt.hist(data_sample,density=True,edgecolor='k',label="data sample") plt.hist(pp_sample,density=True,alpha=0.5,edgecolor='k',label='pp sample') plt.legend() ``` # Infer Using Data Consistent Version Data Consistent model looks like: \begin{align} \pi^{init}(\lambda) &\sim U[0,1] \Leftrightarrow \text{beta_distr}(\alpha=1,\beta=1) \\ \pi^{obs}(Q(\lambda)) &\sim \pi(d)\text{, estimated via KDE} \\ \pi^{pf}(Q(\lambda))&\sim \text{can be estimated using samples of lambda, KDE} \end{align} The updated distribution will be: \begin{align} \pi^{update}(\lambda)&=\pi^{init}(\lambda)\cdot\dfrac{\pi^{obs}(Q(\lambda))}{\pi^{pf}(Q(\lambda))} \end{align} ```python # define prior distribution lam_init = sps.beta(a=1,b=1) ``` ```python # get prior sample and push-forward sample lam_init_sample = lam_init.rvs(18000) pf_init_sample, z = data_map(lam_init_sample) # estimate the data distribution and push-forward using KDE data_dist = sps.gaussian_kde(data_sample) pf_dist = sps.gaussian_kde(pf_init_sample) ``` ```python # accept-reject algorithm # calculate maximum of the ratio M = np.max(data_dist(pf_init_sample)/pf_dist(pf_init_sample)) # generate random numbers from uniform for accept-reject for each sample value test_value = np.random.uniform(0,1,np.shape(pf_init_sample)) # calculate the ratio for accept reject: data_kde/push_kde/M and compare to test sample # is the kde ratio > test value? accept_or_reject_samples = np.greater(data_dist(pf_init_sample)/pf_dist(pf_init_sample)/M, test_value) # accepted values of posterior sample updated_sample = lam_init_sample[accept_or_reject_samples] print('Acceptance Rate about 1/5: update samples = ', np.shape(updated_sample)) ``` Acceptance Rate about 1/5: update samples = (1863,) ```python # computes the push-forward of the updated sample pf_update,_ = data_map(updated_sample) ``` ```python # plots the original target sample versus update sample plt.figure() plt.hist(lam_sample,density=True,edgecolor='k',label='true sample') plt.hist(updated_sample,density=True,edgecolor='k',label='update sample',alpha=0.5) plt.legend() ``` ```python # plots data sample versus push-forward of update sample plt.hist(data_sample,density=True,edgecolor='k',label="data sample") plt.hist(pf_update,density=True,alpha=0.5,edgecolor='k',label='pf-update sample') plt.legend() ``` # Wrong Bayesian Model A Bayesian model that is not correct but "looks" more like Data Consistent in terms of form: \begin{align} \lambda &\sim \text{beta_distr}(\alpha=1,\beta=1) \ [\text{uniform prior}]\\ d\ \mid \ \lambda &\sim N(Q(\lambda),\sigma^2) \end{align} Then the posterior will be: \begin{align} \pi^{posterior}(\lambda\mid \{d_1,\ldots,d_n\})\propto \pi^{prior}(\lambda)\cdot \pi^{likelihood}(\{d_1,\ldots,d_n\}\mid \lambda) \end{align} ```python with pm.Model() as bad_model: lam2 = pm.Beta('lambda',alpha=1,beta=1) Q_map2 = pm.Deterministic('Q',0.5*pm.math.exp(-2*lam2)) dat2 = pm.Normal('data',mu=Q_map2,sigma=sigma,observed=data_sample) bad_trace = pm.sample(500,tune=500) ``` Auto-assigning NUTS sampler... Initializing NUTS using jitter+adapt_diag... 
Multiprocess sampling (4 chains in 4 jobs) NUTS: [lambda] Sampling 4 chains: 100%|██████████| 4000/4000 [00:00<00:00, 5280.08draws/s] ```python # saves the posterior bayes sample as single array # and computes the posterior-predictive sample bad_bayes_sample = bad_trace['lambda'].reshape(-1,) bad_ppc = pm.sample_posterior_predictive(bad_trace, samples=500, model=bad_model) bad_pp_sample = bad_ppc['data'].reshape(-1,) ``` 100%|██████████| 500/500 [00:00<00:00, 1242.45it/s] ```python plt.hist(bad_bayes_sample,edgecolor='k',density=True) print('Mean: ', np.mean(bad_bayes_sample)) print('Variance: ', np.mean(bad_bayes_sample)) ``` ```python # plots the original target sample versus bayes sample plt.hist(lam_sample,density=True,edgecolor='k',label='true sample') plt.hist(bad_bayes_sample,density=True,edgecolor='k',alpha=0.4,label='bayes sample') plt.legend() ``` ```python # plots posterior predictive sample vs. data sample plt.hist(data_sample,density=True,edgecolor='k',label="data sample") plt.hist(bad_pp_sample,density=True,alpha=0.5,edgecolor='k',label='pp sample') plt.legend() ``` # Comparison of Models ```python import importlib ``` ```python import seaborn ``` ```python # get prior sample for plotting hyper prior hyper_prior_sample = sps.chi2.rvs(df=1,size=(2000,2000)) ``` ```python # plot the hyperprior and posterior sample seaborn.set_style("white") seaborn.set_context("poster") # plot hyper prior hyper_figure = seaborn.jointplot(hyper_prior_sample[0],hyper_prior_sample[1], kind='kde', xlim=(0,3), ylim=(0,3),color='xkcd:faded green') hyper_figure.ax_joint.plot(0,0,color='xkcd:faded green',label='Prior sample') # plot the posterior sample hyper_figure.ax_joint.scatter(trace['hyper'][:,0],trace['hyper'][:,1], color='xkcd:windows blue',label='Posterior sample') hyper_figure.ax_joint.legend() hyper_figure.fig.set_figheight(6.5) hyper_figure.fig.set_figwidth(6.5) hyper_figure.ax_joint.set_xlabel('$\\alpha$') hyper_figure.ax_joint.set_ylabel('$\\beta$') ``` ```python plt.rcParams.update({'font.size': 24,'lines.linewidth':4}) # plt.rcParams['figure.figsize'] = 20, 20 legend_fsize = 20 ``` ```python # posterior kdes for plotting posterior_kde=sps.gaussian_kde(bayes_sample) bad_posterior_kde=sps.gaussian_kde(bad_bayes_sample) updated_dist = sps.gaussian_kde(updated_sample) ``` ```python # plot the objects of other spaces fig_objects,axes = plt.subplots(1,3,sharey=True) fig_objects.set_figwidth(32/2.54) fig_objects.set_figheight(fig_objects.get_figwidth()/2.5) ### set other params # color scheme # c_prior = 'xkcd:medium green' # c_post = 'xkcd:windows blue' c_prior = 'xkcd:neon pink' c_post = 'xkcd:blue' #### plot the parameter space (first row) x_lam = np.linspace(-0.1,1.1,250) ## HIERARCHICAL BAYES # prior for hierarchical bayes ax = axes[0] # ab_plot = [(0.5,0.5),(1,1),(5,1),(1,3),(2,2)] # for ab in ab_plot: # ax.plot(x_lam,sps.beta.pdf(x_lam,a=ab[0],b=ab[1]), # label='Prior $\\alpha,\\beta = $'+str(ab)) ax.plot(x_lam,sps.beta.pdf(x_lam,a=1,b=1),color=c_prior, label='Prior') # posterior for hierarchical bayes ax.hist(bayes_sample,density=True,color=c_post,alpha=0.4,edgecolor='gray') ax.plot(x_lam,posterior_kde(x_lam),label='Posterior',color=c_post) ax.legend(loc=1, fontsize=legend_fsize) ax.set_title('Hierarchical Bayes') ## DATA CONSISTENT ax = axes[1] # Initial for Data Consistent ax.plot(x_lam,lam_init.pdf(x_lam),label='Initial $\mid \\alpha,\\beta = $'+str((1,1)),color=c_prior) # Update for Data Consistent ax.hist(updated_sample,density=True,color=c_post,alpha=0.4,edgecolor='gray') 
ax.plot(x_lam,updated_dist(x_lam),label='Update',color=c_post) ax.legend(loc=1, fontsize=legend_fsize) ax.set_title('Data Consistent') ## Bad Posterior ax = axes[2] # Prior for Bad Bayes ax.plot(x_lam,lam_init.pdf(x_lam),label='Prior',color=c_prior) # Posterior for Bad Bayes ax.hist(bad_bayes_sample,density=True,color=c_post, edgecolor=c_post,alpha=0.7) ax.plot(x_lam,bad_posterior_kde(x_lam),label='Posterior',color=c_post) ax.set_ylim(0,4) ax.legend(loc=1, fontsize=legend_fsize) ax.set_title('Regular Bayes') plt.tight_layout() plt.subplots_adjust(left=0, right = 1, wspace = 0.05) ``` ```python # posterior predictive KDEs pp_kde=sps.gaussian_kde(pp_sample) bad_pp_kde=sps.gaussian_kde(bad_pp_sample) pf_update_kde=sps.gaussian_kde(pf_update) likelihood_pdf = sps.norm(0.5*np.exp(-2*0.5*0.5),sigma) ``` ```python # plot the objects of other spaces fig_data,axes = plt.subplots(1,3,sharey=True) fig_data.set_figwidth(32/2.54) fig_data.set_figheight(fig_objects.get_figwidth()/2.4) ### set other params # color scheme # c_data = 'xkcd:dusty purple' # c_data_use = 'xkcd:amber' # c_pf = 'xkcd:medium green' # c_post = 'xkcd:windows blue' c_data = 'xkcd:forest green' c_data_use = 'xkcd:orange' c_pf = 'xkcd:neon pink' c_post = 'xkcd:blue' #### plot the parameter space (first row) x_q = np.linspace(-0.1,0.55,250) for ax in axes: ax.hist(data_sample,density=True,label='Data',color=c_data, edgecolor='xkcd:black',alpha=0.4) ## HIERARCHICAL BAYES # posterior predictive for hierarchical bayes ax = axes[0] ax.plot(x_q,pp_kde(x_q),label='Posterior Predictive',color=c_post) ax.plot(x_q,1/4*likelihood_pdf.pdf(x_q),color=c_data_use, label='Likelihood$\mid \\lambda = 0.5$') ax.legend(loc=1, fontsize=legend_fsize) ax.set_title('Hierarchical Bayes') # ## DATA CONSISTENT ax = axes[1] # Push-forward and Obeserved ax.plot(x_q,pf_dist(x_q),label='Pushforward',color=c_pf) ax.plot(x_q,data_dist(x_q),color=c_data_use,label='Observed') #ax.plot(x_q,data_dist(x_q),color=c_post, ls=':',alpha=0.5) ax.legend(loc=1, fontsize=legend_fsize) ax.set_title('Data Consistent') # ## Bad Posterior ax = axes[2] # Posterior Predictive ax.plot(x_q,bad_pp_kde(x_q),label='Posterior Predictive',color=c_post) ax.plot(x_q,1/4*likelihood_pdf.pdf(x_q),color=c_data_use, label='Likelihood$\mid \\lambda = 0.5$') ax.set_ylim(0,9.5) ax.legend(loc=1, fontsize=legend_fsize) ax.set_title('Regular Bayes') plt.tight_layout() plt.subplots_adjust(left=0, right = 1, wspace = 0.05) # right = 0.9 # the right side of the subplots of the figure # bottom = 0.1 # the bottom of the subplots of the figure # top = 0.9 # the top of the subplots of the figure # wspace = 0.2 # the amount of width reserved for space between subplots, # # expressed as a fraction of the average axis width # hspace = 0.2 # the amount of height reserved for space between subplots, # # expressed as a fraction of the average axis height # ) ``` ```python final_fig, axes = plt.subplots(1,2) final_fig.set_figwidth(32/2.54) final_fig.set_figheight(12/2.54) bayes_color = 'xkcd:orange' dci_color = 'xkcd:neon pink' # compare in lambda ax = axes[0] ax.plot(x_lam,posterior_kde(x_lam),label='Posterior',color=bayes_color) ax.plot(x_lam,updated_dist(x_lam),label='Update',color=dci_color) ax.legend(loc=1, fontsize=legend_fsize) ax.set_xlabel('Parameter Space $\\Lambda$') # ax.set_title('Data Consistent vs. 
Hierarchical Bayes') # compare in data ax = axes[1] ax.hist(data_sample,density=True,label='Data',color=c_data,edgecolor='k',alpha=0.4) ax.plot(x_q,data_dist(x_q),color=dci_color,label='Pushforward Update') ax.plot(x_q,pp_kde(x_q),label='Posterior Predictive',color=bayes_color) ax.legend(loc=1, fontsize=legend_fsize) ax.set_xlabel('Data Space $\mathcal{D}$') # ax.set_title('Data Consistent vs. Hierarchical Bayes') plt.subplots_adjust(left=0, right = 1, wspace = 0.125) plt.tight_layout() ```

```python
# save all figures!
folder = 'figures/'
example = 'distr_EX_'

hyper_figure.savefig(folder+example+'hyper_param.png')
fig_objects.savefig(folder+example+'lambda_space.png')
fig_data.savefig(folder+example+'data_space.png')
final_fig.savefig(folder+example+'comparison.png', bbox_inches='tight')
```

```python
! make; make clean
```

    lualatex poster
    This is LuaTeX, Version 1.0.4 (TeX Live 2017/Debian)
    [full LuaTeX/LaTeX compilation log omitted]
LaTeX Font Warning: Font shape `OMX/cmex/m/n' in size <26.12> not available (Font) size <24.88> substituted on input line 16. LaTeX Font Warning: Font shape `OMX/cmex/m/n' in size <18.2839> not available (Font) size <17.28> substituted on input line 16. LaTeX Font Warning: Font shape `OMX/cmex/m/n' in size <13.06> not available (Font) size <12> substituted on input line 16. LaTeX Font Warning: Font shape `OMX/cmex/m/n' in size <31.35> not available (Font) size <24.88> substituted on input line 47. LaTeX Font Warning: Font shape `OMX/cmex/m/n' in size <21.9449> not available (Font) size <20.74> substituted on input line 47. LaTeX Font Warning: Font shape `OMX/cmex/m/n' in size <15.675> not available (Font) size <14.4> substituted on input line 47. ) (./col-2.tex) (./notation.tex) (./references.tex) (./col-3.tex LaTeX Font Warning: Font shape `OMX/cmex/m/n' in size <21.78> not available (Font) size <20.74> substituted on input line 47. LaTeX Font Warning: Font shape `OMX/cmex/m/n' in size <15.24593> not available (Font) size <14.4> substituted on input line 47. ) Overfull \vbox (7.6233pt too high) detected at line 149 [1{/var/lib/texmf/fonts/map/pdftex/updmap/pdftex.map}<./figures/distr_EX_lambda _space.png><./figures/distr_EX_data_space.png><./figures/distr_EX_comparison.pn g><./figures/diagram.png><./figures/ref-theory.png><./figures/ref-stability.png ><./figures/ref-bet.png><./figures/ref-jsm.png><./figures/ref-website.png><./fi gures/exponential_decay_response_sigma-10E-4.png><./figures/updated_convergence _sigma-10E-4.png><./figures/updated_stability_D10_sigma-10E-4.png><./figures/po sterior_stability_D10_sigma-10E-4.png><./figures/updated_stability_D100_sigma-1 0E-4.png><./figures/posterior_stability_D100_sigma-10E-4.png>] (./poster.aux (./col-1.aux) (./col-2.aux) (./notation.aux) (./references.aux) (./col-3.aux)) Package rerunfilecheck Warning: File `poster.out' has changed. (rerunfilecheck) Rerun to get outlines right (rerunfilecheck) or use package `bookmark'. LaTeX Font Warning: Size substitutions with differences (Font) up to 12.73999pt have occurred. ) (see the transcript file for additional information) 43488 words of node memory still in use: 698 hlist, 61 vlist, 87 rule, 222 disc, 86 local_par, 68 math, 1324 glue, 55 2 kern, 259 penalty, 2228 glyph, 1064 attribute, 79 glue_spec, 1064 attribute_l ist, 13 write, 152 pdf_literal, 250 pdf_colorstack, 16 pdf_setmatrix, 16 pdf_sa ve, 16 pdf_restore nodes avail lists: 1:2,2:250,3:473,4:291,5:611,6:2315,7:1425,8:90,9:790,10:11,11:2 22 </usr/share/fonts/truetype/lato/Lato-LightItalic.ttf>{/usr/share/texmf/fonts/en c/dvips/lm/lm-mathit.enc}{/usr/share/texmf/fonts/enc/dvips/lm/lm-mathsy.enc}{/u sr/share/texmf/fonts/enc/dvips/lm/lm-rm.enc}</usr/share/fonts/truetype/lato/Lat o-Light.ttf></usr/share/fonts/truetype/lato/Lato-Regular.ttf></usr/share/fonts/ truetype/lato/Lato-Bold.ttf></usr/share/texlive/texmf-dist/fonts/opentype/impal lari/raleway/Raleway-Regular.otf></usr/share/texlive/texmf-dist/fonts/opentype/ impallari/raleway/Raleway-Bold.otf></usr/share/texlive/texmf-dist/fonts/type1/p ublic/amsfonts/cm/cmex10.pfb></usr/share/texmf/fonts/type1/public/lm/lmmi12.pfb ></usr/share/texmf/fonts/type1/public/lm/lmmib10.pfb></usr/share/texmf/fonts/ty pe1/public/lm/lmr17.pfb></usr/share/texmf/fonts/type1/public/lm/lmsy10.pfb> Output written on poster.pdf (1 page, 2130121 bytes). Transcript written on poster.log. lualatex poster This is LuaTeX, Version 1.0.4 (TeX Live 2017/Debian) restricted system commands enabled. 
Output written on poster.pdf (1 page, 2130121 bytes).
Transcript written on poster.log.
lualatex poster
This is LuaTeX, Version 1.0.4 (TeX Live 2017/Debian)
 restricted system commands enabled.
Output written on poster.pdf (1 page, 2130121 bytes).
Transcript written on poster.log.
rm -rf *.aux *.bbl *.blg *.log *.nav *.out *.snm *.toc
e6414c840548ac0eeec95fbaa1185bb22f062be1
475,388
ipynb
Jupyter Notebook
SIAM_CSE19_Poster-Distribution.ipynb
mathematicalmichael/jsm19
3aeeb237b82bf83c091ffddcfb400fa1086682cb
[ "MIT" ]
1
2019-05-24T19:00:32.000Z
2019-05-24T19:00:32.000Z
SIAM_CSE19_Poster-Distribution.ipynb
mathematicalmichael/jsm19
3aeeb237b82bf83c091ffddcfb400fa1086682cb
[ "MIT" ]
7
2019-05-24T18:54:11.000Z
2019-07-24T21:36:40.000Z
SIAM_CSE19_Poster-Distribution.ipynb
mathematicalmichael/jsm19
3aeeb237b82bf83c091ffddcfb400fa1086682cb
[ "MIT" ]
null
null
null
188.79587
78,164
0.87974
true
32,421
Qwen/Qwen-72B
1. YES 2. YES
0.835484
0.815232
0.681113
__label__yue_Hant
0.154303
0.420786
# 3D Reconstruction and Structure from Motion Suppose we are given a series of images from a disaster situation. How can we begin to make sense of the contents of these images? Typically this involves answering two questions: - _What_ is in an image (e.g. debris, buildings, etc.)? - _Where_ are these things located _in 3D space_? We are first going to set the first question aside and focus entirely on the second. This will be the first of two lectures on so-called "structure from motion", which is the idea that you can recreate a 3D scene from a series of 2D images by taking those images from a moving camera. Today's lecture will focus on the theory of 3D reconstruction and how to implement it. Tomorrow's lecture will focus on a real world example of structure from motion, and we will work to assign GPS coordinates to pixels on an image. We will first start with two images and work our way up to multiple images. ## The Camera Model: Let's first introduce the model that we will be basing our analysis on. For this session, we will be using the _pinhole model_ of a camera, depicted in the following picture: In this model of the camera, light rays are reflected off of an object (in this case a kettle), they pass through an infinitesimally small pinhole, and arrive at the opposite wall of the camera, creating a photo-negative (think upside down) image of the object. The pinhole is located at a point called the _center of projection_. Because the image on the actual camera is a photo-negative, we typically instead center our reference frame on the center of projection and work as if the image were on the opposite side, which we will call the _image plane_. The distance _f_ between the center of projection and the image plane is called the _focal length_, which is usually measured in millimeters. On the image plane, the center of the plane is called the _principal point_ and the line that connects the center of projection and the principal point is called the _principal axis_. We can align our coordinate frame as follows: let's center our coordinate system such that the origin is the center of projection, let's align the Z-axis such that it coincides with the principal axis, and let's align the X- and Y-axis such that the X-axis goes along the horizontal direction of the image plane and the Y-axis goes along the vertical direction of the image plane. Note that X increases to the right, but Y increases *downward*. This is shown in the following image: The image plane is given by a _uv-plane_ that is centered on the principal point and is orthogonal to the principal axis. Note that cameras typically measure coordinates starting from the upper-left corner, increasing to the right and downwards (which motivates our decision to align our Y axis downwards). Furthermore, note that cameras cannot capture an image with perfect precision. Rather, they are composed of small elements called _pixels_ that vary in intensity in order to capture the scene. Therefore, we would rather operate on the discrete pixel plane, which we will call the lowercase xy-plane. You can compare both planes in the image below: All objects create a projection on the xy-plane, which is what we actually see on the image. For example, the point $M$ on the kettle is projected onto the xy-plane at the point _m_. Suppose we have the coordinates ($X$, $Y$, $Z$) of the point $M$. 
**What are the coordinates on the xy-plane of the point $m$?** State your answer in terms of $\alpha$ (the focal length measured in pixels, not millimeters), $p_x$ and $p_y$ (the coordinates of the principal point $p$ in the xy-plane), and $X$, $Y$ and $Z$, the coordinates of the point $M$. <details> <summary>ANSWER</summary> We can use geometry to arrive at the answer. Note that the triangle formed by $C$, $p$ and $m$ is similar to the triangle formed by $C$, the point at depth $Z$ on the principal axis, and $M$. Therefore, $\frac{x-p_x}{\alpha} = \frac{X}{Z}$ and $\frac{y-p_y}{\alpha} = \frac{Y}{Z}$. Now it's a matter of solving: $x = \alpha\frac{X}{Z} + p_x$ and $y = \alpha\frac{Y}{Z} + p_y$ </details> ### List of variables used in this lesson | Variable | Meaning | |:--------:|:--------------------------------------------------------------------------------------------------:| | M | A point in 3D space | | m1, m2 | Projection of M on image 1 and 2 | | C1, C2 | Center of camera 1, 2 (a point in 3D space) | | p | A point on the z axis of C (directly in front of it) at the center of a given photo | | $\alpha$ | Focal length of a camera, measured in pixels | | R | Rotation of a camera image | | t | Translation of a camera image | | K | The intrinsic matrix of a camera, which maps camera coordinates to pixel coordinates; a function of its focal length and the principal point p | ## The Problem: Depth Ambiguity Scene reconstruction requires us to go the other way around: we know the coordinates ($x$, $y$) of $m$ and we want the ($X$, $Y$, $Z$) coordinates of the point $M$. This presents a problem: we have two equations and three unknowns. Specifically, there is an ambiguity in the depth $Z$ that is impossible to resolve with just one image. Therefore, the point $M$ can be anywhere along the ray connecting the center of projection and the point $m$. This is shown in the image below: However, this can be remedied by adding a second image into the equation. This is because, assuming we have perfect cameras*, the rays connecting both centers of projection to the point $M$ should only intersect at one point, as is shown in the image below: Let's assume that $C_1$ and $C_2$ are both from the same camera but different locations. Furthermore, let's center our coordinate system such that $C_1$ is at the origin. What do we need in order to find the coordinates of the point $M$? - The corresponding points $m_1$ and $m_2$ - The focal length $\alpha$ and the coordinates $p_x$ and $p_y$. Because these are internal to the camera, we call these the _intrinsic parameters_. - The relative rotation $R$ and translation $t$ of $C_2$ with respect to $C_1$. Here, $R$ is a 3x3 matrix and $t$ is a 3x1 vector. We call $R$ and $t$ the _extrinsic parameters_. \* The perfect cameras assumption is actually critical. To see why, imagine if by some measurement error (imperfect lens, pixel accuracy, etc), $m_2$ is slightly off. **What would happen in the previous example?** <details> <summary>ANSWER</summary> The rays corresponding to points $m_1$ and $m_2$ will not intersect, so there is no solution to this problem! </details> In practice, no camera is actually perfect. There are tools that work under the hood to mitigate this. We will briefly discuss them, but will not go in depth. Let's get started! 
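Before loading the images, here is a quick numerical sanity check of the projection formula derived above. This cell is not part of the original lesson, and all numbers in it (focal length, principal point and the 3D point) are made up purely for illustration.

```python
import numpy as np

def project(M, alpha, px, py):
    """Project a 3D point M = (X, Y, Z), given in the camera frame, onto the pixel plane."""
    X, Y, Z = M
    x = alpha * X / Z + px
    y = alpha * Y / Z + py
    return np.array([x, y])

# Hypothetical values, chosen only to exercise the formula
alpha, px, py = 3300.0, 2000.0, 1500.0   # focal length (pixels) and principal point (pixels)
M = np.array([0.5, -0.2, 4.0])           # a point 0.5 m to the right, 0.2 m above the axis, 4 m away

print(project(M, alpha, px, py))         # [2412.5 1335. ]
```

Note that multiplying $M$ by any positive constant leaves the projected pixel unchanged, which is exactly the depth ambiguity discussed above.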
```python import cv2 import numpy as np from matplotlib import pyplot as plt import pandas as pd import os import sys ``` ```python %matplotlib inline # Load and display the two images img1 = cv2.imread("kitchen_example/images/img1.jpg", cv2.IMREAD_GRAYSCALE) plt.imshow(img1, cmap='gray') plt.show() img2 = cv2.imread("kitchen_example/images/img2.jpg", cv2.IMREAD_GRAYSCALE) plt.imshow(img2, cmap='gray') plt.show() ``` ## Corresponding points Our first task is to find the points $m_1$ and $m_2$. Note that it's not just a matter of finding arbitrary points, but also of ensuring the $m_1$ is actually pointing to the same object $M$ as $m_2$. Of course, one way to do this is to manually go through both images and match points on one image with points on the other. This is not a scalable solution as you increase the number of images. To this end, experts in computer vision work with the concept of _features_ which represent points of interest within an image. To understand how features work, consider the image below. **Which of the options, A through F are most easily identifiable in the image?** <details> <summary>ANSWER</summary> A and B are the hardest because they could be anywhere on the surface, so surfaces are not good. C and D are a bit easier, because edges are more immediately recognizable. However, the points can still be anywhere along the edge. E and F are the easiest, since there is only one corner that looks like them. What this shows is that corners are good points to track, followed by edges and finally surfaces. </details> Features, therefore, are typically corners that are automatically extracted based on color differences around the corner. Features are composed of both the _coordinates_ of the point of interest as well as a _descriptor_ of what those points look like. There are all sorts of features that will do the job. Here, we will show ORB features, which were created by the makers of OpenCV. We first extract the features and then match them with a feature in the opposite image that has the closest descriptor. It's worth noting that most of these matches will not be very good, so we only want to keep the best matches we have. ### Feature extraction ```python MAX_FEATURES = 5000 # creating the ORB feature extractor orb = cv2.ORB_create(MAX_FEATURES) # creating feature points for first and second images kp1, dc1 = orb.detectAndCompute(img1, None) kp2, dc2 = orb.detectAndCompute(img2, None) img1_ORB = cv2.drawKeypoints(img1, kp1, None, flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS) plt.imshow(img1_ORB) plt.show() ``` ### Feature matching ```python # most matches are not very good, so we want to keep only the best ones. # here, we keep only the top 20% GOOD_MATCH_PERCENT = 0.2 # create BFMatcher object bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck = True) # match descriptors matches = bf.match(dc1, dc2) # Sort matches by score matches.sort(key=lambda x: x.distance, reverse=False) # Remove worst matches numGoodMatches = int(len(matches) * GOOD_MATCH_PERCENT) matches = matches[:numGoodMatches] img_match = cv2.drawMatches(img1,kp1,img2,kp2,matches,None, flags=2) plt.imshow(img_match) plt.show() pts1 = np.float32([ kp1[m.queryIdx].pt for m in matches ]).reshape(-1,1,2) pts2 = np.float32([ kp2[m.trainIdx].pt for m in matches ]).reshape(-1,1,2) ``` ## Intrinsic parameters We now proceed to obtain the intrinsic parameters. We have already seen that we can gather important data about the image by looking at its metadata. 
We can do something similar here: It might seem tempting to take the focal length as is, and to take the height and width of the image, divide by two and call that the coordinates of the principal point. While this might certainly be a start if you have no other options, there are some limitations to take into account. First, recall that the focal length here is measured in millimeters, but we need the length in pixels, $\alpha = f/m$, where $m$ is the size in millimeters of each pixel. This information is rarely found in the metadata. Usually, if you want to get the focal length in pixels just from the metadata, you would have to find the camera specifications online and make the conversion of millimeters to pixels. Also, the focal length is typically rounded to the nearest integer in the metadata, further complicating matters. Second, because of manufacturing constraints in positioning the sensor, the principal point is always off from the middle of an image by a few pixels. While the middle of an image is most likely a good approximation, the error will continue to add up as you go farther away from the camera center. Finally, even if you have the focal length and principal point coordinates, the reconstruction must account for _distortion_. Distortion typically comes from imperfect lenses. The most well-known form of distortion is _radial distortion_. This has the effect of making straight lines bulge outwards the farther away you are from the center. The figure below shows an example of this: Distortion is rarely in the image metadata. Sometimes cameras have distortion control that corrects for distortion when taking the image, so you don't have to worry about it. Still, this is relatively rare and the metadata does not tell you whether distortion control was applied. So if you're not familiar with the camera you're using, you're still in the dark. One way to get around this is to perform explicit camera calibration. A common way to do this is to map points with a known 3D-to-2D correspondence (like a chessboard) and perform an optimization to correct for the distortion. How this typically works is by assigning coordinates to the inner corners of the chessboard (e.g. the top left corner will be (0, 0, 0), the one on the right might be (1, 0, 0), the one below would be (0, 1, 0)), and then choosing the focal length, principal point and distortion parameters that best explain the deviations from the expected 2D coordinates. Typically you have to take something on the order of 20-30 images of a chessboard from slightly varying locations for this procedure to work. ```python import glob # termination criteria criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001) # prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(5,8,0) objp = np.zeros((6*9,3), np.float32) objp[:,:2] = np.mgrid[0:6,0:9].T.reshape(-1,2) # Arrays to store object points and image points from all the images. objpoints = [] # 3d point in real world space imgpoints = [] # 2d points in image plane. 
images = glob.glob('chessboard/*.jpg') for fname in images: img = cv2.imread(fname) gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY) # Find the chess board corners ret, corners = cv2.findChessboardCorners(gray, (6,9),None) # If found, add object points, image points (after refining them) if ret == True: print("corners found!") objpoints.append(objp) corners2 = cv2.cornerSubPix(gray,corners,(11,11),(-1,-1),criteria) imgpoints.append(corners2) # Draw and display the corners img = cv2.drawChessboardCorners(img, (6,9), corners2,ret) #plt.imshow(img, cmap="gray") #plt.show() else: print("corners not found, trying next image.") ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1],None,None) print(mtx) np.save('camera_mat.npy', mtx) print(dist) ``` corners found! corners found! corners found! corners found! corners found! corners found! corners found! corners found! corners found! corners found! corners found! corners found! corners found! corners found! corners found! corners found! corners found! corners found! corners found! corners found! [[3.31666140e+03 0.00000000e+00 2.00701025e+03] [0.00000000e+00 3.33904291e+03 1.48586298e+03] [0.00000000e+00 0.00000000e+00 1.00000000e+00]] [[ 4.66900428e-01 -2.96560179e+00 -1.66952307e-03 -5.79841461e-05 5.69842923e+00]] ```python import exifread # used to read the EXIF tags with open("chessboard/20200720_110201.jpg", 'rb') as image_file: tags = exifread.process_file(image_file, details=False) print(tags["EXIF FocalLength"]) print(img1.shape) ``` 43/10 (3000, 4000) Usually the intrinsic parameters are stored as a matrix, called the *intrinsic matrix*. The form of the intrinsic matrix is as follows: \begin{equation} K = \begin{bmatrix} \alpha_x & 0 & p_x\\ 0 & \alpha_y & p_y\\ 0 & 0 & 1 \end{bmatrix} \end{equation} I loaded the camera matrix from my phone below. Notice that the coordinates of the principal point are close to, but not quite, the middle of the image. Also notice that we introduced $\alpha_x$ and $\alpha_y$, even though we've only worked with $\alpha$ thus far. This is a more general notation, and it takes into account the possibility that cameras might not have square pixels. If one side of a pixel is larger than the other, it makes more sense to talk about focal length measured in pixel width or height. Still, cameras typically have square pixels, and this is reflected in the fact that $\alpha_x$ is very close to $\alpha_y$, so we can chalk up the discrepancy to measurement error. As a final note, some very specialized cameras have non-rectangular pixels. It turns out $K$ can account for that by changing the value of $K_{1, 2}$, otherwise known as the skew. For most cameras, you can assume it equals 0. ```python # the cell above takes a very long time to run. I already ran it and saved the matrix. K = np.load('camera_mat.npy') print(K) K ``` [[3.31666140e+03 0.00000000e+00 2.00701025e+03] [0.00000000e+00 3.33904291e+03 1.48586298e+03] [0.00000000e+00 0.00000000e+00 1.00000000e+00]] array([[3.31666140e+03, 0.00000000e+00, 2.00701025e+03], [0.00000000e+00, 3.33904291e+03, 1.48586298e+03], [0.00000000e+00, 0.00000000e+00, 1.00000000e+00]]) ### Exercise Print out the chessboard that was provided to you in chessboard_original/ (or use your own chessboard). Using your phone (or whatever other camera), take 20 or so photos of the chessboard lying on a flat surface and upload them to the chessboard/ directory. Uncomment the cell with the calibration code and run the calibration with your images. 
Compare the values in the estimated camera matrix with the values you would have expected. How similar are they to the expected values? What is the size of a pixel in your camera in millimeters? ```python ``` ## Extrinsic parameters We now need to find the rotation and translation $R$ and $t$ of both cameras. To reiterate, $t$ is a 3x1 vector that denotes the distance from the origin in $X$, $Y$ and $Z$\*. $R$ is a 3x3 matrix that encodes the rotation of the camera. We made our lives a bit easier by centering the coordinate system on the first camera, such that $t_1 = [0, 0, 0]'$ and $R_1$ is the identity matrix. Remember that we also aligned the coordinate system such that the vector $[0, 0, 1]'$ is in the direction that the camera is pointing. One way to interpret the rotation matrix $R_2$ is by saying that it is the matrix that rotates the vector $[0, 0, 1]'$ into the vector that is aligned with the direction of the second camera. One of the most amazing results in multiview geometry is that $R_2$ and $t_2$ can both be recovered (up to scale, in the case of $t_2$) just by knowing the corresponding points and the intrinsic parameters. At a very high level, this is a key tenet of something called _epipolar geometry_. Consider images (a) and (b) below. On the image you will find random corresponding points between the two images. On image (a), clearly the camera that took (b) is somewhere to the right. Imagine we could extend (a) so much that you could actually see the camera that took (b), and let's draw a line between the camera that took (b) and all of the corresponding points. What you would find is a series of lines, called epipolar lines, that all converge at the camera that took (b), whose point is called the epipole. This is what we see in (c) and (d). (source: https://www.youtube.com/watch?v=QzYn0OPO0Yw) It turns out there is a matrix called the _fundamental matrix_ that encodes this information. This matrix can be estimated using the corresponding points we got before. Furthermore, by combining it with the intrinsic parameters, we get another matrix called the _essential matrix_ which can then be decomposed into a relative rotation and translation $R_2$ and $t_2$, which is what we're after! This is of course a criminally high level summary of how this works. For more information, this is an excellent source: https://www.youtube.com/watch?v=QzYn0OPO0Yw. \* This is actually a bit incorrect. $t$ represents the vector connecting the center of projection to the origin *from the point of view of the camera being considered*, not the first camera. Still, the general idea is that it denotes displacement. ```python # converting to pixel coordinates pts1 = np.int32(pts1) pts2 = np.int32(pts2) # Estimate the essential matrix E, mask = cv2.findEssentialMat(pts1,pts2,K) # We select only inlier points pts1_inliers = (pts1[mask.ravel()==1]) pts2_inliers = (pts2[mask.ravel()==1]) ``` ```python # Recover pose using what we've learned so far retval, R, t, mask = cv2.recoverPose(E, pts1_inliers, pts2_inliers, K) print(R) print(t) ``` [[ 0.94182083 -0.09896575 0.32121534] [ 0.08510747 0.99474354 0.05693862] [-0.32516186 -0.02628816 0.94529292]] [[-0.98203659] [-0.07243396] [ 0.17423392]] ## Finalizing the reconstruction We now have everything we need to reconstruct the scene in 3D! We have corresponding points in two images, the intrinsic parameters of the camera and the extrinsic parameters as the camera took the two images. There is just one caveat. 
Remember that the value $t_2$ that we obtained was up to scale. It turns out that our entire scene is scaled such that the length of $t_2 = 1$. This means that while the relative distances between the reconstructed points are conserved, the absolute positions are not. Because we want to take advantage of visualization software that currently exists, we will use a package called OpenSfM (https://github.com/mapillary/OpenSfM). OpenSfM automatically goes through the trouble of extracting the features, matching them, finding the poses of the cameras and doing the triangulation. One thing it does not do, however, is the camera calibration. We can either provide our own intrinsic camera parameters or we can let the algorithm take its best guess through a process called _bundle adjustment_. Bundle adjustment essentially works by identifying the _reprojection error_ (the error in reprojecting the reconstructed 3D features back onto the image) and slightly modifying the camera positions and parameters, as well as the positions of the features, so that the error is minimized. As a final note, what do we do when we have more than three images? Clearly if you can find the camera pose of image 2 relative to image 1, then it's possible to find the pose of image 3 relative to image 2. Therefore, multiple images can take part in a reconstruction. Here, bundle adjustment is very important to ensure that errors in the reconstruction do not keep adding up. Here are the steps to reconstruct a scene: - Extract the focal length from the metadata - Detect features from images (in this particular case, we are using so called HAHOG features) - Match features across images - Reconstruct the scene using the inferred rotations and translations - Visualize the scene OpenSfM has a very good command line interface, so we'll move to the terminal for this portion. However, the commands are on the following cell in case you want to refer back to them. ```python # Take initial guess of intrinsic parameters through metadata !opensfm extract_metadata kitchen_example/ # Detect features points !opensfm detect_features kitchen_example/ # Match feature points across images !opensfm match_features kitchen_example/ # This creates "tracks" for the features. That is to say, if a feature in image 1 is matched with one in image 2, # and in turn that one is matched with one in image 3, then it links the matches between 1 and 3. In this case, # it does not matter since we only have two images !opensfm create_tracks kitchen_example/ # Calculates the essential matrix, the camera pose and the reconstructed feature points !opensfm reconstruct kitchen_example/ # For visualization using Open3D !opensfm export_ply kitchen_example/ ``` 2020-07-20 19:56:09,774 INFO: Loading existing EXIF for img1.jpg 2020-07-20 19:56:09,775 INFO: Loading existing EXIF for img2.jpg 2020-07-20 19:56:11,803 INFO: Skip recomputing ROOT_HAHOG features for image img1.jpg 2020-07-20 19:56:11,803 INFO: Skip recomputing ROOT_HAHOG features for image img2.jpg 2020-07-20 19:56:13,836 INFO: Matching 1 image pairs 2020-07-20 19:56:13,842 INFO: Computing pair matching with 1 processes 2020-07-20 19:56:13,867 DEBUG: No segmentation for img2.jpg, no features masked. 2020-07-20 19:56:13,868 DEBUG: No segmentation for img1.jpg, no features masked. 2020-07-20 19:56:14,375 DEBUG: Matching img2.jpg and img1.jpg. 
Matcher: FLANN (symmetric) T-desc: 0.506 T-robust: 0.001 T-total: 0.508 Matches: 396 Robust: 362 Success: True 2020-07-20 19:56:14,375 DEBUG: Image img2.jpg matches: 1 out of 1 2020-07-20 19:56:14,376 DEBUG: Image img1.jpg matches: 0 out of 0 2020-07-20 19:56:14,376 INFO: Matched 1 pairs for 2 ref_images (perspective-perspective: 1) in 0.5394126999999571 seconds (0.5394131800021569 seconds/pair). 2020-07-20 19:56:16,438 INFO: reading features 2020-07-20 19:56:16,463 DEBUG: Merging features onto tracks 2020-07-20 19:56:16,467 DEBUG: Good tracks: 362 2020-07-20 19:56:18,502 INFO: Starting incremental reconstruction 2020-07-20 19:56:18,505 INFO: Starting reconstruction with img1.jpg and img2.jpg 2020-07-20 19:56:18,521 INFO: Two-view reconstruction inliers: 361 / 362 2020-07-20 19:56:18,544 INFO: Triangulated: 362 2020-07-20 19:56:18,551 DEBUG: Ceres Solver Report: Iterations: 4, Initial cost: 8.297278e+00, Final cost: 6.950114e+00, Termination: CONVERGENCE 2020-07-20 19:56:18,581 DEBUG: Ceres Solver Report: Iterations: 3, Initial cost: 7.186157e+00, Final cost: 6.737168e+00, Termination: CONVERGENCE 2020-07-20 19:56:18,622 DEBUG: Ceres Solver Report: Iterations: 10, Initial cost: 2.200140e+01, Final cost: 1.615703e+01, Termination: CONVERGENCE 2020-07-20 19:56:18,624 INFO: Removed outliers: 0 2020-07-20 19:56:18,624 INFO: ------------------------------------------------------- 2020-07-20 19:56:18,667 DEBUG: Ceres Solver Report: Iterations: 10, Initial cost: 1.938163e+01, Final cost: 1.615703e+01, Termination: CONVERGENCE 2020-07-20 19:56:18,701 INFO: Removed outliers: 0 2020-07-20 19:56:18,707 INFO: {'points_count': 362, 'cameras_count': 2, 'observations_count': 724, 'average_track_length': 2.0, 'average_track_length_notwo': -1} 2020-07-20 19:56:18,707 INFO: Reconstruction 0: 2 images, 362 points 2020-07-20 19:56:18,707 INFO: 1 partial reconstructions in total. ```python !opensfm undistort kitchen_example/ !opensfm compute_depthmaps kitchen_example/ ``` 2020-07-20 20:41:45,004 DEBUG: Undistorting the reconstruction 2020-07-20 20:41:45,020 DEBUG: Undistorting image img1.jpg 2020-07-20 20:41:47,625 DEBUG: Undistorting image img2.jpg 2020-07-20 20:41:52,189 INFO: Computing neighbors 2020-07-20 20:41:52,210 INFO: Computing depthmap for image img1.jpg with PATCH_MATCH_SAMPLE 2020-07-20 20:42:14,751 INFO: Computing depthmap for image img2.jpg with PATCH_MATCH_SAMPLE 2020-07-20 20:42:37,527 INFO: Cleaning depthmap for image img1.jpg 2020-07-20 20:42:39,811 INFO: Cleaning depthmap for image img2.jpg 2020-07-20 20:42:42,106 INFO: Pruning depthmap for image img1.jpg 2020-07-20 20:42:43,652 INFO: Pruning depthmap for image img2.jpg 2020-07-20 20:42:45,301 INFO: Merging depthmaps ```python import open3d as o3d from open3d import JVisualizer pcd = o3d.io.read_point_cloud("kitchen_example/reconstruction.ply") visualizer = JVisualizer() visualizer.add_geometry(pcd) visualizer.show() # UNCOMMENT ONLY IF RUNNING LOCALLY # o3d.visualization.draw_geometries([pcd]) ``` JVisualizer with 1 geometries ### Exercise We will now let you try out 3D reconstruction for yourself using OpenSfM! First, we need to make sure you have the proper file structure to make this work. Create a directory called "workspace/" (or whatever else you want). In that directory, create a directory called "images/". From the kitchen_example/ directory, copy "config.yaml" and paste it in workspace/. Take two or more images of your work space and upload them to workspace/images. 
Now run the commands above to reconstruct the scene and visualize it. Is the reconstruction accurate? What about your scene might make it difficult for the reconstruction to work well? Go to workspace/reports and examine the files there. They will give you a sense of how many features were extracted, how many features were matched, and whether some reconstructions are only partial. Do you have multiple partial reconstructions or one larger reconstruction? If you have partial reconstructions, why do you think that is? ```python img1 = cv2.imread("workspace/images/img1.jpg", cv2.IMREAD_GRAYSCALE) plt.imshow(img1, cmap='gray') plt.show() img2 = cv2.imread("workspace/images/img2.jpg", cv2.IMREAD_GRAYSCALE) plt.imshow(img2, cmap='gray') plt.show() ``` ```python MAX_FEATURES = 5000 # creating the ORB feature extractor orb = cv2.ORB_create(MAX_FEATURES) # creating feature points for first and second images kp1, dc1 = orb.detectAndCompute(img1, None) kp2, dc2 = orb.detectAndCompute(img2, None) img1_ORB = cv2.drawKeypoints(img1, kp1, None, flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS) plt.imshow(img1_ORB) plt.show() ``` ```python GOOD_MATCH_PERCENT = 0.2 # create BFMatcher object bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck = True) # match descriptors matches = bf.match(dc1, dc2) # Sort matches by score matches.sort(key=lambda x: x.distance, reverse=False) # Remove worst matches numGoodMatches = int(len(matches) * GOOD_MATCH_PERCENT) matches = matches[:numGoodMatches] img_match = cv2.drawMatches(img1,kp1,img2,kp2,matches,None, flags=2) plt.imshow(img_match) plt.show() pts1 = np.float32([ kp1[m.queryIdx].pt for m in matches ]).reshape(-1,1,2) pts2 = np.float32([ kp2[m.trainIdx].pt for m in matches ]).reshape(-1,1,2) ``` ```python K = np.array([[3.31666140e+03, 0.00000000e+00, 2.00701025e+03], [0.00000000e+00, 3.33904291e+03, 1.48586298e+03], [0.00000000e+00, 0.00000000e+00, 1.00000000e+00]]) ``` ```python pts1 = np.int32(pts1) pts2 = np.int32(pts2) # Estimate the essential matrix E, mask = cv2.findEssentialMat(pts1,pts2,K) # We select only inlier points pts1_inliers = (pts1[mask.ravel()==1]) pts2_inliers = (pts2[mask.ravel()==1]) ``` ```python retval, R, t, mask = cv2.recoverPose(E, pts1_inliers, pts2_inliers, K) print(R) print(t) ``` [[ 0.98359866 -0.12902517 0.12604038] [ 0.12829603 0.99163822 0.01392003] [-0.1267825 0.00247875 0.99192744]] [[-0.97485292] [-0.15737803] [ 0.15777813]] ```python # Take initial guess of intrinsic parameters through metadata !opensfm extract_metadata workspace/ # Detect features points !opensfm detect_features workspace/ # Match feature points across images !opensfm match_features workspace/ # This creates "tracks" for the features. That is to say, if a feature in image 1 is matched with one in image 2, # and in turn that one is matched with one in image 3, then it links the matches between 1 and 3. 
In this case, # it does not matter since we only have two images !opensfm create_tracks workspace/ # Calculates the essential matrix, the camera pose and the reconstructed feature points !opensfm reconstruct workspace/ # For visualization using Open3D !opensfm export_ply workspace/ ``` 2020-07-20 21:23:01,928 INFO: Loading existing EXIF for img1.jpg 2020-07-20 21:23:01,929 INFO: Loading existing EXIF for img2.jpg 2020-07-20 21:23:03,837 INFO: Skip recomputing ROOT_HAHOG features for image img1.jpg 2020-07-20 21:23:03,837 INFO: Skip recomputing ROOT_HAHOG features for image img2.jpg 2020-07-20 21:23:05,747 INFO: Matching 1 image pairs 2020-07-20 21:23:05,753 INFO: Computing pair matching with 1 processes 2020-07-20 21:23:05,777 DEBUG: No segmentation for img2.jpg, no features masked. 2020-07-20 21:23:05,778 DEBUG: No segmentation for img1.jpg, no features masked. 2020-07-20 21:23:06,235 DEBUG: Matching img2.jpg and img1.jpg. Matcher: FLANN (symmetric) T-desc: 0.455 T-robust: 0.002 T-total: 0.457 Matches: 818 Robust: 778 Success: True 2020-07-20 21:23:06,236 DEBUG: Image img2.jpg matches: 1 out of 1 2020-07-20 21:23:06,236 DEBUG: Image img1.jpg matches: 0 out of 0 2020-07-20 21:23:06,236 INFO: Matched 1 pairs for 2 ref_images (perspective-perspective: 1) in 0.4883900179993361 seconds (0.4883904779999284 seconds/pair). 2020-07-20 21:23:08,128 INFO: reading features 2020-07-20 21:23:08,152 DEBUG: Merging features onto tracks 2020-07-20 21:23:08,159 DEBUG: Good tracks: 778 2020-07-20 21:23:10,019 INFO: Starting incremental reconstruction 2020-07-20 21:23:10,025 INFO: Starting reconstruction with img1.jpg and img2.jpg 2020-07-20 21:23:10,058 INFO: Two-view reconstruction inliers: 778 / 778 2020-07-20 21:23:10,104 INFO: Triangulated: 778 2020-07-20 21:23:10,117 DEBUG: Ceres Solver Report: Iterations: 3, Initial cost: 1.095092e+01, Final cost: 8.994927e+00, Termination: CONVERGENCE 2020-07-20 21:23:10,213 DEBUG: Ceres Solver Report: Iterations: 3, Initial cost: 9.416828e+00, Final cost: 8.694985e+00, Termination: CONVERGENCE 2020-07-20 21:23:10,310 DEBUG: Ceres Solver Report: Iterations: 5, Initial cost: 2.391270e+01, Final cost: 1.953910e+01, Termination: CONVERGENCE 2020-07-20 21:23:10,315 INFO: Removed outliers: 0 2020-07-20 21:23:10,316 INFO: ------------------------------------------------------- 2020-07-20 21:23:10,418 DEBUG: Ceres Solver Report: Iterations: 5, Initial cost: 2.176169e+01, Final cost: 1.953908e+01, Termination: CONVERGENCE 2020-07-20 21:23:10,424 INFO: Removed outliers: 0 2020-07-20 21:23:10,435 INFO: {'points_count': 778, 'cameras_count': 2, 'observations_count': 1556, 'average_track_length': 2.0, 'average_track_length_notwo': -1} 2020-07-20 21:23:10,435 INFO: Reconstruction 0: 2 images, 778 points 2020-07-20 21:23:10,435 INFO: 1 partial reconstructions in total. 
```python ``` ```python !opensfm undistort workspace/ ``` 2020-07-20 21:23:16,587 DEBUG: Undistorting the reconstruction 2020-07-20 21:23:16,616 DEBUG: Undistorting image img1.jpg 2020-07-20 21:23:18,852 DEBUG: Undistorting image img2.jpg ```python !opensfm compute_depthmaps workspace/ ``` 2020-07-20 21:23:25,163 INFO: Computing neighbors 2020-07-20 21:23:25,196 INFO: Using precomputed raw depthmap img1.jpg 2020-07-20 21:23:25,196 INFO: Using precomputed raw depthmap img2.jpg 2020-07-20 21:23:25,196 INFO: Using precomputed clean depthmap img1.jpg 2020-07-20 21:23:25,196 INFO: Using precomputed clean depthmap img2.jpg 2020-07-20 21:23:25,196 INFO: Using precomputed pruned depthmap img1.jpg 2020-07-20 21:23:25,196 INFO: Using precomputed pruned depthmap img2.jpg 2020-07-20 21:23:25,196 INFO: Merging depthmaps ```python import open3d as o3d from open3d import JVisualizer pcd = o3d.io.read_point_cloud("workspace/undistorted/depthmaps/merged.ply") visualizer = JVisualizer() visualizer.add_geometry(pcd) visualizer.show() # UNCOMMENT ONLY IF RUNNING LOCALLY # o3d.visualization.draw_geometries([pcd]) ``` JVisualizer with 1 geometries ```python ```
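As a closing aside that is not part of the original notebook: the two-view pipeline we assembled by hand earlier (features, the intrinsic matrix `K`, and `cv2.recoverPose`) can also be finished without OpenSfM by triangulating the matched points directly. The sketch below reuses `K`, `R`, `t`, `pts1_inliers` and `pts2_inliers` from the cells above; the point cloud it produces is only defined up to an overall scale and has no bundle adjustment, so treat it as a rough check rather than a finished reconstruction.

```python
import numpy as np
import cv2

# Projection matrices: camera 1 sits at the origin, camera 2 at the recovered pose
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])

# cv2.triangulatePoints expects 2xN arrays of pixel coordinates
x1 = np.float64(pts1_inliers).reshape(-1, 2).T
x2 = np.float64(pts2_inliers).reshape(-1, 2).T

Xh = cv2.triangulatePoints(P1, P2, x1, x2)   # 4xN homogeneous coordinates
X3d = (Xh[:3] / Xh[3]).T                     # Nx3 Euclidean points, up to scale

print(X3d.shape)
```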
a1b0f6baa804e68b5d17d043ce46602447caeceb
806,906
ipynb
Jupyter Notebook
11-3D_Reconstruction_and_SfM.ipynb
agard111/08-3D-Reconstruction
e338e9b711db992266de3d9bf1ea2f3038ed057f
[ "MIT" ]
null
null
null
11-3D_Reconstruction_and_SfM.ipynb
agard111/08-3D-Reconstruction
e338e9b711db992266de3d9bf1ea2f3038ed057f
[ "MIT" ]
null
null
null
11-3D_Reconstruction_and_SfM.ipynb
agard111/08-3D-Reconstruction
e338e9b711db992266de3d9bf1ea2f3038ed057f
[ "MIT" ]
null
null
null
789.536204
127,248
0.945373
true
9,795
Qwen/Qwen-72B
1. YES 2. YES
0.885631
0.867036
0.767874
__label__eng_Latn
0.993398
0.622361
--- author: Nathan Carter ([email protected]) --- This answer assumes you have imported SymPy as follows. ```python from sympy import * # load all math functions init_printing( use_latex='mathjax' ) # use pretty math output ``` | Mathematical notation | Python code | Requires SymPy? | |--|--|--| | $x+y$ | `x+y` | no | | $x-y$ | `x-y` | no | | $xy$ | `x*y` | no | | $\frac xy$ | `x/y` | no | | $\left\lfloor\frac xy\right\rfloor$ | `x//y` | no | | remainder of $x\div y$ | `x%y` | no | | $x^y$ | `x**y` | no | | $\vert x\vert$ | `abs(x)` | no | | $\ln x$ | `log(x)` | yes | | $\log_a b$ | `log(b,a)` | yes | | $e$ | `E` | yes | | $e^x$ | `exp(x)` | yes | | $\pi$ | `pi` | yes | | $\sin x$ | `sin(x)` | yes | | $\sin^{-1} x$ | `asin(x)` | yes | | $\sqrt x$ | `sqrt(x)` | yes | Other trigonometric functions are also available besides just `sin`, including `cos`, `tan`, etc. Note that SymPy gives exact answers to mathematical queries, which may not be what you want. ```python sqrt(2) ``` $\displaystyle \sqrt{2}$ If you want a decimal approximation instead, you can use the `N` function. ```python N(sqrt(2)) ``` $\displaystyle 1.4142135623731$ Or you can use the `evalf` function. ```python sqrt(2).evalf() ``` $\displaystyle 1.4142135623731$ By contrast, if you need an exact rational number when Python gives you an approximation, you can use the `Rational` function to build one. Note the differences below: ```python 1/3 ``` $\displaystyle 0.333333333333333$ ```python Rational(1,3) ``` $\displaystyle \frac{1}{3}$
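One more detail that goes beyond the answer above: both `N` and `evalf` accept a number of significant digits, so you are not limited to the default precision shown in the outputs.

```python
print( N( sqrt(2), 30 ) )   # 30 significant digits
print( pi.evalf(30) )
```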
db9e8c44739005b10ebd9894adb10eac29091fe2
4,404
ipynb
Jupyter Notebook
database/tasks/How to do basic mathematical computations/Python, using SymPy.ipynb
nathancarter/how2data
7d4f2838661f7ce98deb1b8081470cec5671b03a
[ "MIT" ]
null
null
null
database/tasks/How to do basic mathematical computations/Python, using SymPy.ipynb
nathancarter/how2data
7d4f2838661f7ce98deb1b8081470cec5671b03a
[ "MIT" ]
null
null
null
database/tasks/How to do basic mathematical computations/Python, using SymPy.ipynb
nathancarter/how2data
7d4f2838661f7ce98deb1b8081470cec5671b03a
[ "MIT" ]
2
2021-07-18T19:01:29.000Z
2022-03-29T06:47:11.000Z
21.588235
99
0.482516
true
511
Qwen/Qwen-72B
1. YES 2. YES
0.908618
0.853913
0.77588
__label__eng_Latn
0.887981
0.640963
# Transport equation with source term $$ \renewcommand{\DdQq}[2]{{\mathrm D}_{#1}{\mathrm Q}_{#2}} \renewcommand{\drondt}{\partial_t} \renewcommand{\drondx}{\partial_x} \renewcommand{\dx}{\Delta x} \renewcommand{\dt}{\Delta t} \renewcommand{\grandO}{{\mathcal O}} \renewcommand{\density}[2]{\,f_{#1}^{#2}} \renewcommand{\fk}[1]{\density{#1}{\vphantom{\star}}} \renewcommand{\fks}[1]{\density{#1}{\star}} \renewcommand{\moment}[2]{\,m_{#1}^{#2}} \renewcommand{\mk}[1]{\moment{#1}{\vphantom{\star}}} \renewcommand{\mke}[1]{\moment{#1}{e}} \renewcommand{\mks}[1]{\moment{#1}{\star}} $$ In this tutorial, we propose to add a source term in the advection equation. The problem reads $$\drondt u + c \drondx u = S(t, x, u), \quad t>0, , \quad x\in(0, 1),$$ where $c$ is a constant scalar (typically $c=1$). Additional boundary and initial conditions will be given in the following. $S$ is the source term that can depend on the time $t$, the space $x$ and the solution $u$. In order to simulate this problem, we use the $\DdQq{1}{2}$ scheme and we add an additional `key:value` in the dictionary for the source term. We deal with two examples. ## A friction term In this example, we takes $S(t, x, u) = -\alpha u$ where $\alpha$ is a positive constant. The dictionary of the simulation then reads: ```python %matplotlib inline import sympy as sp import numpy as np import pylbm ``` /home/loic/miniconda3/envs/pylbm/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`. from ._conv import register_converters as _register_converters ```python C, ALPHA, X, u, LA = sp.symbols('C, ALPHA, X, u, LA') c = 0.3 alpha = 0.5 def init(x): middle, width, height = 0.4, 0.1, 0.5 return height/width**10 * (x%1-middle-width)**5 * \ (middle-x%1-width)**5 * (abs(x%1-middle)<=width) def solution(t, x): return init(x - c*t)*np.exp(-alpha*t) dico = { 'box':{'x':[0., 1.], 'label':-1}, 'space_step':1./128, 'scheme_velocity':LA, 'schemes':[ { 'velocities':[1,2], 'conserved_moments':u, 'polynomials':[1,LA*X], 'relaxation_parameters':[0., 2.], 'equilibrium':[u, C*u], 'source_terms':{u:-ALPHA*u}, 'init':{u:(init,)}, }, ], 'parameters': {LA: 1., C: c, ALPHA: alpha}, 'generator': 'numpy', } sol = pylbm.Simulation(dico) # build the simulation viewer = pylbm.viewer.matplotlib_viewer fig = viewer.Fig() ax = fig[0] ax.axis(0., 1., -.1, .6) x = sol.domain.x ax.plot(x, sol.m[u], width=2, color='k', label='initial') while sol.t < 1: sol.one_time_step() ax.plot(x, sol.m[u], width=2, color='b', label=r'$D_1Q_2$') ax.plot(x, solution(sol.t, x), width=2, color='r', label='exact') ax.legend() ``` ## A source term depending on time and space If the source term $S$ depends explicitely on the time or on the space, we have to specify the corresponding variables in the dictionary through the key *parameters*. The time variable is prescribed by the key *'time'*. Moreover, sympy functions can be used to define the source term like in the following example. This example is just for testing the feature... no physical meaning in mind ! 
```python t, C, X, u, LA = sp.symbols('t, C, X, u, LA') c = 0.3 def init(x): middle, width, height = 0.4, 0.1, 0.5 return height/width**10 * (x%1-middle-width)**5 * \ (middle-x%1-width)**5 * (abs(x%1-middle)<=width) dico = { 'box':{'x':[0., 1.], 'label':-1}, 'space_step':1./128, 'scheme_velocity':LA, 'schemes':[ { 'velocities':[1,2], 'conserved_moments':u, 'polynomials':[1,LA*X], 'relaxation_parameters':[0., 2.], 'equilibrium':[u, C*u], 'source_terms':{u:-sp.Abs(X-t)**2*u}, 'init':{u:(init,)}, }, ], 'generator': 'cython', 'parameters': {LA: 1., C: c, 'time': t}, } sol = pylbm.Simulation(dico) # build the simulation viewer = pylbm.viewer.matplotlib_viewer fig = viewer.Fig() ax = fig[0] ax.axis(0., 1., -.1, .6) x = sol.domain.x ax.plot(x, sol.m[u], width=2, color='k', label='initial') while sol.t < 1: sol.one_time_step() ax.plot(x, sol.m[u], width=2, color='b', label=r'$D_1Q_2$') ax.legend() ``` ```python ```
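As a quick sanity check, independent of pylbm and not part of the original notebook, we can verify symbolically that any profile of the form $u(t,x)=f(x-ct)\,e^{-\alpha t}$ satisfies the friction equation $\partial_t u + c\,\partial_x u = -\alpha u$. This is why the exact solution used in the first example is simply the advected initial condition damped by $e^{-\alpha t}$. The symbol names below are chosen so that they do not clash with the scheme's own sympy symbols.

```python
import sympy as sp

tv, xv, cv, av = sp.symbols('t, x, c, alpha')
f = sp.Function('f')

u_exact = f(xv - cv*tv) * sp.exp(-av*tv)
residual = sp.diff(u_exact, tv) + cv*sp.diff(u_exact, xv) + av*u_exact

print(sp.simplify(residual))  # prints 0: the ansatz solves the equation
```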
10caf4d2c640c1f717e7ca443b342eb31519773f
23,555
ipynb
Jupyter Notebook
notebooks/08_advection_reaction.ipynb
Mopolino8/pylbm
b457ccdf1e7a1009807bd1136a276886f81a9e7d
[ "BSD-3-Clause" ]
null
null
null
notebooks/08_advection_reaction.ipynb
Mopolino8/pylbm
b457ccdf1e7a1009807bd1136a276886f81a9e7d
[ "BSD-3-Clause" ]
null
null
null
notebooks/08_advection_reaction.ipynb
Mopolino8/pylbm
b457ccdf1e7a1009807bd1136a276886f81a9e7d
[ "BSD-3-Clause" ]
1
2019-11-24T17:13:26.000Z
2019-11-24T17:13:26.000Z
110.070093
16,612
0.829251
true
1,449
Qwen/Qwen-72B
1. YES 2. YES
0.849971
0.771843
0.656045
__label__eng_Latn
0.833152
0.362543
```python class Dice: ### fig:dice1 ### def __init__(self): self.numbers = {1,2,3,4,5,6} dice = Dice() print(dice.numbers) ``` {1, 2, 3, 4, 5, 6} ```python 1 in dice.numbers ### fig:1_in_dice #### ``` True ```python 7 not in dice.numbers ### fig:1_not_in_dice #### ``` True ```python import random # モジュールのインポート ### fig:dice2 ### class Dice: def __init__(self): self.numbers = {1,2,3,4,5,6} def roll(self): return random.choice(list(self.numbers)) # 追加 def rolls(self,num): return random.choices(list(self.numbers), k=num) # 追加 dice = Dice() ### roll, rollsを使ってみましょう ### print(dice.roll()) print(dice.rolls(10)) ``` 3 [1, 5, 3, 5, 6, 4, 3, 3, 6, 4] ```python S_bag = {"R","R","R","W","W","W","W"} ### fig:wrong_set ### print(S_bag) ``` {'R', 'W'} ```python S_bag = {"R_1","R_2","R_3","W_1","W_2","W_3","W_4"} ### fig:ball_set ### print(S_bag) ``` {'R_1', 'R_3', 'R_2', 'W_1', 'W_3', 'W_4', 'W_2'} ```python class Bag: ### fig:bag_and_ball ### def __init__(self): self.contents = set() # 空の集合をひとつ作っておく def add(self,ball): self.contents.add(ball) # ボールを袋に入れる操作 class Ball: def __init__(self, color): self.color = color # ボールは色を示す変数(文字列の予定)だけ持つ bag = Bag() # 袋を一つ作る for i in range(3): # 赤い玉を3つ加える bag.add(Ball("red")) for i in range(4): # 白い玉を4つ加える bag.add(Ball("white")) print(bag.contents) ``` {<__main__.Ball object at 0x1089ee048>, <__main__.Ball object at 0x1089ee470>, <__main__.Ball object at 0x1089ee080>, <__main__.Ball object at 0x1089eeb00>, <__main__.Ball object at 0x1089ee978>, <__main__.Ball object at 0x1089ee5c0>, <__main__.Ball object at 0x1089ee9e8>} ```python class Bag: ### fig:bag_and_ball2 ### def __init__(self): # @@@省略@@@ self.contents = set() # @@@省略@@@ def add(self,ball): # @@@省略@@@ self.contents.add(ball) # @@@省略@@@ def pop(self): # このメソッドを追加 if not self.contents: return None # 空ならNoneを返す b = random.choice(list(self.contents)) self.contents.remove(b) # setから選んだボールを消す return b class Ball: # @@@省略@@@ def __init__(self, color): # @@@省略@@@ self.color = color # @@@省略@@@ bag = Bag() for i in range(3): bag.add(Ball("red")) for i in range(4): bag.add(Ball("white")) result = [] # 選んだボールの色を記録するためのリスト b = bag.pop() while b: result.append(b.color) b = bag.pop() print(result) ``` ['red', 'red', 'red', 'white', 'white', 'white', 'white'] ```python class Agent: ### fig:dice_probability ### def P_dice(self, dice, event): return len(event)/len(dice.numbers) agent = Agent() print(agent.P_dice(dice, {2,4,6})) # 偶数 print(agent.P_dice(dice, {3,4,5,6})) # 3以上 print(agent.P_dice(dice, {})) # 要素が空 print(agent.P_dice(dice, {1,2,3,4,5,6})) # 全要素 ``` 0.5 0.6666666666666666 0.0 1.0 ```python print(agent.P_dice(dice, 3)) # 集合にせずに出目を与える ### fig:only_give_element ### ``` ```python import sympy ### fig:powerset ### s = sympy.FiniteSet(*dice.numbers) print(s.powerset()) ``` {EmptySet(), {1}, {2}, ..., {1, 3, 4, 5, 6}, {2, 3, 4, 5, 6}, {1, 2, 3, 4, 5, 6}} ```python s = sympy.FiniteSet(*dice.numbers) print(s**3) twice = set(e for e in s**3) print(twice) print(len(twice)) ``` {1, 2, 3, 4, 5, 6} x {1, 2, 3, 4, 5, 6} x {1, 2, 3, 4, 5, 6} {(4, 2, 2), (1, 4, 4), (2, 2, 4), (5, 5, 1), (5, 2, 1), (1, 4, 2), (5, 5, 3), (3, 1, 6), (5, 2, 3), (5, 5, 5), (3, 1, 4), (2, 6, 5), (3, 2, 2), (4, 1, 5), (3, 1, 2), (2, 6, 3), (6, 5, 5), (2, 5, 3), (4, 4, 2), (1, 2, 2), (6, 6, 3), (2, 6, 1), (6, 5, 3), (3, 2, 6), (2, 5, 1), (4, 6, 1), (4, 1, 1), (1, 2, 4), (6, 6, 1), (5, 3, 2), (1, 5, 5), (6, 5, 1), (3, 2, 4), (4, 6, 3), (4, 1, 3), (1, 2, 6), (2, 5, 5), (4, 6, 5), (1, 3, 5), (6, 3, 1), (4, 3, 6), (6, 6, 5), (5, 3, 6), (1, 5, 1), (3, 4, 5), (2, 
3, 4), (1, 3, 3), (6, 3, 3), (4, 3, 4), (5, 6, 2), (5, 3, 4), (1, 5, 3), (2, 3, 6), (1, 3, 1), (6, 3, 5), (4, 3, 2), (5, 6, 4), (6, 4, 4), (3, 3, 1), (5, 1, 5), (3, 4, 1), (6, 2, 6), (5, 6, 6), (6, 4, 6), (3, 3, 3), (3, 4, 3), (2, 3, 2), (6, 2, 4), (6, 1, 2), (3, 3, 5), (2, 4, 2), (5, 1, 1), (3, 6, 6), (6, 2, 2), (5, 4, 3), (1, 6, 4), (6, 4, 2), (5, 1, 3), (2, 4, 4), (3, 6, 4), (4, 5, 2), (6, 1, 6), (5, 4, 1), (1, 6, 6), (2, 4, 6), (3, 6, 2), (2, 1, 5), (4, 5, 4), (4, 2, 5), (6, 1, 4), (3, 5, 2), (2, 2, 3), (4, 5, 6), (2, 1, 3), (5, 4, 5), (1, 6, 2), (2, 2, 1), (4, 4, 5), (2, 1, 1), (4, 2, 1), (1, 1, 2), (5, 2, 4), (3, 5, 6), (4, 4, 3), (4, 2, 3), (1, 1, 4), (5, 2, 6), (3, 5, 4), (2, 2, 5), (4, 4, 1), (1, 1, 6), (1, 4, 5), (5, 2, 2), (1, 4, 3), (5, 5, 2), (3, 1, 5), (2, 6, 6), (6, 5, 6), (1, 4, 1), (4, 1, 4), (5, 5, 4), (3, 1, 3), (2, 6, 4), (6, 5, 4), (3, 2, 3), (2, 5, 2), (4, 1, 6), (1, 2, 1), (5, 5, 6), (3, 1, 1), (2, 6, 2), (6, 5, 2), (3, 2, 1), (1, 2, 3), (6, 6, 2), (5, 3, 3), (2, 5, 6), (4, 1, 2), (1, 2, 5), (5, 3, 1), (1, 5, 4), (3, 2, 5), (2, 5, 4), (4, 6, 2), (1, 3, 6), (6, 6, 6), (1, 5, 6), (3, 4, 4), (4, 6, 4), (1, 3, 4), (6, 3, 2), (5, 6, 1), (6, 6, 4), (5, 3, 5), (3, 4, 6), (2, 3, 5), (4, 6, 6), (1, 3, 2), (6, 3, 4), (4, 3, 5), (5, 6, 3), (6, 4, 5), (1, 5, 2), (6, 3, 6), (4, 3, 3), (5, 6, 5), (5, 1, 4), (2, 4, 1), (3, 4, 2), (2, 3, 1), (6, 2, 5), (4, 3, 1), (6, 4, 1), (3, 3, 2), (2, 4, 3), (5, 1, 6), (2, 3, 3), (6, 2, 3), (6, 1, 3), (5, 4, 2), (6, 4, 3), (3, 3, 4), (2, 4, 5), (4, 5, 1), (2, 1, 6), (6, 2, 1), (6, 1, 1), (1, 6, 5), (3, 3, 6), (5, 1, 2), (3, 6, 5), (2, 1, 4), (4, 5, 3), (5, 4, 6), (3, 5, 3), (3, 6, 3), (2, 1, 2), (4, 5, 5), (4, 2, 4), (1, 1, 1), (6, 1, 5), (5, 4, 4), (1, 6, 1), (3, 5, 1), (2, 2, 2), (4, 4, 6), (3, 6, 1), (4, 2, 6), (1, 1, 3), (1, 6, 3), (4, 4, 4), (1, 1, 5), (5, 2, 5), (1, 4, 6), (3, 5, 5), (2, 2, 6)} 216 ```python 6**3 ``` 216 ```python ```
d52b8a3989013d4098582e63be59622170736f77
12,530
ipynb
Jupyter Notebook
math/prob.ipynb
naokawa0609/naokawa
5547cea501259ce5b5bc8708e30760ad2f4187b2
[ "MIT" ]
148
2019-03-27T00:20:16.000Z
2022-03-30T22:34:11.000Z
math/prob.ipynb
naokawa0609/naokawa
5547cea501259ce5b5bc8708e30760ad2f4187b2
[ "MIT" ]
3
2018-11-07T04:33:13.000Z
2018-12-31T01:35:16.000Z
math/prob.ipynb
naokawa0609/naokawa
5547cea501259ce5b5bc8708e30760ad2f4187b2
[ "MIT" ]
116
2019-04-18T08:35:53.000Z
2022-03-24T05:17:46.000Z
35.196629
2,387
0.433759
true
3,382
Qwen/Qwen-72B
1. YES 2. YES
0.672332
0.763484
0.513314
__label__krc_Cyrl
0.980121
0.03093
# Tutorial and example about approx. ODE with Euler Method ```python import numpy as np import math ``` - __define__ $f=(t,y)$ - __input__ $t_{0}$ and $y_{0}$ - __input__ step size, $h$ and number of steps $n$ - __for__ $j$ from 1 to $n$: - $m=f(t_{0},y_{0})$ - $y_{1}$ = $y_{0}+h*m$ - $t_{1}=t_{0}+h$ - Print $t_{1}$ and $y_{1}$ - $t_{0}=t_{1}$ - $y_{0}=y_{1}$ ```python def euler(givenfunction,t0,y0,h,n,i): m=givenfunction(t0,y0) y1=y0+h*m t1=t0+h i=i+1 if i==n: print('t= %.2f'%t1,'approx= %.6f'%y1) times.append(round(t1,2)) results.append(round(y1,5)) return (t1,y1) else: print('t= %.2f'%t1,'approx= %.6f'%y1) times.append(round(t1,2)) results.append(round(y1,5)) return euler(givenfunction,t1,y1,h,n,i) ``` For the given equation and initial condition \begin{equation} y^{\prime}+2 y=2-\mathbf{e}^{-4 t} \quad y(0)=1 \end{equation} Pick a $h$ and values of $t$, ```python def func(t,y): y_prime = 2-math.exp(-4*t)-2*y return y_prime ``` ```python h=0.1 t0=0 y0=1 n=50 times=[t0] results=[y0] ``` ```python x=euler(func,t0,y0,h,n,i=1) ``` t= 0.10 approx= 0.900000 t= 0.20 approx= 0.852968 t= 0.30 approx= 0.837441 t= 0.40 approx= 0.839834 t= 0.50 approx= 0.851677 t= 0.60 approx= 0.867808 t= 0.70 approx= 0.885175 t= 0.80 approx= 0.902059 t= 0.90 approx= 0.917571 t= 1.00 approx= 0.931324 t= 1.10 approx= 0.943228 t= 1.20 approx= 0.953355 t= 1.30 approx= 0.961861 t= 1.40 approx= 0.968937 t= 1.50 approx= 0.974780 t= 1.60 approx= 0.979576 t= 1.70 approx= 0.983495 t= 1.80 approx= 0.986684 t= 1.90 approx= 0.989273 t= 2.00 approx= 0.991368 t= 2.10 approx= 0.993061 t= 2.20 approx= 0.994426 t= 2.30 approx= 0.995526 t= 2.40 approx= 0.996411 t= 2.50 approx= 0.997122 t= 2.60 approx= 0.997693 t= 2.70 approx= 0.998151 t= 2.80 approx= 0.998519 t= 2.90 approx= 0.998814 t= 3.00 approx= 0.999050 t= 3.10 approx= 0.999239 t= 3.20 approx= 0.999391 t= 3.30 approx= 0.999513 t= 3.40 approx= 0.999610 t= 3.50 approx= 0.999688 t= 3.60 approx= 0.999750 t= 3.70 approx= 0.999800 t= 3.80 approx= 0.999840 t= 3.90 approx= 0.999872 t= 4.00 approx= 0.999898 t= 4.10 approx= 0.999918 t= 4.20 approx= 0.999934 t= 4.30 approx= 0.999948 t= 4.40 approx= 0.999958 t= 4.50 approx= 0.999966 t= 4.60 approx= 0.999973 t= 4.70 approx= 0.999979 t= 4.80 approx= 0.999983 t= 4.90 approx= 0.999986 Analytical solution \begin{equation} y(t)=1+\frac{1}{2} \mathbf{e}^{-4 t}-\frac{1}{2} \mathbf{e}^{-2 t} \end{equation} ```python def solution(t): y=1+0.5*math.exp(-4*t)-0.5*math.exp(-2*t) return y ``` ```python a_results=[] time_list=np.arange(0,0.6,0.01) for t in time_list: a_results.append(solution(t)) ``` ```python ``` ```python import numpy as np import matplotlib.pyplot as plt def f(t): y=1+0.5*np.exp(-4*t)-0.5*np.exp(-2*t) return y t2 = np.arange(0.0, 6.0, 0.02) plt.figure(1) plt.plot(times,results,'bo',t2, f(t2), 'k') plt.show() ``` Blue dots are numerical approximations with Euler Method ```python ```
8b396639c1eebb43c3cb616fdac6cee1ca0f17d4
20,406
ipynb
Jupyter Notebook
euler_method_ode.ipynb
bkoyuncu/euler_ODE
d639bc3d14da1c215f4d24ea3a12e7a1727519e4
[ "MIT" ]
null
null
null
euler_method_ode.ipynb
bkoyuncu/euler_ODE
d639bc3d14da1c215f4d24ea3a12e7a1727519e4
[ "MIT" ]
null
null
null
euler_method_ode.ipynb
bkoyuncu/euler_ODE
d639bc3d14da1c215f4d24ea3a12e7a1727519e4
[ "MIT" ]
null
null
null
72.106007
13,900
0.805449
true
1,448
Qwen/Qwen-72B
1. YES 2. YES
0.877477
0.865224
0.759214
__label__eng_Latn
0.276341
0.602241
# Neutron transport in H slab Please indicate your name below, since you will need to submit this notebook completed at the latest the day after the datalab. Don't forget to save your progress during the datalab to avoid any loss due to crashes. ```python name='' ``` In this experiment we will put together our previous knowledge to track neutrons in a homogeneous medium filled with liquid hydrogen. We will track neutrons in 3 dimensions (although we will investigate the spatial dependence of the flux in 1 dimension only, and we will consider that the space is infinite in the $y$ and $z$ directions, and finite in $x$). This is not a very realistic problem, and it is also not a practically relevant one, however it provides an excellent opportunity for us to see how the pieces of neutron transport can be put together. Since the physics is very simple in this case, our approximations are more or less valid: - there are only two reactions on H-1: scattering and capture - there are no resonances in the H-1 cross sections. - scattering can be considered elastic and isotropic in CoM for almost every neutron energy - we consider that H atoms are at rest (ie. temperature is 0K), so we do not need to consider any upscattering. - we even neglect any molecular bonds between H atoms. We are not going to write too much code here, because in fact we have already done most of the necessary preparation during the previous datalabs. We will consider that the density of the liquid is 0.07085 g/cm3, and there is a 3 MeV neutron point source placed at $x=0.0,y=0.0,z=0.0$. Our goal will be to plot the trajectories of neutrons and to estimate the flux vs the x coordinate (although, one could argue that for a point source we should measure the flux vs the radial distance from the source since the problem is spherical). ```python import numpy as np import matplotlib.pyplot as plt ``` ## XS ### Microscopic XS First we obtain the microscopic cross sections for scattering and capture. We can see that the scattering cross section is nearly constant at epithermal energies, and the capture cross section is also very smooth. We can also notice that most of the reactions are going to be scattering reactions. ```python xsscatter=np.loadtxt('05b-xs_Hscatter.dat',skiprows=2) Es=xsscatter[:,0] XSs=xsscatter[:,1] xscapture=np.loadtxt('05b-xs_Hcapture.dat',skiprows=2) Ec=xscapture[:,0] XSc=xscapture[:,1] plt.figure() plt.loglog(Es,XSs,label='scatter') plt.loglog(Ec,XSc,label='capture') plt.xlabel('Energy (eV)') plt.ylabel('XS (barn)') plt.legend() plt.show() ``` ### Macroscopic XS Now, we will multiply with the atom density to obtain the macroscopic cross sections. We will consider that our liquid is made of H atoms, although we know that this is not the case in reality. ```python density = 0.07085 #g/cm3 A = 1 Numdens = density * 6.022E23 / A #let's ignore that it is H2 molecule MXSc= XSc * Numdens*1e-24 MXSs= XSs * Numdens*1e-24 ``` ## Path to next collision In the previous datalab we saw that $\exp(-\Sigma_t x)$ is the probability that a neutron moves a distance $x$ without any interaction, and $\Sigma_t \exp(-\Sigma_t x)dx$ is the probability that the neutron has its first interaction within $dx$ around $x$. So $p(x)=\Sigma_t \exp(-\Sigma_t x)$ Thus $F(x)=1-\exp(-\Sigma_t x)$ If we take the inverse to sample a random path, $x=-\frac{\ln(1-r)}{\Sigma_t}$, but if $r$ is uniform over $[0,1)$, then $1-r$ is also uniform over $[0,1)$, so this simplifies to $x=-\frac{\ln r}{\Sigma_t}$ **Note** speed is everything in MC calculations. 
Although we have not tried to avoid every unnecessary operation here, this example highlights that sometimes operations can be avoided with some reasoning. So we can define the `distanceToCollision` function to sample a distance between two collision sites. ```python def distanceToCollision(SigT,N=1): x=np.random.uniform(0,1,N) return -np.log(x)/SigT ``` Let's play a bit with this one. Between 1-10000 eV the scattering cross section is still more or less constant, and several orders of magnitude larger than the capture cross section, so for these energies we do not expect to have large differences between the mean free paths. For faster energies (eg. 1 MeV) we see a longer distance. ```python fig, axs = plt.subplots(2, 2, figsize=(10,7)) fig.subplots_adjust(hspace=.7) fig.subplots_adjust(wspace=.3) for i,E in enumerate([1e0,1e2,1e4,1e6]): SigS=np.interp(E,Es,MXSs) SigC=np.interp(E,Ec,MXSc) SigT=SigS+SigC print('Energy: {} eV, Total XS: {} 1/cm'.format(E,SigT)) ds=distanceToCollision(SigT,10000) axs[int(i>=2), i%2].hist(ds,50) axs[int(i>=2), i%2].set_xlabel('distance between collision (cm)') axs[int(i>=2), i%2].set_ylabel('occurrence') axs[int(i>=2), i%2].set_title( '@%.1e eV is \n Empirical mfp %.2f cm \n Theoretical mfp is %.2f cm '%(E,np.mean(ds),1/SigT)) ``` ## Reaction type The probability of reaction $i$ happening at energy $E$ is \begin{equation} \frac{\Sigma_i(E)}{\Sigma_t(E)} \end{equation} In our example only two reactions might happen: scattering or capture, so a simple condition can be used to decide which happens. Note that the function would not necessarily require $\Sigma_t$ as input. We also saw that `np.random.choice` could handle this for us. ```python def reactionType(SigS,SigC,SigT): #TODO: more generic for any number of reactions. x=np.random.uniform(0,1) if x < SigS/SigT: return 'scatter' else: return 'capture' ``` ## Scattering and directions While developing the theory of elastic scattering, we assumed that scattering is isotropic in the Center-of-Mass frame, and we found a relation between the CM and LAB angles: \begin{equation} \tan \theta_L = \frac{\sin \theta_C}{\frac{1}{A}+\cos \theta_C} \end{equation} It was also derived how the outgoing neutron energy depends on the incoming energy and the scattering cosine: \begin{equation} E_f=\Big[\frac{(1+\alpha)+(1-\alpha)\cos \theta_C}{2}\Big]E_i \end{equation} Now we implement the `elasticScatter()` function which will sample the outgoing LAB energy and scattering cosine for a neutron with a certain incoming LAB energy. ```python def elasticScatter(E): #note: relies on the globals A and alpha defined before the main loop muC=np.random.uniform(-1,1) thetaC=np.arccos(muC) E=(((1+alpha)+(1-alpha)*muC)/2)*E thetaL=np.arctan2(np.sin(thetaC),((1/A)+muC)) muL=np.cos(thetaL) return E, muL ``` When the neutron is born we assume it is being emitted isotropically from the source, thus we need a function which samples random directions. For the random directions we have to keep in mind that it is not theta which is uniformly distributed, but the cosine of the angle, as we saw during the previous datalab. For transforming the directions of a neutron after scattering we can use the following formulae (from https://docs.openmc.org/en/v0.10.0/methods/physics.html), which is just based on coordinate transformation. 
\begin{equation} u' = \mu u + \frac{\sqrt{1 - \mu^2} ( uw \cos\phi - v \sin\phi )}{\sqrt{1 - w^2}} \end{equation} \begin{equation} v' = \mu v + \frac{\sqrt{1 - \mu^2} ( vw \cos\phi + u \sin\phi )}{\sqrt{1 - w^2}} \end{equation} \begin{equation} w' = \mu w - \sqrt{1 - \mu^2} \sqrt{1 - w^2} \cos\phi \end{equation} ```python def randomDir(): mu=np.random.uniform(-1,1) theta=np.arccos(mu) phi=np.random.uniform(0,2*np.pi) u=np.sin(theta)*np.cos(phi) v=np.sin(theta)*np.sin(phi) w=np.cos(theta) return np.array([u,v,w]) def transformDir(u,v,w,mu): """ transform coordinates according to openMC documentation. TODO: could be updated to receive a direction array Parameters ---------- u : float Old x-direction v : float Old y-direction w : float Old z-direction mu : float Lab cosine of scattering angle """ phi=np.random.uniform(0,2*np.pi) un=mu*u+(np.sqrt(1-mu**2)*(u*w*np.cos(phi)-v*np.sin(phi)))/(np.sqrt(1-w**2)) vn=mu*v+(np.sqrt(1-mu**2)*(v*w*np.cos(phi)+u*np.sin(phi)))/(np.sqrt(1-w**2)) wn=mu*w-np.sqrt(1-mu**2)*np.sqrt(1-w**2)*np.cos(phi) return np.array([un,vn,wn]) ``` ## Flux scoring The flux can be interpreted as the total distance traveled by neutrons in a volume. That said, it can be estimated by summing all the distances traveled by neutrons in a volume (this is called *track-length estimator*). However, in our case we would like to obtain the space dependence of the flux, thus the knowledge of traveled disctance in slices of the slab would be required. For which one needs to know where a certain particle crosses the surface dividing volumes. This is not a difficult task (one only needs to find the intersection of a line and a plane/surface), and for heterogeneous geometries (ie. which are built of regions filled with different materials) it is anyhow required to keep track of such events. However for the current demonstration we do not wish to implement such ray tracing methodology. For us the simpler choice is to use the definition of flux through the definition of reacton rates and use a *collision estimator*. $\phi = \frac{1}{W} \sum_{i \in C} \frac{w_i}{\Sigma_t (E_i)}$, where $w_i$ are the weights of the particles (at the time of the reaction), and $W$ is the total weight. This plays a role only in more advanced Monte Carlo methods. For us, $w_i=1$ always, thus $W$ will be the number of simulated particles. So in fact we will need to add $1/(N\Sigma_t)$ at every collision event, and in case we would like to get the spatial dependence of the flux, we can score this into some predefined space bins. The final value of the estimator will give the flux per source particle. If one is interested in the total physical flux, the results should be renormalized with the source rate. # Main Now it is time to put everything together! Let's break down neutron transport of one single neutron into a flowchart: This needs to be repeated for the number of neutrons which allows us to get a converged estimate of the flux. The real value of MC based techniques is apparent: in case the trajectories are independent from each other, they can be evaluated in parallel (and to some extent this is true even when the trajectories are dependent such as for secondary neutrons emerging from fission events). For the moment we will not bother with functions or classes (although clearly it would make a much better code, just think about the elegance how we interacted with the `Tree` objects), but only plainly put this algorithm into code. We documented the program with comments below. 
```python from mpl_toolkits.mplot3d import Axes3D N=500 D=100 #cm half width of the 1-dimensional slab E0=3e6 #eV A=1 alpha=(A-1)**2/(A+1)**2 #x coordinates and flux initialization #we will score in 201 bins along the x-axis x=np.linspace(-D,D,201) flux=np.zeros(len(x)) #We create the canvas for a 3D plot of the trajectories fig = plt.figure(figsize=plt.figaspect(1.0)*1.5) #Adjusts the aspect ratio and enlarges the figure (text does not enlarge) ax = fig.gca(projection='3d') for i in range(N): #for every neutron #we initializes lists to store the coordinates of the neutron along its trajectory Xs=[] Ys=[] Zs=[] #we initialize the neutron energy E=E0 #we sample a random source location on a plane x=0.0, y and z is between -500 and 500. This is an arbitrary choice #we just picked a large number, considering that the slab is infinite in y and z direction. #this would be a planar source: coord=np.array([0.0,np.random.uniform(-500.0,500.0),np.random.uniform(-500.0,500.0)]) #plate coord=np.array([0.0,0.0,0.0]) #point source #we sample a random initial direction the neutron travels to direction=randomDir() #now we track the neutron until it dies. We could pick a slighly higher energy condition than 0.0. #in case more nuclides are present one needs an other step to sample nuclide while E>0.0: #GET the macroscopic cross sections at energy E SigS=np.interp(E,Es,MXSs) SigC=np.interp(E,Ec,MXSc) SigT=SigS+SigC #GET distance to collision dist=distanceToCollision(SigT) #STORE the locations for the trajectory Xs.append(coord[0]) Ys.append(coord[1]) Zs.append(coord[2]) #COORDINATE OF NEXT COLLISION coord=coord + dist*direction #TYPE OF NEXT COLLISION rtype=reactionType(SigS,SigC,SigT) if np.abs(coord[0])>D: #if the x coordinate is larger than the half width: leakage break elif rtype=='capture': #if neutron captured we break out from the while loop #and score to the flux estimator flux[np.digitize(coord[0],x)]=flux[np.digitize(coord[0],x)]+1/SigT/N break else: #if neutron scattered we score to the flux estimate #and calculate the direction after the scattering event flux[np.digitize(coord[0],x)]=flux[np.digitize(coord[0],x)]+1/SigT/N E,muL=elasticScatter(E) direction=transformDir(direction[0],direction[1],direction[2],muL) #we plot the trajectory of the given neutron. We could store the trajectory as well for later use #but we only do this for the first 100 neutrons, otherwise the figure would be to busy if i<100: ax.plot3D(Xs,Ys,Zs,label=str(i)) #once all the neutrons are tracked we plot the trajectories and we plot the flux ax.set_xlabel('x') ax.set_ylabel('y') ax.set_zlabel('z') #plt.legend() ax.azim = 113 ax.elev = 28 plt.show() plt.figure() plt.plot(x,flux) plt.xlabel('x (cm)') plt.ylabel('flux (per source neutron)') plt.show() ``` ## Experiments Study what happens if you - increase the number of particles in the simulation (increase it by one order of magnitude, for more the calculation will take ages). - change the density of the medium - change the initial energy of the neutrons Conclude your findings! ```python ```
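Before running the experiments listed above, it can help to build some intuition for the density study with a much cheaper calculation. The cell below is only a sketch and is not part of the original lab: it assumes the microscopic cross-section arrays `Es`, `XSs`, `Ec`, `XSc` loaded earlier are still in scope, and the helper name `mean_free_path` is ours. It recomputes the total macroscopic cross section for a scaled density, so you can see how the mean free path (and hence how deep neutrons are expected to penetrate) changes before rerunning the full simulation.

```python
# Sketch (not part of the original lab): mean free path vs density and energy.
# Assumes Es, XSs, Ec, XSc from the cells above are defined.
import numpy as np

def mean_free_path(E, density):
    """Mean free path (cm) at energy E (eV) for a given density (g/cm3), A=1."""
    numdens = density * 6.022e23 / 1.0          # atoms per cm3, molecular binding ignored
    sigT = numdens * 1e-24 * (np.interp(E, Es, XSs) + np.interp(E, Ec, XSc))
    return 1.0 / sigT

for rho in [0.07085 / 2, 0.07085, 2 * 0.07085]:
    for E in [1e0, 1e4, 3e6]:
        print('density {:.5f} g/cm3, E = {:.0e} eV: mfp = {:.2f} cm'.format(
            rho, E, mean_free_path(E, rho)))
```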
17b6b5398efdcf4be177509a0435da4d9277f5d1
19,172
ipynb
Jupyter Notebook
Datalabs/Datalab05/5b-FixedSourceMC.ipynb
ezsolti/RFP
5a410dd30ad61686b5d54d7778462e5e217be159
[ "MIT" ]
21
2021-06-18T15:25:22.000Z
2022-03-21T07:34:42.000Z
Datalabs/Datalab05/5b-FixedSourceMC.ipynb
ezsolti/RFP
5a410dd30ad61686b5d54d7778462e5e217be159
[ "MIT" ]
null
null
null
Datalabs/Datalab05/5b-FixedSourceMC.ipynb
ezsolti/RFP
5a410dd30ad61686b5d54d7778462e5e217be159
[ "MIT" ]
5
2021-06-19T00:28:02.000Z
2022-01-09T18:58:27.000Z
39.858628
732
0.597069
true
3,741
Qwen/Qwen-72B
1. YES 2. YES
0.798187
0.743168
0.593187
__label__eng_Latn
0.995166
0.216502
# Quantization of Signals *This jupyter notebook is part of a [collection of notebooks](../index.ipynb) on various topics of Digital Signal Processing. Please direct questions and suggestions to [[email protected]](mailto:[email protected]).* ## Introduction [Digital signal processors](https://en.wikipedia.org/wiki/Digital_signal_processor) and general purpose processors can only perform arithmetic operations within a limited number range. So far we considered discrete signals with continuous amplitude values. These cannot be handled by processors in a straightforward manner. [Quantization](https://en.wikipedia.org/wiki/Quantization_%28signal_processing%29) is the process of mapping a continuous amplitude to a countable set of amplitude values. This refers also to the *requantization* of a signal from a large set of countable amplitude values to a smaller set. Scalar quantization is an instantaneous and memoryless operation. It can be applied to the continuous amplitude signal, also referred to as *analog signal* or to the (time-)discrete signal. The quantized discrete signal is termed as *digital signal*. The connections between the different domains are illustrated in the following. ### Model of the Quantization Process In order to discuss the effects of quantizing a continuous amplitude signal, a mathematical model of the quantization process is required. We restrict our considerations to a discrete real-valued signal $x[k]$. The following mapping is used in order to quantize the continuous amplitude signal $x[k]$ \begin{equation} x_Q[k] = g( \; \lfloor \, f(x[k]) \, \rfloor \; ) \end{equation} where $g(\cdot)$ and $f(\cdot)$ denote real-valued mapping functions, and $\lfloor \cdot \rfloor$ a rounding operation. The quantization process can be split into two stages 1. **Forward quantization** The mapping $f(x[k])$ maps the signal $x[k]$ such that it is suitable for the rounding operation. This may be a scaling of the signal or a non-linear mapping. The result of the rounding operation is an integer number $\lfloor \, f(x[k]) \, \rfloor \in \mathbb{Z}$, which is termed as *quantization index*. 2. **Inverse quantization** The mapping $g(\cdot)$, maps the quantization index to the quantized value $x_Q[k]$ such that it constitutes an approximation of $x[k]$. This may be a simple scaling or non-linear operation. The quantization error (quantization noise) $e[k]$ is defined as \begin{equation} e[k] = x_Q[k] - x[k] \end{equation} Rearranging yields that the quantization process can be modeled by adding the quantization error to the discrete signal #### Example - Quantization of a sine signal In order to illustrate the introduced model, the quantization of one period of a sine signal is considered \begin{equation} x[k] = \sin[\Omega_0 k] \end{equation} using \begin{align} f(x[k]) &= 3 \cdot x[k] \\ i &= \lfloor \, f(x[k]) \, \rfloor \\ g(i) &= \frac{1}{3} \cdot i \end{align} where $\lfloor \cdot \rfloor$ denotes the [nearest integer function](https://en.wikipedia.org/wiki/Nearest_integer_function) and $i$ the quantization index. The quantized signal is then given as \begin{equation} x_Q[k] = \frac{1}{3} \cdot \lfloor \, 3 \cdot \sin[\Omega_0 k] \, \rfloor \end{equation} The discrete signals are not shown by stem plots for ease of illustration. 
```python %matplotlib inline import numpy as np import matplotlib.pyplot as plt N = 1024 # length of signal # generate signal x = np.sin(2*np.pi/N * np.arange(N)) # quantize signal xi = np.round(3 * x) xQ = 1/3 * xi e = xQ - x # plot (quantized) signals fig, ax1 = plt.subplots(figsize=(10,4)) ax2 = ax1.twinx() ax1.plot(x, 'r', label=r'signal $x[k]$') ax1.plot(xQ, 'b', label=r'quantized signal $x_Q[k]$') ax1.plot(e, 'g', label=r'quantization error $e[k]$') ax1.set_xlabel('k') ax1.set_ylabel(r'$x[k]$, $x_Q[k]$, $e[k]$') ax1.axis([0, N, -1.2, 1.2]) ax1.legend() ax2.set_ylim([-3.6, 3.6]) ax2.set_ylabel('quantization index') ax2.grid() ``` **Exercise** * Investigate the quantization error $e[k]$. Is its amplitude bounded? * If you would represent the quantization index (shown on the right side) by a binary number, how much bits would you need? * Try out other rounding operations like `np.floor()` and `np.ceil()` instead of `np.round()`. What changes? Solution: It can be concluded from the illustration that the quantization error is bounded as $|e[k]| < \frac{1}{3}$. There are in total 7 quantization indexes needing 3 bits in a binary representation. The properties of the quantization error are different for different rounding operations. ### Properties Without knowledge of the quantization error $e[k]$, the signal $x[k]$ cannot be reconstructed exactly from its quantization index or quantized representation $x_Q[k]$. The quantization error $e[k]$ itself depends on the signal $x[k]$. Therefore, quantization is in general an irreversible process. The mapping from $x[k]$ to $x_Q[k]$ is furthermore non-linear, since the superposition principle does not hold in general. Summarizing, quantization is an inherently irreversible and non-linear process. It potentially removes information from the signal. ### Applications Quantization has widespread applications in Digital Signal Processing. For instance * [Analog-to-Digital conversion](https://en.wikipedia.org/wiki/Analog-to-digital_converter) * [Lossy compression](https://en.wikipedia.org/wiki/Lossy_compression) of signals (speech, music, video, ...) * Storage and transmission ([Pulse-Code Modulation](https://en.wikipedia.org/wiki/Pulse-code_modulation), ...) **Copyright** This notebook is provided as [Open Educational Resource](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebook for your own purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Sascha Spors, Digital Signal Processing - Lecture notes featuring computational examples, 2016-2018*.
052c5fa6e120db79d48e2bf14a62ce590bf33ce2
51,091
ipynb
Jupyter Notebook
quantization/introduction.ipynb
mgrubisic/digital-signal-processing-lecture
7098b958639eb5cfcabd110d26ddd30ff8444e0a
[ "MIT" ]
2
2018-12-29T19:13:49.000Z
2020-05-25T09:53:21.000Z
quantization/introduction.ipynb
mgrubisic/digital-signal-processing-lecture
7098b958639eb5cfcabd110d26ddd30ff8444e0a
[ "MIT" ]
null
null
null
quantization/introduction.ipynb
mgrubisic/digital-signal-processing-lecture
7098b958639eb5cfcabd110d26ddd30ff8444e0a
[ "MIT" ]
3
2020-10-17T07:48:22.000Z
2022-03-17T06:28:58.000Z
254.18408
42,464
0.90057
true
1,540
Qwen/Qwen-72B
1. YES 2. YES
0.833325
0.857768
0.714799
__label__eng_Latn
0.988072
0.49905
# PHYS 2211 - Introductory Physics Laboratory I # Measurement and Error Propagation ### Name: Tatiana Krivosheev ### Partners: Oleg Krivosheev #### Annex A ```python import matplotlib import numpy as np import matplotlib.pyplot as plt import sympy %matplotlib inline ``` #### Annex A - Data and Calculations #### 1. Rectangular Block ```python class ListTable(list): """ Overridden list class which takes a 2-dimensional list of the form [[1,2,3],[4,5,6]], and renders an HTML Table in IPython Notebook. """ def _repr_html_(self): html = ["<table>"] for row in self: html.append("<tr>") for col in row: html.append("<td>{0}</td>".format(col)) html.append("</tr>") html.append("</table>") return ''.join(html) ``` ```python # plain text plt.title('alpha > beta') ``` ```python # math text plt.title(r'$\alpha > \beta$') ``` ```python from sympy import symbols, init_printing init_printing(use_latex=True) delta = symbols('delta') delta**2/3 ``` ```python from sympy import symbols, init_printing init_printing(use_latex=True) delta = symbols('delta') table = ListTable() table.append(['measuring device', 'l', 'delta l', 'w', 'delta w', 'h', 'delta h']) table.append([' ', '(cm)', '(cm)', '(cm)','(cm)', '(cm)', '(cm)']) lr=4.9 wr=2.5 hr=1.2 lc=4.90 wc=2.54 hc=1.27 deltar=0.1 deltac=0.01 table.append(['ruler',lr, deltar, wr, deltar, hr, deltar]) table.append(['vernier caliper', lc, deltac, wc, deltac, hc, deltac]) table ``` <table><tr><td>measuring device</td><td>l</td><td>delta l</td><td>w</td><td>delta w</td><td>h</td><td>delta h</td></tr><tr><td> </td><td>(cm)</td><td>(cm)</td><td>(cm)</td><td>(cm)</td><td>(cm)</td><td>(cm)</td></tr><tr><td>ruler</td><td>4.9</td><td>0.1</td><td>2.5</td><td>0.1</td><td>1.2</td><td>0.1</td></tr><tr><td>vernier caliper</td><td>4.9</td><td>0.01</td><td>2.54</td><td>0.01</td><td>1.27</td><td>0.01</td></tr></table> ```python # math text plt.title(r'$s(t) = \mathcal{A}\/\sin(2 \omega t)$') ``` ```python table = ListTable() table.append(['l', 'delta l', 'w', 'delta w', 'h', 'delta h']) table.append(['(cm)', '(cm)', '(cm)','(cm)', '(cm)', '(cm)']) lr=4.9 wr=2.5 hr=1.2 lc=4.90 wc=2.54 hc=1.27 deltar=0.1 deltac=0.01 table.append([lr, deltar, wr, deltar, hr, deltar]) table.append([lc, deltac, wc, deltac, hc, deltac]) table ``` <table><tr><td>l</td><td>delta l</td><td>w</td><td>delta w</td><td>h</td><td>delta h</td></tr><tr><td>(cm)</td><td>(cm)</td><td>(cm)</td><td>(cm)</td><td>(cm)</td><td>(cm)</td></tr><tr><td>4.9</td><td>0.1</td><td>2.5</td><td>0.1</td><td>1.2</td><td>0.1</td></tr><tr><td>4.9</td><td>0.01</td><td>2.54</td><td>0.01</td><td>1.27</td><td>0.01</td></tr></table> ```python # code below demonstrates... import numpy as np x = [7,10,15,20,25,30,35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 85, 90, 95] y= [0.228,0.298,0.441,0.568,0.697,0.826,0.956, 1.084, 1.211, 1.339,1.468, 1.599, 1.728, 1.851, 1.982, 2.115, 2.244, 2.375, 2.502] plt.scatter(x, y) plt.title('Linearity test') plt.xlabel('Length (cm)') plt.ylabel('Voltage (V)') fit = np.polyfit(x,y,1) fit_fn = np.poly1d(fit) plt.plot(x,y, 'yo', x, fit_fn(x), '--k') m,b = np.polyfit(x, y, 1) print ('m={0}'.format(m)) print ('b={0}'.format(b)) plt.show() ``` m=0.0258164673413 b=0.0491959521619 #### 2. 
Wheatstone bridge measurements ```python Rk = 3.5 # kOhms table = ListTable() table.append(['Ru', 'Ru, acc', 'L1', 'L2', 'Ru, wheatstone', 'Disc']) table.append(['(kOhms)', '(kOhms)', '(cm)', '(cm)', '(kOhms)', ' % ']) x = [0.470,0.680,1.000, 1.500] y= [0.512,0.712,1.131,1.590] z= [88.65, 84.50, 76.90, 69.80] for i in range(0,len(x)): xx = x[i] yy = y[i] zz = z[i] Rw = (100.0 - zz)/zz*Rk Disc = (Rw-yy)/yy*100.0 table.append([xx, yy, zz, 100.0-zz,Rw, Disc]) table ``` <table><tr><td>Ru</td><td>Ru, acc</td><td>L1</td><td>L2</td><td>Ru, wheatstone</td><td>Disc</td></tr><tr><td>(kOhms)</td><td>(kOhms)</td><td>(cm)</td><td>(cm)</td><td>(kOhms)</td><td> % </td></tr><tr><td>0.47</td><td>0.512</td><td>88.65</td><td>11.35</td><td>0.448110547095</td><td>-12.4784087704</td></tr><tr><td>0.68</td><td>0.712</td><td>84.5</td><td>15.5</td><td>0.64201183432</td><td>-9.82979855063</td></tr><tr><td>1.0</td><td>1.131</td><td>76.9</td><td>23.1</td><td>1.05136540962</td><td>-7.04107784059</td></tr><tr><td>1.5</td><td>1.59</td><td>69.8</td><td>30.2</td><td>1.51432664756</td><td>-4.75933034186</td></tr></table> ```python x = [0.470,0.680,1.000, 1.500] y= [0.512,0.712,1.131,1.590] z= [88.65, 84.50, 76.90, 69.80] for i in range(0,len(x)): xx = x[i] yy = y[i] zz = z[i] Rw = (100.0 - zz)/zz*Rk Disc = (Rw-yy)/yy*100.0 plt.scatter(yy, Disc) plt.title('Discrepancy vs Resistance') plt. xlabel('Resistance (kOhms)') plt. ylabel('Discrepancy (%)') plt.show() ``` ```python ```
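A natural follow-up to the tables above is to propagate the measurement uncertainties into a derived quantity. The cell below is only a sketch and is not part of the original report: it uses the standard propagation formula for independent errors, $\delta V/V = \sqrt{(\delta l/l)^2 + (\delta w/w)^2 + (\delta h/h)^2}$, together with the ruler and caliper readings from the rectangular-block table, to compare how the uncertainty of the volume $V = lwh$ depends on the measuring device.

```python
# Sketch (not part of the original report): error propagation for the block volume.
import numpy as np

def volume_with_error(l, w, h, dl, dw, dh):
    """Volume of a rectangular block and its uncertainty for independent errors."""
    V = l * w * h
    dV = V * np.sqrt((dl / l)**2 + (dw / w)**2 + (dh / h)**2)
    return V, dV

V_r, dV_r = volume_with_error(4.9, 2.5, 1.2, 0.1, 0.1, 0.1)        # ruler
V_c, dV_c = volume_with_error(4.90, 2.54, 1.27, 0.01, 0.01, 0.01)  # vernier caliper

print('ruler:           V = {:.1f} +/- {:.1f} cm^3'.format(V_r, dV_r))
print('vernier caliper: V = {:.2f} +/- {:.2f} cm^3'.format(V_c, dV_c))
```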
53ec31f054f5bde3d3e95efc475e113b6d8a2484
23,978
ipynb
Jupyter Notebook
PHYS2211.Measurement.ipynb
Tatiana-Krivosheev/ipython-notebooks-physics
034ed0abd0532e42fdee879d134233d37bd082af
[ "CC0-1.0" ]
null
null
null
PHYS2211.Measurement.ipynb
Tatiana-Krivosheev/ipython-notebooks-physics
034ed0abd0532e42fdee879d134233d37bd082af
[ "CC0-1.0" ]
null
null
null
PHYS2211.Measurement.ipynb
Tatiana-Krivosheev/ipython-notebooks-physics
034ed0abd0532e42fdee879d134233d37bd082af
[ "CC0-1.0" ]
null
null
null
53.166297
6,028
0.714905
true
2,082
Qwen/Qwen-72B
1. YES 2. YES
0.718594
0.743168
0.534036
__label__kor_Hang
0.163815
0.079075
```python import numpy from msppy.msp import MSLP from msppy.solver import SDDP, Extensive from msppy.evaluation import Evaluation import matplotlib.pyplot as plt import seaborn seaborn.set_style('darkgrid') ``` American put option pricing ===================== This tutorial deals with American put option pricing. Introduction ---------------- In an American option, early exercise is preferable when the continuation (time) value shrinks below the intrinsic value. The value of the option at each time step is thus given by the maximum of the intrinsic value and the continuation value. By virtue of the dynamic programming equation, the value at each time $i$ is given by \begin{equation} V_i(S_i) = \max\big\{(K-S_i)_+,\mathbb{E}_i^Q[\exp(-r\Delta t) V_{i+1}(S_{i+1})|S_i]\big\} \end{equation} where $K$ is the strike price, $\Delta t$ is the time step, $\mathbb{E}_i^Q$ is the expectation under the risk neutral measure and $r$ is the interest rate. Binomial tree model --------------------------- Suppose the spot price is 36, the strike price is 40, the volatility is 0.2, the interest rate is 6%, the expiration is 1 year, and the time step is 0.02. ```python S = 36 K = 40 sigma = 0.2 r = 0.06 T = 50 step = 0.02 u = numpy.exp(sigma*numpy.sqrt(step)) d = 1/u p = (numpy.exp(r*step)-d)/(u-d) ``` Solution ---------- ```python put = MSLP(T=T+1, discount=numpy.exp(-r*step), ctg=1) coef = [-u,-d] for t in range(T+1): m = put[t] s_now, s_past = m.addStateVar() y = m.addVar(obj=1,name="y") if t > 0: m.addConstr(s_now + s_past == 0, uncertainty={s_past: coef}) m.set_probability([p,1-p]) else: m.addConstr(s_now == S) if t < T: m.addConstr(m.getVarByName("y") + m.alpha*numpy.exp(-r*step) >= K-m.states[0]) else: m.addConstr(m.getVarByName("y") >= K-m.states[0]) sddp = SDDP(put) sddp.solve(max_iterations=200, n_processes=3, n_steps=3, logToConsole=0) ``` Academic license - for non-commercial use only Evolution of the bound. ```python plt.plot(sddp.db) ``` More sophisticated models can be considered. ```python ```
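Because the binomial-tree parameters are already defined above, a quick cross-check (not part of the original tutorial) is to price the same American put by plain backward induction on the lattice and compare the number with the bound plotted above. The sketch below assumes the variables `S`, `K`, `r`, `step`, `u`, `d`, `p` and `T` from the earlier cells.

```python
# Sketch (not part of the original tutorial): backward induction on the CRR lattice.
import numpy as np

def american_put_binomial(S, K, r, dt, u, d, p, n):
    """Value of an American put on an n-step binomial lattice."""
    disc = np.exp(-r * dt)
    j = np.arange(n + 1)
    values = np.maximum(K - S * u**j * d**(n - j), 0.0)   # payoff at maturity
    for m in range(n - 1, -1, -1):                        # roll back through the tree
        j = np.arange(m + 1)
        prices = S * u**j * d**(m - j)
        continuation = disc * (p * values[1:m + 2] + (1 - p) * values[:m + 1])
        values = np.maximum(continuation, K - prices)     # early exercise check
    return values[0]

print(american_put_binomial(S, K, r, step, u, d, p, T))
```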
e8d0ce2139249fe0b25015e4ca968b5ca13c432f
17,965
ipynb
Jupyter Notebook
doc/source/examples/option_pricing/put.ipynb
jucaleb4/msppy
7e053cc99e805f2fa60675a28481109dfae3eb0b
[ "BSD-3-Clause" ]
39
2019-05-11T18:41:25.000Z
2022-01-28T02:22:54.000Z
doc/source/examples/option_pricing/put.ipynb
jucaleb4/msppy
7e053cc99e805f2fa60675a28481109dfae3eb0b
[ "BSD-3-Clause" ]
4
2019-05-12T03:28:09.000Z
2021-08-30T01:26:16.000Z
doc/source/examples/option_pricing/put.ipynb
jucaleb4/msppy
7e053cc99e805f2fa60675a28481109dfae3eb0b
[ "BSD-3-Clause" ]
18
2019-05-11T18:21:32.000Z
2022-01-30T04:33:59.000Z
77.103004
10,684
0.809073
true
1,131
Qwen/Qwen-72B
1. YES 2. YES
0.91118
0.73412
0.668915
__label__eng_Latn
0.990995
0.392444
# Can you eat more pizza than your siblings? ## Riddler Express 2017-07-14 https://fivethirtyeight.com/features/can-you-eat-more-pizza-than-your-siblings/ > You and your two older siblings are sharing two extra-large pizzas and decide to cut them in an unusual way. You overlap the pizzas so that the crust of one touches the center of the other (and vice versa since they are the same size). You then slice both pizzas around the area of overlap. Two of you will each get one of the crescent-shaped pieces, and the third will get both of the football-shaped cutouts. > > Which should you choose to get more pizza: one crescent or two footballs? ```python from bokeh.io import output_notebook output_notebook() ``` The diagram shows the overlapping area in the middle shaded darker than the rest. This area will be removed from each pizza. ```python from bokeh.plotting import figure, show import numpy as np p = figure(plot_width=400, plot_height=400, x_range=[-2, 2], y_range=[-2, 2]) p.ellipse([-0.5, 0.5], [0, 0], width=2, height=2, color="darkred", alpha=0.25) p.line(x=[0.5, 0, 0, 0.5], y=[0, np.sqrt(3)/2, -np.sqrt(3)/2, 0], color='black', alpha=0.5) # p.arc(x=0.5, y=0, radius=1, start_angle=2*np.pi/3, end_angle=-2*np.pi/3) show(p) ``` To calculate the area of the overlap, consider the wedge that is cut from one of the circles. It is composed of a triangle as well as an additional section. The overlap is equal to two of these additional sections. To get the area of this section, we can compute the area of the wedge minus the triangle. Since the overlap area is twice this section, and one such football is cut from each of the two pizzas, the two footballs together are four times this section. We will call the length of the vertical line segment $x$ and compute it from the right triangle with base $\frac{r}{2}$, height $\frac{x}{2}$ and hypotenuse $r$. $$ \begin{align} (\frac{x}{2})^2 + (\frac{r}{2})^2 = r^2 \\ \frac{x^2 + r^2}{4} = r^2 \\ x^2 + r^2 = 4r^2 \\ x^2 = 3r^2 \\ x = \sqrt{3}r \\ \end{align} $$ So the area of the triangle above is $$ \begin{align} A_{\text{triangle}} & = \frac{1}{2} x \frac{r}{2} \\ & = \frac{1}{2} \sqrt{3}r \frac{r}{2} \\ & = \frac{\sqrt{3}}{4} r^2 \end{align} $$ To calculate the area of the circular wedge, we need to figure out the angle. If we call $\alpha$ the half angle that is part of the right triangle, $$ \begin{align} \cos(\alpha) = \frac{1}{2} \\ \alpha = \arccos(\frac{1}{2}) = \frac{\pi}{3} \end{align} $$ And the area of this circular wedge is $$ \begin{align} A_{\text{wedge}} & = \dfrac{2 \alpha}{2 \pi} \pi r^2 \\ & = \alpha r^2 \\ & = \frac{\pi}{3} r^2 \end{align} $$ And the area of the section is the area of the wedge minus the area of the triangle. 
$$ \begin{align} A_{\text{section}} & = A_{\text{wedge}} - A_{\text{triangle}} \\ & = \frac{\pi}{3} r^2 - \frac{\sqrt{3}}{4} r^2 \\ & = (\frac{\pi}{3} - \frac{\sqrt{3}}{4}) r^2 \end{align} $$ ```python print('Area of wedge: {}'.format(np.pi/3)) print('Area of triangle: {}'.format(np.sqrt(3)/4)) area_section = np.pi/3 - np.sqrt(3)/4 print('Area of section: {}'.format(area_section)) print('Area of pizza minus: {}'.format(np.pi - 2*area_section)) print('Area of total sections: {}'.format(4*area_section)) ``` Area of wedge: 1.0471975511965976 Area of triangle: 0.4330127018922193 Area of section: 0.6141848493043783 Area of pizza minus: 1.9132229549810364 Area of total sections: 2.4567393972175133 ```python ```
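As a quick numerical sanity check (not part of the original write-up), the same areas can be estimated by Monte Carlo with $r = 1$: sample points uniformly in a box containing both circles and count which regions they land in. The estimates should be close to the analytic values printed above.

```python
# Sketch (not part of the original solution): Monte Carlo check of the areas, r = 1.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# uniform points in a box that contains both unit circles (centres at x = -0.5 and 0.5)
pts = rng.uniform(-1.5, 1.5, size=(n, 2))
in_left = (pts[:, 0] + 0.5)**2 + pts[:, 1]**2 <= 1.0
in_right = (pts[:, 0] - 0.5)**2 + pts[:, 1]**2 <= 1.0

box_area = 3.0 * 3.0
football = box_area * np.mean(in_left & in_right)    # one football-shaped overlap
crescent = box_area * np.mean(in_left & ~in_right)   # one crescent

print('MC estimate, one crescent:  {:.4f}'.format(crescent))
print('MC estimate, two footballs: {:.4f}'.format(2 * football))
```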
4322f6659c40c5fe031926637c4570d30aca07cf
28,412
ipynb
Jupyter Notebook
fivethirtyeight/2017-07-14 Slice the pizza.ipynb
dennisobrien/PublicNotebooks
a2b6974a84468f2f91c25398cbd374bb03151582
[ "MIT" ]
1
2018-03-12T20:13:28.000Z
2018-03-12T20:13:28.000Z
fivethirtyeight/2017-07-14 Slice the pizza.ipynb
dennisobrien/PublicNotebooks
a2b6974a84468f2f91c25398cbd374bb03151582
[ "MIT" ]
null
null
null
fivethirtyeight/2017-07-14 Slice the pizza.ipynb
dennisobrien/PublicNotebooks
a2b6974a84468f2f91c25398cbd374bb03151582
[ "MIT" ]
null
null
null
58.340862
8,697
0.523582
true
1,257
Qwen/Qwen-72B
1. YES 2. YES
0.903294
0.887205
0.801407
__label__eng_Latn
0.958639
0.700269
# CS4277/CS5477 Lab 3-2: Absolute Pose Estimation ### Introduction In this assignment, you will get to estimate the rotation and translation of a camera by using the linear n-point camera pose determination algorithm. As discussed in the lecture, You will need at least four 2d-to-3d correspondencs to get a unique solution. We will provide ten 2d-to-3d correspondences and the camera intrinsics in the dataset. This assignment is worth **10%** of the final grade. References: * Lecture 7 Optional references: * Long Quan, Zhong-Dan Lan. Linear N-point Camera Pose Determination. ### Instructions This workbook provides the instructions for the assignment, and facilitates the running of your code and visualization of the results. For each part of the assignment, you are required to **complete the implementations of certain functions in the accompanying python file** (`pnp.py`). To facilitate implementation and grading, all your work is to be done in that file, and **you only have to submit the .py file**. Please note the following: 1. Fill in your name, email, and NUSNET ID at the top of the python file. 2. The parts you need to implement are clearly marked with the following: ``` """ YOUR CODE STARTS HERE """ """ YOUR CODE ENDS HERE """ ``` , and you should write your code in between the above two lines. 3. Note that for each part, there may certain functions that are prohibited to be used. It is important **NOT to use those prohibited functions** (or other functions with similar functionality). If you are unsure whether a particular function is allowed, feel free to ask any of the TAs. ### Submission Instructions Zip your completed `pnp.py` and `eight_point.py` and upload onto the relevant work bin in Luminus. --- ## Part 1: Absolute Pose Estimation In this part, you will implement the linear n-point camera pose determination algorithm. You will estimate the camera postion and orientation given a calibrated camera and ten 2d-to-3d correspondences. Each pair of correspondences $\mathbf{p}_i \leftrightarrow \mathbf{u}_i$ and $\mathbf{p}_j \leftrightarrow \mathbf{u}_j$ gives a constraint on the unknown camera-point distances: $$ d_{ij}^2 = x_i^2 + x_j^2 -2x_ix_jcos\theta_{ij}, $$ where $d_{ij} = \|\mathbf{p}_i - \mathbf{p}_j\|$ and $\theta_{ij}$ is the inter-point distance and angle. The quadratic constraint can be written as : $$ f_{ij}(x_i, x_j) = x_i^2 + x_j^2 -2x_ix_j\cos\theta_{ij}-d_{ij}^2 = 0. $$ For $n=3$, we can obtain three constraints $$ \begin{cases} f_{12}(x_1, x_2) = 0 \\ f_{13}(x_1, x_3) = 0 \\ f_{23}(x_2, x_3) = 0 \end{cases} $$ for the three unknown distances $x_1, x_2, x_3$. The elimination of $x_2, x_3$ gives an eighth degree polynomial in $x_1$: $$ g(x) = a_5x^4 + a_4x^3 + a_3x^2 + a_2x + a_1 = 0, $$ where $x = x_1^2$. Thus, given ten 2d-to-3d correspondences in the dataset, you will get $\frac{9 \times 8} {2} = 36 $ constraints for each unknown $x_i$. The matrix equation can be written as: $$ \begin{bmatrix} a_1 & a_2 & a_3 & a_4 & a_5 \\ a_1^2 & a_2^2 & a_3^2 & a_4^2 & a_5^2 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ a_1^{36} & a_2^{36} & a_3^{36} & a_4^{36} & a_5^{36} \end{bmatrix} \begin{bmatrix} 1 \\ x \\ x_2 \\ x^3 \\ x^4 \end{bmatrix} = \mathbf{A}_{36 \times 5}\mathbf{t}_5 = 0. $$ The vector $\mathbf{t}_5$ is obtained from the singular value decomposition of $\mathbf{A}_{36 \times 5}$. Then $x$ can be calculated as : $$ x = \text{average}(t_1/t_0, t_2/t_1, t_3/t_2, t_4/t_3), $$ and the final depth is $x_i = \sqrt x$. 
We will repeat the same process for all other points. Here are some steps you will follow during the implementation: 1. Construct the matrix $\mathbf{A}$, which is made up of the coefficients of the polynomial. We provide a helper function `extract_coeff()` to extract the coefficients of a polynomial. An example of how to use the function is given below. Note that you will compute the `cos_theta12, cos_theta23, cos_theta13` and `d12, d23, d13` using the real data. 2. Compute the camera-point distance $x_i$ for each point by taking the SVD of the matrix $\mathbf{A}$. 3. Reconstruct the 3d coordinates of each point by using the helper function `reconstruct_3d()`. Note that the 2d points should be in homogeneous coordinates in this function. 4. Recover the camera rotation and translation by using the ICP algorithm. We provide the helper function `icp()`, where the inputs are the 3d coordinates of all points under the world and camera coordinates. Note that you may need `np.squeeze()` to convert the data into the required format. After you get the rotation and translation of the camera, you can check your results by reprojecting all 3d points into image space and comparing them with the ground truth. You will find that the reprojections of the 3d points are close to the ground truth pixels if your estimations are correct (as shown below). **Implement the following function(s): `cv2.solvePnP()`** * <u>You may use the following functions</u>: `np.linalg.svd()`, `np.linalg.inv()`, `combinations()` ```python %load_ext autoreload %autoreload 1 %aimport pnp %matplotlib inline %load_ext autoreload import numpy as np import sympy as sym import scipy.io as sio from sympy.polys import subresultants_qq_zz from itertools import combinations import cv2 import matplotlib.pyplot as plt from pnp import pnp_algo, visualize,extract_coeff np.set_printoptions(precision=6) ``` The autoreload extension is already loaded. To reload it, use: %reload_ext autoreload ```python x1, x2, x3 = sym.symbols('x1, x2, x3') cos_theta12, cos_theta23, cos_theta13 = 0.0, 0.0, 0.0 d12, d23, d13 = 0.0, 0.0, 0.0 a = extract_coeff(x1, x2, x3, cos_theta12, cos_theta23, cos_theta13, d12, d23, d13) ``` ```python data = sio.loadmat('data/data_pnp.mat') points2d = data['points2d'] points3d = data['points3d'] K = data['k'] r, t = pnp_algo(K, points2d, points3d) points2d = np.squeeze(points2d) points3d = np.squeeze(points3d) visualize(r, t, points3d, points2d, K) ``` ```python ``` ```python ```
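The lab relies on the provided `icp()` helper for step 4; the cell below is only a sketch (not part of the assignment, and not a replacement for the provided helper) showing how such a rigid alignment between world-frame and camera-frame points can be computed with the SVD-based least-squares construction, which may be useful when sanity-checking your own results. All names here are our own.

```python
# Sketch (not part of the assignment): SVD-based rigid alignment of two 3D point sets.
import numpy as np

def rigid_align(P_world, P_cam):
    """Find R, t such that P_cam is approximately R @ p + t for each row p of P_world (Nx3)."""
    cw = P_world.mean(axis=0)
    cc = P_cam.mean(axis=0)
    H = (P_world - cw).T @ (P_cam - cc)                            # 3x3 cross-covariance
    U, S, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])    # guard against reflections
    R = Vt.T @ D @ U.T
    t = cc - R @ cw
    return R, t

# quick self-test on a synthetic pose
rng = np.random.default_rng(1)
P = rng.uniform(-1.0, 1.0, size=(10, 3))
ang = 0.3
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 2.0])
Q = P @ R_true.T + t_true
R_est, t_est = rigid_align(P, Q)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))
```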
c3878b04f924913f3f1a0bdc7f7e46d2b5e96808
8,591
ipynb
Jupyter Notebook
3dcv/lab3/pnp_algo/pnp.ipynb
ShichengChen/NUS-3D-computer-vision
8f8ec4ddd25f895b0334f49209d933b1d6e6d5c2
[ "MIT" ]
1
2022-01-14T06:52:02.000Z
2022-01-14T06:52:02.000Z
3dcv/lab3/pnp_algo/pnp.ipynb
ShichengChen/NUS-3D-computer-vision
8f8ec4ddd25f895b0334f49209d933b1d6e6d5c2
[ "MIT" ]
null
null
null
3dcv/lab3/pnp_algo/pnp.ipynb
ShichengChen/NUS-3D-computer-vision
8f8ec4ddd25f895b0334f49209d933b1d6e6d5c2
[ "MIT" ]
null
null
null
38.698198
395
0.603539
true
1,811
Qwen/Qwen-72B
1. YES 2. YES
0.831143
0.877477
0.729309
__label__eng_Latn
0.991219
0.53276
# Contravariant & Covariant indices in Tensors (Symbolic) ```python from einsteinpy.symbolic import ChristoffelSymbols, RiemannCurvatureTensor from einsteinpy.symbolic.predefined import Schwarzschild import sympy sympy.init_printing() ``` ### Analysing the Schwarzschild metric along with performing various operations ```python sch = Schwarzschild() sch.tensor() ``` ```python sch_inv = sch.inv() sch_inv.tensor() ``` ```python sch.order ``` ```python sch.config ``` 'll' ### Obtaining Christoffel Symbols from Metric Tensor ```python chr = ChristoffelSymbols.from_metric(sch_inv) # can be initialized from sch also chr.tensor() ``` ```python chr.config ``` 'ull' ### Changing the first index to covariant ```python new_chr = chr.change_config('lll') # changing the configuration to (covariant, covariant, covariant) new_chr.tensor() ``` ```python new_chr.config ``` 'lll' ### Any arbitrary index configuration would also work! ```python new_chr2 = new_chr.change_config('lul') new_chr2.tensor() ``` ### Obtaining Riemann Tensor from Christoffel Symbols and manipulating its indices ```python rm = RiemannCurvatureTensor.from_christoffels(new_chr2) rm[0,0,:,:] ``` ```python rm.config ``` 'ulll' ```python rm2 = rm.change_config("uuuu") rm2[0,0,:,:] ``` ```python rm3 = rm2.change_config("lulu") rm3[0,0,:,:] ``` ```python rm4 = rm3.change_config("ulll") rm4.simplify() rm4[0,0,:,:] ``` #### It is seen that `rm` and `rm4` are the same as they have the same configuration
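To see what `change_config` does under the hood, one can lower the upper index of the Christoffel symbols by hand with the metric, i.e. compute $\Gamma_{ijk} = g_{il}\,\Gamma^{l}_{\;jk}$, and compare against `new_chr`. The cell below is only a sketch of that check (not part of the original example); it assumes the objects `sch`, `chr` and `new_chr` defined above and uses sympy's `tensorproduct`/`tensorcontraction`.

```python
# Sketch (not part of the original example): lower the first index of chr by hand.
from sympy import tensorproduct, tensorcontraction, simplify

g = sch.tensor()        # metric g_{il}, config 'll'
gamma_u = chr.tensor()  # Christoffel symbols, config 'ull'

# result_{ijk} = g_{il} * Gamma^{l}_{jk}: contract axis 1 of g with axis 0 of Gamma
lowered = tensorcontraction(tensorproduct(g, gamma_u), (1, 2))

# if the conventions match, the difference with new_chr ('lll') should simplify to zero
(lowered - new_chr.tensor()).applyfunc(simplify)
```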
da3149b2aad7c4c060fb0be3d6a3d8fce54e2d32
151,373
ipynb
Jupyter Notebook
docs/source/examples/Playing with Contravariant and Covariant Indices in Tensors(Symbolic).ipynb
bibek22/einsteinpy
78bf5d942cbb12393852f8e4d7a8426f1ffe6f23
[ "MIT" ]
1
2020-06-01T18:37:53.000Z
2020-06-01T18:37:53.000Z
docs/source/examples/Playing with Contravariant and Covariant Indices in Tensors(Symbolic).ipynb
bibek22/einsteinpy
78bf5d942cbb12393852f8e4d7a8426f1ffe6f23
[ "MIT" ]
2
2019-04-08T17:39:50.000Z
2019-04-11T03:10:09.000Z
docs/source/examples/Playing with Contravariant and Covariant Indices in Tensors(Symbolic).ipynb
bibek22/einsteinpy
78bf5d942cbb12393852f8e4d7a8426f1ffe6f23
[ "MIT" ]
null
null
null
235.783489
27,584
0.822921
true
426
Qwen/Qwen-72B
1. YES 2. YES
0.962108
0.90599
0.87166
__label__eng_Latn
0.761771
0.863491
# "Gillespie Algorithm" > "In this blog post we will look at the grand-daddy of stochastic simulation methods: the Gillespie Algorithm (otherwise known as the stochastic simulation algorith SSA). If you have ever done any form of stochastic simulation you will owe a great deal of gratitude to the Gillespie algorithm which likely inspired the techniques you used." - toc: true - author: Lewis Cole (2020) - branch: master - badges: false - comments: false - categories: [Gillespie-Algorithm, Stochastic-Simulation-Algorithm, Computational-Statistics, Probability, Tau-Leaping, Master-Equation, Adaptive-Tau-Leaping] - hide: false - search_exclude: false - image: https://github.com/lewiscoleblog/blog/raw/master/images/Gillespie/Gillespie.jpg ```python #hide import warnings warnings.filterwarnings('ignore') ``` The Gillespie algorithm is one of the most historically important stochastic simulation algorithms ever created. At its heart the intuition behind it is very simple and it is re-assuring that it "works" - this is not always the case with stochastic simulation where the "obvious" idea can sometimes have unintended debilitating consequences. The algorithm was first presented by Doob (and is sometimes refered to as the Doob-Gillespie algorithm) in the mid 1940s. It was implemented by Kendall in the 1950s. However it wasn't until the mid 1970s that Gillespie re-derived the method by studying physical systems that it became widely used. In publishing the method he essentially created the entire fields of systems biology and computational chemistry by opening the door to what is possible through stochastic simulation. ## Background In this blog we will consider applying the Gillespie method to the area of chemical reaction kinetics, this is the application Gillespie originally had in mind. The concepts described will carry over to other applications. Imagine we wish to model a particular chemical reaction. We could use a determistic approach to model the reaction, this will require setting up a family of coupled differential equations. In doing so we will essentially "ignore" any microscopic behaviour and look at the reaction system at a "high level". This can mean we miss out on a lot of the "detail" of the reaction which may be of interest to us. Further in some cases this approach may not even be applicable, for example to set up a differential equation we assume that we have large quantities of reactants that are perfectly mixed, this allows us to "average over" all reactants to create nice smooth dynamics. This may not reflect reality if there are only relatively few reactants in a system. An alternate approach is to use a stochastic "discrete event" model - this is where we model individual reactions seperately as discrete events occuring in time. This matches our physical intuition of how reactions occur: we wait until the reactants "bump" into each other in the right way before a reaction occurs. One way to summarise this mathematically is through the use of a "master equation". In the sciences a master equation represents the time evolution properties of a multi-state jumping system, by which we mean a system that "jumps" between distinct states through time (in contrast a "diffusion system" varies gradually). The system in question being stochastic in nature we are concerned with observing how the state distribution varies over time, for example: with some initial condition what is the probability of finding the system in a particular state within the next X seconds/minutes/years? 
Of course the time units depend on the nature of the system (e.g. if we construct a master equation for predator/prey dynamics we are unlikely to be interested in microsecond timescales, however if looking at a chemical reaction we are unlikely to find a timescale in days useful.) If we want to display the master equation mathematically we use a transition rate matrix $A(t)$ - this can evolve in time or it can be static. We can then express the master equation in the form: $$ \frac{d\mathbf{P}_t}{dt} = A(t) \mathbf{P}_t $$ where the vector $\mathbf{P}_t$ represents the probability distribution of states at time t - obscured by the notation is an initial condition. Those from a mathematical or probabilistic background will recognise this as a Kolmogorov forward equation for jump processes. If we expand the notation a little such that $P_{ij}(s,t)$ represents the probability of the system being in state $i$ at time $s$ and state $j$ at time $t$ then we can note that the transition rate matrix satisfies: \begin{align} A_{ij}(t) &= \left[ \frac{\partial P_{ij}(t,u)}{\partial u} \right]_{u=t} \\ A_{ij}(t) & \geq 0 \quad \quad \quad \quad \forall i \neq j \\ \sum_j A_{ij}(t) &= 0 \quad \quad \quad \quad \forall i \end{align} Further we can note that if there is a distribution $\pi$ such that: $$ \pi_i A_{ij}(t) = \pi_j A_{ji}(t) $$ for all pairs of states $(i,j)$ then the process satisfies detailed balance and the process is a reversible Markov process. ## Gillespie Algorithm The Gillespie algorithm allows us to model the exact dynamics described by the master equation. In some (simple) cases we can solve the master equation analytically, but for complicated examples (e.g. say we have 50 different types of reaction occurring) this may not be feasible and so the Gillespie algorithm (or some sort of simulation method) is necessary. In pseudo code we can write down the Gillespie algorithm as: 1. **Initialization** - initialize the system, in the context of reaction kinetics this amounts to setting up the initial chemical concentrations 2. **Monte-Carlo** - 1. Randomly simulate the time to the next event 2. Given an event has occurred randomly select which event has occurred 3. **Update** - based on 2. move the model time forward to the event time and update the state of the system 4. **Repeat** - Iterate through steps 2. and 3. until some stopping criteria is met This essentially follows our intuition and there is no "technical trickery" such as fancy sampling methods, acceptance/rejection, etc. It is just a clean simple method - which is nice! Since we model by event as opposed to discretizing time steps this is an "exact" simulation method - meaning any trajectory simulated will follow the master equation dynamics exactly. However due to the random nature of any trajectory we will have to loop over these steps multiple times to find "typical" reaction paths (or whatever property we are trying to study). ## An Example To illustrate the algorithm in action we will take a simple reaction. We will have the following forward reaction $$A + B \to AB$$ where two monomers $A$ and $B$ react to form a dimer $AB$. The corresponding reverse reaction being: $$AB \to A + B$$ We will denote the rate of the forward reaction to be $r_f$ and the rate of the backward reaction to be $r_b$. 
If we let the number of molecules present be denoted by: $N_A, N_B$ and $N_{AB}$ then the rate of any reaction occurring is: $$R = r_f N_A N_B + r_b N_{AB}$$ Also given a reaction has occured the probability of the forward reaction having taken place is: $$\mathbb{P}(A + B \to AB) = \frac{r_f N_A N_B}{R}$$ For a model such as this we typically want to remove any "path dependence" - the arrival of the next reaction event is independent of reactions that have occurred previously (given the concentration of reactants). To satisfy this constraint typically reactions events are taken to follow a Poisson process. Under this assumption the number of reactions occuring within a time period $\Delta T$ follows a $Poisson(R\Delta T)$ distribution. Moreover the time between reactions is then follows an exponential distribution. Thus if we sample $u \sim U[0,1]$ then we take the time until next reaction to be $\tau = \frac{1}{R}ln\left( \frac{1}{u} \right)$. (Note: here I have used that $U$ and $(1-U)$ have the same distribution). A basic implementation of this can be seen below: ```python # An implenetation of the Gillespie algorithm # applied to a pair of reactions: # A + B -> AB # AB -> A + B import numpy as np import matplotlib.pyplot as plt %matplotlib inline # Fix random seed for repeatability np.random.seed(123) ###### Fix model parameters ###### N_A0 = 25 # Initial number of A molecules N_B0 = 35 # Initial number of B molecules N_AB0 = 5 # Initial number of AB molecules rf = 2 # Forward reaction rate rb = 1 # Backwards reaction rate steps = 25 # Number of reactions per trajectory cycles = 100 # Number of trajectories iterated over # Set up holder arrays T = np.zeros((cycles, steps+1)) N_A = np.zeros((cycles, steps+1)) N_B = np.zeros((cycles, steps+1)) N_AB = np.zeros((cycles, steps+1)) # Store initial conditions N_A[:,0] = N_A0 N_B[:,0] = N_B0 N_AB[:,0] = N_AB0 ###### Main Code Loop ###### for i in range(cycles): for j in range(steps): # Calculate updated overall reaction rate R = rf * N_A[i,j] * N_B[i,j] + rb * N_AB[i,j] # Calculate time to next reaction u1 = np.random.random() tau = 1/R * np.log(1/u1) # Store reaction time T[i, j+1] = T[i,j] + tau # Select which reaction to occur Rf = rf * N_A[i,j] * N_B[i,j] / R u2 = np.random.random() # Update populations if u2 < Rf: N_A[i,j+1] = N_A[i,j] - 1 N_B[i,j+1] = N_B[i,j] - 1 N_AB[i,j+1] = N_AB[i,j] + 1 else: N_A[i,j+1] = N_A[i,j] + 1 N_B[i,j+1] = N_B[i,j] + 1 N_AB[i,j+1] = N_AB[i,j] - 1 # Calculate an average trajectory plot ave_steps = 100 T_max = T.max() # Set up average arrays T_ave = np.linspace(0,T_max,ave_steps+1) N_A_ave = np.zeros(ave_steps+1) N_B_ave = np.zeros(ave_steps+1) N_AB_ave = np.zeros(ave_steps+1) N_A_ave[0] = N_A0 N_B_ave[0] = N_B0 N_AB_ave[0] = N_AB0 # Pass over average array entries for i in range(1, ave_steps+1): tmax = T_ave[i] A_sum = 0 B_sum = 0 AB_sum = 0 t_count = 0 # Pass over each trajectory and step therein for j in range(cycles): for k in range(steps): if T[j,k] <= tmax and T[j,k+1] > tmax: t_count += 1 A_sum += N_A[j,k] B_sum += N_B[j,k] AB_sum += N_AB[j,k] # Caclulate average - taking care if no samples observed if t_count == 0: N_A_ave[i] = N_A_ave[i-1] N_B_ave[i] = N_B_ave[i-1] N_AB_ave[i] = N_AB_ave[i-1] else: N_A_ave[i] = A_sum / t_count N_B_ave[i] = B_sum / t_count N_AB_ave[i] = AB_sum / t_count ###### Plot Trajectories ###### fig, axs = plt.subplots(3, 1, figsize=(10,20)) # Plot average trajectories axs[0].plot(T_ave, N_A_ave, marker='', color='red', linewidth=1.9, alpha=0.9) axs[0].set_title('Number A 
Molecules') axs[0].set_ylim((0,35)) axs[0].set_xlim((0,0.125)) axs[1].plot(T_ave, N_B_ave, marker='', color='red', linewidth=1.9, alpha=0.9) axs[1].set_title('Number B Molecules') axs[1].set_ylim((0,35)) axs[1].set_xlim((0,0.125)) axs[2].plot(T_ave, N_AB_ave, marker='', color='red', linewidth=1.9, alpha=0.9) axs[2].set_title('Number AB Molecules') axs[2].set_xlabel("Time") axs[2].set_ylim((0,35)) axs[2].set_xlim((0,0.125)) # Plot each simulated trajectory for i in range(cycles): axs[0].plot(T[i,:], N_A[i,:], marker='', color='grey', linewidth=0.6, alpha=0.3) axs[1].plot(T[i,:], N_B[i,:], marker='', color='grey', linewidth=0.6, alpha=0.3) axs[2].plot(T[i,:], N_AB[i,:], marker='', color='grey', linewidth=0.6, alpha=0.3) plt.show() ``` In these plots we can see the various trajectories along with their average. If we increase the number of molecules and the number of trajectories we can get a "smoother" plot. Since we have the full evolution of the system we can also look at some other statistics, for example let's suppose we are interested in the distribution of the number of molecules of each type at time 0.025. We can also plot this using our samples: ```python time = 0.025 N_A_time = np.zeros(cycles) N_B_time = np.zeros(cycles) N_AB_time = np.zeros(cycles) for i in range(cycles): for j in range(1, steps): if T[i,j] >= time and T[i,j-1] < time: N_A_time[i] = N_A[i,j] N_B_time[i] = N_B[i,j] N_AB_time[i] = N_AB[i,j] # If trajectory doesn't span far enough take latest observation if T[i, steps] < time: N_A_time[i] = N_A[i, steps] N_B_time[i] = N_B[i, steps] N_AB_time[i] = N_AB[i, steps] plt.hist(N_A_time, density=True, bins=np.arange(35), label="A", color='lightgrey') plt.hist(N_B_time, density=True, bins=np.arange(35), label="B", color='dimgrey') plt.hist(N_AB_time, density=True, bins=np.arange(35), label="AB", color='red') plt.legend() plt.show() ``` If instead of a system of 2 reactions we wanted to look at a system of a large number of reactions we could modify the method above quite simply. Instead of the calculation of $R$ (overall reaction rate) consisting of 2 terms it will consist of a larger number of terms depending on the nature of the individual reactions. The probability of selecting a particular reaction type would then be in proportion to its contribution to $R$. We can also notice that there is nothing "special" about the method that means it only applies to reaction kinetics. For example: the example code above could equally be a "marriage and divorce model" for heterosexual couples: A representing women and B representing men, AB representing a marriage. Through defining the "reactions" slightly differently it doesn't take much modification to turn this into an infection model: for example there could be 3 states: susceptible to infection, infected and recovered (potentially with immunity) with transition rates between each of these states. We can see then that the Gillespie algorithm is very flexible and allows us to model stochastic systems that may otherwise be mathematically intractable. Through the nature of the modelling procedure we can sample from the system exactly (up to the precision of floating point numbers within our computers!) There is a downside to exact simulation however: it can be very slow! In the example above the speed isn't really an issue since the system is so simple. 
However if we were modelling many different reaction types (say the order of 100s) then to allow for adequate samples we will need to run many trajectories, this can quickly spiral into a very slow running code! Thankfully however the method has been adapted in many ways to combat this issue. ## Hybrid-Gillespie We can note that calculating deterministic results from an ODE is (much) quicker than implementing the Gillespie simulation algorithm since there is no random element. However we notice that we do not have to model every reaction type using the same Gillespie approach. For example suppose we have one reaction type that is much slower than the others, say the order of 10 times slower. We could model this reaction via a determinstic ODE approach and simply rely on Gillespie for the more rapidly changing dynamics. Of course this is not applicable in every situation - as with any modelling or approximation used we should be sure that it is applicable to the situation at hand. For brevity we will not code an example of this here but it should be easy enough to modify the code above (for example by adding that molecule $A$ can "disappear" from the system with a rate 1/10 times the rate of the backward reaction). ## Tau Leaping Tau leaping modifies the Gillespie methodology above, it sacrifices exact simulation in favour of an approximate simulation that is quicker to compute. The main idea behind tau-leaping is also intuitive: instead of modelling time to the next event we "jump" forward in time and then compute how many reactions we would expect to see within that time frame and updating the population amounts in one step. By updating the population amounts in one go we should be able to compute much faster. It should be clear that this is an approximation to the Gillespie algorithm. The size of the "leaps" determines how efficient the method is and how accurate the approximation is. If we make very large steps we can model many reactions per step which speeds up the implementation, however the simulation will also be less accurate since the populations will be updated less frequently. Conversely a very small leap size will mean many leaps will not see a reaction and so the algorithm will run more slowly, however this should result in dynamics very close to the Gillespie method. Often choosing the leap size requuires some trial and error. we can write pseudo-code for the tau-leaping process as: 1. **Initialize** - Set initial conditions for the system and set leaping size 2. **Calculate event rates** - for each event types depending on state of the system 3. **Monte-Carlo** - for each event type sample number of events occuring within the leap 4. **Update** - Update system state based on number of events 5. **Repeat** - Repeat steps 2-4 until some stopping criteria is met Recall: in the example above we used an exponential waiting time between reactions. This means the reactions occur as a poisson process - as a result the number of reactions occuring within a given timeframe will follow a poisson distribution. We also have to be careful to not allow a negative population (at least in the example presented - in other systems this may be reasonable). 
We can modify our example above to use Tau-leaping as: ```python # An implenetation of the Gillespie algorithm # with tau leaping # Applied to a pair of reactions: # A + B -> AB # AB -> A + B import numpy as np import matplotlib.pyplot as plt from scipy.stats import poisson %matplotlib inline # Fix random seed for repeatability np.random.seed(123) ###### Fix model parameters ###### N_A0 = 25 # Initial number of A molecules N_B0 = 35 # Initial number of B molecules N_AB0 = 5 # Initial number of AB molecules rf = 2 # Forward reaction rate rb = 1 # Backwards reaction rate leap = 0.005 # Size of leaping steps steps = 25 # Number of leaps per trajectory cycles = 100 # Number of trajectories iterated over # Set up holder arrays T = np.arange(steps+1)*leap N_A = np.zeros((cycles, steps+1)) N_B = np.zeros((cycles, steps+1)) N_AB = np.zeros((cycles, steps+1)) # Store initial conditions N_A[:,0] = N_A0 N_B[:,0] = N_B0 N_AB[:,0] = N_AB0 ###### Main Code Loop ###### for i in range(cycles): for j in range(steps): # Calculate updated reaction rates Rf = rf * N_A[i,j] * N_B[i,j] Rb = rb * N_AB[i,j] # Calculate number of reactions by type uf = np.random.random() ub = np.random.random() Nf = poisson.ppf(uf, Rf*leap) Nb = poisson.ppf(ub, Rb*leap) # Apply limits to prevent negative population Limitf = min(N_A[i,j], N_B[i,j]) Limitb = N_AB[i,j] Nf = min(Nf, Limitf) Nb = min(Nb, Limitb) # Update populations N_A[i,j+1] = N_A[i,j] + Nb - Nf N_B[i,j+1] = N_B[i,j] + Nb - Nf N_AB[i,j+1] = N_AB[i,j] + Nf - Nb # Calculate average arrays N_A_ave = N_A.mean(axis=0) N_B_ave = N_B.mean(axis=0) N_AB_ave = N_AB.mean(axis=0) ###### Plot Trajectories ###### fig, axs = plt.subplots(3, 1, figsize=(10,20)) # Plot average trajectories axs[0].plot(T, N_A_ave, marker='', color='red', linewidth=1.9, alpha=0.9) axs[0].set_title('Number A Molecules') axs[0].set_ylim((0,35)) axs[0].set_xlim((0,0.125)) axs[1].plot(T, N_B_ave, marker='', color='red', linewidth=1.9, alpha=0.9) axs[1].set_title('Number B Molecules') axs[1].set_ylim((0,35)) axs[1].set_xlim((0,0.125)) axs[2].plot(T, N_AB_ave, marker='', color='red', linewidth=1.9, alpha=0.9) axs[2].set_title('Number AB Molecules') axs[2].set_xlabel("Time") axs[2].set_ylim((0,35)) axs[2].set_xlim((0,0.125)) # Plot each simulated trajectory for i in range(cycles): axs[0].plot(T[:], N_A[i,:], marker='', color='grey', linewidth=0.6, alpha=0.3) axs[1].plot(T[:], N_B[i,:], marker='', color='grey', linewidth=0.6, alpha=0.3) axs[2].plot(T[:], N_AB[i,:], marker='', color='grey', linewidth=0.6, alpha=0.3) plt.show() ``` We can see here that even though the trajectories from tau-leaping are less exact the procedure has produced smoother average results for the same number of simulation steps (approximately the same running time). 
And again we can look at the distribution at time=0.025:

```python
time = 0.025
N_A_time = np.zeros(cycles)
N_B_time = np.zeros(cycles)
N_AB_time = np.zeros(cycles)

for i in range(cycles):
    for j in range(1, steps+1):
        if T[j] >= time and T[j-1] < time:
            N_A_time[i] = N_A[i,j]
            N_B_time[i] = N_B[i,j]
            N_AB_time[i] = N_AB[i,j]
    # If trajectory doesn't span far enough take latest observation
    # (T is a single shared time grid here, so it is indexed by j only)
    if T[steps] < time:
        N_A_time[i] = N_A[i, steps]
        N_B_time[i] = N_B[i, steps]
        N_AB_time[i] = N_AB[i, steps]

plt.hist(N_A_time, density=True, bins=np.arange(35), label="A", color='lightgrey')
plt.hist(N_B_time, density=True, bins=np.arange(35), label="B", color='dimgrey')
plt.hist(N_AB_time, density=True, bins=np.arange(35), label="AB", color='red')
plt.legend()
plt.show()
```

Here we can see improved distributions with (what appears to be) less noise. To justify this claim we would want to run more tests, however.

Note: this is the most basic implementation of the tau-leaping procedure. In certain situations it needs to be adjusted to improve its behaviour; for example, if the Poisson draw is often large enough to cause the population to go negative, then a truncation procedure (or acceptance/rejection scheme) needs to be employed in such a way as to retain the average reaction rates. In this simple example we ignore this complication; there are some occasions where the number of $A$ molecules hits zero, so there will be some bias in the estimates presented above.

## Adaptive Tau-Leaping

The "problem" with the tau-leaping method above is that it is very sensitive to the leap size. It is also possible that as the system evolves what started out as a "good" leap size becomes "bad" as the dynamics change. One possible solution to this is to use an "adaptive" method whereby the leap size varies depending on the dynamics. The main idea is to limit the leap sizes from being so large that the populations can reach an unfavourable state (e.g. negative population sizes) or jump to a state "too far away". There are many ways to do this; one of the more popular was developed by Y. Cao and D. Gillespie in 2006.

In order to describe the method we will need to introduce some notation. We let $\mathbf{X}_t = \left( X_t^i \right)_{i=1}^N$ be a vector of population sizes at time t. We introduce variables $v_{ij}$ to represent the change in component $i$ of the population when an event $j$ occurs - we will use $i$ indices to refer to components of the population vector and $j$ indices to refer to event types. $R_j(\mathbf{X}_t)$ is the rate of event $j$ with population $\mathbf{X}_t$. In this method we look to bound the relative shift in rates at each step by a parameter $\epsilon$. In pseudo-code we can describe the process via:
1. **Initialize** - Set initial conditions for the population
2. **Calculate event rates** - $R_j$ for each event type, depending on the state of the system
3. **Calculate auxiliary variables** - for each state component $i$
\begin{align} \mu_i &= \sum_j v_{ij} R_j \\ \sigma_i^2 &= \sum_j v_{ij}^2 R_j \end{align}
4. **Select highest order event** - for each state component $i$, denote the rate of this event as $g_i$
5. **Calculate time step**
$$ \tau = \min_i \left( \min\left( \frac{\max\left( \frac{\epsilon X_i}{g_i}, 1 \right)}{|\mu_i|} , \frac{\max\left( \frac{\epsilon X_i}{g_i}, 1 \right)^2}{\sigma_i^2} \right) \right) $$
6. **Monte-Carlo** - for each event type sample the number of events occurring within the leap step $\tau$
7. **Update** - Update the system state based on the number of events
8.
**Repeat** - Repeat steps 2-7 until some stopping criteria is met Step 4. involves selecting the highest order event - this essentially is the "most important" event that each $i$ is involved in. For very complex systems this may not be an obvious thing to do and will require more finesse. We can see that aside from steps 3-5 this is the exact same scheme as the previous example. There are other adaptive leaping schemes that one could use each with different pros and cons. We can modify the code above to use this scheme via: ```python # An implenetation of the Gillespie algorithm # With adaptive tau-leaping # Applied to a pair of reactions: # A + B -> AB # AB -> A + B import numpy as np import matplotlib.pyplot as plt from scipy.stats import poisson %matplotlib inline # Fix random seed for repeatability np.random.seed(123) ###### Fix model parameters ###### N_A0 = 25 # Initial number of A molecules N_B0 = 35 # Initial number of B molecules N_AB0 = 5 # Initial number of AB molecules rf = 2 # Forward reaction rate rb = 1 # Backwards reaction rate eps = 0.03 # Epsilon adaptive rate steps = 25 # Number of reactions per trajectory cycles = 100 # Number of trajectories iterated over # Set up holder arrays T = np.zeros((cycles, steps+1)) N_A = np.zeros((cycles, steps+1)) N_B = np.zeros((cycles, steps+1)) N_AB = np.zeros((cycles, steps+1)) # Store initial conditions N_A[:,0] = N_A0 N_B[:,0] = N_B0 N_AB[:,0] = N_AB0 ###### Main Code Loop ###### for i in range(cycles): for j in range(steps): # Calculate updated reaction rates Rf = rf * N_A[i,j] * N_B[i,j] Rb = rb * N_AB[i,j] # Calculate auxiliary variables mu_A = Rf - Rb mu_B = Rf - Rb mu_AB = Rb - Rf sig2_A = Rf + Rb sig2_B = Rf + Rb sig2_AB = Rf + Rb # Select highest order reactions g_A = Rf g_B = Rf g_AB = Rb # Caclulate internal maxima - taking care of divide by zero if g_A == 0: max_A = 1 else: max_A = max(eps*N_A[i,j]/g_A,1) if g_B == 0: max_B = 1 else: max_B = max(eps*N_B[i,j]/g_B, 1) if g_AB == 0: max_AB = 1 else: max_AB = max(eps*N_AB[i,j]/g_AB, 1) # Calculate minima for each component min_A = min(max_A / abs(mu_A), max_A**2 / sig2_A) min_B = min(max_B / abs(mu_B), max_B**2 / sig2_B) min_AB = min(max_AB / abs(mu_AB), max_AB**2 / sig2_AB) # Select tau leap size leap = min(min_A, min_B, min_AB) # Calculate number of reactions by type uf = np.random.random() ub = np.random.random() Nf = poisson.ppf(uf, Rf*leap) Nb = poisson.ppf(ub, Rb*leap) # Apply limits to prevent negative population Limitf = min(N_A[i,j], N_B[i,j]) Limitb = N_AB[i,j] Nf = min(Nf, Limitf) Nb = min(Nb, Limitb) # Update populations and times N_A[i,j+1] = N_A[i,j] + Nb - Nf N_B[i,j+1] = N_B[i,j] + Nb - Nf N_AB[i,j+1] = N_AB[i,j] + Nf - Nb T[i,j+1] = T[i,j] + leap # Calculate an average trajectory plot ave_steps = 100 T_max = T.max() # Set up average array holders T_ave = np.linspace(0,T_max,ave_steps+1) N_A_ave = np.zeros(ave_steps+1) N_B_ave = np.zeros(ave_steps+1) N_AB_ave = np.zeros(ave_steps+1) N_A_ave[0] = N_A0 N_B_ave[0] = N_B0 N_AB_ave[0] = N_AB0 # Pass over average array entries for i in range(1, ave_steps+1): tmax = T_ave[i] A_sum = 0 B_sum = 0 AB_sum = 0 t_count = 0 # Pass over each trajectory and step therein for j in range(cycles): for k in range(steps): if T[j,k] <= tmax and T[j,k+1] > tmax: t_count += 1 A_sum += N_A[j,k] B_sum += N_B[j,k] AB_sum += N_AB[j,k] # Caclulate average - taking care if no samples observed if t_count == 0: N_A_ave[i] = N_A_ave[i-1] N_B_ave[i] = N_B_ave[i-1] N_AB_ave[i] = N_AB_ave[i-1] else: N_A_ave[i] = A_sum / t_count N_B_ave[i] = B_sum / 
t_count N_AB_ave[i] = AB_sum / t_count ###### Plot Trajectories ###### fig, axs = plt.subplots(3, 1, figsize=(10,20)) axs[0].plot(T_ave, N_A_ave, marker='', color='red', linewidth=1.9, alpha=0.9) axs[0].set_title('Number A Molecules') axs[0].set_ylim((0,35)) axs[0].set_xlim((0,0.125)) axs[1].plot(T_ave, N_B_ave, marker='', color='red', linewidth=1.9, alpha=0.9) axs[1].set_title('Number B Molecules') axs[1].set_ylim((0,35)) axs[1].set_xlim((0,0.125)) axs[2].plot(T_ave, N_AB_ave, marker='', color='red', linewidth=1.9, alpha=0.9) axs[2].set_title('Number AB Molecules') axs[2].set_xlabel("Time") axs[2].set_ylim((0,35)) axs[2].set_xlim((0,0.125)) for i in range(cycles): axs[0].plot(T[i,:], N_A[i,:], marker='', color='grey', linewidth=0.6, alpha=0.3) axs[1].plot(T[i,:], N_B[i,:], marker='', color='grey', linewidth=0.6, alpha=0.3) axs[2].plot(T[i,:], N_AB[i,:], marker='', color='grey', linewidth=0.6, alpha=0.3) plt.show() ```

As with the previous tau-leaping algorithm, the trajectories are noticeably less exact than in the original Gillespie formulation. However, owing to the variable time step, the trajectories do appear slightly less granular than in the previous tau-leaping formulation. Again the average trajectory is smoother than in the original method for (approximately) the same amount of run-time. Looking at the time=0.025 distributions once again:

```python
time = 0.025
N_A_time = np.zeros(cycles)
N_B_time = np.zeros(cycles)
N_AB_time = np.zeros(cycles)

for i in range(cycles):
    for j in range(1, steps+1):
        if T[i,j] >= time and T[i,j-1] < time:
            N_A_time[i] = N_A[i,j]
            N_B_time[i] = N_B[i,j]
            N_AB_time[i] = N_AB[i,j]
    # If trajectory doesn't span far enough take latest observation
    if T[i, steps] < time:
        N_A_time[i] = N_A[i, steps]
        N_B_time[i] = N_B[i, steps]
        N_AB_time[i] = N_AB[i, steps]

plt.hist(N_A_time, density=True, bins=np.arange(35), label="A", color='lightgrey')
plt.hist(N_B_time, density=True, bins=np.arange(35), label="B", color='dimgrey')
plt.hist(N_AB_time, density=True, bins=np.arange(35), label="AB", color='red')
plt.legend()
plt.show()
```

Again the distributions for a fixed time appear to have become less noisy. In a small-scale, simple example such as this we would expect any "improvements" from a scheme like this to be minor; as we run more complicated examples we would expect a bigger performance differential.

## Conclusion

In this blog post we have seen 3 variations of the Gillespie algorithm: the original, tau-leaping and an adaptive tau-leaping scheme. We have seen that the original variation produces exact simulations of a specified system, and that via tau-leaping we can approximate this and still get reasonable results in a quicker time, which is important when dealing with more complicated and larger systems. At this point we should also see the flexibility inherent in the Gillespie framework and why it has been applied in many different areas.

We can also see that the algorithm is a "gateway" into agent-based schemes - instead of using a purely stochastic mechanism for selecting reaction types/times we could (for example) model individual molecules moving around in space, and if they come within a certain radius of each other at a certain speed then a reaction occurs. This would turn the Gillespie algorithm into a full agent-based model for reaction kinetics (the benefit of doing this in most situations is likely slim to none however).
52843ae0ecb462b3e7ebf2c540885d2f350c0e43
714,517
ipynb
Jupyter Notebook
_notebooks/2020-04-14-Gillespie Algorithm.ipynb
lewiscoleblog/blog
50183d63491abbf9e56676a784f53dfbb3952af1
[ "Apache-2.0" ]
2
2020-03-31T18:53:59.000Z
2021-03-25T01:02:14.000Z
_notebooks/2020-04-14-Gillespie Algorithm.ipynb
lewiscoleblog/blog
50183d63491abbf9e56676a784f53dfbb3952af1
[ "Apache-2.0" ]
3
2020-04-07T15:31:16.000Z
2021-09-28T01:25:25.000Z
_notebooks/2020-04-14-Gillespie Algorithm.ipynb
lewiscoleblog/blog
50183d63491abbf9e56676a784f53dfbb3952af1
[ "Apache-2.0" ]
1
2020-05-09T18:03:39.000Z
2020-05-09T18:03:39.000Z
878.864699
239,428
0.946582
true
8,563
Qwen/Qwen-72B
1. YES 2. YES
0.705785
0.749087
0.528695
__label__eng_Latn
0.994983
0.066664
# Reconstructing an *off-axis* hologram by Fresnel Approximation

Reference: Digital holography and wavefront sensing by Ulf Schnars, Claas Falldorf, John Watson, and Werner Jüptner, Springer-Verlag Berlin, 2016. (Section 3.2)

## Info about the digital hologram:

'ulf7.BMP' is a digital hologram created by recording an object at about 1 meter distance with a HeNe laser (632.8 nm) and an image sensor with 6.8 µm pixel size.

```python
# Import libraries related to matplotlib and mathematical operations
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
```

```python
# Read the hologram image file
hologram = mpimg.imread('ulf7.BMP')
hologram = hologram.astype(float) # Convert into float type. Crucial for non-integer-based mathematical operations

# plot/view the hologram
imgplot = plt.imshow(hologram, cmap="viridis")
```

## Some equations from the book!

The *Fresnel-Kirchhoff* integral describing the diffraction field beyond an aperture is given by the coherent superposition of the secondary waves (section 2.4)

\begin{equation} \Gamma\left(\xi^{\prime}, \eta^{\prime}\right)=\frac{i}{\lambda} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} A(x, y) \frac{\exp \left(-i \frac{2 \pi}{\lambda} \rho^{\prime}\right)}{\rho^{\prime}} Q d x d y \end{equation}

where $A(x, y)$ is the complex amplitude in the plane of the diffracting aperture, $\rho^{\prime}$ is the distance between a point in the aperture plane and a point in the observation plane, and $Q$ is the inclination factor, defined so that there is no backward propagation of the diffracted optical field. For holograms, $Q$ is approximately equal to 1.

A hologram $h(x,y)$ recorded by a reference light wave $E_{R}(x, y)$ can be reconstructed by a conjugate reference wave $E_{R}^{*}(x, y)$ as described by the following *Fresnel-Kirchhoff* integral

\begin{equation} \Gamma(\xi, \eta)=\frac{i}{\lambda} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} h(x, y) E_{R}^{*}(x, y) \frac{\exp \left(-i \frac{2 \pi}{\lambda} \rho\right)}{\rho} d x d y \end{equation}

with $\rho = \sqrt{ (x-\xi)^2 + (y-\eta)^2 + d^2 }$. Here $d$ is the distance between the object and hologram planes.
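For reference (this expansion is not written out in the original text; it is the standard Fresnel, i.e. paraxial, approximation), $\rho$ is expanded to second order in the transverse coordinates, assuming $d$ is large compared with $|x-\xi|$ and $|y-\eta|$:

\begin{equation} \rho = d\sqrt{1 + \frac{(x-\xi)^2}{d^2} + \frac{(y-\eta)^2}{d^2}} \approx d + \frac{(x-\xi)^2}{2d} + \frac{(y-\eta)^2}{2d} \end{equation}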
Substituting the approximated *Taylor* expansion of $\rho$ in the above equation leads to the Fresnel reconstruction field relation (see section 3.2 of the book)

\begin{aligned} \Gamma(\xi, \eta)=& \frac{i}{\lambda d} \exp \left(-i \frac{2 \pi}{\lambda} d\right) \exp \left[-i \frac{\pi}{\lambda d}\left(\xi^{2}+\eta^{2}\right)\right] \times \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} E_{R}^{*}(x, y) h(x, y) \exp \left[-i \frac{\pi}{\lambda d}\left(x^{2}+y^{2}\right)\right] \exp \left[i \frac{2 \pi}{\lambda d}(x \xi+y \eta)\right] d x d y \end{aligned}

Or, in digital form, by

\begin{aligned} \Gamma(m, n)=& \frac{i}{\lambda d} \exp \left(-i \frac{2 \pi}{\lambda} d\right) \exp \left[-i \pi \lambda d\left(\frac{m^{2}}{N^{2} \Delta x^{2}}+\frac{n^{2}}{N^{2} \Delta y^{2}}\right)\right] \times \sum_{k=0}^{N-1} \sum_{l=0}^{N-1} E_{R}^{*}(k, l) h(k, l) \exp \left[-i \frac{\pi}{\lambda d}\left(k^{2} \Delta x^{2}+l^{2} \Delta y^{2}\right)\right] \exp \left[i 2 \pi\left(\frac{k m}{N}+\frac{l n}{N}\right)\right] \\ =& C \times \sum_{k=0}^{N-1} \sum_{l=0}^{N-1} E_{R}^{*}(k, l) h(k, l) \exp \left[-i \frac{\pi}{\lambda d}\left(k^{2} \Delta x^{2}+l^{2} \Delta y^{2}\right)\right] \exp \left[i 2 \pi\left(\frac{k m}{N}+\frac{l n}{N}\right)\right] \end{aligned}

where $h(k,l)$ is the hologram, $N$ is the number of pixels on the camera sensor (assumed number of rows = number of columns; if not, convert the hologram accordingly prior to the operations), $\lambda$ is the wavelength, $\Delta x$ and $\Delta y$ are the horizontal and vertical distances between neighboring sensor pixels, and $d$ is the reconstruction distance. It is easy to see that the last term under the discrete integral is actually an IFT (inverse Fourier transform) of the product of the hologram function and an exponential factor. $C$ is just a complex constant which does not affect the reconstruction process, and $E_{R}^{*}(k, l)$ simplifies to unity for a plane wave as the reconstruction/recording wave.

```python
# prepare the Fresnel operand for the hologram
Nr,Nc = np.shape(hologram) # number of rows and columns in the hologram
wavelength = 632.8e-9      # HeNe laser wavelength in SI units i.e. meters
dx = 6.8e-6                # sensor pixel size in meters
d = -1.054                 # reconstruction distance in meters

Nr = np.linspace(0, Nr-1, Nr)-Nr/2
Nc = np.linspace(0, Nc-1, Nc)-Nc/2
k, l = np.meshgrid(Nc,Nr)

factor = np.multiply(hologram, np.exp(-1j*np.pi/(wavelength*d)*(np.multiply(k, k)*dx**2 + np.multiply(l, l)*dx**2)))
reconstructed_field = np.fft.ifftshift(np.fft.ifft2(np.fft.ifftshift(factor))) # Take inverse Fourier transform of the factor
```

```python
# save and plot
I = np.abs(reconstructed_field)/np.max(np.abs(reconstructed_field)) # normalized intensity profile
plt.imshow(I, cmap="hot", clim=(0.0, 0.2))
plt.colorbar()
mpimg.imsave('fresnel_reconstruction.png', I, cmap="hot", vmin=0.0, vmax=0.3) # save reconstruction matrix as image
```
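One practical point worth keeping in mind when interpreting the result (this remark is an addition, not part of the original notebook): for the discrete Fresnel transform the pixel pitch in the reconstruction plane is not the sensor pitch but scales with the reconstruction distance, $\Delta\xi = \lambda |d| / (N \Delta x)$ (see Section 3.2 of the book). A quick way to check it for the values used above:

```python
# Pixel pitch in the reconstruction plane (assumed relation, see text above):
# Delta_xi = lambda * |d| / (N * Delta_x)
N = hologram.shape[0]  # number of rows actually transformed
dxi = wavelength * abs(d) / (N * dx)
print(f"Reconstruction pixel pitch: {dxi*1e6:.2f} micrometers across {N} pixels")
```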
7a0f6cdf22f1fb9b4db95bb4fc41218c5e6bb614
253,582
ipynb
Jupyter Notebook
Fresnel_reconstruction.ipynb
OptoManishK/Digital_Holography
0fa694e706daaaa703a4553f5ff6518ca8a4c0c3
[ "MIT" ]
4
2020-02-14T13:16:44.000Z
2021-07-15T02:38:30.000Z
Fresnel_reconstruction.ipynb
OptoManishK/Digital_Holography
0fa694e706daaaa703a4553f5ff6518ca8a4c0c3
[ "MIT" ]
null
null
null
Fresnel_reconstruction.ipynb
OptoManishK/Digital_Holography
0fa694e706daaaa703a4553f5ff6518ca8a4c0c3
[ "MIT" ]
4
2019-05-02T05:12:04.000Z
2022-01-22T13:06:25.000Z
1,760.986111
178,208
0.955506
true
1,668
Qwen/Qwen-72B
1. YES 2. YES
0.935347
0.839734
0.785442
__label__eng_Latn
0.921566
0.663178
# Three-particle potentials In this notebook we'll explore the Stillinger-Weber three-particle potential applied to a system of silicon (Si) atoms. The Stillinger-Weber potential is given by a two-particle potential similar to Lennard-Jones and a three-particle potential. \begin{align} U(r) = \sum_{ij} U(r_{ij}) + \sum_{ijk} U(r_{ij}, r_{ik}, \theta_{ijk}). \end{align} The two-particle potential is given by \begin{align} U(r_{ij}) = A \epsilon\left[ B \left( \frac{\sigma}{r_{ij}} \right)^{p} - \left( \frac{\sigma}{r_{ij}} \right)^{q} \right] \exp\left( \frac{\sigma}{r_{ij} - a\sigma} \right), \end{align} where $A$, $B$, $\epsilon$, $\sigma$, $p$, $q$ and $a$ are coefficients that can be adjusted depending on the system we are perusing. The three-particle potential has the form \begin{align} U(r_{ij}, r_{ik}, \theta_{ijk}) = \lambda \epsilon \left[ \cos(\theta_{ijk}) - \cos(\theta_{0}) \right]^2 \exp\left( \frac{\gamma\sigma}{r_{ij} - a\sigma} \right) \exp\left( \frac{\gamma\sigma}{r_{ik} - a\sigma} \right), \end{align} where we've introduced the new parameters $\lambda$ and $\gamma$. Note that we've simplified the potentials as we've removed the indices that can be associated with every coefficient to specify different types of interactions between atoms. Below we use the given parameter file. ```python import os import re import sys sys.path.append("..") import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import sklearn.linear_model from read_lammps_log import read_log, get_temp_lognames, read_rdf_log from diffusion import get_diffusion_constant sns.set(color_codes=True) ``` ```python %%writefile scripts/Si.sw # DATE: 2007-06-11 CONTRIBUTOR: Aidan Thompson, [email protected] # CITATION: Stillinger and Weber, Phys Rev B, 31, 5262, (1985) # Stillinger-Weber parameters for various elements and mixtures # multiple entries can be added to this file, # LAMMPS reads the ones it needs # these entries are in LAMMPS "metal" units: # epsilon = eV; sigma = Angstroms # other quantities are unitless # format of a single entry (one or more lines): # element 1, element 2, element 3, # epsilon, sigma, a, lambda, gamma, costheta0, A, B, p, q, tol # Here are the original parameters in metal units, for Silicon from: # # Stillinger and Weber, Phys. Rev. B, v. 31, p. 5262, (1985) # Si # element 1 Si # element 2 Si # element 3 2.1683 # epsilon 2.0951 # sigma 1.80 # a 21.0 # lambda 1.20 # gamma -0.333333333333 # cos(theta_0) 7.049556277 # A 0.6022245584 # B 4.0 # p 0.0 # q 0.0 # tol ``` Overwriting scripts/Si.sw In the Lennard-Jones-scripts we only specify $\epsilon$, $\sigma$ and $r_c$ (cut-off). Furthermore, we see that for $A = 4$, $B = 1$, $p = 12$, and $q = 6$ we almost recover the Lennard-Jones potential in the two-particle potential if we remove the exponential cut-off function. With $p = 4$ we lower the repulsive effect of the two-particle potential and by setting $q = 0$ we use a fixed attractive force between the particles. 
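To get a feel for this parameter discussion, here is a small self-contained sketch (an illustration added here, not part of the original notebook) that evaluates the two-particle part of the Stillinger-Weber potential with the Si parameters from `Si.sw` and with the Lennard-Jones-like choice $A=4$, $B=1$, $p=12$, $q=6$ mentioned above:

```python
import numpy as np
import matplotlib.pyplot as plt

def sw_two_body(r, A, B, p, q, eps=2.1683, sigma=2.0951, a=1.80):
    """Two-particle part of the Stillinger-Weber potential.
    The exponential enforces a smooth cut-off at r = a*sigma; beyond that U = 0."""
    r = np.asarray(r, dtype=float)
    U = np.zeros_like(r)
    inside = r < a * sigma
    ri = r[inside]
    U[inside] = (A * eps * (B * (sigma / ri) ** p - (sigma / ri) ** q)
                 * np.exp(sigma / (ri - a * sigma)))
    return U

r = np.linspace(1.8, 3.7, 400)
plt.plot(r, sw_two_body(r, A=7.049556277, B=0.6022245584, p=4.0, q=0.0), label="Si parameters")
plt.plot(r, sw_two_body(r, A=4.0, B=1.0, p=12.0, q=6.0), label="LJ-like (A=4, B=1, p=12, q=6)")
plt.ylim(-3, 3)
plt.xlabel(r"$r$ [Å]")
plt.ylabel(r"$U(r)$ [eV]")
plt.legend()
plt.show()
```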
```python %%writefile scripts/si.in units metal atom_style atomic atom_modify map array boundary p p p atom_modify sort 0 0.0 # temperature #variable T equal 1200.0 # diamond unit cell variable myL equal 10 variable myscale equal 1.3 variable bins equal 200 variable a equal 5.431*${myscale} lattice custom $a & a1 1.0 0.0 0.0 & a2 0.0 1.0 0.0 & a3 0.0 0.0 1.0 & basis 0.0 0.0 0.0 & basis 0.0 0.5 0.5 & basis 0.5 0.0 0.5 & basis 0.5 0.5 0.0 & basis 0.25 0.25 0.25 & basis 0.25 0.75 0.75 & basis 0.75 0.25 0.75 & basis 0.75 0.75 0.25 region myreg block 0 ${myL} & 0 ${myL} & 0 ${myL} create_box 1 myreg create_atoms 1 region myreg mass 1 28.06 group Si type 1 velocity all create ${T} 5287286 mom yes rot yes dist gaussian pair_style sw pair_coeff * * scripts/Si.sw Si neighbor 1.0 bin neigh_modify every 1 delay 10 check yes timestep 1.0e-3 #fix 1 all nve # Try using fix npt fix 1 all nvt temp ${T} ${T} 0.05 # Run simulation thermo 10 #dump 1 all custom 10 dat/si.lammpstrj id type x y z vx vy vz run 1000 reset_timestep 0 variable time equal dt*step compute msd all msd compute myrdf all rdf ${bins} fix 2 all ave/time 100 1 100 c_myrdf[*] file dat/si_rdf_${T}.log mode vector thermo_style custom step v_time temp ke pe etotal press c_msd[4] log dat/si_g_${T}.log run 5000 ``` Overwriting scripts/si.in ```python %%writefile scripts/run_si.in export OMP_NUM_THREADS=4 for T in $(seq 1 200 6001); do mpirun -np 4 lmp -var T $T -in scripts/si.in done ``` Overwriting scripts/run_si.in ```python temperature_list, file_list = get_temp_lognames("si_g") ``` ```python fig = plt.figure(figsize=(14, 10)) log_df_list = [] for T, filename in zip(temperature_list, file_list): log_df = read_log(filename) log_df_list.append(log_df) plt.plot( log_df["v_time"], log_df["c_msd[4]"], label=fr"$T = {T:.2f}$" ) plt.xlabel(r"$t$") plt.ylabel(r"$\langle r^2(t)\rangle$") plt.legend(loc="best") plt.title(r"Mean squared displacement as a function of time") plt.show() ``` ```python D_list = [] alpha_list = [] for T, log_df in zip(temperature_list, log_df_list): D, D_int, alpha, alpha_int = get_diffusion_constant(log_df) D_list.append(D) alpha_list.append(alpha) ``` ```python fig = plt.figure(figsize=(14, 10)) plt.plot(temperature_list, D_list) plt.title(r"Plot of the diffusion constant as a function of temperature") plt.xlabel(r"$T$") plt.ylabel(r"$D(t)$") plt.show() ``` ```python temperature_list, rdf_list = get_temp_lognames("si_rdf") ``` ```python g_r_dict = {} bin_centers_dict = {} for temp, filename in zip(temperature_list, rdf_list): bin_centers, g_r = read_rdf_log(filename) g_r_dict[temp] = g_r bin_centers_dict[temp] = bin_centers ``` ```python fig = plt.figure(figsize=(14, 10)) for i, temp in enumerate(temperature_list): key = max(g_r_dict[temp]) plt.plot( bin_centers_dict[temp][key], g_r_dict[temp][key], label=fr"$T = {temp}$", ) plt.xlabel(r"$r$") plt.ylabel(r"$g(r)$") plt.legend(loc="best") plt.title(r"Final radial distribution for varying temperatures") plt.show() ```
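As a closing remark (an addition, not part of the original notebook): the diffusion constants above come from the local `get_diffusion_constant` helper, whose implementation is not shown. A rough, self-contained alternative - under the assumption that the long-time 3-D Einstein relation $\langle r^2(t)\rangle \approx 6 D t$ applies - is a simple linear fit to the logged MSD, using the same column names as the plotting cell above:

```python
import numpy as np

def estimate_diffusion(log_df, skip_fraction=0.2):
    """Rough diffusion-constant estimate from the Einstein relation MSD ~ 6*D*t.
    Skips the initial part of the trajectory before fitting a straight line."""
    t = log_df["v_time"].to_numpy()
    msd = log_df["c_msd[4]"].to_numpy()
    start = int(len(t) * skip_fraction)
    slope, _ = np.polyfit(t[start:], msd[start:], 1)
    return slope / 6.0

# Example usage on the dataframes loaded above:
# D_rough = [estimate_diffusion(df) for df in log_df_list]
```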
85e16e1f8255d0d7bf41a5284a74189169d55184
448,211
ipynb
Jupyter Notebook
project-1/three-particle-potentials.ipynb
Schoyen/FYS4460
0c6ba1deefbfd5e9d1657910243afc2297c695a3
[ "MIT" ]
1
2019-08-29T16:29:18.000Z
2019-08-29T16:29:18.000Z
project-1/three-particle-potentials.ipynb
Schoyen/FYS4460
0c6ba1deefbfd5e9d1657910243afc2297c695a3
[ "MIT" ]
null
null
null
project-1/three-particle-potentials.ipynb
Schoyen/FYS4460
0c6ba1deefbfd5e9d1657910243afc2297c695a3
[ "MIT" ]
1
2020-05-27T14:01:36.000Z
2020-05-27T14:01:36.000Z
1,085.256659
249,100
0.954189
true
2,060
Qwen/Qwen-72B
1. YES 2. YES
0.891811
0.841826
0.750749
__label__eng_Latn
0.769262
0.582574
# Reduced Density Matrices in Tequila This notebook serves as a tutorial to the computation and usage of the one- and two-particle reduced density matrices. ```python import tequila as tq import numpy ``` ## The 1- and 2-RDM First, look at the definition of the reduced density matrices (RDM) for some state $ |\psi\rangle$: 1-RDM: $ \gamma^p_q \equiv \langle \psi | a^p a_q | \psi\rangle$ 2-RDM $ \gamma^{pq}_{rs} \equiv \langle \psi | a^p a^q a_s a_r | \psi\rangle$ (we mainly use the standard physics ordering for the second-quantized operators, i.e. $p,r$ go with particle 1 and $q,s$ with particle 2) The operators $ a^p = a_p^\dagger $ and $a_p$ denote the standard fermionic creation and annihilation operators. Since we work on a quantum computer, $|\psi\rangle$ is represented by some unitary transformation $U$: $|\psi\rangle = U |0\rangle^{\otimes N_q}$, using $N_q$ qubits. This corresponds to $N_q$ spin-orbitals in Jordan-Wigner encoding. Obtaining the RDMs from a quantum computer is most intuitive when using the Jordan-Wigner transformation, since the results directly correspond to the ones computed classically in second quantized form. It is worth mentioning that since we only consider real orbitals in chemistry applications, the implementation also expects only real-valued RDM's. The well-known anticommutation relations yield a series of symmetry properties for the reduced density matrices, which can be taken into consideration to reduce the computational cost: \begin{align} \gamma^p_q &= \gamma^q_p \\ \gamma^{pq}_{rs} &= -\gamma^{qp}_{rs} = -\gamma^{pq}_{sr} = \gamma^{qp}_{sr} = \gamma^{rs}_{pq}\end{align} In chemistry applications, solving the electronic structure problem involves the electronic Hamiltonian (here in Born-Oppenheimer approximation) $$ H_{el} = h_0 + \sum_{pq} h^q_p a^p_q + \frac{1}{2}\sum_{pqrs} h^{rs}_{pq} a^{pq}_{rs}$$ with the one- and two-body integrals $h^q_p, h^{rs}_{pq}$ that turn out to be independent of spin. Therefore, we introduce the spin-free RDMs $\Gamma^P_Q$ and $\Gamma^{PQ}_{RS}$, obtained by spin-summation (we write molecular orbitals in uppercase letters $P,Q,\ldots\in\{1,\ldots,N_p\}$ in opposite to spin-orbitals $p,q,\ldots\in\{1,\ldots,N_q\}$): \begin{align} \Gamma^P_Q &= \sum_{\sigma \in \{\alpha, \beta\}} \gamma^{p\sigma}_{q\sigma} = \langle \psi |\sum_{\sigma} a^{p\sigma} a_{q\sigma} | \psi\rangle \\ \Gamma^{PQ}_{RS} &= \sum_{\sigma,\tau \in \{\alpha, \beta\}} \gamma^{p\sigma q\tau}_{r\sigma s\tau} = \langle \psi | \sum_{\sigma,\tau} a^{p\sigma} a^{q\tau} a_{s\tau} a_{r\sigma} | \psi \rangle. \end{align} Note, that by making use of linearity, we obtain the second equality in the two expressions above. Performing the summation before evaluating the expected value means less expected values and a considerable reduction in computational cost (only $N_p=\frac{N_q}{2}$ molecular orbitals vs. $N_q$ spin-orbitals). Due to the orthogonality of the spin states, the symmetries for the spin-free 2-RDM are slightly less than for the spin-orbital RDM: \begin{align} \Gamma^P_Q &= \Gamma^Q_P\\ \Gamma^{PQ}_{RS} &= \Gamma^{QP}_{SR} = \Gamma^{RS}_{PQ} \end{align} ```python # As an example, let's use the Helium atom in a minimal basis mol = tq.chemistry.Molecule(geometry='He 0.0 0.0 0.0', basis_set='6-31g') # We want to get the 1- and 2-RDM for the (approximate) ground state of Helium # For that, we (i) need to set up a unitary transformation U(angles) # (ii) determine a set of angles using VQE s.th. 
U(angles) |0> = |psi>, where H|psi> = E_0|psi> # (iii) compute the RDMs using compute_rdms # (i) Set up a circuit # This can be done either using the make_uccsd-method (see Chemistry-tutorial) or by a hand-written circuit # We use a hand-written circuit here U = tq.gates.X(target=0) U += tq.gates.X(target=1) U += tq.gates.Ry(target=3, control=0, angle='a1') U += tq.gates.X(target=0) U += tq.gates.X(target=1, control=3) U += tq.gates.Ry(target=2, control=1, angle='a2') U += tq.gates.X(target=1) U += tq.gates.Ry(target=2, control=1, angle='a3') U += tq.gates.X(target=1) U += tq.gates.X(target=2) U += tq.gates.X(target=0, control=2) U += tq.gates.X(target=2) # (ii) Run VQE H = mol.make_hamiltonian() O = tq.objective.objective.ExpectationValue(H=H, U=U) result = tq.minimize(objective=O, method='bfgs') ``` Optimizer: <class 'tequila.optimizers.optimizer_scipy.OptimizerSciPy'> backend : qulacs device : None samples : None save_history : True noise : None Method : BFGS Objective : 1 expectationvalues gradient : 12 expectationvalues active variables : 3 E=-1.64501793 angles= {a1: 0.9750975289027688, a2: 0.9195439775408228, a3: 3.49947562895891} samples= None E=-2.71423735 angles= {a1: 0.027272521865900456, a2: 0.5712705986200453, a3: 3.520294120034508} samples= None E=-2.83675244 angles= {a1: -0.32223270411892413, a2: -0.04569099158289358, a3: 3.5188489289335756} samples= None E=-2.86970164 angles= {a1: -0.13709848313691403, a2: 0.02172805360757754, a3: 3.5204102536245103} samples= None E=-2.86981377 angles= {a1: -0.13088348290012078, a2: 0.008088333832009013, a3: 3.519022680895903} samples= None E=-2.86981636 angles= {a1: -0.13079375632116388, a2: 0.007733482556698751, a3: 3.5173894007616413} samples= None E=-2.86982553 angles= {a1: -0.13043485000533636, a2: 0.006314077455457701, a3: 3.510856280224594} samples= None E=-2.86984780 angles= {a1: -0.13007285288854895, a2: 0.004740925916254425, a3: 3.494264102407214} samples= None E=-2.86988975 angles= {a1: -0.12972368573501192, a2: 0.0029648395618087795, a3: 3.4613580852617147} samples= None E=-2.86996535 angles= {a1: -0.12942551842632163, a2: 0.0008468777208130323, a3: 3.394945842601524} samples= None E=-2.87008241 angles= {a1: -0.12941852758214206, a2: -0.001404284566265573, a3: 3.2550600292581557} samples= None E=-2.87014838 angles= {a1: -0.13025095945928908, a2: -0.0011199665930895575, a3: 3.0890426632635295} samples= None E=-2.87015769 angles= {a1: -0.1309236921681526, a2: 0.0010325708853818772, a3: 3.0701586033329873} samples= None E=-2.87016207 angles= {a1: -0.13164011434830297, a2: 0.003920028405061652, a3: 3.0733549062345684} samples= None Optimization terminated successfully. 
Current function value: -2.870162 Iterations: 12 Function evaluations: 14 Gradient evaluations: 14 ```python # (iii) Using the optimal parameters out of VQE, we know have a circuit U_opt |0> ~ U|0> = |psi> mol.compute_rdms(U=U, variables=result.angles, spin_free=True, get_rdm1=True, get_rdm2=True) rdm1_spinfree, rdm2_spinfree = mol.rdm1, mol.rdm2 print('\nThe spin-free matrices:') print('1-RDM:\n' + str(rdm1_spinfree)) print('2-RDM:\n' + str(rdm2_spinfree)) # Let's also get the spin-orbital rdm2 # We can select to only determine one of either matrix, but if both are needed at some point, it is # more efficient to compute both within one call of compute_rdms print('\nThe spin-ful matrices:') mol.compute_rdms(U=U, variables=result.angles, spin_free=False, get_rdm1=False, get_rdm2=True) rdm1_spin, rdm2_spin = mol.rdm1, mol.rdm2 print('1-RDM is None now: ' + str(rdm1_spin)) print('2-RDM has been determined:\n' + str(rdm2_spin)) # We can compute the 1-rdm still at a later point mol.compute_rdms(U=U, variables=result.angles, spin_free=False, get_rdm1=True, get_rdm2=False) rdm1_spin = mol.rdm1 print('1-RDM is also here now:\n' + str(rdm1_spin)) ``` The spin-free matrices: 1-RDM: [[ 1.99134915 -0.00391427] [-0.00391427 0.00865085]] 2-RDM: [[[[ 1.99134029e+00 -4.19031562e-03] [-4.19031562e-03 -1.31183605e-01]] [[-4.19031562e-03 8.85898841e-06] [ 8.77611396e-06 2.76045591e-04]]] [[[-4.19031562e-03 8.77611396e-06] [ 8.85898841e-06 2.76045591e-04]] [[-1.31183605e-01 2.76045591e-04] [ 2.76045591e-04 8.64198765e-03]]]] The spin-ful matrices: 1-RDM has not been computed. Return None for 1-RDM. 1-RDM is None now: None 2-RDM has been determined: [[[[ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00] [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00] [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00] [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]] [[ 0.00000000e+00 9.95670147e-01 0.00000000e+00 -2.23878547e-03] [-9.95670147e-01 0.00000000e+00 1.95153015e-03 0.00000000e+00] [-0.00000000e+00 -1.95153015e-03 0.00000000e+00 -6.55918025e-02] [ 2.23878547e-03 -0.00000000e+00 6.55918025e-02 0.00000000e+00]] [[ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00] [-0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00] [-0.00000000e+00 -0.00000000e+00 0.00000000e+00 0.00000000e+00] [-0.00000000e+00 -0.00000000e+00 -0.00000000e+00 0.00000000e+00]] [[ 0.00000000e+00 -2.23878547e-03 0.00000000e+00 5.03395665e-06] [ 2.23878547e-03 0.00000000e+00 -4.38805698e-06 0.00000000e+00] [-0.00000000e+00 4.38805698e-06 0.00000000e+00 1.47484561e-04] [-5.03395665e-06 -0.00000000e+00 -1.47484561e-04 0.00000000e+00]]] [[[ 0.00000000e+00 -9.95670147e-01 -0.00000000e+00 2.23878547e-03] [ 9.95670147e-01 0.00000000e+00 -1.95153015e-03 -0.00000000e+00] [ 0.00000000e+00 1.95153015e-03 0.00000000e+00 6.55918025e-02] [-2.23878547e-03 0.00000000e+00 -6.55918025e-02 0.00000000e+00]] [[ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00] [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00] [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00] [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]] [[ 0.00000000e+00 1.95153015e-03 0.00000000e+00 -4.38805698e-06] [-1.95153015e-03 0.00000000e+00 3.82503176e-06 0.00000000e+00] [-0.00000000e+00 -3.82503176e-06 0.00000000e+00 -1.28561031e-04] [ 4.38805698e-06 -0.00000000e+00 1.28561031e-04 0.00000000e+00]] [[ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00] [-0.00000000e+00 0.00000000e+00 
0.00000000e+00 0.00000000e+00] [-0.00000000e+00 -0.00000000e+00 0.00000000e+00 0.00000000e+00] [-0.00000000e+00 -0.00000000e+00 -0.00000000e+00 0.00000000e+00]]] [[[ 0.00000000e+00 -0.00000000e+00 -0.00000000e+00 -0.00000000e+00] [ 0.00000000e+00 0.00000000e+00 -0.00000000e+00 -0.00000000e+00] [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 -0.00000000e+00] [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]] [[ 0.00000000e+00 -1.95153015e-03 -0.00000000e+00 4.38805698e-06] [ 1.95153015e-03 0.00000000e+00 -3.82503176e-06 -0.00000000e+00] [ 0.00000000e+00 3.82503176e-06 0.00000000e+00 1.28561031e-04] [-4.38805698e-06 0.00000000e+00 -1.28561031e-04 0.00000000e+00]] [[ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00] [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00] [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00] [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]] [[ 0.00000000e+00 -6.55918025e-02 0.00000000e+00 1.47484561e-04] [ 6.55918025e-02 0.00000000e+00 -1.28561031e-04 0.00000000e+00] [-0.00000000e+00 1.28561031e-04 0.00000000e+00 4.32099382e-03] [-1.47484561e-04 -0.00000000e+00 -4.32099382e-03 0.00000000e+00]]] [[[ 0.00000000e+00 2.23878547e-03 -0.00000000e+00 -5.03395665e-06] [-2.23878547e-03 0.00000000e+00 4.38805698e-06 -0.00000000e+00] [ 0.00000000e+00 -4.38805698e-06 0.00000000e+00 -1.47484561e-04] [ 5.03395665e-06 0.00000000e+00 1.47484561e-04 0.00000000e+00]] [[ 0.00000000e+00 -0.00000000e+00 -0.00000000e+00 -0.00000000e+00] [ 0.00000000e+00 0.00000000e+00 -0.00000000e+00 -0.00000000e+00] [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 -0.00000000e+00] [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]] [[ 0.00000000e+00 6.55918025e-02 -0.00000000e+00 -1.47484561e-04] [-6.55918025e-02 0.00000000e+00 1.28561031e-04 -0.00000000e+00] [ 0.00000000e+00 -1.28561031e-04 0.00000000e+00 -4.32099382e-03] [ 1.47484561e-04 0.00000000e+00 4.32099382e-03 0.00000000e+00]] [[ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00] [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00] [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00] [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]]]] 1-RDM is also here now: [[ 0.99567518 0. -0.00180405 0. ] [ 0. 0.99567397 0. -0.00211022] [-0.00180405 0. 0.00432482 0. ] [ 0. -0.00211022 0. 0.00432603]] ```python # To check consistency with the spin-free rdms, we can do spin-summation afterwards # (again, if only the spin-free version is of interest, it is cheaper to get it right from compute_rdms) rdm1_spinsum, rdm2_spinsum = mol.rdm_spinsum(sum_rdm1=True, sum_rdm2=True) print('\nConsistency of spin summation:') print('1-RDM: ' + str(numpy.allclose(rdm1_spinsum, rdm1_spinfree, atol=1e-10))) print('2-RDM: ' + str(numpy.allclose(rdm2_spinsum, rdm2_spinfree, atol=1e-10))) ``` Consistency of spin summation: 1-RDM: True 2-RDM: True ```python # We can also compute the RDMs using the psi4-interface. 
# Then, psi4 is called to perform a CI-calculation, while collecting the 1- and 2-RDM # Let's use full CI here, but other CI flavors work as well mol.compute_rdms(psi4_method='fci') rdm1_psi4, rdm2_psi4 = mol.rdm1, mol.rdm2 print('\nPsi4-RDMs:') print('1-RDM:\n' + str(rdm1_psi4)) print('2-RDM:\n' + str(rdm2_psi4)) # Comparing the results to the VQE-matrices, we observe a close resemblance, # also suggested by the obtained energies fci_energy = mol.logs['fci'].variables['FCI TOTAL ENERGY'] vqe_energy = result.energy print('\nFCI energy: ' + str(fci_energy)) print('VQE-Energy: ' + str(vqe_energy)) ``` Psi4-RDMs: 1-RDM: [[ 1.9913455 -0.00385072] [-0.00385072 0.0086545 ]] 2-RDM: [[[[ 1.99133696e+00 -4.12235350e-03] [-4.12235350e-03 -1.31213704e-01]] [[-4.12235350e-03 8.53386376e-06] [ 8.53386376e-06 2.71631211e-04]]] [[[-4.12235350e-03 8.53386376e-06] [ 8.53386376e-06 2.71631211e-04]] [[-1.31213704e-01 2.71631211e-04] [ 2.71631211e-04 8.64596819e-03]]]] FCI energy: -2.870162138900821 VQE-Energy: -2.870162072561385 ## Consistency checks At this point, we can make a few consistency checks. We can validate the trace condition for the 1- and 2-RDM: \begin{align}\mathrm{tr}(\mathbf{\Gamma}_m)&=N!/(N-m)!\\ \mathrm{tr} (\mathbf{\Gamma}_1) &= \sum_P \Gamma^P_P = N \\ \mathrm{tr} (\mathbf{\Gamma}_2) &= \sum_{PQ} \Gamma^{PQ}_{PQ} = N(N-1), \end{align} $N$ describes the number of particles involved, i.e. in our case using a minimal basis this corresponds to $N_p$ above. For the Helium atom in Born-Oppenheimer approximation, $N_p=2$. In the literature, one can also find the $m$-particle reduced density matrices normalized by a factor $1/m!$, which in that case would be inherited by the trace conditions. Also, the (in our case, as we use the wavefunction from VQE, ground-state) energy can be computed by \begin{equation} E = \langle H_{el} \rangle = h_0 + \sum_{PQ} h^Q_P \Gamma^P_Q + \frac{1}{2}\sum_{PQRS} h^{RS}_{PQ} \Gamma^{PQ}_{RS}, \end{equation} where $h_0$ denotes the nuclear repulsion energy, which is 0 for Helium anyways. Note, that the expressions above also hold true for the spin-RDMs, given that the one- and two-body integrals are available in spin-orbital basis. ```python # Computation of consistency checks #todo: normalization of rdm2 *= 1/2 # Trace tr1_spin = numpy.einsum('pp', rdm1_spin, optimize='greedy') tr1_spinfree = numpy.einsum('pp', rdm1_spinfree, optimize='greedy') tr2_spin = numpy.einsum('pqpq', rdm2_spin, optimize='greedy') tr2_spinfree = numpy.einsum('pqpq', rdm2_spinfree, optimize='greedy') print("1-RDM: N_true = 2, N_spin = " + str(tr1_spin) + ", N_spinfree = " + str(tr1_spinfree)+".") print("2-RDM: N*(N-1)_true = 2, spin = " + str(tr2_spin) + ", spinfree = " + str(tr2_spinfree)+".") # Energy # Get molecular integrals h0 = mol.molecule.nuclear_repulsion print("h0 is zero: " + str(h0)) h1 = mol.molecule.one_body_integrals h2 = mol.molecule.two_body_integrals # Reorder two-body-integrals according to physics convention h2 = tq.chemistry.qc_base.NBodyTensor(elems=h2, ordering='openfermion') h2.reorder(to='phys') h2 = h2.elems # Compute energy rdm_energy = numpy.einsum('qp, pq', h1, rdm1_spinfree, optimize='greedy') + 1/2*numpy.einsum('rspq, pqrs', h2, rdm2_spinfree, optimize='greedy') print('\nVQE-Energy is: ' + str(vqe_energy)) print('RDM-energy matches: ' + str(rdm_energy)) ``` 1-RDM: N_true = 2, N_spin = 2.0000000000000004, N_spinfree = 2.0000000000000004. 2-RDM: N*(N-1)_true = 2, spin = 2.0000000000000004, spinfree = 2.000000000000001. 
h0 is zero: 0.0
    
    VQE-Energy is: -2.870162072561385
    RDM-energy matches: -2.870162072561384

## References

... for the definition of the reduced density matrices, spin-free formulation and symmetries:
1. Kutzelnigg, W., Shamasundar, K. R. & Mukherjee, D. Spinfree formulation of reduced density matrices, density cumulants and generalised normal ordering. Mol. Phys. 108, 433–451 (2010).
2. Helgaker, T., Jørgensen, P. & Olsen, J. Molecular Electronic-Structure Theory (John Wiley & Sons, Ltd, 2000).

## Possible applications

So far, the content of this notebook is comparatively simple and misses some interesting applications. An interesting possibility for making use of the RDMs obtained on a quantum computer is given by a technique named quantum subspace expansion, which e.g. can be used to approximate excited states [3], decode quantum errors [4] or improve the accuracy of results [5]. References for these:
3. McClean, J. R., Kimchi-Schwartz, M. E., Carter, J. & De Jong, W. A. Hybrid quantum-classical hierarchy for mitigation of decoherence and determination of excited states. Phys. Rev. A 95, 1–10 (2017).
4. McClean, J. R., Jiang, Z., Rubin, N. C., Babbush, R. & Neven, H. Decoding quantum errors with subspace expansions. Nat. Commun. 11, 1–9 (2020).
5. Takeshita, T. et al. Increasing the Representation Accuracy of Quantum Simulations of Chemistry without Extra Quantum Resources. Phys. Rev. X 10, 11004 (2020).

Everybody is invited to enrich this notebook by implementing one of the techniques mentioned, or some other application of the 1- and 2-RDM!
44c338eb28d1cdc333fb615cd364c5bb0922757f
23,947
ipynb
Jupyter Notebook
tutorials/ReducedDensityMatrices.ipynb
georgios-ts/tequila
e9fbd90ffdaf86e4c91996195c753da598dfd23c
[ "MIT" ]
null
null
null
tutorials/ReducedDensityMatrices.ipynb
georgios-ts/tequila
e9fbd90ffdaf86e4c91996195c753da598dfd23c
[ "MIT" ]
3
2020-12-08T14:15:56.000Z
2020-12-17T16:38:35.000Z
tutorials/ReducedDensityMatrices.ipynb
georgios-ts/tequila
e9fbd90ffdaf86e4c91996195c753da598dfd23c
[ "MIT" ]
null
null
null
49.785863
451
0.59615
true
7,562
Qwen/Qwen-72B
1. YES 2. YES
0.861538
0.72487
0.624503
__label__eng_Latn
0.445852
0.289261
# This notebook shows how to extract a model from a Latex document and simulate the model.

## Why specify a model in Latex?

Sometimes the **implementation** of a model in software doesn't match the **specification** of the model in the text in which the model is presented. It can be a challenge to make sure that the specification is updated to reflect changes made in the implementation.

By extracting the model from a Latex script which describes and specifies the model, one can always be sure that simulations reflect the model as described in the paper. Also the author is forced to make a complete specification of the model, else it won't run.

## The Economic Credit Loss model

This jupyter notebook is inspired by the IMF working paper (WP/20/111) The Expected Credit Loss Modeling from a Top-Down Stress Testing Perspective by Marco Gross, Dimitrios Laliotis, Mindaugas Leika, Pavel Lukyantsau. The working paper and the associated material are located at https://www.imf.org/en/Publications/WP/Issues/2020/07/03/Expected-Credit-Loss-Modeling-from-a-Top-Down-Stress-Testing-Perspective-49545

From the abstract of the paper:
> The objective of this paper is to present an integrated tool suite for IFRS 9- and CECL compatible estimation in top-down solvency stress tests. The tool suite serves as an illustration for institutions wishing to include accounting-based approaches for credit risk modeling in top-down stress tests.

This is a jupyter notebook built to illustrate the conversion of a model from Latex to ModelFlow. The purpose is testing, so use the results with care.

## Import libraries

```python
%load_ext autoreload
%autoreload 2
import pandas as pd
from IPython.core.display import HTML,Markdown

from modelclass import model
import modeljupytermagic

# some useful stuff
model.widescreen()
pd.set_option('display.max_rows', None, 'display.max_columns', 10, 'precision', 2)
sortdf = lambda df: df[sorted([c for c in df.columns])]
```

    The autoreload extension is already loaded. To reload it, use: %reload_ext autoreload

<style> div#notebook-container { width: 95%; } div#menubar-container { width: 65%; } div#maintoolbar-container { width: 99%; } </style>

## Write a latex script

The model consists of the equations and the lists. The jupyter magic command **%%latexflow** will extract the model, then transform the equations to **ModelFlow** equations and finally create a ModelFlow **model** instance. The **model** instance will be able to solve the model.

```python
%%latexflow ecl
### Loans can be in 3 stages
Loans can be in 3 stages, s1,s2 and s3. New loans will be generated and loans will mature.
Two lists of stages are defined: List $stage=\{s1, s2,s3\}$ List $stage\_from=\{s1, s2,s3\}$ ### A share of the loans in each stage wil transition to the same or another stage in the next time frame: \begin{equation} \label{eq:loanfromto} loan\_transition\_from\_to^{stage\_from,stage}_{t} = loan^{stage\_from}_{t-1} \times TR^{stage\_from,stage}_{t} \end{equation} \begin{equation} \label{eq:transition} loan\_transition\_to^{stage}_{t} = \underbrace{\sum_{stage\_from}(loan\_transition\_from\_to^{stage\_from,stage}_{t})}_{Transition} \end{equation} ### A share of the loans in each stage will mature another share will have to be written off \begin{equation} \label{eq:maturing} loan\_maturing^{stage}_{t} = M^{stage}_{t} \times loan^{stage}_{t-1} \end{equation} \begin{equation} \label{eq:writeoff} loan\_writeoff^{stage}_{t} = WRO^{stage}_{t} \times loan^{stage}_{t-1} \end{equation} ### So loans in a stage will reflect the inflow and outflow \begin{equation} \label{eq:E} loan^{stage}_{t} = \underbrace{loan\_transition\_to^{stage}_{t} }_{Transition} -\underbrace{loan\_maturing^{stage}_{t}}_{Maturing} -\underbrace{loan\_writeoff^{stage}_{t}}_{Writeoff} +\underbrace{loan\_new^{stage}_{t}}_{New loans} \end{equation} \begin{equation} \label{eq:new} loan\_new^{stage}_{t} = new^{stage}_{t} \times loan^{stage}_{t-1} \end{equation} \begin{equation} \label{eq:g} loan\_total_{t} = \sum_{stage}(loan^{stage}_{t}) \end{equation} ### New loans are only generated in stage 1. \begin{equation} \label{eq:E2} new\_s1_{t} = \frac{loan\_growth_{t} \times loan\_total_{t-1} + \sum_{stage}((M^{stage}_{t}+WRO^{stage}_{t})\times loan^{stage}_{t-1})}{(loan\_s1_{t-1})} \end{equation} ### Performing Loans \begin{equation} \label{eq:Performing} loan\_performing_{t} = loan\_s1_{t}+loan\_s2_{t} \end{equation} ### Cure \begin{equation} \label{eq:cure} loan\_cure = loan\_transition\_from\_to\_s3\_s1+loan\_transition\_from\_to\_s3\_s2 \end{equation} ### Probability of default (PD) The point in time PD is the fraction of loans in stage s1 and s2 going into stage s3 \begin{equation} \label{eq:PDPIT} PD\_pit= \frac{loan\_transition\_from\_to\_s1\_s3+loan\_transition\_from\_to\_s2\_s3}{loan\_s1+loan\_s2} \end{equation} The Troug The Cycle PD is a slow mowing average of the Point in time PD. The \begin{equation} \label{eq:PDTTC} PD\_TTC = logit^{-1}(logit(PD\_TTC(-1)) + alfa \times \Delta{PD\_pit}) \end{equation} ### And we can specify the dynamic of the transition matrix, based on Z score Let $\Phi$ be the normal cumulative distribution $\frac{1}{\sqrt{2\pi}} \int_{-\infty}^x e^{ -\frac{t^2}{2}}dt$ \begin{equation} \label{eq:tr} TR^{stage\_from,stage}_{t} = \Phi{\left(\frac{bound\_upper^{stage\_from,stage}-\sqrt{\rho}\times Z_{t}}{\sqrt{1-\rho}}\right)}-\Phi{\left(\frac{bound\_lower^{stage\_from,stage}-\sqrt{\rho}\times Z_{t}}{\sqrt{1-\rho}}\right)} \end{equation} ``` # Now creating the model **ecl** ## The model ### Loans can be in 3 stages Loans can be in 3 stages, s1,s2 and s3. New loans will be generated and loans will mature. 
Two lists of stages are defined: List $stage=\{s1, s2,s3\}$ List $stage\_from=\{s1, s2,s3\}$ ### A share of the loans in each stage wil transition to the same or another stage in the next time frame: \begin{equation} \label{eq:loanfromto} loan\_transition\_from\_to^{stage\_from,stage}_{t} = loan^{stage\_from}_{t-1} \times TR^{stage\_from,stage}_{t} \end{equation} \begin{equation} \label{eq:transition} loan\_transition\_to^{stage}_{t} = \underbrace{\sum_{stage\_from}(loan\_transition\_from\_to^{stage\_from,stage}_{t})}_{Transition} \end{equation} ### A share of the loans in each stage will mature another share will have to be written off \begin{equation} \label{eq:maturing} loan\_maturing^{stage}_{t} = M^{stage}_{t} \times loan^{stage}_{t-1} \end{equation} \begin{equation} \label{eq:writeoff} loan\_writeoff^{stage}_{t} = WRO^{stage}_{t} \times loan^{stage}_{t-1} \end{equation} ### So loans in a stage will reflect the inflow and outflow \begin{equation} \label{eq:E} loan^{stage}_{t} = \underbrace{loan\_transition\_to^{stage}_{t} }_{Transition} -\underbrace{loan\_maturing^{stage}_{t}}_{Maturing} -\underbrace{loan\_writeoff^{stage}_{t}}_{Writeoff} +\underbrace{loan\_new^{stage}_{t}}_{New loans} \end{equation} \begin{equation} \label{eq:new} loan\_new^{stage}_{t} = new^{stage}_{t} \times loan^{stage}_{t-1} \end{equation} \begin{equation} \label{eq:g} loan\_total_{t} = \sum_{stage}(loan^{stage}_{t}) \end{equation} ### New loans are only generated in stage 1. \begin{equation} \label{eq:E2} new\_s1_{t} = \frac{loan\_growth_{t} \times loan\_total_{t-1} + \sum_{stage}((M^{stage}_{t}+WRO^{stage}_{t})\times loan^{stage}_{t-1})}{(loan\_s1_{t-1})} \end{equation} ### Performing Loans \begin{equation} \label{eq:Performing} loan\_performing_{t} = loan\_s1_{t}+loan\_s2_{t} \end{equation} ### Cure \begin{equation} \label{eq:cure} loan\_cure = loan\_transition\_from\_to\_s3\_s1+loan\_transition\_from\_to\_s3\_s2 \end{equation} ### Probability of default (PD) The point in time PD is the fraction of loans in stage s1 and s2 going into stage s3 \begin{equation} \label{eq:PDPIT} PD\_pit= \frac{loan\_transition\_from\_to\_s1\_s3+loan\_transition\_from\_to\_s2\_s3}{loan\_s1+loan\_s2} \end{equation} The Troug The Cycle PD is a slow mowing average of the Point in time PD. 
The \begin{equation} \label{eq:PDTTC} PD\_TTC = logit^{-1}(logit(PD\_TTC(-1)) + alfa \times \Delta{PD\_pit}) \end{equation} ### And we can specify the dynamic of the transition matrix, based on Z score Let $\Phi$ be the normal cumulative distribution $\frac{1}{\sqrt{2\pi}} \int_{-\infty}^x e^{ -\frac{t^2}{2}}dt$ \begin{equation} \label{eq:tr} TR^{stage\_from,stage}_{t} = \Phi{\left(\frac{bound\_upper^{stage\_from,stage}-\sqrt{\rho}\times Z_{t}}{\sqrt{1-\rho}}\right)}-\Phi{\left(\frac{bound\_lower^{stage\_from,stage}-\sqrt{\rho}\times Z_{t}}{\sqrt{1-\rho}}\right)} \end{equation} ## The equations in Macro business logic language can be inspected ```python print(ecl.equations_original) ``` Do stage_from $ Do stage $ Frml loanfromto loan_transition_from_to_{stage_from}_{stage} = loan_{stage_from}(-1)*TR_{stage_from}_{stage} $ enddo $ enddo $ Do stage $ Frml transition loan_transition_to_{stage} = sum(stage_from,loan_transition_from_to_{stage_from}_{stage}) $ enddo $ Do stage $ Frml maturing loan_maturing_{stage} = M_{stage}*loan_{stage}(-1) $ enddo $ Do stage $ Frml writeoff loan_writeoff_{stage} = WRO_{stage}*loan_{stage}(-1) $ enddo $ Do stage $ Frml E loan_{stage} = loan_transition_to_{stage}-loan_maturing_{stage}-loan_writeoff_{stage}+loan_new_{stage} $ enddo $ Do stage $ Frml new loan_new_{stage} = new_{stage}*loan_{stage}(-1) $ enddo $ Frml g loan_total = sum(stage,loan_{stage}) $ Frml E2 new_s1 = ((loan_growth*loan_total(-1)+sum(stage,(M_{stage}+WRO_{stage})*loan_{stage}(-1)))/((loan_s1(-1)))) $ Frml Performing loan_performing = loan_s1+loan_s2 $ Frml cure loan_cure = loan_transition_from_to_s3_s1+loan_transition_from_to_s3_s2 $ Frml PDPIT PD_pit = ((loan_transition_from_to_s1_s3+loan_transition_from_to_s2_s3)/(loan_s1+loan_s2)) $ Frml PDTTC PD_TTC = logit_inverse(logit(PD_TTC(-1))+alfa*diff(PD_pit)) $ Do stage_from $ Do stage $ Frml tr TR_{stage_from}_{stage} = NORM.CDF((((bound_upper_{stage_from}_{stage}-sqrt(rho)*Z)/(sqrt(1-rho)))))-NORM.CDF((((bound_lower_{stage_from}_{stage}-sqrt(rho)*Z)/(sqrt(1-rho))))) $ enddo $ enddo $ LIST STAGE = STAGE : S1 S2 S3$ LIST STAGE_FROM = STAGE_FROM : S1 S2 S3$ ### The equations in business logic language can be inspected ```python print(ecl.equations) ``` FRML LOANFROMTO LOAN_TRANSITION_FROM_TO_S1_S1 = LOAN_S1(-1)*TR_S1_S1 $ FRML LOANFROMTO LOAN_TRANSITION_FROM_TO_S1_S2 = LOAN_S1(-1)*TR_S1_S2 $ FRML LOANFROMTO LOAN_TRANSITION_FROM_TO_S1_S3 = LOAN_S1(-1)*TR_S1_S3 $ FRML LOANFROMTO LOAN_TRANSITION_FROM_TO_S2_S1 = LOAN_S2(-1)*TR_S2_S1 $ FRML LOANFROMTO LOAN_TRANSITION_FROM_TO_S2_S2 = LOAN_S2(-1)*TR_S2_S2 $ FRML LOANFROMTO LOAN_TRANSITION_FROM_TO_S2_S3 = LOAN_S2(-1)*TR_S2_S3 $ FRML LOANFROMTO LOAN_TRANSITION_FROM_TO_S3_S1 = LOAN_S3(-1)*TR_S3_S1 $ FRML LOANFROMTO LOAN_TRANSITION_FROM_TO_S3_S2 = LOAN_S3(-1)*TR_S3_S2 $ FRML LOANFROMTO LOAN_TRANSITION_FROM_TO_S3_S3 = LOAN_S3(-1)*TR_S3_S3 $ FRML TRANSITION LOAN_TRANSITION_TO_S1 = (LOAN_TRANSITION_FROM_TO_S1_S1+LOAN_TRANSITION_FROM_TO_S2_S1+LOAN_TRANSITION_FROM_TO_S3_S1) $ FRML TRANSITION LOAN_TRANSITION_TO_S2 = (LOAN_TRANSITION_FROM_TO_S1_S2+LOAN_TRANSITION_FROM_TO_S2_S2+LOAN_TRANSITION_FROM_TO_S3_S2) $ FRML TRANSITION LOAN_TRANSITION_TO_S3 = (LOAN_TRANSITION_FROM_TO_S1_S3+LOAN_TRANSITION_FROM_TO_S2_S3+LOAN_TRANSITION_FROM_TO_S3_S3) $ FRML MATURING LOAN_MATURING_S1 = M_S1*LOAN_S1(-1) $ FRML MATURING LOAN_MATURING_S2 = M_S2*LOAN_S2(-1) $ FRML MATURING LOAN_MATURING_S3 = M_S3*LOAN_S3(-1) $ FRML WRITEOFF LOAN_WRITEOFF_S1 = WRO_S1*LOAN_S1(-1) $ FRML WRITEOFF LOAN_WRITEOFF_S2 = WRO_S2*LOAN_S2(-1) $ FRML WRITEOFF 
LOAN_WRITEOFF_S3 = WRO_S3*LOAN_S3(-1) $ FRML E LOAN_S1 = LOAN_TRANSITION_TO_S1-LOAN_MATURING_S1-LOAN_WRITEOFF_S1+LOAN_NEW_S1 $ FRML E LOAN_S2 = LOAN_TRANSITION_TO_S2-LOAN_MATURING_S2-LOAN_WRITEOFF_S2+LOAN_NEW_S2 $ FRML E LOAN_S3 = LOAN_TRANSITION_TO_S3-LOAN_MATURING_S3-LOAN_WRITEOFF_S3+LOAN_NEW_S3 $ FRML NEW LOAN_NEW_S1 = NEW_S1*LOAN_S1(-1) $ FRML NEW LOAN_NEW_S2 = NEW_S2*LOAN_S2(-1) $ FRML NEW LOAN_NEW_S3 = NEW_S3*LOAN_S3(-1) $ FRML G LOAN_TOTAL = (LOAN_S1+LOAN_S2+LOAN_S3) $ FRML E2 NEW_S1 = ((LOAN_GROWTH*LOAN_TOTAL(-1)+((M_S1+WRO_S1)*LOAN_S1(-1)+(M_S2+WRO_S2)*LOAN_S2(-1)+(M_S3+WRO_S3)*LOAN_S3(-1)))/((LOAN_S1(-1)))) $ FRML PERFORMING LOAN_PERFORMING = LOAN_S1+LOAN_S2 $ FRML CURE LOAN_CURE = LOAN_TRANSITION_FROM_TO_S3_S1+LOAN_TRANSITION_FROM_TO_S3_S2 $ FRML PDPIT PD_PIT = ((LOAN_TRANSITION_FROM_TO_S1_S3+LOAN_TRANSITION_FROM_TO_S2_S3)/(LOAN_S1+LOAN_S2)) $ FRML PDTTC PD_TTC = LOGIT_INVERSE(LOGIT(PD_TTC(-1))+ALFA*((PD_PIT)-(PD_PIT(-1)))) $ FRML TR TR_S1_S1 = NORM.CDF((((BOUND_UPPER_S1_S1-SQRT(RHO)*Z)/(SQRT(1-RHO)))))-NORM.CDF((((BOUND_LOWER_S1_S1-SQRT(RHO)*Z)/(SQRT(1-RHO))))) $ FRML TR TR_S1_S2 = NORM.CDF((((BOUND_UPPER_S1_S2-SQRT(RHO)*Z)/(SQRT(1-RHO)))))-NORM.CDF((((BOUND_LOWER_S1_S2-SQRT(RHO)*Z)/(SQRT(1-RHO))))) $ FRML TR TR_S1_S3 = NORM.CDF((((BOUND_UPPER_S1_S3-SQRT(RHO)*Z)/(SQRT(1-RHO)))))-NORM.CDF((((BOUND_LOWER_S1_S3-SQRT(RHO)*Z)/(SQRT(1-RHO))))) $ FRML TR TR_S2_S1 = NORM.CDF((((BOUND_UPPER_S2_S1-SQRT(RHO)*Z)/(SQRT(1-RHO)))))-NORM.CDF((((BOUND_LOWER_S2_S1-SQRT(RHO)*Z)/(SQRT(1-RHO))))) $ FRML TR TR_S2_S2 = NORM.CDF((((BOUND_UPPER_S2_S2-SQRT(RHO)*Z)/(SQRT(1-RHO)))))-NORM.CDF((((BOUND_LOWER_S2_S2-SQRT(RHO)*Z)/(SQRT(1-RHO))))) $ FRML TR TR_S2_S3 = NORM.CDF((((BOUND_UPPER_S2_S3-SQRT(RHO)*Z)/(SQRT(1-RHO)))))-NORM.CDF((((BOUND_LOWER_S2_S3-SQRT(RHO)*Z)/(SQRT(1-RHO))))) $ FRML TR TR_S3_S1 = NORM.CDF((((BOUND_UPPER_S3_S1-SQRT(RHO)*Z)/(SQRT(1-RHO)))))-NORM.CDF((((BOUND_LOWER_S3_S1-SQRT(RHO)*Z)/(SQRT(1-RHO))))) $ FRML TR TR_S3_S2 = NORM.CDF((((BOUND_UPPER_S3_S2-SQRT(RHO)*Z)/(SQRT(1-RHO)))))-NORM.CDF((((BOUND_LOWER_S3_S2-SQRT(RHO)*Z)/(SQRT(1-RHO))))) $ FRML TR TR_S3_S3 = NORM.CDF((((BOUND_UPPER_S3_S3-SQRT(RHO)*Z)/(SQRT(1-RHO)))))-NORM.CDF((((BOUND_LOWER_S3_S3-SQRT(RHO)*Z)/(SQRT(1-RHO))))) $ LIST STAGE = STAGE : S1 S2 S3$ LIST STAGE_FROM = STAGE_FROM : S1 S2 S3$ ### The model structure can be inspected ```python ecl.drawmodel(sink='LOAN_TOTAL',HR=0,pdf=0,att=0,size=(12,12)) ``` ## load data The data is copy-pasted from the excel sheet ### Load data common for baseline and adverse ```python %%dataframe T0tr noshow prefix=TR_ periods=1 melt S1 S2 S3 S1 89% 9% 2% S2 15% 79% 6% S3 2% 24% 74% ``` ```python %%dataframe startvalues show periods=1 pd_pit lgd_pit 1.6% 20% ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>PD_PIT</th> <th>LGD_PIT</th> </tr> <tr> <th>index</th> <th></th> <th></th> </tr> </thead> <tbody> <tr> <th>2021</th> <td>0.02</td> <td>0.2</td> </tr> </tbody> </table> </div> ```python %%dataframe Loan_t0 show periods=1 melt S1 S2 S3 total loan 500 180 30 710 ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>S1</th> <th>S2</th> <th>S3</th> 
<th>TOTAL</th> </tr> </thead> <tbody> <tr> <th>LOAN</th> <td>500.0</td> <td>180.0</td> <td>30.0</td> <td>710.0</td> </tr> </tbody> </table> </div> <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th>var_name</th> <th>LOAN_S1</th> <th>LOAN_S2</th> <th>LOAN_S3</th> <th>LOAN_TOTAL</th> </tr> <tr> <th>index</th> <th></th> <th></th> <th></th> <th></th> </tr> </thead> <tbody> <tr> <th>2021</th> <td>500.0</td> <td>180.0</td> <td>30.0</td> <td>710.0</td> </tr> </tbody> </table> </div> ```python %%dataframe upper_bin prefix=bound_upper_ nshow periods=7 melt S1 S2 S3 S1 10000 -1,34 -2,33 S2 10000 1,08 -1,64 S3 10000 1,88 0,58 ``` ```python %%dataframe lower_bin prefix=bound_lower_ show periods=7 melt S1 S2 S3 S1 -1,34 -2,33 -10000 S2 1,08 -1,64 -10000 S3 1,88 0,58 -10000 ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>S1</th> <th>S2</th> <th>S3</th> </tr> </thead> <tbody> <tr> <th>S1</th> <td>-1.34</td> <td>-2.33</td> <td>-10000.0</td> </tr> <tr> <th>S2</th> <td>1.08</td> <td>-1.64</td> <td>-10000.0</td> </tr> <tr> <th>S3</th> <td>1.88</td> <td>0.58</td> <td>-10000.0</td> </tr> </tbody> </table> </div> <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th>var_name</th> <th>BOUND_LOWER_S1_S1</th> <th>BOUND_LOWER_S2_S1</th> <th>BOUND_LOWER_S3_S1</th> <th>BOUND_LOWER_S1_S2</th> <th>BOUND_LOWER_S2_S2</th> <th>BOUND_LOWER_S3_S2</th> <th>BOUND_LOWER_S1_S3</th> <th>BOUND_LOWER_S2_S3</th> <th>BOUND_LOWER_S3_S3</th> </tr> <tr> <th>index</th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> </tr> </thead> <tbody> <tr> <th>2021</th> <td>-1.34</td> <td>1.08</td> <td>1.88</td> <td>-2.33</td> <td>-1.64</td> <td>0.58</td> <td>-10000.0</td> <td>-10000.0</td> <td>-10000.0</td> </tr> <tr> <th>2022</th> <td>-1.34</td> <td>1.08</td> <td>1.88</td> <td>-2.33</td> <td>-1.64</td> <td>0.58</td> <td>-10000.0</td> <td>-10000.0</td> <td>-10000.0</td> </tr> <tr> <th>2023</th> <td>-1.34</td> <td>1.08</td> <td>1.88</td> <td>-2.33</td> <td>-1.64</td> <td>0.58</td> <td>-10000.0</td> <td>-10000.0</td> <td>-10000.0</td> </tr> <tr> <th>2024</th> <td>-1.34</td> <td>1.08</td> <td>1.88</td> <td>-2.33</td> <td>-1.64</td> <td>0.58</td> <td>-10000.0</td> <td>-10000.0</td> <td>-10000.0</td> </tr> <tr> <th>2025</th> <td>-1.34</td> <td>1.08</td> <td>1.88</td> <td>-2.33</td> <td>-1.64</td> <td>0.58</td> <td>-10000.0</td> <td>-10000.0</td> <td>-10000.0</td> </tr> <tr> <th>2026</th> <td>-1.34</td> <td>1.08</td> <td>1.88</td> <td>-2.33</td> <td>-1.64</td> <td>0.58</td> <td>-10000.0</td> <td>-10000.0</td> <td>-10000.0</td> </tr> <tr> <th>2027</th> <td>-1.34</td> <td>1.08</td> <td>1.88</td> <td>-2.33</td> <td>-1.64</td> <td>0.58</td> <td>-10000.0</td> <td>-10000.0</td> <td>-10000.0</td> </tr> </tbody> </table> </div> ```python %%dataframe parameters show periods=7 rho 0.2 ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } 
.dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>RHO</th> </tr> <tr> <th>index</th> <th></th> </tr> </thead> <tbody> <tr> <th>2021</th> <td>0.2</td> </tr> <tr> <th>2022</th> <td>0.2</td> </tr> <tr> <th>2023</th> <td>0.2</td> </tr> <tr> <th>2024</th> <td>0.2</td> </tr> <tr> <th>2025</th> <td>0.2</td> </tr> <tr> <th>2026</th> <td>0.2</td> </tr> <tr> <th>2027</th> <td>0.2</td> </tr> </tbody> </table> </div> ### Create the Static dataframe, common to scenarios ```python staticdf = pd.concat([T0tr_melted,Loan_t0_melted,startvalues,upper_bin_melted, lower_bin_melted,parameters],axis=1) HTML(staticdf.T.style.render().replace('nan','')) ``` <style type="text/css" > </style><table id="T_3c816_" ><thead> <tr> <th class="index_name level0" >index</th> <th class="col_heading level0 col0" >2021</th> <th class="col_heading level0 col1" >2022</th> <th class="col_heading level0 col2" >2023</th> <th class="col_heading level0 col3" >2024</th> <th class="col_heading level0 col4" >2025</th> <th class="col_heading level0 col5" >2026</th> <th class="col_heading level0 col6" >2027</th> </tr></thead><tbody> <tr> <th id="T_3c816_level0_row0" class="row_heading level0 row0" >TR_S1_S1</th> <td id="T_3c816_row0_col0" class="data row0 col0" >0.89</td> <td id="T_3c816_row0_col1" class="data row0 col1" ></td> <td id="T_3c816_row0_col2" class="data row0 col2" ></td> <td id="T_3c816_row0_col3" class="data row0 col3" ></td> <td id="T_3c816_row0_col4" class="data row0 col4" ></td> <td id="T_3c816_row0_col5" class="data row0 col5" ></td> <td id="T_3c816_row0_col6" class="data row0 col6" ></td> </tr> <tr> <th id="T_3c816_level0_row1" class="row_heading level0 row1" >TR_S2_S1</th> <td id="T_3c816_row1_col0" class="data row1 col0" >0.15</td> <td id="T_3c816_row1_col1" class="data row1 col1" ></td> <td id="T_3c816_row1_col2" class="data row1 col2" ></td> <td id="T_3c816_row1_col3" class="data row1 col3" ></td> <td id="T_3c816_row1_col4" class="data row1 col4" ></td> <td id="T_3c816_row1_col5" class="data row1 col5" ></td> <td id="T_3c816_row1_col6" class="data row1 col6" ></td> </tr> <tr> <th id="T_3c816_level0_row2" class="row_heading level0 row2" >TR_S3_S1</th> <td id="T_3c816_row2_col0" class="data row2 col0" >0.02</td> <td id="T_3c816_row2_col1" class="data row2 col1" ></td> <td id="T_3c816_row2_col2" class="data row2 col2" ></td> <td id="T_3c816_row2_col3" class="data row2 col3" ></td> <td id="T_3c816_row2_col4" class="data row2 col4" ></td> <td id="T_3c816_row2_col5" class="data row2 col5" ></td> <td id="T_3c816_row2_col6" class="data row2 col6" ></td> </tr> <tr> <th id="T_3c816_level0_row3" class="row_heading level0 row3" >TR_S1_S2</th> <td id="T_3c816_row3_col0" class="data row3 col0" >0.09</td> <td id="T_3c816_row3_col1" class="data row3 col1" ></td> <td id="T_3c816_row3_col2" class="data row3 col2" ></td> <td id="T_3c816_row3_col3" class="data row3 col3" ></td> <td id="T_3c816_row3_col4" class="data row3 col4" ></td> <td id="T_3c816_row3_col5" class="data row3 col5" ></td> <td id="T_3c816_row3_col6" class="data row3 col6" ></td> </tr> <tr> <th id="T_3c816_level0_row4" class="row_heading level0 row4" >TR_S2_S2</th> <td id="T_3c816_row4_col0" class="data row4 col0" >0.79</td> <td id="T_3c816_row4_col1" class="data row4 col1" ></td> <td id="T_3c816_row4_col2" class="data row4 col2" ></td> <td id="T_3c816_row4_col3" class="data row4 col3" ></td> <td id="T_3c816_row4_col4" class="data 
row4 col4" ></td> <td id="T_3c816_row4_col5" class="data row4 col5" ></td> <td id="T_3c816_row4_col6" class="data row4 col6" ></td> </tr> <tr> <th id="T_3c816_level0_row5" class="row_heading level0 row5" >TR_S3_S2</th> <td id="T_3c816_row5_col0" class="data row5 col0" >0.24</td> <td id="T_3c816_row5_col1" class="data row5 col1" ></td> <td id="T_3c816_row5_col2" class="data row5 col2" ></td> <td id="T_3c816_row5_col3" class="data row5 col3" ></td> <td id="T_3c816_row5_col4" class="data row5 col4" ></td> <td id="T_3c816_row5_col5" class="data row5 col5" ></td> <td id="T_3c816_row5_col6" class="data row5 col6" ></td> </tr> <tr> <th id="T_3c816_level0_row6" class="row_heading level0 row6" >TR_S1_S3</th> <td id="T_3c816_row6_col0" class="data row6 col0" >0.02</td> <td id="T_3c816_row6_col1" class="data row6 col1" ></td> <td id="T_3c816_row6_col2" class="data row6 col2" ></td> <td id="T_3c816_row6_col3" class="data row6 col3" ></td> <td id="T_3c816_row6_col4" class="data row6 col4" ></td> <td id="T_3c816_row6_col5" class="data row6 col5" ></td> <td id="T_3c816_row6_col6" class="data row6 col6" ></td> </tr> <tr> <th id="T_3c816_level0_row7" class="row_heading level0 row7" >TR_S2_S3</th> <td id="T_3c816_row7_col0" class="data row7 col0" >0.06</td> <td id="T_3c816_row7_col1" class="data row7 col1" ></td> <td id="T_3c816_row7_col2" class="data row7 col2" ></td> <td id="T_3c816_row7_col3" class="data row7 col3" ></td> <td id="T_3c816_row7_col4" class="data row7 col4" ></td> <td id="T_3c816_row7_col5" class="data row7 col5" ></td> <td id="T_3c816_row7_col6" class="data row7 col6" ></td> </tr> <tr> <th id="T_3c816_level0_row8" class="row_heading level0 row8" >TR_S3_S3</th> <td id="T_3c816_row8_col0" class="data row8 col0" >0.74</td> <td id="T_3c816_row8_col1" class="data row8 col1" ></td> <td id="T_3c816_row8_col2" class="data row8 col2" ></td> <td id="T_3c816_row8_col3" class="data row8 col3" ></td> <td id="T_3c816_row8_col4" class="data row8 col4" ></td> <td id="T_3c816_row8_col5" class="data row8 col5" ></td> <td id="T_3c816_row8_col6" class="data row8 col6" ></td> </tr> <tr> <th id="T_3c816_level0_row9" class="row_heading level0 row9" >LOAN_S1</th> <td id="T_3c816_row9_col0" class="data row9 col0" >500.00</td> <td id="T_3c816_row9_col1" class="data row9 col1" ></td> <td id="T_3c816_row9_col2" class="data row9 col2" ></td> <td id="T_3c816_row9_col3" class="data row9 col3" ></td> <td id="T_3c816_row9_col4" class="data row9 col4" ></td> <td id="T_3c816_row9_col5" class="data row9 col5" ></td> <td id="T_3c816_row9_col6" class="data row9 col6" ></td> </tr> <tr> <th id="T_3c816_level0_row10" class="row_heading level0 row10" >LOAN_S2</th> <td id="T_3c816_row10_col0" class="data row10 col0" >180.00</td> <td id="T_3c816_row10_col1" class="data row10 col1" ></td> <td id="T_3c816_row10_col2" class="data row10 col2" ></td> <td id="T_3c816_row10_col3" class="data row10 col3" ></td> <td id="T_3c816_row10_col4" class="data row10 col4" ></td> <td id="T_3c816_row10_col5" class="data row10 col5" ></td> <td id="T_3c816_row10_col6" class="data row10 col6" ></td> </tr> <tr> <th id="T_3c816_level0_row11" class="row_heading level0 row11" >LOAN_S3</th> <td id="T_3c816_row11_col0" class="data row11 col0" >30.00</td> <td id="T_3c816_row11_col1" class="data row11 col1" ></td> <td id="T_3c816_row11_col2" class="data row11 col2" ></td> <td id="T_3c816_row11_col3" class="data row11 col3" ></td> <td id="T_3c816_row11_col4" class="data row11 col4" ></td> <td id="T_3c816_row11_col5" class="data row11 col5" ></td> <td 
id="T_3c816_row11_col6" class="data row11 col6" ></td> </tr> <tr> <th id="T_3c816_level0_row12" class="row_heading level0 row12" >LOAN_TOTAL</th> <td id="T_3c816_row12_col0" class="data row12 col0" >710.00</td> <td id="T_3c816_row12_col1" class="data row12 col1" ></td> <td id="T_3c816_row12_col2" class="data row12 col2" ></td> <td id="T_3c816_row12_col3" class="data row12 col3" ></td> <td id="T_3c816_row12_col4" class="data row12 col4" ></td> <td id="T_3c816_row12_col5" class="data row12 col5" ></td> <td id="T_3c816_row12_col6" class="data row12 col6" ></td> </tr> <tr> <th id="T_3c816_level0_row13" class="row_heading level0 row13" >PD_PIT</th> <td id="T_3c816_row13_col0" class="data row13 col0" >0.02</td> <td id="T_3c816_row13_col1" class="data row13 col1" ></td> <td id="T_3c816_row13_col2" class="data row13 col2" ></td> <td id="T_3c816_row13_col3" class="data row13 col3" ></td> <td id="T_3c816_row13_col4" class="data row13 col4" ></td> <td id="T_3c816_row13_col5" class="data row13 col5" ></td> <td id="T_3c816_row13_col6" class="data row13 col6" ></td> </tr> <tr> <th id="T_3c816_level0_row14" class="row_heading level0 row14" >LGD_PIT</th> <td id="T_3c816_row14_col0" class="data row14 col0" >0.20</td> <td id="T_3c816_row14_col1" class="data row14 col1" ></td> <td id="T_3c816_row14_col2" class="data row14 col2" ></td> <td id="T_3c816_row14_col3" class="data row14 col3" ></td> <td id="T_3c816_row14_col4" class="data row14 col4" ></td> <td id="T_3c816_row14_col5" class="data row14 col5" ></td> <td id="T_3c816_row14_col6" class="data row14 col6" ></td> </tr> <tr> <th id="T_3c816_level0_row15" class="row_heading level0 row15" >BOUND_UPPER_S1_S1</th> <td id="T_3c816_row15_col0" class="data row15 col0" >10000.00</td> <td id="T_3c816_row15_col1" class="data row15 col1" >10000.00</td> <td id="T_3c816_row15_col2" class="data row15 col2" >10000.00</td> <td id="T_3c816_row15_col3" class="data row15 col3" >10000.00</td> <td id="T_3c816_row15_col4" class="data row15 col4" >10000.00</td> <td id="T_3c816_row15_col5" class="data row15 col5" >10000.00</td> <td id="T_3c816_row15_col6" class="data row15 col6" >10000.00</td> </tr> <tr> <th id="T_3c816_level0_row16" class="row_heading level0 row16" >BOUND_UPPER_S2_S1</th> <td id="T_3c816_row16_col0" class="data row16 col0" >10000.00</td> <td id="T_3c816_row16_col1" class="data row16 col1" >10000.00</td> <td id="T_3c816_row16_col2" class="data row16 col2" >10000.00</td> <td id="T_3c816_row16_col3" class="data row16 col3" >10000.00</td> <td id="T_3c816_row16_col4" class="data row16 col4" >10000.00</td> <td id="T_3c816_row16_col5" class="data row16 col5" >10000.00</td> <td id="T_3c816_row16_col6" class="data row16 col6" >10000.00</td> </tr> <tr> <th id="T_3c816_level0_row17" class="row_heading level0 row17" >BOUND_UPPER_S3_S1</th> <td id="T_3c816_row17_col0" class="data row17 col0" >10000.00</td> <td id="T_3c816_row17_col1" class="data row17 col1" >10000.00</td> <td id="T_3c816_row17_col2" class="data row17 col2" >10000.00</td> <td id="T_3c816_row17_col3" class="data row17 col3" >10000.00</td> <td id="T_3c816_row17_col4" class="data row17 col4" >10000.00</td> <td id="T_3c816_row17_col5" class="data row17 col5" >10000.00</td> <td id="T_3c816_row17_col6" class="data row17 col6" >10000.00</td> </tr> <tr> <th id="T_3c816_level0_row18" class="row_heading level0 row18" >BOUND_UPPER_S1_S2</th> <td id="T_3c816_row18_col0" class="data row18 col0" >-1.34</td> <td id="T_3c816_row18_col1" class="data row18 col1" >-1.34</td> <td id="T_3c816_row18_col2" class="data row18 col2" 
>-1.34</td> <td id="T_3c816_row18_col3" class="data row18 col3" >-1.34</td> <td id="T_3c816_row18_col4" class="data row18 col4" >-1.34</td> <td id="T_3c816_row18_col5" class="data row18 col5" >-1.34</td> <td id="T_3c816_row18_col6" class="data row18 col6" >-1.34</td> </tr> <tr> <th id="T_3c816_level0_row19" class="row_heading level0 row19" >BOUND_UPPER_S2_S2</th> <td id="T_3c816_row19_col0" class="data row19 col0" >1.08</td> <td id="T_3c816_row19_col1" class="data row19 col1" >1.08</td> <td id="T_3c816_row19_col2" class="data row19 col2" >1.08</td> <td id="T_3c816_row19_col3" class="data row19 col3" >1.08</td> <td id="T_3c816_row19_col4" class="data row19 col4" >1.08</td> <td id="T_3c816_row19_col5" class="data row19 col5" >1.08</td> <td id="T_3c816_row19_col6" class="data row19 col6" >1.08</td> </tr> <tr> <th id="T_3c816_level0_row20" class="row_heading level0 row20" >BOUND_UPPER_S3_S2</th> <td id="T_3c816_row20_col0" class="data row20 col0" >1.88</td> <td id="T_3c816_row20_col1" class="data row20 col1" >1.88</td> <td id="T_3c816_row20_col2" class="data row20 col2" >1.88</td> <td id="T_3c816_row20_col3" class="data row20 col3" >1.88</td> <td id="T_3c816_row20_col4" class="data row20 col4" >1.88</td> <td id="T_3c816_row20_col5" class="data row20 col5" >1.88</td> <td id="T_3c816_row20_col6" class="data row20 col6" >1.88</td> </tr> <tr> <th id="T_3c816_level0_row21" class="row_heading level0 row21" >BOUND_UPPER_S1_S3</th> <td id="T_3c816_row21_col0" class="data row21 col0" >-2.33</td> <td id="T_3c816_row21_col1" class="data row21 col1" >-2.33</td> <td id="T_3c816_row21_col2" class="data row21 col2" >-2.33</td> <td id="T_3c816_row21_col3" class="data row21 col3" >-2.33</td> <td id="T_3c816_row21_col4" class="data row21 col4" >-2.33</td> <td id="T_3c816_row21_col5" class="data row21 col5" >-2.33</td> <td id="T_3c816_row21_col6" class="data row21 col6" >-2.33</td> </tr> <tr> <th id="T_3c816_level0_row22" class="row_heading level0 row22" >BOUND_UPPER_S2_S3</th> <td id="T_3c816_row22_col0" class="data row22 col0" >-1.64</td> <td id="T_3c816_row22_col1" class="data row22 col1" >-1.64</td> <td id="T_3c816_row22_col2" class="data row22 col2" >-1.64</td> <td id="T_3c816_row22_col3" class="data row22 col3" >-1.64</td> <td id="T_3c816_row22_col4" class="data row22 col4" >-1.64</td> <td id="T_3c816_row22_col5" class="data row22 col5" >-1.64</td> <td id="T_3c816_row22_col6" class="data row22 col6" >-1.64</td> </tr> <tr> <th id="T_3c816_level0_row23" class="row_heading level0 row23" >BOUND_UPPER_S3_S3</th> <td id="T_3c816_row23_col0" class="data row23 col0" >0.58</td> <td id="T_3c816_row23_col1" class="data row23 col1" >0.58</td> <td id="T_3c816_row23_col2" class="data row23 col2" >0.58</td> <td id="T_3c816_row23_col3" class="data row23 col3" >0.58</td> <td id="T_3c816_row23_col4" class="data row23 col4" >0.58</td> <td id="T_3c816_row23_col5" class="data row23 col5" >0.58</td> <td id="T_3c816_row23_col6" class="data row23 col6" >0.58</td> </tr> <tr> <th id="T_3c816_level0_row24" class="row_heading level0 row24" >BOUND_LOWER_S1_S1</th> <td id="T_3c816_row24_col0" class="data row24 col0" >-1.34</td> <td id="T_3c816_row24_col1" class="data row24 col1" >-1.34</td> <td id="T_3c816_row24_col2" class="data row24 col2" >-1.34</td> <td id="T_3c816_row24_col3" class="data row24 col3" >-1.34</td> <td id="T_3c816_row24_col4" class="data row24 col4" >-1.34</td> <td id="T_3c816_row24_col5" class="data row24 col5" >-1.34</td> <td id="T_3c816_row24_col6" class="data row24 col6" >-1.34</td> </tr> <tr> <th 
id="T_3c816_level0_row25" class="row_heading level0 row25" >BOUND_LOWER_S2_S1</th> <td id="T_3c816_row25_col0" class="data row25 col0" >1.08</td> <td id="T_3c816_row25_col1" class="data row25 col1" >1.08</td> <td id="T_3c816_row25_col2" class="data row25 col2" >1.08</td> <td id="T_3c816_row25_col3" class="data row25 col3" >1.08</td> <td id="T_3c816_row25_col4" class="data row25 col4" >1.08</td> <td id="T_3c816_row25_col5" class="data row25 col5" >1.08</td> <td id="T_3c816_row25_col6" class="data row25 col6" >1.08</td> </tr> <tr> <th id="T_3c816_level0_row26" class="row_heading level0 row26" >BOUND_LOWER_S3_S1</th> <td id="T_3c816_row26_col0" class="data row26 col0" >1.88</td> <td id="T_3c816_row26_col1" class="data row26 col1" >1.88</td> <td id="T_3c816_row26_col2" class="data row26 col2" >1.88</td> <td id="T_3c816_row26_col3" class="data row26 col3" >1.88</td> <td id="T_3c816_row26_col4" class="data row26 col4" >1.88</td> <td id="T_3c816_row26_col5" class="data row26 col5" >1.88</td> <td id="T_3c816_row26_col6" class="data row26 col6" >1.88</td> </tr> <tr> <th id="T_3c816_level0_row27" class="row_heading level0 row27" >BOUND_LOWER_S1_S2</th> <td id="T_3c816_row27_col0" class="data row27 col0" >-2.33</td> <td id="T_3c816_row27_col1" class="data row27 col1" >-2.33</td> <td id="T_3c816_row27_col2" class="data row27 col2" >-2.33</td> <td id="T_3c816_row27_col3" class="data row27 col3" >-2.33</td> <td id="T_3c816_row27_col4" class="data row27 col4" >-2.33</td> <td id="T_3c816_row27_col5" class="data row27 col5" >-2.33</td> <td id="T_3c816_row27_col6" class="data row27 col6" >-2.33</td> </tr> <tr> <th id="T_3c816_level0_row28" class="row_heading level0 row28" >BOUND_LOWER_S2_S2</th> <td id="T_3c816_row28_col0" class="data row28 col0" >-1.64</td> <td id="T_3c816_row28_col1" class="data row28 col1" >-1.64</td> <td id="T_3c816_row28_col2" class="data row28 col2" >-1.64</td> <td id="T_3c816_row28_col3" class="data row28 col3" >-1.64</td> <td id="T_3c816_row28_col4" class="data row28 col4" >-1.64</td> <td id="T_3c816_row28_col5" class="data row28 col5" >-1.64</td> <td id="T_3c816_row28_col6" class="data row28 col6" >-1.64</td> </tr> <tr> <th id="T_3c816_level0_row29" class="row_heading level0 row29" >BOUND_LOWER_S3_S2</th> <td id="T_3c816_row29_col0" class="data row29 col0" >0.58</td> <td id="T_3c816_row29_col1" class="data row29 col1" >0.58</td> <td id="T_3c816_row29_col2" class="data row29 col2" >0.58</td> <td id="T_3c816_row29_col3" class="data row29 col3" >0.58</td> <td id="T_3c816_row29_col4" class="data row29 col4" >0.58</td> <td id="T_3c816_row29_col5" class="data row29 col5" >0.58</td> <td id="T_3c816_row29_col6" class="data row29 col6" >0.58</td> </tr> <tr> <th id="T_3c816_level0_row30" class="row_heading level0 row30" >BOUND_LOWER_S1_S3</th> <td id="T_3c816_row30_col0" class="data row30 col0" >-10000.00</td> <td id="T_3c816_row30_col1" class="data row30 col1" >-10000.00</td> <td id="T_3c816_row30_col2" class="data row30 col2" >-10000.00</td> <td id="T_3c816_row30_col3" class="data row30 col3" >-10000.00</td> <td id="T_3c816_row30_col4" class="data row30 col4" >-10000.00</td> <td id="T_3c816_row30_col5" class="data row30 col5" >-10000.00</td> <td id="T_3c816_row30_col6" class="data row30 col6" >-10000.00</td> </tr> <tr> <th id="T_3c816_level0_row31" class="row_heading level0 row31" >BOUND_LOWER_S2_S3</th> <td id="T_3c816_row31_col0" class="data row31 col0" >-10000.00</td> <td id="T_3c816_row31_col1" class="data row31 col1" >-10000.00</td> <td id="T_3c816_row31_col2" class="data row31 col2" 
>-10000.00</td> <td id="T_3c816_row31_col3" class="data row31 col3" >-10000.00</td> <td id="T_3c816_row31_col4" class="data row31 col4" >-10000.00</td> <td id="T_3c816_row31_col5" class="data row31 col5" >-10000.00</td> <td id="T_3c816_row31_col6" class="data row31 col6" >-10000.00</td> </tr> <tr> <th id="T_3c816_level0_row32" class="row_heading level0 row32" >BOUND_LOWER_S3_S3</th> <td id="T_3c816_row32_col0" class="data row32 col0" >-10000.00</td> <td id="T_3c816_row32_col1" class="data row32 col1" >-10000.00</td> <td id="T_3c816_row32_col2" class="data row32 col2" >-10000.00</td> <td id="T_3c816_row32_col3" class="data row32 col3" >-10000.00</td> <td id="T_3c816_row32_col4" class="data row32 col4" >-10000.00</td> <td id="T_3c816_row32_col5" class="data row32 col5" >-10000.00</td> <td id="T_3c816_row32_col6" class="data row32 col6" >-10000.00</td> </tr> <tr> <th id="T_3c816_level0_row33" class="row_heading level0 row33" >RHO</th> <td id="T_3c816_row33_col0" class="data row33 col0" >0.20</td> <td id="T_3c816_row33_col1" class="data row33 col1" >0.20</td> <td id="T_3c816_row33_col2" class="data row33 col2" >0.20</td> <td id="T_3c816_row33_col3" class="data row33 col3" >0.20</td> <td id="T_3c816_row33_col4" class="data row33 col4" >0.20</td> <td id="T_3c816_row33_col5" class="data row33 col5" >0.20</td> <td id="T_3c816_row33_col6" class="data row33 col6" >0.20</td> </tr> </tbody></table> ### Load data specific for the scenarios ```python %%dataframe inf_baseline nshow t periods=7 melt m wro s1 5% 0 s2 3.8% 0 s3 0 7.5% ``` ```python %%dataframe inf_adverse nshow t periods=7 melt m wro s1 3.8% 0 s2 2.5% 0 s3 0 6.3% ``` ```python %%dataframe projection_baseline nshow Z loan_growth 0 0.01 -0,47 0.01 -0,42 0.01 -0,38 0.01 -0,36 0.01 -0,34 0.01 -0,33 0.01 ``` ```python %%dataframe projection_adverse nshow Z loan_growth 0 0.01 -0,65 -0.01 -0,84 -0.008 -0,99 -0.006 -0,69 -0.004 -0,39 -0.002 -0,24 -0.0 ``` ### Create a dataframe for baseline and adverse scenario ```python baseupdate = pd.concat([inf_baseline_melted, projection_baseline],axis=1).pipe(sortdf) adverseupdate = pd.concat([inf_adverse_melted, projection_adverse],axis=1).pipe(sortdf) getnotnul = lambda x: x.loc[:,(x!=0.0).any(axis=0)] display(Markdown('## Baseline scenario')) display(baseupdate.pipe(getnotnul)) display(Markdown('## Adverse scenario')) display(adverseupdate.pipe(getnotnul)) ``` ## Baseline scenario <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>LOAN_GROWTH</th> <th>M_S1</th> <th>M_S2</th> <th>WRO_S3</th> <th>Z</th> </tr> <tr> <th>index</th> <th></th> <th></th> <th></th> <th></th> <th></th> </tr> </thead> <tbody> <tr> <th>2021</th> <td>0.01</td> <td>0.05</td> <td>0.04</td> <td>0.07</td> <td>0.00</td> </tr> <tr> <th>2022</th> <td>0.01</td> <td>0.05</td> <td>0.04</td> <td>0.07</td> <td>-0.47</td> </tr> <tr> <th>2023</th> <td>0.01</td> <td>0.05</td> <td>0.04</td> <td>0.07</td> <td>-0.42</td> </tr> <tr> <th>2024</th> <td>0.01</td> <td>0.05</td> <td>0.04</td> <td>0.07</td> <td>-0.38</td> </tr> <tr> <th>2025</th> <td>0.01</td> <td>0.05</td> <td>0.04</td> <td>0.07</td> <td>-0.36</td> </tr> <tr> <th>2026</th> <td>0.01</td> <td>0.05</td> <td>0.04</td> <td>0.07</td> <td>-0.34</td> </tr> <tr> <th>2027</th> <td>0.01</td> <td>0.05</td> <td>0.04</td> <td>0.07</td> <td>-0.33</td> </tr> </tbody> </table> </div> ## 
Adverse scenario <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>LOAN_GROWTH</th> <th>M_S1</th> <th>M_S2</th> <th>WRO_S3</th> <th>Z</th> </tr> <tr> <th>index</th> <th></th> <th></th> <th></th> <th></th> <th></th> </tr> </thead> <tbody> <tr> <th>2021</th> <td>1.00e-02</td> <td>0.04</td> <td>0.03</td> <td>0.06</td> <td>0.00</td> </tr> <tr> <th>2022</th> <td>-1.00e-02</td> <td>0.04</td> <td>0.03</td> <td>0.06</td> <td>-0.65</td> </tr> <tr> <th>2023</th> <td>-8.00e-03</td> <td>0.04</td> <td>0.03</td> <td>0.06</td> <td>-0.84</td> </tr> <tr> <th>2024</th> <td>-6.00e-03</td> <td>0.04</td> <td>0.03</td> <td>0.06</td> <td>-0.99</td> </tr> <tr> <th>2025</th> <td>-4.00e-03</td> <td>0.04</td> <td>0.03</td> <td>0.06</td> <td>-0.69</td> </tr> <tr> <th>2026</th> <td>-2.00e-03</td> <td>0.04</td> <td>0.03</td> <td>0.06</td> <td>-0.39</td> </tr> <tr> <th>2027</th> <td>-0.00e+00</td> <td>0.04</td> <td>0.03</td> <td>0.06</td> <td>-0.24</td> </tr> </tbody> </table> </div> ```python baseline = pd.concat([staticdf, inf_baseline_melted, projection_baseline],axis=1) adverse = pd.concat([staticdf, inf_adverse_melted, projection_adverse],axis=1) baseline.index = pd.period_range(start=2021,freq = 'Y',periods=7) adverse.index = pd.period_range(start=2021,freq = 'Y',periods=7) display(baseline.pipe(sortdf).T) ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>2021</th> <th>2022</th> <th>2023</th> <th>2024</th> <th>2025</th> <th>2026</th> <th>2027</th> </tr> </thead> <tbody> <tr> <th>BOUND_LOWER_S1_S1</th> <td>-1.34</td> <td>-1.34</td> <td>-1.34</td> <td>-1.34</td> <td>-1.34</td> <td>-1.34</td> <td>-1.34</td> </tr> <tr> <th>BOUND_LOWER_S1_S2</th> <td>-2.33</td> <td>-2.33</td> <td>-2.33</td> <td>-2.33</td> <td>-2.33</td> <td>-2.33</td> <td>-2.33</td> </tr> <tr> <th>BOUND_LOWER_S1_S3</th> <td>-10000.00</td> <td>-10000.00</td> <td>-10000.00</td> <td>-10000.00</td> <td>-10000.00</td> <td>-10000.00</td> <td>-10000.00</td> </tr> <tr> <th>BOUND_LOWER_S2_S1</th> <td>1.08</td> <td>1.08</td> <td>1.08</td> <td>1.08</td> <td>1.08</td> <td>1.08</td> <td>1.08</td> </tr> <tr> <th>BOUND_LOWER_S2_S2</th> <td>-1.64</td> <td>-1.64</td> <td>-1.64</td> <td>-1.64</td> <td>-1.64</td> <td>-1.64</td> <td>-1.64</td> </tr> <tr> <th>BOUND_LOWER_S2_S3</th> <td>-10000.00</td> <td>-10000.00</td> <td>-10000.00</td> <td>-10000.00</td> <td>-10000.00</td> <td>-10000.00</td> <td>-10000.00</td> </tr> <tr> <th>BOUND_LOWER_S3_S1</th> <td>1.88</td> <td>1.88</td> <td>1.88</td> <td>1.88</td> <td>1.88</td> <td>1.88</td> <td>1.88</td> </tr> <tr> <th>BOUND_LOWER_S3_S2</th> <td>0.58</td> <td>0.58</td> <td>0.58</td> <td>0.58</td> <td>0.58</td> <td>0.58</td> <td>0.58</td> </tr> <tr> <th>BOUND_LOWER_S3_S3</th> <td>-10000.00</td> <td>-10000.00</td> <td>-10000.00</td> <td>-10000.00</td> <td>-10000.00</td> <td>-10000.00</td> <td>-10000.00</td> </tr> <tr> <th>BOUND_UPPER_S1_S1</th> <td>10000.00</td> <td>10000.00</td> <td>10000.00</td> <td>10000.00</td> <td>10000.00</td> <td>10000.00</td> <td>10000.00</td> </tr> <tr> <th>BOUND_UPPER_S1_S2</th> <td>-1.34</td> <td>-1.34</td> <td>-1.34</td> <td>-1.34</td> <td>-1.34</td> 
<td>-1.34</td> <td>-1.34</td> </tr> <tr> <th>BOUND_UPPER_S1_S3</th> <td>-2.33</td> <td>-2.33</td> <td>-2.33</td> <td>-2.33</td> <td>-2.33</td> <td>-2.33</td> <td>-2.33</td> </tr> <tr> <th>BOUND_UPPER_S2_S1</th> <td>10000.00</td> <td>10000.00</td> <td>10000.00</td> <td>10000.00</td> <td>10000.00</td> <td>10000.00</td> <td>10000.00</td> </tr> <tr> <th>BOUND_UPPER_S2_S2</th> <td>1.08</td> <td>1.08</td> <td>1.08</td> <td>1.08</td> <td>1.08</td> <td>1.08</td> <td>1.08</td> </tr> <tr> <th>BOUND_UPPER_S2_S3</th> <td>-1.64</td> <td>-1.64</td> <td>-1.64</td> <td>-1.64</td> <td>-1.64</td> <td>-1.64</td> <td>-1.64</td> </tr> <tr> <th>BOUND_UPPER_S3_S1</th> <td>10000.00</td> <td>10000.00</td> <td>10000.00</td> <td>10000.00</td> <td>10000.00</td> <td>10000.00</td> <td>10000.00</td> </tr> <tr> <th>BOUND_UPPER_S3_S2</th> <td>1.88</td> <td>1.88</td> <td>1.88</td> <td>1.88</td> <td>1.88</td> <td>1.88</td> <td>1.88</td> </tr> <tr> <th>BOUND_UPPER_S3_S3</th> <td>0.58</td> <td>0.58</td> <td>0.58</td> <td>0.58</td> <td>0.58</td> <td>0.58</td> <td>0.58</td> </tr> <tr> <th>LGD_PIT</th> <td>0.20</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <th>LOAN_GROWTH</th> <td>0.01</td> <td>0.01</td> <td>0.01</td> <td>0.01</td> <td>0.01</td> <td>0.01</td> <td>0.01</td> </tr> <tr> <th>LOAN_S1</th> <td>500.00</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <th>LOAN_S2</th> <td>180.00</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <th>LOAN_S3</th> <td>30.00</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <th>LOAN_TOTAL</th> <td>710.00</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <th>M_S1</th> <td>0.05</td> <td>0.05</td> <td>0.05</td> <td>0.05</td> <td>0.05</td> <td>0.05</td> <td>0.05</td> </tr> <tr> <th>M_S2</th> <td>0.04</td> <td>0.04</td> <td>0.04</td> <td>0.04</td> <td>0.04</td> <td>0.04</td> <td>0.04</td> </tr> <tr> <th>M_S3</th> <td>0.00</td> <td>0.00</td> <td>0.00</td> <td>0.00</td> <td>0.00</td> <td>0.00</td> <td>0.00</td> </tr> <tr> <th>PD_PIT</th> <td>0.02</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <th>RHO</th> <td>0.20</td> <td>0.20</td> <td>0.20</td> <td>0.20</td> <td>0.20</td> <td>0.20</td> <td>0.20</td> </tr> <tr> <th>TR_S1_S1</th> <td>0.89</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <th>TR_S1_S2</th> <td>0.09</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <th>TR_S1_S3</th> <td>0.02</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <th>TR_S2_S1</th> <td>0.15</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <th>TR_S2_S2</th> <td>0.79</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <th>TR_S2_S3</th> <td>0.06</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <th>TR_S3_S1</th> <td>0.02</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <th>TR_S3_S2</th> <td>0.24</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <th>TR_S3_S3</th> <td>0.74</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <th>WRO_S1</th> <td>0.00</td> <td>0.00</td> 
<td>0.00</td> <td>0.00</td> <td>0.00</td> <td>0.00</td> <td>0.00</td> </tr> <tr> <th>WRO_S2</th> <td>0.00</td> <td>0.00</td> <td>0.00</td> <td>0.00</td> <td>0.00</td> <td>0.00</td> <td>0.00</td> </tr> <tr> <th>WRO_S3</th> <td>0.07</td> <td>0.07</td> <td>0.07</td> <td>0.07</td> <td>0.07</td> <td>0.07</td> <td>0.07</td> </tr> <tr> <th>Z</th> <td>0.00</td> <td>-0.47</td> <td>-0.42</td> <td>-0.38</td> <td>-0.36</td> <td>-0.34</td> <td>-0.33</td> </tr> </tbody> </table> </div> ## Run baseline ```python base_result = ecl(baseline,keep='Baseline') ``` Will start calculating: testmodel 2022 solved 2023 solved 2024 solved 2025 solved 2026 solved 2027 solved testmodel calculated ### Save model and baseline results ```python ecl.modeldump('ecl.pcim') ``` ## Run adverse ```python adverse_result = ecl(adverse,keep = 'Adverse') ``` Will start calculating: testmodel 2022 solved 2023 solved 2024 solved 2025 solved 2026 solved 2027 solved testmodel calculated ## Inspect Results ```python with ecl.set_smpl('2021','2027'): ecl.keep_plot('loan_total',showtype='growth',legend=False); ``` ```python ecl.keep_plot('tr_*',showtype='level',legend=False); ``` ```python ```
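### Cross-checking the Z-dependent transition probabilities

As a standalone sanity check of the transition equation above, the conditional transition probabilities for stage S1 can be reproduced directly with `scipy`. This cell is only an illustrative sketch and is not part of the model run (the model evaluates the same expression through `NORM.CDF`); the `*_example` names are ad hoc, and the bound values and `rho = 0.2` are copied from the S1 row of the input data above.

```python
# Hedged sketch: TR^{S1,stage}(Z) = Phi((upper - sqrt(rho)*Z)/sqrt(1-rho))
#                                 - Phi((lower - sqrt(rho)*Z)/sqrt(1-rho))
import numpy as np
from scipy.stats import norm

rho_example = 0.20
upper_example = np.array([10000.0, -1.34, -2.33])    # S1 -> S1, S2, S3 upper bounds (from the input above)
lower_example = np.array([-1.34, -2.33, -10000.0])   # S1 -> S1, S2, S3 lower bounds

def conditional_tr(z, upper, lower, rho):
    """Transition probabilities conditional on the systematic factor Z."""
    scale = np.sqrt(1.0 - rho)
    shift = np.sqrt(rho) * z
    return norm.cdf((upper - shift) / scale) - norm.cdf((lower - shift) / scale)

for z in [0.0, -0.5, -1.0]:
    tr_row = conditional_tr(z, upper_example, lower_example, rho_example)
    # Each row sums to (essentially) one because the bounds partition the real line.
    print(f"Z = {z:+.2f}: TR_S1_* = {np.round(tr_row, 4)}, sum = {tr_row.sum():.4f}")
```

A more negative Z shifts probability mass toward the S2 and S3 columns, which is the mechanism behind the heavier stage migrations seen in the adverse scenario.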
fb4b2a9fd0989060ba8ede68b152a7608185d7e9
530,284
ipynb
Jupyter Notebook
Examples/Economic Credit Loss/ECL Z setup.ipynb
IbHansen/Modelflow2
48c5a5c13746650c37d8af36250fd35cdd40b05b
[ "X11" ]
1
2020-11-11T22:58:58.000Z
2020-11-11T22:58:58.000Z
Examples/Economic Credit Loss/ECL Z setup.ipynb
IbHansen/Modelflow2
48c5a5c13746650c37d8af36250fd35cdd40b05b
[ "X11" ]
null
null
null
Examples/Economic Credit Loss/ECL Z setup.ipynb
IbHansen/Modelflow2
48c5a5c13746650c37d8af36250fd35cdd40b05b
[ "X11" ]
1
2022-01-16T17:19:56.000Z
2022-01-16T17:19:56.000Z
119.138171
40,544
0.75782
true
22,222
Qwen/Qwen-72B
1. YES 2. YES
0.699254
0.763484
0.533869
__label__yue_Hant
0.199229
0.078687
# K-Means Clustering ```python import numpy as np import pandas as pd import matplotlib.pyplot as plt from tqdm import tqdm_notebook from sklearn.cluster import KMeans %matplotlib inline ``` Let's use the online news popularity dataset again. ```python df = pd.read_csv('./data/OnlineNewsPopularity/OnlineNewsPopularity.csv') ``` ```python df.columns = list(map(str.strip, df.columns)) ``` ```python df.head() ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>url</th> <th>timedelta</th> <th>n_tokens_title</th> <th>n_tokens_content</th> <th>n_unique_tokens</th> <th>n_non_stop_words</th> <th>n_non_stop_unique_tokens</th> <th>num_hrefs</th> <th>num_self_hrefs</th> <th>num_imgs</th> <th>...</th> <th>min_positive_polarity</th> <th>max_positive_polarity</th> <th>avg_negative_polarity</th> <th>min_negative_polarity</th> <th>max_negative_polarity</th> <th>title_subjectivity</th> <th>title_sentiment_polarity</th> <th>abs_title_subjectivity</th> <th>abs_title_sentiment_polarity</th> <th>shares</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>http://mashable.com/2013/01/07/amazon-instant-...</td> <td>731.0</td> <td>12.0</td> <td>219.0</td> <td>0.663594</td> <td>1.0</td> <td>0.815385</td> <td>4.0</td> <td>2.0</td> <td>1.0</td> <td>...</td> <td>0.100000</td> <td>0.7</td> <td>-0.350000</td> <td>-0.600</td> <td>-0.200000</td> <td>0.500000</td> <td>-0.187500</td> <td>0.000000</td> <td>0.187500</td> <td>593</td> </tr> <tr> <td>1</td> <td>http://mashable.com/2013/01/07/ap-samsung-spon...</td> <td>731.0</td> <td>9.0</td> <td>255.0</td> <td>0.604743</td> <td>1.0</td> <td>0.791946</td> <td>3.0</td> <td>1.0</td> <td>1.0</td> <td>...</td> <td>0.033333</td> <td>0.7</td> <td>-0.118750</td> <td>-0.125</td> <td>-0.100000</td> <td>0.000000</td> <td>0.000000</td> <td>0.500000</td> <td>0.000000</td> <td>711</td> </tr> <tr> <td>2</td> <td>http://mashable.com/2013/01/07/apple-40-billio...</td> <td>731.0</td> <td>9.0</td> <td>211.0</td> <td>0.575130</td> <td>1.0</td> <td>0.663866</td> <td>3.0</td> <td>1.0</td> <td>1.0</td> <td>...</td> <td>0.100000</td> <td>1.0</td> <td>-0.466667</td> <td>-0.800</td> <td>-0.133333</td> <td>0.000000</td> <td>0.000000</td> <td>0.500000</td> <td>0.000000</td> <td>1500</td> </tr> <tr> <td>3</td> <td>http://mashable.com/2013/01/07/astronaut-notre...</td> <td>731.0</td> <td>9.0</td> <td>531.0</td> <td>0.503788</td> <td>1.0</td> <td>0.665635</td> <td>9.0</td> <td>0.0</td> <td>1.0</td> <td>...</td> <td>0.136364</td> <td>0.8</td> <td>-0.369697</td> <td>-0.600</td> <td>-0.166667</td> <td>0.000000</td> <td>0.000000</td> <td>0.500000</td> <td>0.000000</td> <td>1200</td> </tr> <tr> <td>4</td> <td>http://mashable.com/2013/01/07/att-u-verse-apps/</td> <td>731.0</td> <td>13.0</td> <td>1072.0</td> <td>0.415646</td> <td>1.0</td> <td>0.540890</td> <td>19.0</td> <td>19.0</td> <td>20.0</td> <td>...</td> <td>0.033333</td> <td>1.0</td> <td>-0.220192</td> <td>-0.500</td> <td>-0.050000</td> <td>0.454545</td> <td>0.136364</td> <td>0.045455</td> <td>0.136364</td> <td>505</td> </tr> </tbody> </table> <p>5 rows × 61 columns</p> </div> ```python predictors = df.columns[1:-1] ``` ```python predictors ``` Index(['timedelta', 'n_tokens_title', 'n_tokens_content', 'n_unique_tokens', 'n_non_stop_words', 'n_non_stop_unique_tokens', 'num_hrefs', 'num_self_hrefs', 'num_imgs', 'num_videos', 
'average_token_length', 'num_keywords', 'data_channel_is_lifestyle', 'data_channel_is_entertainment', 'data_channel_is_bus', 'data_channel_is_socmed', 'data_channel_is_tech', 'data_channel_is_world', 'kw_min_min', 'kw_max_min', 'kw_avg_min', 'kw_min_max', 'kw_max_max', 'kw_avg_max', 'kw_min_avg', 'kw_max_avg', 'kw_avg_avg', 'self_reference_min_shares', 'self_reference_max_shares', 'self_reference_avg_sharess', 'weekday_is_monday', 'weekday_is_tuesday', 'weekday_is_wednesday', 'weekday_is_thursday', 'weekday_is_friday', 'weekday_is_saturday', 'weekday_is_sunday', 'is_weekend', 'LDA_00', 'LDA_01', 'LDA_02', 'LDA_03', 'LDA_04', 'global_subjectivity', 'global_sentiment_polarity', 'global_rate_positive_words', 'global_rate_negative_words', 'rate_positive_words', 'rate_negative_words', 'avg_positive_polarity', 'min_positive_polarity', 'max_positive_polarity', 'avg_negative_polarity', 'min_negative_polarity', 'max_negative_polarity', 'title_subjectivity', 'title_sentiment_polarity', 'abs_title_subjectivity', 'abs_title_sentiment_polarity'], dtype='object') ```python X = df[predictors] ``` Let's do some clustering! ```python # default number of cluster is 8 km = KMeans(random_state=123) ``` ```python km.fit(X) ``` KMeans(algorithm='auto', copy_x=True, init='k-means++', max_iter=300, n_clusters=8, n_init=10, n_jobs=None, precompute_distances='auto', random_state=123, tol=0.0001, verbose=0) **Important step**: Explain clusters using their cluster centers ```python pd.DataFrame(km.cluster_centers_, columns=predictors, index=[f'cluster_{i}' for i in range(km.n_clusters)]) ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>timedelta</th> <th>n_tokens_title</th> <th>n_tokens_content</th> <th>n_unique_tokens</th> <th>n_non_stop_words</th> <th>n_non_stop_unique_tokens</th> <th>num_hrefs</th> <th>num_self_hrefs</th> <th>num_imgs</th> <th>num_videos</th> <th>...</th> <th>avg_positive_polarity</th> <th>min_positive_polarity</th> <th>max_positive_polarity</th> <th>avg_negative_polarity</th> <th>min_negative_polarity</th> <th>max_negative_polarity</th> <th>title_subjectivity</th> <th>title_sentiment_polarity</th> <th>abs_title_subjectivity</th> <th>abs_title_sentiment_polarity</th> </tr> </thead> <tbody> <tr> <td>cluster_0</td> <td>312.532490</td> <td>10.474285</td> <td>637.205812</td> <td>0.514725</td> <td>0.979677</td> <td>0.667342</td> <td>11.508067</td> <td>3.590160</td> <td>4.768607</td> <td>0.772440</td> <td>...</td> <td>0.351817</td> <td>0.090790</td> <td>0.772173</td> <td>-0.255333</td> <td>-0.544298</td> <td>-0.098654</td> <td>0.269734</td> <td>0.067475</td> <td>0.341026</td> <td>0.148424</td> </tr> <tr> <td>cluster_1</td> <td>701.075981</td> <td>9.825432</td> <td>477.826060</td> <td>0.562130</td> <td>0.990895</td> <td>0.705377</td> <td>9.249922</td> <td>3.243328</td> <td>3.481633</td> <td>1.380220</td> <td>...</td> <td>0.367362</td> <td>0.100495</td> <td>0.770791</td> <td>-0.253593</td> <td>-0.482222</td> <td>-0.114428</td> <td>0.284196</td> <td>0.098022</td> <td>0.344088</td> <td>0.165438</td> </tr> <tr> <td>cluster_2</td> <td>260.382867</td> <td>10.604942</td> <td>484.208237</td> <td>0.540988</td> <td>0.958814</td> <td>0.673785</td> <td>10.807743</td> <td>3.132125</td> <td>4.590280</td> <td>2.007414</td> <td>...</td> <td>0.354090</td> <td>0.099166</td> <td>0.740957</td> 
<td>-0.269538</td> <td>-0.519875</td> <td>-0.114768</td> <td>0.300977</td> <td>0.070144</td> <td>0.343373</td> <td>0.166063</td> </tr> <tr> <td>cluster_3</td> <td>239.278199</td> <td>10.641706</td> <td>579.094123</td> <td>0.593189</td> <td>1.080664</td> <td>0.731937</td> <td>11.657820</td> <td>3.217156</td> <td>5.273270</td> <td>0.973555</td> <td>...</td> <td>0.358234</td> <td>0.096926</td> <td>0.767177</td> <td>-0.267543</td> <td>-0.546387</td> <td>-0.108111</td> <td>0.283421</td> <td>0.065954</td> <td>0.338656</td> <td>0.155495</td> </tr> <tr> <td>cluster_4</td> <td>616.295149</td> <td>9.713263</td> <td>499.539825</td> <td>0.555388</td> <td>0.991946</td> <td>0.698625</td> <td>10.294612</td> <td>3.373904</td> <td>3.695185</td> <td>0.862180</td> <td>...</td> <td>0.357643</td> <td>0.096745</td> <td>0.763352</td> <td>-0.251214</td> <td>-0.485760</td> <td>-0.111387</td> <td>0.261819</td> <td>0.074259</td> <td>0.351109</td> <td>0.142987</td> </tr> <tr> <td>cluster_5</td> <td>308.187500</td> <td>10.549107</td> <td>426.375000</td> <td>0.548780</td> <td>0.937500</td> <td>0.683813</td> <td>7.464286</td> <td>2.285714</td> <td>2.241071</td> <td>2.700893</td> <td>...</td> <td>0.344292</td> <td>0.099235</td> <td>0.708658</td> <td>-0.250590</td> <td>-0.480960</td> <td>-0.114012</td> <td>0.327289</td> <td>0.089509</td> <td>0.324973</td> <td>0.182003</td> </tr> <tr> <td>cluster_6</td> <td>345.552632</td> <td>10.298246</td> <td>583.675439</td> <td>0.541461</td> <td>1.000000</td> <td>0.697670</td> <td>14.210526</td> <td>6.324561</td> <td>2.315789</td> <td>3.947368</td> <td>...</td> <td>0.364190</td> <td>0.083847</td> <td>0.779647</td> <td>-0.299456</td> <td>-0.569846</td> <td>-0.128740</td> <td>0.300746</td> <td>0.081838</td> <td>0.326942</td> <td>0.158885</td> </tr> <tr> <td>cluster_7</td> <td>244.601113</td> <td>10.759184</td> <td>368.749165</td> <td>0.496784</td> <td>0.842301</td> <td>0.608682</td> <td>8.720594</td> <td>2.571058</td> <td>3.952876</td> <td>3.027829</td> <td>...</td> <td>0.320771</td> <td>0.092167</td> <td>0.659715</td> <td>-0.246313</td> <td>-0.461191</td> <td>-0.107880</td> <td>0.324640</td> <td>0.072925</td> <td>0.334441</td> <td>0.181327</td> </tr> </tbody> </table> <p>8 rows × 59 columns</p> </div> ### Pattern/Group Discovery ```python new_X = X.copy() new_X['cluster_assignment'] = km.predict(X) new_X.insert(0, 'url', df.url) new_X.insert(new_X.shape[1], 'shares', df.shares) ``` ```python new_X[['url', 'cluster_assignment', 'shares']].head() ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>url</th> <th>cluster_assignment</th> <th>shares</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>http://mashable.com/2013/01/07/amazon-instant-...</td> <td>1</td> <td>593</td> </tr> <tr> <td>1</td> <td>http://mashable.com/2013/01/07/ap-samsung-spon...</td> <td>1</td> <td>711</td> </tr> <tr> <td>2</td> <td>http://mashable.com/2013/01/07/apple-40-billio...</td> <td>1</td> <td>1500</td> </tr> <tr> <td>3</td> <td>http://mashable.com/2013/01/07/astronaut-notre...</td> <td>1</td> <td>1200</td> </tr> <tr> <td>4</td> <td>http://mashable.com/2013/01/07/att-u-verse-apps/</td> <td>1</td> <td>505</td> </tr> </tbody> </table> </div> An example of what you can explore using these information: **Average number of shares by cluster** ```python 
new_X.groupby('cluster_assignment').shares.mean()
```

    cluster_assignment
    0     2914.458151
    1     3129.574568
    2     4232.111111
    3     3177.414889
    4     3199.874709
    5     3196.093750
    6    11389.885965
    7     4761.953247
    Name: shares, dtype: float64

## How many clusters is a good number of clusters?

Unless you have an SME (subject matter expert) giving you insights on how many clusters to expect, you usually don't know this piece of information. However, we can use metrics to figure out what a good number is.

### Goodness of Cluster Fit - Intra Distance Score

A good clustering result should produce 1) clusters that are as far from each other as possible (maximal inter-cluster distance) and 2) clusters that are as tight as possible (minimal intra-cluster distance).

You can calculate a score as such

\begin{equation}
score=\frac{IntraDistance}{InterDistance}=\frac{\sum_{i=0}^n a_i^2}{\sum_{i=0}^n b_i^2}
\end{equation}

where $a_i$ is the distance between each instance and its cluster centroid and $b_i$ is the distance between cluster centroids.

In `Scikit-Learn`, a score called `inertia` is calculated, which is the within-cluster sum of squares. Read more [here](https://scikit-learn.org/stable/modules/clustering.html#k-means).

In other words, the tipping point of this score can be an indicator of the number of clusters we should use for the final result.

```python
km.inertia_
```

    152497198933272.78

**An example of searching over the number of clusters**

```python
search_range = list(range(2, 21, 2))
search_range
```

    [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]

```python
scores = []
for n_cluster in tqdm_notebook(search_range):
    km = KMeans(n_clusters=n_cluster, random_state=123)
    km.fit(X)
    scores.append(km.inertia_)
```

    /Users/tli/Desktop/talks/data-science-for-developers/venv/lib/python3.6/site-packages/ipykernel_launcher.py:2: TqdmDeprecationWarning: This function will be removed in tqdm==5.0.0
    Please use `tqdm.notebook.tqdm` instead of `tqdm.tqdm_notebook`

    HBox(children=(IntProgress(value=0, max=10), HTML(value='')))

As you can see above, the more clusters there are, the longer it takes for the algorithm to converge.

```python
plt.plot(search_range, scores)
```

This is also called an "elbow plot", which we use to find the elbow point, i.e. the turning point where the score starts to plateau. Let's go with `k=2` for now.
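As a quick numerical companion to the elbow plot, the relative drop in inertia between successive values of `k` can be printed from the `search_range` and `scores` lists computed above. This is only a heuristic sketch, and the 10% threshold below is an arbitrary choice for illustration.

```python
import numpy as np

# Relative drop in inertia when moving from one value of k to the next
# (scores[i] is km.inertia_ for search_range[i], computed above).
rel_drop = -np.diff(scores) / np.array(scores[:-1])

for k, drop in zip(search_range[1:], rel_drop):
    print(f"k={k:>2}: inertia falls by {drop:.1%} compared with the previous k")

# Arbitrary illustrative rule: stop adding clusters once the improvement is under 10%.
flat = [k for k, drop in zip(search_range[1:], rel_drop) if drop < 0.10]
print("First k where the curve flattens under this rule:", flat[0] if flat else None)
```

A crude rule like this usually agrees with the visual elbow; the silhouette analysis referenced in the bonus section at the end of the notebook is a more principled alternative.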
```python km = KMeans(n_clusters=2, random_state=123) ``` ```python km.fit(X) ``` KMeans(algorithm='auto', copy_x=True, init='k-means++', max_iter=300, n_clusters=2, n_init=10, n_jobs=None, precompute_distances='auto', random_state=123, tol=0.0001, verbose=0) ```python pd.DataFrame(km.cluster_centers_, columns=predictors, index=[f'cluster_{i}' for i in range(km.n_clusters)]) ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>timedelta</th> <th>n_tokens_title</th> <th>n_tokens_content</th> <th>n_unique_tokens</th> <th>n_non_stop_words</th> <th>n_non_stop_unique_tokens</th> <th>num_hrefs</th> <th>num_self_hrefs</th> <th>num_imgs</th> <th>num_videos</th> <th>...</th> <th>avg_positive_polarity</th> <th>min_positive_polarity</th> <th>max_positive_polarity</th> <th>avg_negative_polarity</th> <th>min_negative_polarity</th> <th>max_negative_polarity</th> <th>title_subjectivity</th> <th>title_sentiment_polarity</th> <th>abs_title_subjectivity</th> <th>abs_title_sentiment_polarity</th> </tr> </thead> <tbody> <tr> <td>cluster_0</td> <td>324.256843</td> <td>10.448873</td> <td>552.499205</td> <td>0.547004</td> <td>0.996955</td> <td>0.687765</td> <td>11.025783</td> <td>3.29763</td> <td>4.636486</td> <td>1.238521</td> <td>...</td> <td>0.352642</td> <td>0.095004</td> <td>0.755492</td> <td>-0.260042</td> <td>-0.525401</td> <td>-0.106897</td> <td>0.282200</td> <td>0.069104</td> <td>0.341642</td> <td>0.155249</td> </tr> <tr> <td>cluster_1</td> <td>700.957313</td> <td>9.825173</td> <td>478.033271</td> <td>0.562082</td> <td>0.990898</td> <td>0.705313</td> <td>9.257690</td> <td>3.24796</td> <td>3.487445</td> <td>1.379787</td> <td>...</td> <td>0.367362</td> <td>0.100495</td> <td>0.770863</td> <td>-0.253600</td> <td>-0.482385</td> <td>-0.114407</td> <td>0.284107</td> <td>0.097991</td> <td>0.344136</td> <td>0.165386</td> </tr> </tbody> </table> <p>2 rows × 59 columns</p> </div> ### Importance of `Random State` A random state is a seed/starting point in generating random numbers. It tells the computer where to start. If we don't set it, the result of KMeans and any other algorithms that involve random processes will change every time we execute the algorithm. 
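A minimal sketch of that point (not part of the original lab): fitting twice with the same `random_state` yields identical label assignments, whereas two unseeded fits start from different random centroids and are not guaranteed to match.

```python
import numpy as np
from sklearn.cluster import KMeans

# Same seed twice: label assignments are identical.
labels_a = KMeans(n_clusters=2, random_state=123).fit_predict(X)
labels_b = KMeans(n_clusters=2, random_state=123).fit_predict(X)
print("same seed, identical labels:", np.array_equal(labels_a, labels_b))

# No seed: the runs may converge to different solutions, and even when they
# find the same partition the cluster numbering can be permuted.
labels_c = KMeans(n_clusters=2).fit_predict(X)
labels_d = KMeans(n_clusters=2).fit_predict(X)
print("no seed, identical labels:", np.array_equal(labels_c, labels_d))
```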
**With Random State Set** ```python km = KMeans(n_clusters=2, random_state=123) km.fit(X) pd.DataFrame(km.cluster_centers_, columns=predictors, index=[f'cluster_{i}' for i in range(km.n_clusters)]) ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>timedelta</th> <th>n_tokens_title</th> <th>n_tokens_content</th> <th>n_unique_tokens</th> <th>n_non_stop_words</th> <th>n_non_stop_unique_tokens</th> <th>num_hrefs</th> <th>num_self_hrefs</th> <th>num_imgs</th> <th>num_videos</th> <th>...</th> <th>avg_positive_polarity</th> <th>min_positive_polarity</th> <th>max_positive_polarity</th> <th>avg_negative_polarity</th> <th>min_negative_polarity</th> <th>max_negative_polarity</th> <th>title_subjectivity</th> <th>title_sentiment_polarity</th> <th>abs_title_subjectivity</th> <th>abs_title_sentiment_polarity</th> </tr> </thead> <tbody> <tr> <td>cluster_0</td> <td>324.256843</td> <td>10.448873</td> <td>552.499205</td> <td>0.547004</td> <td>0.996955</td> <td>0.687765</td> <td>11.025783</td> <td>3.29763</td> <td>4.636486</td> <td>1.238521</td> <td>...</td> <td>0.352642</td> <td>0.095004</td> <td>0.755492</td> <td>-0.260042</td> <td>-0.525401</td> <td>-0.106897</td> <td>0.282200</td> <td>0.069104</td> <td>0.341642</td> <td>0.155249</td> </tr> <tr> <td>cluster_1</td> <td>700.957313</td> <td>9.825173</td> <td>478.033271</td> <td>0.562082</td> <td>0.990898</td> <td>0.705313</td> <td>9.257690</td> <td>3.24796</td> <td>3.487445</td> <td>1.379787</td> <td>...</td> <td>0.367362</td> <td>0.100495</td> <td>0.770863</td> <td>-0.253600</td> <td>-0.482385</td> <td>-0.114407</td> <td>0.284107</td> <td>0.097991</td> <td>0.344136</td> <td>0.165386</td> </tr> </tbody> </table> <p>2 rows × 59 columns</p> </div> **Without Random State** Might need to increase the number of clusters to see more obvious change. 
```python km = KMeans(n_clusters=5) km.fit(X) pd.DataFrame(km.cluster_centers_, columns=predictors, index=[f'cluster_{i}' for i in range(km.n_clusters)]) ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>timedelta</th> <th>n_tokens_title</th> <th>n_tokens_content</th> <th>n_unique_tokens</th> <th>n_non_stop_words</th> <th>n_non_stop_unique_tokens</th> <th>num_hrefs</th> <th>num_self_hrefs</th> <th>num_imgs</th> <th>num_videos</th> <th>...</th> <th>avg_positive_polarity</th> <th>min_positive_polarity</th> <th>max_positive_polarity</th> <th>avg_negative_polarity</th> <th>min_negative_polarity</th> <th>max_negative_polarity</th> <th>title_subjectivity</th> <th>title_sentiment_polarity</th> <th>abs_title_subjectivity</th> <th>abs_title_sentiment_polarity</th> </tr> </thead> <tbody> <tr> <td>cluster_0</td> <td>277.592180</td> <td>10.555504</td> <td>610.109796</td> <td>0.553030</td> <td>1.029220</td> <td>0.699121</td> <td>11.573725</td> <td>3.420542</td> <td>4.996551</td> <td>0.863920</td> <td>...</td> <td>0.354802</td> <td>0.093644</td> <td>0.769729</td> <td>-0.261092</td> <td>-0.545369</td> <td>-0.103165</td> <td>0.276235</td> <td>0.066924</td> <td>0.339818</td> <td>0.151644</td> </tr> <tr> <td>cluster_1</td> <td>701.075981</td> <td>9.825432</td> <td>477.826060</td> <td>0.562130</td> <td>0.990895</td> <td>0.705377</td> <td>9.249922</td> <td>3.243328</td> <td>3.481633</td> <td>1.380220</td> <td>...</td> <td>0.367362</td> <td>0.100495</td> <td>0.770791</td> <td>-0.253593</td> <td>-0.482222</td> <td>-0.114428</td> <td>0.284196</td> <td>0.098022</td> <td>0.344088</td> <td>0.165438</td> </tr> <tr> <td>cluster_2</td> <td>616.089161</td> <td>9.710315</td> <td>497.429895</td> <td>0.555867</td> <td>0.991958</td> <td>0.698852</td> <td>10.259441</td> <td>3.362937</td> <td>3.689860</td> <td>0.908392</td> <td>...</td> <td>0.357740</td> <td>0.097130</td> <td>0.762617</td> <td>-0.251707</td> <td>-0.486138</td> <td>-0.111403</td> <td>0.261645</td> <td>0.073320</td> <td>0.351674</td> <td>0.143067</td> </tr> <tr> <td>cluster_3</td> <td>305.330396</td> <td>10.541850</td> <td>422.008811</td> <td>0.544106</td> <td>0.929515</td> <td>0.678237</td> <td>7.387665</td> <td>2.268722</td> <td>2.215859</td> <td>2.674009</td> <td>...</td> <td>0.341127</td> <td>0.098364</td> <td>0.701495</td> <td>-0.248526</td> <td>-0.476365</td> <td>-0.113239</td> <td>0.324946</td> <td>0.088547</td> <td>0.325304</td> <td>0.179818</td> </tr> <tr> <td>cluster_4</td> <td>250.957146</td> <td>10.660261</td> <td>454.089132</td> <td>0.527180</td> <td>0.925337</td> <td>0.654065</td> <td>10.305059</td> <td>2.992490</td> <td>4.443782</td> <td>2.298763</td> <td>...</td> <td>0.344591</td> <td>0.096802</td> <td>0.718631</td> <td>-0.263109</td> <td>-0.504161</td> <td>-0.112730</td> <td>0.308219</td> <td>0.071112</td> <td>0.340056</td> <td>0.170856</td> </tr> </tbody> </table> <p>5 rows × 59 columns</p> </div> **Bonus** Other metric and clustering methods too look into: - Metric: [Silhousette Score](https://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_silhouette_analysis.html) - Clustering method: [Other implemented algorithms](https://scikit-learn.org/stable/modules/classes.html#module-sklearn.cluster) ```python ```
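For the silhouette score mentioned above, here is a hedged sketch of how it might be used on this dataset. It is not part of the original lab: the values of `k` are arbitrary, and `sample_size` is used only to keep the pairwise-distance cost manageable on ~40k rows.

```python
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

for k in [2, 4, 8]:
    labels = KMeans(n_clusters=k, random_state=123).fit_predict(X)
    # Silhouette needs pairwise distances, so score a random subsample of rows.
    score = silhouette_score(X, labels, sample_size=5000, random_state=123)
    print(f"k={k}: silhouette = {score:.3f}")
```

Higher values indicate tighter, better-separated clusters. Because the raw features here live on very different scales, standardizing them first (for example with `StandardScaler`) is usually worth trying before reading too much into either inertia or silhouette.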
9fb313f4295683e581388856baec28ed58fd7384
70,580
ipynb
Jupyter Notebook
labs/lab-7-unsupervised-learning-clustering.ipynb
ilmstudios/GOTO19-MC-Data-Science
44f04b1f330bca73c14dd8f7c8cbcc01fba10082
[ "MIT" ]
2
2019-11-13T18:01:45.000Z
2019-11-15T17:32:49.000Z
labs/lab-7-unsupervised-learning-clustering.ipynb
ilmstudios/GOTO19-MC-Data-Science
44f04b1f330bca73c14dd8f7c8cbcc01fba10082
[ "MIT" ]
5
2019-11-13T21:47:23.000Z
2021-08-23T20:28:25.000Z
labs/lab-7-unsupervised-learning-clustering.ipynb
ilmstudios/GOTO19-MC-Data-Science
44f04b1f330bca73c14dd8f7c8cbcc01fba10082
[ "MIT" ]
1
2019-11-13T18:01:48.000Z
2019-11-13T18:01:48.000Z
41.738616
10,288
0.487773
true
9,335
Qwen/Qwen-72B
1. YES 2. YES
0.914901
0.743168
0.679925
__label__kor_Hang
0.165808
0.418025
# Import

```python
#source
#region

#import
#region
import math
from sympy import *
import matplotlib.pyplot as plt
from numpy import linspace
import numpy as np
from sympy.codegen.cfunctions import log10
from sympy.abc import x,t,y
from sympy.plotting import plot
#endregion

#symbol declaration
#region
x, t = symbols('x t')
f = symbols('f', cls=Function)
#endregion
```

# Read Input

```python
#input, output
#region
def ReadInput(file):
    f = file.readline()
    (lowT, upT) = map(lambda s: N(s), file.readline().split(","))
    (lowX, upX) = map(lambda s: N(s), file.readline().split(","))
    (t0, x0) = map(lambda s: N(s), file.readline().split(","))
    epsilon = N(file.readline())
    return (f, lowT, upT, lowX, upX, t0, x0, epsilon)
#endregion
```

# Main Function

```python
def Pica1(f, deltaT, deltaX, t0, x0, M, L, epsilon, mode = ""):
    N = GetN(M, L, deltaT, deltaX, epsilon)
    xn = SymbolicIntegrate(f, t0, x0, N, mode)
    return xn

def Pica2(f, deltaT, t0, x0, M, L, epsilon, length = 69, mode = ""):
    xn = []
    segmentLength = 2 * deltaT / length
    n = (int)(length / 2)
    for i in range(-n, n + 1):
        xn.append([t0 + i * segmentLength, x0])
    xn = NumericIntegrate(f, xn, x0, segmentLength, epsilon, mode)
    return xn

def Pica(filename, length = None, M = None, L = None, deltaT = None, mode = ""):
    try:
        file = open(filename, "r")
        (f, lowT, upT, lowX, upX, t0, x0, epsilon) = ReadInput(file)
        f = sympify(f)
    except:
        raise ValueError("invalid Pica input")
    file.close()
    if not lowX < x0 < upX or not lowT < t0 < upT:
        raise ValueError("invalid Pica input")
    if M is None:
        M = GetM(x, lowT, upT, lowX, upX)
    else:
        if M <= 0:
            raise ValueError("invalid Pica input")
    if L is None:
        L = GetL(x, lowT, upT, lowX, upX)
    else:
        if L < 0:
            raise ValueError("invalid Pica input")
    if L == 0:
        # f does not depend on x, so a single symbolic integration already gives
        # the exact solution; returning here also avoids dividing by L below.
        return (SymbolicIntegrate(f, t0, x0, 1, mode), (float(lowT), float(upT)))
    deltaX = min(x0 - lowX, upX - x0)
    if deltaT is None:
        deltaT = min(deltaX / M, 1 / (2 * L), t0 - lowT, upT - t0)
    interval = (float(t0 - deltaT), float(t0 + deltaT))
    if length is None:
        return (Pica1(f, deltaT, deltaX, t0, x0, M, L, epsilon, mode), interval)
    return Pica2(f, deltaT, t0, x0, M, L, epsilon, length, mode)
```

# Main loop (integrate)

```python
#region
def NumericIntegrate(f, xn, x0, segmentLength, epsilon, mode = ""):
    n = (int)(len(xn) / 2)
    segmentLength /= 2
    maxError = -math.inf
    loop = 0
    while abs(maxError) > epsilon:
        if mode == "test":
            dx = []
        loop += 1
        maxError = -math.inf
        integral = 0
        for i in range(n, 0, -1):
            integral = integral - segmentLength * (f.subs([(t, xn[i][0]), (x, xn[i][1])]) + f.subs([(t, xn[i - 1][0]), (x, xn[i - 1][1])]))
            newValue = x0 + integral
            error = abs(xn[i - 1][1] - newValue)
            xn[i - 1][1] = newValue
            if(error > maxError):
                maxError = error
            if mode == 'test':
                dx.append((xn[i][0], error))
        integral = 0
        for i in range(n, 2 * n):
            integral = integral + segmentLength * (f.subs([(t, xn[i][0]), (x, xn[i][1])]) + f.subs([(t, xn[i + 1][0]), (x, xn[i + 1][1])]))
            newValue = x0 + integral
            error = abs(xn[i + 1][1] - newValue)
            xn[i + 1][1] = x0 + integral
            if(error > maxError):
                maxError = error
            if mode == 'test':
                dx.append((xn[i][0], error))
        if mode == "test":
            print("Iteration", loop, "with max error =", maxError)
            PlotPairs(dx)
            plt.show()
    return xn

def GetN(M, L, deltaT, deltaX, epsilon, mode = ""):
    h = deltaT * L
    N = 1
    error = M * deltaT
    while error > epsilon:
        N += 1
        error = error * h / N
    return N

def SymbolicIntegrate(f, t0, x0, N, mode = ''):
    xn = x0
    for i in range(0, N):
        if mode == 'test':
            print(xn.evalf(2))
        xn = x0 + integrate(f.replace(x, xn), (t, t0, t))
    return xn
#endregion
```

# Not implemented supremum finder

```python
# sup
#region
def GetM(f, lowT, upT, lowX, upX):
    #not implemented
    return 10

def GetL(f, lowT, upT, lowX, upX):
    #not implemented
    return 10
#endregion
```

# Plot

```python
#plot
#region
def PlotPairs(pairList):
    t, x = zip(*pairList)
    plt.scatter(t, x)

def PlotSymbol(symbolOutput):
    func, interval = symbolOutput
    #t = linspace(interval[0], interval[1], 1000)
    #func = t**3/3 + t**7/67
    plot((func, (t, interval[0], interval[1])))

def PlotBoth(symbolOutput, pairList):
    t1, x1 = zip(*pairList)
    plt.scatter(t1, x1)
    func, interval = symbolOutput
    t_vals = linspace(interval[0], interval[1], 1000)
    lam_x = lambdify(t, func, modules=['numpy'])
    x_vals = lam_x(t_vals)
    plt.plot(t_vals, x_vals)

def Plot(f, interval):
    t_vals = linspace(interval[0], interval[1], 1000)
    lam_x = lambdify(t, f, modules = ['numpy'])
    x_vals = lam_x(t_vals)
    plt.plot(t_vals, x_vals)
    #plt.show()
#endregion
```

# Test

```python
filename = "input2.txt"
result = Pica(filename, M = 2.5, L = 1)
result1 = Pica(filename, M = 2.5, L = 1, length = 31, mode = 'test')
print(result[0].evalf(2))
print("Convergence interval:", result[1])
PlotBoth(result, result1)
```

```python
filename = "input3.txt"
result = Pica(filename, M = 5, L = 10)
result1 = Pica(filename, length = 31, M = 12, L = 10)
print(result[0].evalf(2))
print("Convergence interval:", result[1])
PlotBoth(result, result1)
print(result1[0][0])
```

```python
filename = "input4.txt"
#result = Pica(filename, M = 15, L = 1.5)
result1 = Pica(filename, length = 31, M = 15, L = 1.5)
PlotPairs(result1)
#print(result)
#PlotBoth(result, result1)
```

```python
filename = "input1.txt"
result = Pica(filename, M = 2, L = 1)
result1 = Pica(filename, length = 31, M = 2, L = 1, mode = 'test')
print(result[0].evalf(2))
print("Convergence interval:", result[1])
PlotBoth(result, result1)
#Plot(sin(10*t) + cos(10*t), result[1])
#plt.show()
```

```python
filename = "input5.txt"
result1 = Pica(filename, length = 222, M = 50, L = 1, mode = 'test')
PlotPairs(result1)
interval = (float(result1[0][0]), float(result1[len(result1)-1][0]))
Plot(cos(300*t), interval)
plt.show()
```

```python
filename = "input6.txt"
result = Pica(filename, M = 250, L = 100)
result1 = Pica(filename, length = 31, M = 250, L = 100, mode = 'test')
print(result[0].evalf(2))
print("Convergence interval:", result[1])
PlotBoth(result, result1)
#interval = (float(result1[0][0]), float(result1[len(result1)-1][0]))
#Plot(sin(100*t), interval)
```

```python

```

```python

```
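The `SymbolicIntegrate` routine above is a symbolic Picard iteration: each pass substitutes the previous approximation for $x$ in $f(t,x)$ and integrates from $t_0$. The following self-contained SymPy sketch illustrates the same idea on the hypothetical test problem $x' = x$, $x(0) = 1$; the problem and variable names are chosen for this illustration only and are not read from any `input*.txt` file.

```python
# A minimal sketch of symbolic Picard iteration (illustration only; the test
# problem x' = x, x(0) = 1 is an assumption made for this example).
from sympy import symbols, integrate, exp, series

t, x = symbols('t x')
f = x            # right-hand side f(t, x)
t0, x0 = 0, 1    # initial condition x(t0) = x0

xn = x0
for k in range(5):
    # x_{k+1}(t) = x0 + Integral from t0 to t of f(s, x_k(s)) ds
    xn = x0 + integrate(f.subs(x, xn), (t, t0, t))
    print(k + 1, xn)

# The iterates 1 + t + t**2/2 + ... are the partial sums of exp(t).
print(series(exp(t), t, 0, 6))
```

Each iterate adds one more Taylor-like term; this factorial-style error decay is what the `GetN` helper above relies on when it chooses how many symbolic iterations to perform.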
e3410171d6e29c136c77e99838fbf0ee323b0066
269,957
ipynb
Jupyter Notebook
Topic 5 - Solving Differential Equations/26.1.Pica/Pica.ipynb
dthanhqhtt/MI3040-Numerical-Analysis
cf38ea7e6dc834b19e7cffef8b867a02ba472eae
[ "MIT" ]
7
2020-11-23T17:00:20.000Z
2022-01-31T06:28:40.000Z
Topic 5 - Solving Differential Equations/26.1.Pica/Pica.ipynb
dthanhqhtt/MI3040-Numerical-Analysis
cf38ea7e6dc834b19e7cffef8b867a02ba472eae
[ "MIT" ]
2
2020-09-22T17:08:05.000Z
2020-12-20T12:00:59.000Z
Topic 5 - Solving Differential Equations/26.1.Pica/Pica.ipynb
dthanhqhtt/MI3040-Numerical-Analysis
cf38ea7e6dc834b19e7cffef8b867a02ba472eae
[ "MIT" ]
5
2020-12-03T05:11:49.000Z
2021-09-28T03:33:35.000Z
305.036158
39,656
0.928303
true
2,243
Qwen/Qwen-72B
1. YES 2. YES
0.841826
0.760651
0.640335
__label__eng_Latn
0.196042
0.326044
# NRPy+'s Reference Metric Interface ## Author: Zach Etienne ### Formatting improvements courtesy Brandon Clark ### NRPy+ Source Code for this module: [reference_metric.py](../edit/reference_metric.py) ## Introduction: ### Why use a reference metric? Benefits of choosing the best coordinate system for the problem When solving a partial differential equation on the computer, it is useful to first pick a coordinate system well-suited to the geometry of the problem. For example, if we are modeling a spherically-symmetric star, it would be hugely wasteful to model the star in 3-dimensional Cartesian coordinates ($x$,$y$,$z$). This is because in Cartesian coordinates, we would need to choose high sampling in all three Cartesian directions. If instead we chose to model the star in spherical coordinates ($r$,$\theta$,$\phi$), so long as the star is centered at $r=0$, we would not need to model the star with more than one point in the $\theta$ and $\phi$ directions! A similar argument holds for stars that are *nearly* spherically symmetric. Such stars may exhibit density distributions that vary slowly in $\theta$ and $\phi$ directions (e.g., isolated neutron stars or black holes). In these cases the number of points needed to sample the angular directions will still be much smaller than in the radial direction. Thus choice of an appropriate reference metric may directly mitigate the [Curse of Dimensionality](https://en.wikipedia.org/wiki/Curse_of_dimensionality). <a id='toc'></a> # Table of Contents $$\label{toc}$$ This notebook is organized as follow 1. [Step 1](#define_ref_metric): Defining a reference metric, [`reference_metric.py`](../edit/reference_metric.py) 1. [Step 2](#define_geometric): Defining geometric quantities, **`ref_metric__hatted_quantities()`** 1. [Step 3](#prescribed_ref_metric): Prescribed reference metrics in [`reference_metric.py`](../edit/reference_metric.py) 1. [Step 3.a](#sphericallike): Spherical-like coordinate systems 1. [Step 3.a.i](#spherical): **`reference_metric::CoordSystem = "Spherical"`** 1. [Step 3.a.ii](#sinhspherical): **`reference_metric::CoordSystem = "SinhSpherical"`** 1. [Step 3.a.iii](#sinhsphericalv2): **`reference_metric::CoordSystem = "SinhSphericalv2"`** 1. [Step 3.b](#cylindricallike): Cylindrical-like coordinate systems 1. [Step 3.b.i](#cylindrical): **`reference_metric::CoordSystem = "Cylindrical"`** 1. [Step 3.b.ii](#sinhcylindrical): **`reference_metric::CoordSystem = "SinhCylindrical"`** 1. [Step 3.b.iii](#sinhcylindricalv2): **`reference_metric::CoordSystem = "SinhCylindricalv2"`** 1. [Step 3.c](#cartesianlike): Cartesian-like coordinate systems 1. [Step 3.c.i](#cartesian): **`reference_metric::CoordSystem = "Cartesian"`** 1. [Step 3.c.ii](#sinhcartesian): **`reference_metric::CoordSystem = "SinhCartesian"`** 1. [Step 3.d](#prolatespheroidal): Prolate spheroidal coordinates 1. [Step 3.d.i](#symtp): **`reference_metric::CoordSystem = "SymTP"`** 1. [Step 3.d.ii](#sinhsymtp): **`reference_metric::CoordSystem = "SinhSymTP"`** 1. [Step 4](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file <a id='define_ref_metric'></a> # Step 1: Defining a reference metric, [`reference_metric.py`](../edit/reference_metric.py) \[Back to [top](#toc)\] $$\label{define_ref_metric}$$ ***Note that currently only orthogonal reference metrics of dimension 3 or fewer are supported. 
This can be extended if desired.*** NRPy+ assumes all curvilinear coordinate systems map directly from a uniform, Cartesian numerical grid with coordinates $(x,y,z)$=(`xx[0]`,`xx[1]`,`xx[2]`). Thus when defining reference metrics, all defined coordinate quantities must be in terms of the `xx[]` array. As we will see, this adds a great deal of flexibility For example, [**reference_metric.py**](../edit/reference_metric.py) requires that the *orthogonal coordinate scale factors* be defined. As described [here](https://en.wikipedia.org/wiki/Curvilinear_coordinates), the $i$th scale factor is the positive root of the metric element $g_{ii}$. In ordinary spherical coordinates $(r,\theta,\phi)$, with line element $ds^2 = g_{ij} dx^i dx^j = dr^2+ r^2 d \theta^2 + r^2 \sin^2\theta \ d\phi^2$, we would first define * $r = xx_0$ * $\theta = xx_1$ * $\phi = xx_2$, so that the scale factors are defined as * `scalefactor_orthog[0]` = $1$ * `scalefactor_orthog[1]` = $r$ * `scalefactor_orthog[2]` = $r \sin \theta$ Here is the corresponding code: ```python import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends import NRPy_param_funcs as par # NRPy+: parameter interface import reference_metric as rfm # NRPy+: Reference metric support r = rfm.xx[0] th = rfm.xx[1] ph = rfm.xx[2] rfm.scalefactor_orthog[0] = 1 rfm.scalefactor_orthog[1] = r rfm.scalefactor_orthog[2] = r*sp.sin(th) # Notice that the scale factor will be given # in terms of the fundamental Cartesian # grid variables, and not {r,th,ph}: print("r*sin(th) = "+str(rfm.scalefactor_orthog[2])) ``` r*sin(th) = xx0*sin(xx1) Next suppose we wish to modify our radial coordinate $r(xx_0)$ to be an exponentially increasing function, so that our numerical grid $(xx_0,xx_1,xx_2)$ will map to a spherical grid with radial grid spacing ($\Delta r$) that *increases* with $r$. Generally we will find it useful to define $r(xx_0)$ to be an odd function, so let's choose $$r(xx_0) = a \sinh(xx_0/s),$$ where $a$ is an overall radial scaling factor, and $s$ denotes the scale (in units of $xx_0$) over which exponential growth will take place. In our implementation below, note that we use the relation $$\sinh(x) = \frac{e^x - e^{-x}}{2},$$ as SymPy finds it easier to evaluate exponentials than hyperbolic trigonometric functions. ```python a,s = sp.symbols('a s',positive=True) xx0_rescaled = rfm.xx[0] / s r = a*(sp.exp(xx0_rescaled) - sp.exp(-xx0_rescaled))/2 # Must redefine the scalefactors since 'r' has been updated! rfm.scalefactor_orthog[0] = 1 rfm.scalefactor_orthog[1] = r rfm.scalefactor_orthog[2] = r*sp.sin(th) print(rfm.scalefactor_orthog[2]) ``` a*(exp(xx0/s) - exp(-xx0/s))*sin(xx1)/2 Often we will find it useful to also define the appropriate mappings from (`xx[0]`,`xx[1]`,`xx[2]`) to Cartesian coordinates (for plotting purposes) and ordinary spherical coordinates (e.g., in case initial data when solving a PDE are naturally written in spherical coordinates). For this purpose, reference_metric.py also declares lists **`xx_to_Cart[]`** and **`xxSph[]`**, which in this case are defined as ```python rfm.xxSph[0] = r rfm.xxSph[1] = th rfm.xxSph[2] = ph rfm.xx_to_Cart[0] = r*sp.sin(th)*sp.cos(ph) rfm.xx_to_Cart[1] = r*sp.sin(th)*sp.sin(ph) rfm.xx_to_Cart[2] = r*sp.cos(th) # Here we show off SymPy's pretty_print() # and simplify() functions. Nice, no? 
sp.pretty_print(sp.simplify(rfm.xx_to_Cart[0])) ``` ⎛xx₀⎞ a⋅sin(xx₁)⋅cos(xx₂)⋅sinh⎜───⎟ ⎝ s ⎠ <a id='define_geometric'></a> # Step 2: Define geometric quantities, `ref_metric__hatted_quantities()` \[Back to [top](#toc)\] $$\label{define_geometric}$$ Once `scalefactor_orthog[]` has been defined, the function **`ref_metric__hatted_quantities()`** within [reference_metric.py](../edit/reference_metric.py) can be called to define a number of geometric quantities useful for solving PDEs in curvilinear coordinate systems. Adopting the notation of [Baumgarte, Montero, Cordero-Carrión, and Müller, PRD 87, 044026 (2012)](https://arxiv.org/abs/1211.6632), geometric quantities related to the reference metric are named "hatted" quantities, . For example, the reference metric is defined as $\hat{g}_{ij}$=`ghatDD[i][j]`: ```python rfm.ref_metric__hatted_quantities() sp.pretty_print(sp.Matrix(rfm.ghatDD)) ``` ⎡1 0 0 ⎤ ⎢ ⎥ ⎢ 2 ⎥ ⎢ ⎛ xx₀ -xx₀ ⎞ ⎥ ⎢ ⎜ ─── ─────⎟ ⎥ ⎢ 2 ⎜ s s ⎟ ⎥ ⎢ a ⋅⎝ℯ - ℯ ⎠ ⎥ ⎢0 ─────────────────── 0 ⎥ ⎢ 4 ⎥ ⎢ ⎥ ⎢ 2 ⎥ ⎢ ⎛ xx₀ -xx₀ ⎞ ⎥ ⎢ ⎜ ─── ─────⎟ ⎥ ⎢ 2 ⎜ s s ⎟ 2 ⎥ ⎢ a ⋅⎝ℯ - ℯ ⎠ ⋅sin (xx₁)⎥ ⎢0 0 ─────────────────────────────⎥ ⎣ 4 ⎦ In addition to $\hat{g}_{ij}$, **`ref_metric__hatted_quantities()`** also provides: * The rescaling "matrix" `ReDD[i][j]`, used for separating singular (due to chosen coordinate system) pieces of smooth rank-2 tensor components from the smooth parts, so that the smooth parts can be used within temporal and spatial differential operators. * Inverse reference metric: $\hat{g}^{ij}$=`ghatUU[i][j]`. * Reference metric determinant: $\det\left(\hat{g}_{ij}\right)$=`detgammahat`. * First and second derivatives of the reference metric: $\hat{g}_{ij,k}$=`ghatDD_dD[i][j][k]`; $\hat{g}_{ij,kl}$=`ghatDD_dDD[i][j][k][l]` * Christoffel symbols associated with the reference metric, $\hat{\Gamma}^i_{jk}$ = `GammahatUDD[i][j][k]` and their first derivatives $\hat{\Gamma}^i_{jk,l}$ = `GammahatUDD_dD[i][j][k][l]` For example, the Christoffel symbol $\hat{\Gamma}^{xx_1}_{xx_2 xx_2}=\hat{\Gamma}^1_{22}$ is given by `GammahatUDD[1][2][2]`: ```python sp.pretty_print(sp.simplify(rfm.GammahatUDD[1][2][2])) ``` -sin(2⋅xx₁) ──────────── 2 Given the trigonometric identity $2\sin(x)\cos(x) = \sin(2x)$, notice that the above expression is equivalent to Eq. 18 of [Baumgarte, Montero, Cordero-Carrión, and Müller, PRD 87, 044026 (2012)](https://arxiv.org/abs/1211.6632). This is expected since the sinh-radial spherical coordinate system is equivalent to ordinary spherical coordinates in the angular components. <a id='prescribed_ref_metric'></a> # Step 3: Prescribed reference metrics in [`reference_metric.py`](../edit/reference_metric.py) \[Back to [top](#toc)\] $$\label{prescribed_ref_metric}$$ One need not manually define scale factors or other quantities for reference metrics, as a number of prescribed reference metrics are already defined in [reference_metric.py](../edit/reference_metric.py). These can be accessed by first setting the parameter **reference_metric::CoordSystem** to one of the following, and then calling the function **`rfm.reference_metric()`**. ```python import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) 
support import grid as gri # NRPy+: Functions having to do with numerical grids # Step 0a: Initialize parameters thismodule = __name__ par.initialize_param(par.glb_param("char", thismodule, "CoordSystem", "Spherical")) # Step 0b: Declare global variables xx = gri.xx xx_to_Cart = ixp.zerorank1(DIM=4) # Must be set in terms of xx[]s Cart_to_xx = ixp.zerorank1(DIM=4) # Must be set in terms of xx[]s Cartx,Carty,Cartz = sp.symbols("Cartx Carty Cartz", real=True) Cart = [Cartx,Carty,Cartz] xxSph = ixp.zerorank1(DIM=4) # Must be set in terms of xx[]s scalefactor_orthog = ixp.zerorank1(DIM=4) # Must be set in terms of xx[]s have_already_called_reference_metric_function = False CoordSystem = par.parval_from_str("reference_metric::CoordSystem") M_PI,M_SQRT1_2 = par.Cparameters("#define",thismodule,["M_PI","M_SQRT1_2"],"") global xxmin global xxmax global UnitVectors UnitVectors = ixp.zerorank2(DIM=3) ``` We will find the following plotting function useful for analyzing coordinate systems in which the radial coordinate is rescaled. ```python def create_r_of_xx0_plots(CoordSystem, r_of_xx0,rprime_of_xx0): import matplotlib.pyplot as plt # matplotlib: Python module specializing in plotting capabilities plt.clf() Nr = 20 dxx0 = 1.0 / float(Nr) xx0s = [] rs = [] deltars = [] rprimes = [] for i in range(Nr): xx0 = (float(i) + 0.5)*dxx0 xx0s.append(xx0) rs.append( sp.sympify(str(r_of_xx0 ).replace("xx0",str(xx0)))) rprimes.append(sp.sympify(str(rprime_of_xx0).replace("xx0",str(xx0)))) if i>0: deltars.append(sp.log(rs[i]-rs[i-1],10)) else: deltars.append(sp.log(2*rs[0],10)) # fig, ax = plt.subplots() fig = plt.figure(figsize=(12,12)) # 8 in x 8 in ax = fig.add_subplot(221) ax.set_title('$r(xx_0)$ for '+CoordSystem,fontsize='x-large') ax.set_xlabel('$xx_0$',fontsize='x-large') ax.set_ylabel('$r(xx_0)$',fontsize='x-large') ax.plot(xx0s, rs, 'k.', label='Spacing between\nadjacent gridpoints') # legend = ax.legend(loc='lower right', shadow=True, fontsize='x-large') # legend.get_frame().set_facecolor('C1') ax = fig.add_subplot(222) ax.set_title('Grid spacing for '+CoordSystem,fontsize='x-large') ax.set_xlabel('$xx_0$',fontsize='x-large') ax.set_ylabel('$\log_{10}(\Delta r)$',fontsize='x-large') ax.plot(xx0s, deltars, 'k.', label='Spacing between\nadjacent gridpoints\nin $r(xx_0)$ plot') legend = ax.legend(loc='lower right', shadow=True, fontsize='x-large') legend.get_frame().set_facecolor('C1') ax = fig.add_subplot(223) ax.set_title('$r\'(xx_0)$ for '+CoordSystem,fontsize='x-large') ax.set_xlabel('$xx_0$',fontsize='x-large') ax.set_ylabel('$r\'(xx_0)$',fontsize='x-large') ax.plot(xx0s, rprimes, 'k.', label='Nr=96') # legend = ax.legend(loc='upper left', shadow=True, fontsize='x-large') # legend.get_frame().set_facecolor('C1') plt.tight_layout(pad=2) plt.show() ``` <a id='sphericallike'></a> ## Step 3.a: Spherical-like coordinate systems \[Back to [top](#toc)\] $$\label{sphericallike}$$ <a id='spherical'></a> ### Step 3.a.i: **`reference_metric::CoordSystem = "Spherical"`** \[Back to [top](#toc)\] $$\label{spherical}$$ Standard spherical coordinates, with $(r,\theta,\phi)=(xx_0,xx_1,xx_2)$ ```python if CoordSystem == "Spherical": # Adding assumption real=True can help simplify expressions involving xx[0] & xx[1] below. 
xx[0] = sp.symbols("xx0", real=True) xx[1] = sp.symbols("xx1", real=True) RMAX = par.Cparameters("REAL", thismodule, ["RMAX"],10.0) xxmin = [sp.sympify(0), sp.sympify(0), -M_PI] xxmax = [ RMAX, M_PI, M_PI] r = xx[0] th = xx[1] ph = xx[2] Cart_to_xx[0] = sp.sqrt(Cartx ** 2 + Carty ** 2 + Cartz ** 2) Cart_to_xx[1] = sp.acos(Cartz / Cart_to_xx[0]) Cart_to_xx[2] = sp.atan2(Carty, Cartx) xxSph[0] = r xxSph[1] = th xxSph[2] = ph # Now define xCart, yCart, and zCart in terms of x0,xx[1],xx[2]. # Note that the relation between r and x0 is not necessarily trivial in SinhSpherical coordinates. See above. xx_to_Cart[0] = xxSph[0]*sp.sin(xxSph[1])*sp.cos(xxSph[2]) xx_to_Cart[1] = xxSph[0]*sp.sin(xxSph[1])*sp.sin(xxSph[2]) xx_to_Cart[2] = xxSph[0]*sp.cos(xxSph[1]) scalefactor_orthog[0] = sp.diff(xxSph[0],xx[0]) scalefactor_orthog[1] = xxSph[0] scalefactor_orthog[2] = xxSph[0]*sp.sin(xxSph[1]) # Set the unit vectors UnitVectors = [[ sp.sin(xxSph[1])*sp.cos(xxSph[2]), sp.sin(xxSph[1])*sp.sin(xxSph[2]), sp.cos(xxSph[1])], [ sp.cos(xxSph[1])*sp.cos(xxSph[2]), sp.cos(xxSph[1])*sp.sin(xxSph[2]), -sp.sin(xxSph[1])], [ -sp.sin(xxSph[2]), sp.cos(xxSph[2]), sp.sympify(0) ]] ``` Now let's analyze $r(xx_0)$ for **"Spherical"** coordinates. ```python %matplotlib inline CoordSystem = "Spherical" par.set_parval_from_str("reference_metric::CoordSystem",CoordSystem) rfm.reference_metric() RMAX = 10.0 r_of_xx0 = sp.sympify(str(rfm.xxSph[0] ).replace("RMAX",str(RMAX))) rprime_of_xx0 = sp.sympify(str(sp.diff(rfm.xxSph[0],rfm.xx[0])).replace("RMAX",str(RMAX))) create_r_of_xx0_plots(CoordSystem, r_of_xx0,rprime_of_xx0) ``` <a id='sinhspherical'></a> ### Step 3.a.ii: **`reference_metric::CoordSystem = "SinhSpherical"`** \[Back to [top](#toc)\] $$\label{sinhspherical}$$ Spherical coordinates, but with $$r(xx_0) = \text{AMPL} \frac{\sinh\left(\frac{xx_0}{\text{SINHW}}\right)}{\sinh\left(\frac{1}{\text{SINHW}}\right)}.$$ SinhSpherical uses two parameters: `AMPL` and `SINHW`. `AMPL` sets the outer boundary distance; and `SINHW` sets the focusing of the coordinate points near $r=0$, where a small `SINHW` ($\sim 0.125$) will greatly focus the points near $r=0$ and a large `SINHW` will look more like an ordinary spherical polar coordinate system. ```python if CoordSystem == "SinhSpherical": xxmin = [sp.sympify(0), sp.sympify(0), -M_PI] xxmax = [sp.sympify(1), M_PI, M_PI] AMPL, SINHW = par.Cparameters("REAL",thismodule,["AMPL","SINHW"],[10.0,0.2]) # Set SinhSpherical radial coordinate by default; overwrite later if CoordSystem == "SinhSphericalv2". r = AMPL * (sp.exp(xx[0] / SINHW) - sp.exp(-xx[0] / SINHW)) / \ (sp.exp(1 / SINHW) - sp.exp(-1 / SINHW)) th = xx[1] ph = xx[2] Cart_to_xx[0] = SINHW*sp.asinh(sp.sqrt(Cartx ** 2 + Carty ** 2 + Cartz ** 2)*sp.sinh(1/SINHW)/AMPL) Cart_to_xx[1] = sp.acos(Cartz / sp.sqrt(Cartx ** 2 + Carty ** 2 + Cartz ** 2)) Cart_to_xx[2] = sp.atan2(Carty, Cartx) xxSph[0] = r xxSph[1] = th xxSph[2] = ph # Now define xCart, yCart, and zCart in terms of x0,xx[1],xx[2]. # Note that the relation between r and x0 is not necessarily trivial in SinhSpherical coordinates. See above. 
xx_to_Cart[0] = xxSph[0]*sp.sin(xxSph[1])*sp.cos(xxSph[2]) xx_to_Cart[1] = xxSph[0]*sp.sin(xxSph[1])*sp.sin(xxSph[2]) xx_to_Cart[2] = xxSph[0]*sp.cos(xxSph[1]) scalefactor_orthog[0] = sp.diff(xxSph[0],xx[0]) scalefactor_orthog[1] = xxSph[0] scalefactor_orthog[2] = xxSph[0]*sp.sin(xxSph[1]) # Set the unit vectors UnitVectors = [[ sp.sin(xxSph[1])*sp.cos(xxSph[2]), sp.sin(xxSph[1])*sp.sin(xxSph[2]), sp.cos(xxSph[1])], [ sp.cos(xxSph[1])*sp.cos(xxSph[2]), sp.cos(xxSph[1])*sp.sin(xxSph[2]), -sp.sin(xxSph[1])], [ -sp.sin(xxSph[2]), sp.cos(xxSph[2]), sp.sympify(0) ]] ``` Now we explore $r(xx_0)$ for `SinhSpherical` assuming `AMPL=10.0` and `SINHW=0.2`: ```python %matplotlib inline CoordSystem = "SinhSpherical" par.set_parval_from_str("reference_metric::CoordSystem",CoordSystem) rfm.reference_metric() AMPL = 10.0 SINHW = 0.2 r_of_xx0 = sp.sympify(str(rfm.xxSph[0] ).replace("AMPL",str(AMPL)).replace("SINHW",str(SINHW))) rprime_of_xx0 = sp.sympify(str(sp.diff(rfm.xxSph[0],rfm.xx[0])).replace("AMPL",str(AMPL)).replace("SINHW",str(SINHW))) create_r_of_xx0_plots(CoordSystem, r_of_xx0,rprime_of_xx0) ``` <a id='sinhsphericalv2'></a> ### Step 3.a.iii: **`reference_metric::CoordSystem = "SinhSphericalv2"`** \[Back to [top](#toc)\] $$\label{sinhsphericalv2}$$ The same as SinhSpherical coordinates, but with an additional `AMPL*const_dr*xx_0` term: $$r(xx_0) = \text{AMPL} \left[\text{const_dr}\ xx_0 + \frac{\sinh\left(\frac{xx_0}{\text{SINHW}}\right)}{\sinh\left(\frac{1}{\text{SINHW}}\right)}\right].$$ ```python if CoordSystem == "SinhSphericalv2": # SinhSphericalv2 adds the parameter "const_dr", which allows for a region near xx[0]=0 to have # constant radial resolution of const_dr, provided the sinh() term does not dominate near xx[0]=0. xxmin = [sp.sympify(0), sp.sympify(0), -M_PI] xxmax = [sp.sympify(1), M_PI, M_PI] AMPL, SINHW = par.Cparameters("REAL",thismodule,["AMPL","SINHW"],[10.0,0.2]) const_dr = par.Cparameters("REAL",thismodule,["const_dr"],0.0625) r = AMPL*( const_dr*xx[0] + (sp.exp(xx[0] / SINHW) - sp.exp(-xx[0] / SINHW)) / (sp.exp(1 / SINHW) - sp.exp(-1 / SINHW)) ) th = xx[1] ph = xx[2] # NO CLOSED-FORM EXPRESSION FOR RADIAL INVERSION. # Cart_to_xx[0] = "NewtonRaphson" # Cart_to_xx[1] = sp.acos(Cartz / sp.sqrt(Cartx ** 2 + Carty ** 2 + Cartz ** 2)) # Cart_to_xx[2] = sp.atan2(Carty, Cartx) xxSph[0] = r xxSph[1] = th xxSph[2] = ph # Now define xCart, yCart, and zCart in terms of x0,xx[1],xx[2]. # Note that the relation between r and x0 is not necessarily trivial in SinhSpherical coordinates. See above. xx_to_Cart[0] = xxSph[0]*sp.sin(xxSph[1])*sp.cos(xxSph[2]) xx_to_Cart[1] = xxSph[0]*sp.sin(xxSph[1])*sp.sin(xxSph[2]) xx_to_Cart[2] = xxSph[0]*sp.cos(xxSph[1]) scalefactor_orthog[0] = sp.diff(xxSph[0],xx[0]) scalefactor_orthog[1] = xxSph[0] scalefactor_orthog[2] = xxSph[0]*sp.sin(xxSph[1]) # Set the unit vectors UnitVectors = [[ sp.sin(xxSph[1])*sp.cos(xxSph[2]), sp.sin(xxSph[1])*sp.sin(xxSph[2]), sp.cos(xxSph[1])], [ sp.cos(xxSph[1])*sp.cos(xxSph[2]), sp.cos(xxSph[1])*sp.sin(xxSph[2]), -sp.sin(xxSph[1])], [ -sp.sin(xxSph[2]), sp.cos(xxSph[2]), sp.sympify(0) ]] ``` Now we explore $r(xx_0)$ for `SinhSphericalv2` assuming `AMPL=10.0`, `SINHW=0.2`, and `const_dr=0.05`. Notice that the `const_dr` term significantly increases the grid spacing near $xx_0=0$ relative to `SinhSpherical` coordinates. 
```python %matplotlib inline CoordSystem = "SinhSphericalv2" par.set_parval_from_str("reference_metric::CoordSystem",CoordSystem) rfm.reference_metric() AMPL = 10.0 SINHW = 0.2 const_dr = 0.05 r_of_xx0 = sp.sympify(str(rfm.xxSph[0] ).replace("AMPL",str(AMPL)).replace("SINHW",str(SINHW)).replace("const_dr",str(const_dr))) rprime_of_xx0 = sp.sympify(str(sp.diff(rfm.xxSph[0],rfm.xx[0])).replace("AMPL",str(AMPL)).replace("SINHW",str(SINHW)).replace("const_dr",str(const_dr))) create_r_of_xx0_plots(CoordSystem, r_of_xx0,rprime_of_xx0) ``` <a id='cylindricallike'></a> ## Step 3.b: Cylindrical-like coordinate systems \[Back to [top](#toc)\] $$\label{cylindricallike}$$ <a id='cylindrical'></a> ### Step 3.b.i: **`reference_metric::CoordSystem = "Cylindrical"`** \[Back to [top](#toc)\] $$\label{cylindrical}$$ Standard cylindrical coordinates, with $(\rho,\phi,z)=(xx_0,xx_1,xx_2)$ ```python if CoordSystem == "Cylindrical": # Assuming the cylindrical radial coordinate # is positive makes nice simplifications of # unit vectors possible. xx[0] = sp.symbols("xx0", real=True) RHOMAX,ZMIN,ZMAX = par.Cparameters("REAL",thismodule,["RHOMAX","ZMIN","ZMAX"],[10.0,-10.0,10.0]) xxmin = [sp.sympify(0), -M_PI, ZMIN] xxmax = [ RHOMAX, M_PI, ZMAX] RHOCYL = xx[0] PHICYL = xx[1] ZCYL = xx[2] Cart_to_xx[0] = sp.sqrt(Cartx ** 2 + Carty ** 2) Cart_to_xx[1] = sp.atan2(Carty, Cartx) Cart_to_xx[2] = Cartz xx_to_Cart[0] = RHOCYL*sp.cos(PHICYL) xx_to_Cart[1] = RHOCYL*sp.sin(PHICYL) xx_to_Cart[2] = ZCYL xxSph[0] = sp.sqrt(RHOCYL**2 + ZCYL**2) xxSph[1] = sp.acos(ZCYL / xxSph[0]) xxSph[2] = PHICYL scalefactor_orthog[0] = sp.diff(RHOCYL,xx[0]) scalefactor_orthog[1] = RHOCYL scalefactor_orthog[2] = sp.diff(ZCYL,xx[2]) # Set the unit vectors UnitVectors = [[ sp.cos(PHICYL), sp.sin(PHICYL), sp.sympify(0)], [-sp.sin(PHICYL), sp.cos(PHICYL), sp.sympify(0)], [ sp.sympify(0), sp.sympify(0), sp.sympify(1)]] ``` Next let's plot **"Cylindrical"** coordinates. 
```python %matplotlib inline import numpy as np # NumPy: A numerical methods module for Python import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D R = np.linspace(0, 2, 24) h = 2 u = np.linspace(0, 2*np.pi, 24) x = np.outer(R, np.cos(u)) y = np.outer(R, np.sin(u)) z = h * np.outer(np.ones(np.size(u)), np.ones(np.size(u))) r = np.arange(0,2,0.25) theta = 2*np.pi*r*0 fig = plt.figure(figsize=(12,12)) # 8 in x 8 in fig, (ax1, ax2) = plt.subplots(1, 2) ax1 = plt.axes(projection='polar') ax1.set_rmax(2) ax1.set_rgrids(r,labels=[]) thetas = np.linspace(0,360,24, endpoint=True) ax1.set_thetagrids(thetas,labels=[]) # ax.grid(True) ax1.grid(True,linewidth='1.0') ax1.set_title("Top Down View") plt.show() ax2 = plt.axes(projection='3d', xticklabels=[], yticklabels=[], zticklabels=[]) #ax2.plot_surface(x,y,z, alpha=.75, cmap = 'viridis') # z in case of disk which is parallel to XY plane is constant and you can directly use h x=np.linspace(-2, 2, 100) z=np.linspace(-2, 2, 100) Xc, Zc=np.meshgrid(x, z) Yc = np.sqrt(4-Xc**2) rstride = 10 cstride = 10 ax2.plot_surface(Xc, Yc, Zc, alpha=1.0, rstride=rstride, cstride=cstride, cmap = 'viridis') ax2.plot_surface(Xc, -Yc, Zc, alpha=1.0, rstride=rstride, cstride=cstride, cmap = 'viridis') ax2.set_title("Standard Cylindrical Grid in 3D") ax2.grid(False) plt.axis('off') plt.show() ``` <a id='sinhcylindrical'></a> ### Step 3.b.ii" **`reference_metric::CoordSystem = "SinhCylindrical"`** \[Back to [top](#toc)\] $$\label{sinhcylindrical}$$ Cylindrical coordinates, but with $$\rho(xx_0) = \text{AMPLRHO} \frac{\sinh\left(\frac{xx_0}{\text{SINHWRHO}}\right)}{\sinh\left(\frac{1}{\text{SINHWRHO}}\right)}$$ and $$z(xx_2) = \text{AMPLZ} \frac{\sinh\left(\frac{xx_2}{\text{SINHWZ}}\right)}{\sinh\left(\frac{1}{\text{SINHWZ}}\right)}$$ ```python if CoordSystem == "SinhCylindrical": # Assuming the cylindrical radial coordinate # is positive makes nice simplifications of # unit vectors possible. xx[0] = sp.symbols("xx0", real=True) xxmin = [sp.sympify(0), -M_PI, sp.sympify(-1)] xxmax = [sp.sympify(1), M_PI, sp.sympify(+1)] AMPLRHO, SINHWRHO, AMPLZ, SINHWZ = par.Cparameters("REAL",thismodule, ["AMPLRHO","SINHWRHO","AMPLZ","SINHWZ"], [ 10.0, 0.2, 10.0, 0.2]) # Set SinhCylindrical radial & z coordinates by default; overwrite later if CoordSystem == "SinhCylindricalv2". RHOCYL = AMPLRHO * (sp.exp(xx[0] / SINHWRHO) - sp.exp(-xx[0] / SINHWRHO)) / (sp.exp(1 / SINHWRHO) - sp.exp(-1 / SINHWRHO)) # phi coordinate remains unchanged. PHICYL = xx[1] ZCYL = AMPLZ * (sp.exp(xx[2] / SINHWZ) - sp.exp(-xx[2] / SINHWZ)) / (sp.exp(1 / SINHWZ) - sp.exp(-1 / SINHWZ)) Cart_to_xx[0] = SINHWRHO*sp.asinh(sp.sqrt(Cartx ** 2 + Carty ** 2)*sp.sinh(1/SINHWRHO)/AMPLRHO) Cart_to_xx[1] = sp.atan2(Carty, Cartx) Cart_to_xx[2] = SINHWZ*sp.asinh(Cartz*sp.sinh(1/SINHWZ)/AMPLZ) xx_to_Cart[0] = RHOCYL*sp.cos(PHICYL) xx_to_Cart[1] = RHOCYL*sp.sin(PHICYL) xx_to_Cart[2] = ZCYL xxSph[0] = sp.sqrt(RHOCYL**2 + ZCYL**2) xxSph[1] = sp.acos(ZCYL / xxSph[0]) xxSph[2] = PHICYL scalefactor_orthog[0] = sp.diff(RHOCYL,xx[0]) scalefactor_orthog[1] = RHOCYL scalefactor_orthog[2] = sp.diff(ZCYL,xx[2]) # Set the unit vectors UnitVectors = [[ sp.cos(PHICYL), sp.sin(PHICYL), sp.sympify(0)], [-sp.sin(PHICYL), sp.cos(PHICYL), sp.sympify(0)], [ sp.sympify(0), sp.sympify(0), sp.sympify(1)]] ``` Next let's plot **"SinhCylindrical"** coordinates. 
```python fig=plt.figure() plt.clf() fig = plt.figure() ax = plt.subplot(1,1,1, projection='polar') ax.set_rmax(2) Nr = 20 xx0s = np.linspace(0,2,Nr, endpoint=True) + 1.0/(2.0*Nr) rs = [] AMPLRHO = 1.0 SINHW = 0.4 for i in range(Nr): rs.append(AMPLRHO * (np.exp(xx0s[i] / SINHW) - np.exp(-xx0s[i] / SINHW)) / \ (np.exp(1.0 / SINHW) - np.exp(-1.0 / SINHW))) ax.set_rgrids(rs,labels=[]) thetas = np.linspace(0,360,25, endpoint=True) ax.set_thetagrids(thetas,labels=[]) # ax.grid(True) ax.grid(True,linewidth='1.0') plt.show() ``` <a id='sinhcylindricalv2'></a> ### Step 3.b.iii: **`reference_metric::CoordSystem = "SinhCylindricalv2"`** \[Back to [top](#toc)\] $$\label{sinhcylindricalv2}$$ Cylindrical coordinates, but with $$\rho(xx_0) = \text{AMPLRHO} \left[\text{const_drho}\ xx_0 + \frac{\sinh\left(\frac{xx_0}{\text{SINHWRHO}}\right)}{\sinh\left(\frac{1}{\text{SINHWRHO}}\right)}\right]$$ and $$z(xx_2) = \text{AMPLZ} \left[\text{const_dz}\ xx_2 + \frac{\sinh\left(\frac{xx_2}{\text{SINHWZ}}\right)}{\sinh\left(\frac{1}{\text{SINHWZ}}\right)}\right]$$ ```python if CoordSystem == "SinhCylindricalv2": # Assuming the cylindrical radial coordinate # is positive makes nice simplifications of # unit vectors possible. xx[0] = sp.symbols("xx0", real=True) # SinhCylindricalv2 adds the parameters "const_drho", "const_dz", which allows for regions near xx[0]=0 # and xx[2]=0 to have constant rho and z resolution of const_drho and const_dz, provided the sinh() terms # do not dominate near xx[0]=0 and xx[2]=0. xxmin = [sp.sympify(0), -M_PI, sp.sympify(-1)] xxmax = [sp.sympify(1), M_PI, sp.sympify(+1)] AMPLRHO, SINHWRHO, AMPLZ, SINHWZ = par.Cparameters("REAL",thismodule, ["AMPLRHO","SINHWRHO","AMPLZ","SINHWZ"], [ 10.0, 0.2, 10.0, 0.2]) const_drho, const_dz = par.Cparameters("REAL",thismodule,["const_drho","const_dz"],[0.0625,0.0625]) RHOCYL = AMPLRHO * ( const_drho*xx[0] + (sp.exp(xx[0] / SINHWRHO) - sp.exp(-xx[0] / SINHWRHO)) / (sp.exp(1 / SINHWRHO) - sp.exp(-1 / SINHWRHO)) ) PHICYL = xx[1] ZCYL = AMPLZ * ( const_dz *xx[2] + (sp.exp(xx[2] / SINHWZ ) - sp.exp(-xx[2] / SINHWZ )) / (sp.exp(1 / SINHWZ ) - sp.exp(-1 / SINHWZ )) ) # NO CLOSED-FORM EXPRESSION FOR RADIAL OR Z INVERSION. 
# Cart_to_xx[0] = "NewtonRaphson" # Cart_to_xx[1] = sp.atan2(Carty, Cartx) # Cart_to_xx[2] = "NewtonRaphson" xx_to_Cart[0] = RHOCYL*sp.cos(PHICYL) xx_to_Cart[1] = RHOCYL*sp.sin(PHICYL) xx_to_Cart[2] = ZCYL xxSph[0] = sp.sqrt(RHOCYL**2 + ZCYL**2) xxSph[1] = sp.acos(ZCYL / xxSph[0]) xxSph[2] = PHICYL scalefactor_orthog[0] = sp.diff(RHOCYL,xx[0]) scalefactor_orthog[1] = RHOCYL scalefactor_orthog[2] = sp.diff(ZCYL,xx[2]) # Set the unit vectors UnitVectors = [[ sp.cos(PHICYL), sp.sin(PHICYL), sp.sympify(0)], [-sp.sin(PHICYL), sp.cos(PHICYL), sp.sympify(0)], [ sp.sympify(0), sp.sympify(0), sp.sympify(1)]] ``` For example, let's set up **`SinhCylindricalv2`** coordinates and output the Christoffel symbol $\hat{\Gamma}^{xx_2}_{xx_2 xx_2}$, or more simply $\hat{\Gamma}^2_{22}$: ```python par.set_parval_from_str("reference_metric::CoordSystem","SinhCylindricalv2") rfm.reference_metric() sp.pretty_print(sp.simplify(rfm.GammahatUDD[2][2][2])) ``` ⎛ 2⋅xx₂ ⎞ 1 ⎜ ────── ⎟ ────── ⎜ SINHWZ ⎟ SINHWZ -⎝ℯ - 1⎠⋅ℯ ──────────────────────────────────────────────────────────────────────── ⎛ ⎛ 2 ⎞ xx₂ ⎛ 2⋅xx₂ ⎞ 1 ⎞ ⎜ ⎜ ────── ⎟ ────── ⎜ ────── ⎟ ──────⎟ ⎜ ⎜ SINHWZ ⎟ SINHWZ ⎜ SINHWZ ⎟ SINHWZ⎟ SINHWZ⋅⎝- SINHWZ⋅const_dz⋅⎝ℯ - 1⎠⋅ℯ - ⎝ℯ + 1⎠⋅ℯ ⎠ As we will soon see, defining these "hatted" quantities will be quite useful when expressing hyperbolic ([wave-equation](https://en.wikipedia.org/wiki/Wave_equation)-like) PDEs in non-Cartesian coordinate systems. <a id='cartesianlike'></a> ## Step 3.c: Cartesian-like coordinate systems \[Back to [top](#toc)\] $$\label{cartesianlike}$$ <a id='cartesian'></a> ### Step 3.c.i: **`reference_metric::CoordSystem = "Cartesian"`** \[Back to [top](#toc)\] $$\label{cartesian}$$ Standard Cartesian coordinates, with $(x,y,z)=$ `(xx0,xx1,xx2)` ```python if CoordSystem == "Cartesian": xmin, xmax, ymin, ymax, zmin, zmax = par.Cparameters("REAL",thismodule, ["xmin","xmax","ymin","ymax","zmin","zmax"], [ -10.0, 10.0, -10.0, 10.0, -10.0, 10.0]) xxmin = ["xmin", "ymin", "zmin"] xxmax = ["xmax", "ymax", "zmax"] xx_to_Cart[0] = xx[0] xx_to_Cart[1] = xx[1] xx_to_Cart[2] = xx[2] xxSph[0] = sp.sqrt(xx[0] ** 2 + xx[1] ** 2 + xx[2] ** 2) xxSph[1] = sp.acos(xx[2] / xxSph[0]) xxSph[2] = sp.atan2(xx[1], xx[0]) Cart_to_xx[0] = Cartx Cart_to_xx[1] = Carty Cart_to_xx[2] = Cartz scalefactor_orthog[0] = sp.sympify(1) scalefactor_orthog[1] = sp.sympify(1) scalefactor_orthog[2] = sp.sympify(1) # Set the transpose of the matrix of unit vectors UnitVectors = [[sp.sympify(1), sp.sympify(0), sp.sympify(0)], [sp.sympify(0), sp.sympify(1), sp.sympify(0)], [sp.sympify(0), sp.sympify(0), sp.sympify(1)]] ``` ```python %matplotlib inline import numpy as np # NumPy: A numerical methods module for Python import matplotlib.pyplot as plt # matplotlib: Python module specializing in plotting capabilities plt.clf() fig = plt.figure() ax = fig.gca() Nx = 16 ax.set_xticks(np.arange(0, 1., 1./Nx)) ax.set_yticks(np.arange(0, 1., 1./Nx)) for tick in ax.get_xticklabels(): tick.set_rotation(60) # plt.scatter(x, y) ax.set_aspect('equal') plt.grid() # plt.savefig("Cartgrid.png",dpi=300) plt.show() # plt.close(fig) ``` <a id='sinhcartesian'></a> ### Step 3.c.ii: **`reference_metric::CoordSystem = "SinhCartesian"`** \[Back to [top](#toc)\] $$\label{sinhcartesian}$$ In this coordinate system, all three coordinates behave like the $z$-coordinate in SinhCylindrical coordinates, i.e. 
$$ \begin{align} x(xx_0) &= \text{AMPLXYZ} \left[\frac{\sinh\left(\frac{xx_0}{\text{SINHWXYZ}}\right)}{\sinh\left(\frac{1}{\text{SINHWXYZ}}\right)}\right]\ ,\\ y(xx_1) &= \text{AMPLXYZ} \left[\frac{\sinh\left(\frac{xx_1}{\text{SINHWXYZ}}\right)}{\sinh\left(\frac{1}{\text{SINHWXYZ}}\right)}\right]\ ,\\ z(xx_2) &= \text{AMPLXYZ} \left[\frac{\sinh\left(\frac{xx_2}{\text{SINHWXYZ}}\right)}{\sinh\left(\frac{1}{\text{SINHWXYZ}}\right)}\right] \end{align} $$ ```python if CoordSystem == "SinhCartesian": # SinhCartesian coordinates allows us to push the outer boundary of the # computational domain a lot further away, while keeping reasonably high # resolution towards the center of the computational grid. # Set default values for min and max (x,y,z) xxmin = [sp.sympify(-1), sp.sympify(-1), sp.sympify(-1)] xxmax = [sp.sympify(+1), sp.sympify(+1), sp.sympify(+1)] # Declare basic parameters of the coordinate system and their default values AMPLXYZ, SINHWXYZ = par.Cparameters("REAL", thismodule, ["AMPLXYZ", "SINHWXYZ"], [ 10.0, 0.2]) # Compute (xx_to_Cart0,xx_to_Cart1,xx_to_Cart2) from (xx0,xx1,xx2) for ii in [0, 1, 2]: xx_to_Cart[ii] = AMPLXYZ*(sp.exp(xx[ii]/SINHWXYZ) - sp.exp(-xx[ii]/SINHWXYZ))/(sp.exp(1/SINHWXYZ) - sp.exp(-1/SINHWXYZ)) # Compute (r,th,ph) from (xx_to_Cart2,xx_to_Cart1,xx_to_Cart2) xxSph[0] = sp.sqrt(xx_to_Cart[0] ** 2 + xx_to_Cart[1] ** 2 + xx_to_Cart[2] ** 2) xxSph[1] = sp.acos(xx_to_Cart[2] / xxSph[0]) xxSph[2] = sp.atan2(xx_to_Cart[1], xx_to_Cart[0]) # Compute (xx0,xx1,xx2) from (Cartx,Carty,Cartz) Cart_to_xx[0] = SINHWXYZ*sp.asinh(Cartx*sp.sinh(1/SINHWXYZ)/AMPLXYZ) Cart_to_xx[1] = SINHWXYZ*sp.asinh(Carty*sp.sinh(1/SINHWXYZ)/AMPLXYZ) Cart_to_xx[2] = SINHWXYZ*sp.asinh(Cartz*sp.sinh(1/SINHWXYZ)/AMPLXYZ) # Compute scale factors scalefactor_orthog[0] = sp.diff(xx_to_Cart[0], xx[0]) scalefactor_orthog[1] = sp.diff(xx_to_Cart[1], xx[1]) scalefactor_orthog[2] = sp.diff(xx_to_Cart[2], xx[2]) # Set the transpose of the matrix of unit vectors UnitVectors = [[sp.sympify(1), sp.sympify(0), sp.sympify(0)], [sp.sympify(0), sp.sympify(1), sp.sympify(0)], [sp.sympify(0), sp.sympify(0), sp.sympify(1)]] ``` ```python %matplotlib inline import numpy as np # NumPy: A numerical methods module for Python import matplotlib.pyplot as plt # matplotlib: Python module specializing in plotting capabilities plt.clf() fig = plt.figure(dpi=160) ax = fig.gca() # Set plot title ax.set_title(r"$z=0$ slice of the 3D grid") # Set SINH parameters. Here we assume: # # AMPLX = AMPLY = SINHA # SINHWX = SINHWY = SINHW SINHA = 10.0 SINHW = 0.45 # Set number of points. We assume the same point # distribution along the (x,y)-directions Nxxs = 24 xxis = np.linspace(-1,1,Nxxs, endpoint=True) # Compute axis ticks by evaluating x and y using SinhCartesian coordinates axis_ticks = [] for i in range(Nxxs): axis_ticks.append(SINHA * (np.exp(xxis[i] / SINHW) - np.exp(-xxis[i] / SINHW)) / \ (np.exp(1.0 / SINHW) - np.exp(-1.0 / SINHW))) # Set the axis ticks ax.set_xticks(axis_ticks) ax.set_yticks(axis_ticks) # Set x and y labels. 
Initialize array with empty strings labelsx = ["" for i in range(Nxxs)] labelsy = ["" for i in range(Nxxs)] # Set x_min and x_max tick label labelsx[0] = r"-AMPLX" labelsx[-1] = r"AMPLX" # Set y_min and y_max tick label labelsy[0] = r"-AMPLY" labelsy[-1] = r"AMPLY" # Set tick labels ax.set_xticklabels(labelsx) ax.set_yticklabels(labelsy) # Rotate x labels by 60 degrees for tick in ax.get_xticklabels(): tick.set_rotation(60) # Draw the x=0 and y=0 ticklabel ax.text(0,-11,"0",ha="center",va="center") ax.text(-11,0,"0",ha="center",va="center") # plt.scatter(x, y) ax.set_aspect('equal') plt.grid(color='black',linewidth=0.3) plt.show() # plt.savefig("Cartgrid.png",dpi=400) # plt.close(fig) ``` <a id='prolatespheroidal'></a> ## Step 3.d: [Prolate spheroidal](https://en.wikipedia.org/wiki/Prolate_spheroidal_coordinates)-like coordinate systems \[Back to [top](#toc)\] $$\label{prolatespheroidal}$$ <a id='symtp'></a> ### Step 3.d.i: **`reference_metric::CoordSystem = "SymTP"`** \[Back to [top](#toc)\] $$\label{symtp}$$ Symmetric TwoPuncture coordinates, with $(\rho,\phi,z)=(xx_0\sin(xx_1), xx_2, \sqrt{xx_0^2 + \text{bScale}^2}\cos(xx_1))$ ```python if CoordSystem == "SymTP": var1, var2= sp.symbols('var1 var2',real=True) bScale, AW, AMAX, RHOMAX, ZMIN, ZMAX = par.Cparameters("REAL",thismodule, ["bScale","AW","AMAX","RHOMAX","ZMIN","ZMAX"], [0.5, 0.2, 10.0, 10.0, -10.0, 10.0]) # Assuming xx0, xx1, and bScale # are positive makes nice simplifications of # unit vectors possible. xx[0],xx[1] = sp.symbols("xx0 xx1", real=True) xxmin = [sp.sympify(0), sp.sympify(0),-M_PI] xxmax = [ AMAX, M_PI, M_PI] AA = xx[0] if CoordSystem == "SinhSymTP": AA = (sp.exp(xx[0]/AW)-sp.exp(-xx[0]/AW))/2 var1 = sp.sqrt(AA**2 + (bScale * sp.sin(xx[1]))**2) var2 = sp.sqrt(AA**2 + bScale**2) RHOSYMTP = AA*sp.sin(xx[1]) PHSYMTP = xx[2] ZSYMTP = var2*sp.cos(xx[1]) xx_to_Cart[0] = AA *sp.sin(xx[1])*sp.cos(xx[2]) xx_to_Cart[1] = AA *sp.sin(xx[1])*sp.sin(xx[2]) xx_to_Cart[2] = ZSYMTP xxSph[0] = sp.sqrt(RHOSYMTP**2 + ZSYMTP**2) xxSph[1] = sp.acos(ZSYMTP / xxSph[0]) xxSph[2] = PHSYMTP rSph = sp.sqrt(Cartx ** 2 + Carty ** 2 + Cartz ** 2) thSph = sp.acos(Cartz / rSph) phSph = sp.atan2(Carty, Cartx) # Mathematica script to compute Cart_to_xx[] # AA = x1; # var2 = Sqrt[AA^2 + bScale^2]; # RHOSYMTP = AA*Sin[x2]; # ZSYMTP = var2*Cos[x2]; # Solve[{rSph == Sqrt[RHOSYMTP^2 + ZSYMTP^2], # thSph == ArcCos[ZSYMTP/Sqrt[RHOSYMTP^2 + ZSYMTP^2]], # phSph == x3}, # {x1, x2, x3}] Cart_to_xx[0] = sp.sqrt(-bScale**2 + rSph**2 + sp.sqrt(bScale**4 + 2*bScale**2*rSph**2 + rSph**4 - 4*bScale**2*rSph**2*sp.cos(thSph)**2))*M_SQRT1_2 # M_SQRT1_2 = 1/sqrt(2); define this way for UnitTesting # The sign() function in the following expression ensures the correct root is taken. 
Cart_to_xx[1] = sp.acos(sp.sign(Cartz)*( sp.sqrt(1 + rSph**2/bScale**2 - sp.sqrt(bScale**4 + 2*bScale**2*rSph**2 + rSph**4 - 4*bScale**2*rSph**2*sp.cos(thSph)**2)/bScale**2)*M_SQRT1_2)) # M_SQRT1_2 = 1/sqrt(2); define this way for UnitTesting Cart_to_xx[2] = phSph ``` <a id='sinhsymtp'></a> ### Step 3.d.ii: **`reference_metric::CoordSystem = "SinhSymTP"`** \[Back to [top](#toc)\] $$\label{sinhsymtp}$$ Symmetric TwoPuncture coordinates, but with $$xx_0 \to \sinh(xx_0/\text{AW})$$ ```python if CoordSystem == "SinhSymTP": var1, var2= sp.symbols('var1 var2',real=True) bScale, AW, AMAX, RHOMAX, ZMIN, ZMAX = par.Cparameters("REAL",thismodule, ["bScale","AW","AMAX","RHOMAX","ZMIN","ZMAX"], [0.5, 0.2, 10.0, 10.0, -10.0, 10.0]) # Assuming xx0, xx1, and bScale # are positive makes nice simplifications of # unit vectors possible. xx[0],xx[1] = sp.symbols("xx0 xx1", real=True) xxmin = [sp.sympify(0), sp.sympify(0),-M_PI] xxmax = [ AMAX, M_PI, M_PI] AA = xx[0] if CoordSystem == "SinhSymTP": # With xxmax[0] == AMAX, sinh(xx0/AMAX) will evaluate to a number between 0 and 1. # Similarly, sinh(xx0/(AMAX*SINHWAA)) / sinh(1/SINHWAA) will also evaluate to a number between 0 and 1. # Then AA = AMAX*sinh(xx0/(AMAX*SINHWAA)) / sinh(1/SINHWAA) will evaluate to a number between 0 and AMAX. AA = AMAX * (sp.exp(xx[0] / (AMAX*SINHWAA)) - sp.exp(-xx[0] / (AMAX*SINHWAA))) / (sp.exp(1 / SINHWAA) - sp.exp(-1 / AMAX)) var1 = sp.sqrt(AA**2 + (bScale * sp.sin(xx[1]))**2) var2 = sp.sqrt(AA**2 + bScale**2) RHOSYMTP = AA*sp.sin(xx[1]) PHSYMTP = xx[2] ZSYMTP = var2*sp.cos(xx[1]) xx_to_Cart[0] = AA *sp.sin(xx[1])*sp.cos(xx[2]) xx_to_Cart[1] = AA *sp.sin(xx[1])*sp.sin(xx[2]) xx_to_Cart[2] = ZSYMTP xxSph[0] = sp.sqrt(RHOSYMTP**2 + ZSYMTP**2) xxSph[1] = sp.acos(ZSYMTP / xxSph[0]) xxSph[2] = PHSYMTP scalefactor_orthog[0] = sp.diff(AA,xx[0]) * var1 / var2 scalefactor_orthog[1] = var1 scalefactor_orthog[2] = AA * sp.sin(xx[1]) # Set the transpose of the matrix of unit vectors UnitVectors = [[sp.sin(xx[1]) * sp.cos(xx[2]) * var2 / var1, sp.sin(xx[1]) * sp.sin(xx[2]) * var2 / var1, AA * sp.cos(xx[1]) / var1], [AA * sp.cos(xx[1]) * sp.cos(xx[2]) / var1, AA * sp.cos(xx[1]) * sp.sin(xx[2]) / var1, -sp.sin(xx[1]) * var2 / var1], [-sp.sin(xx[2]), sp.cos(xx[2]), sp.sympify(0)]] ``` <a id='latex_pdf_output'></a> # Step 4: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\] $$\label{latex_pdf_output}$$ The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename [Tutorial-Reference_Metric.pdf](Tutorial-Reference_Metric.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.) ```python import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-Reference_Metric") ``` Created Tutorial-Reference_Metric.tex, and compiled LaTeX file to PDF file Tutorial-Reference_Metric.pdf
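As a quick cross-check of the hatted quantities discussed above, the following plain-SymPy sketch (independent of NRPy+ and of the `rfm` module) rebuilds the ordinary spherical reference metric from its scale factors and recomputes the Christoffel symbol $\hat{\Gamma}^1_{22}$ quoted in Step 2. The helper name `Gammahat` and the symbol names are chosen here for illustration only.

```python
# Minimal SymPy sketch (no NRPy+): reference metric and one Christoffel symbol
# for ordinary spherical coordinates, as a cross-check of Step 2 above.
import sympy as sp

r, th, ph = sp.symbols('r th ph', positive=True)
xx = [r, th, ph]
scalefactors = [1, r, r*sp.sin(th)]              # orthogonal scale factors
ghat = sp.diag(*[sf**2 for sf in scalefactors])  # \hat{g}_{ij}
ghatinv = ghat.inv()

def Gammahat(i, j, k):
    # \hat{Gamma}^i_{jk} = (1/2) ghat^{il} (ghat_{lj,k} + ghat_{lk,j} - ghat_{jk,l})
    return sp.simplify(sum(sp.Rational(1, 2)*ghatinv[i, l]*
                           (sp.diff(ghat[l, j], xx[k]) +
                            sp.diff(ghat[l, k], xx[j]) -
                            sp.diff(ghat[j, k], xx[l])) for l in range(3)))

print(Gammahat(1, 2, 2))   # expected: -sin(th)*cos(th), i.e. -sin(2*th)/2
```

For the spherical case this should reduce to $-\sin\theta\cos\theta = -\sin(2\theta)/2$, matching the `GammahatUDD[1][2][2]` output shown in Step 2.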
f83976e762a4536593c5a4432090c7cbee5644e5
368,555
ipynb
Jupyter Notebook
Tutorial-Reference_Metric.ipynb
GeraintPratten/nrpytutorial
9d9ecb6c4f020adca29b51c79fb33787644c05e1
[ "BSD-2-Clause" ]
2
2021-07-16T02:35:40.000Z
2021-08-02T15:08:20.000Z
Tutorial-Reference_Metric.ipynb
GeraintPratten/nrpytutorial
9d9ecb6c4f020adca29b51c79fb33787644c05e1
[ "BSD-2-Clause" ]
null
null
null
Tutorial-Reference_Metric.ipynb
GeraintPratten/nrpytutorial
9d9ecb6c4f020adca29b51c79fb33787644c05e1
[ "BSD-2-Clause" ]
null
null
null
205.208797
59,176
0.883274
true
14,996
Qwen/Qwen-72B
1. YES 2. YES
0.831143
0.672332
0.558804
__label__eng_Latn
0.425394
0.136618
# Custom Embedded Runge-Kutta Method ```python !pip install fastrk matplotlib ``` Requirement already satisfied: fastrk in c:\users\stasb\pycharmprojects\fastrk (0.0.5) Requirement already satisfied: matplotlib in c:\users\stasb\appdata\roaming\python\python38\site-packages (3.5.0) Requirement already satisfied: numba>=0.51.2 in c:\users\stasb\appdata\roaming\python\python38\site-packages (from fastrk) (0.54.1) Requirement already satisfied: numpy>=1.19.2 in c:\users\stasb\appdata\roaming\python\python38\site-packages (from fastrk) (1.20.3) Requirement already satisfied: sympy>=1.8 in c:\users\stasb\appdata\roaming\python\python38\site-packages (from fastrk) (1.9) Requirement already satisfied: scipy>=1.2 in c:\users\stasb\appdata\roaming\python\python38\site-packages (from fastrk) (1.7.2) Requirement already satisfied: packaging>=20.0 in c:\users\stasb\appdata\roaming\python\python38\site-packages (from matplotlib) (21.3) Requirement already satisfied: fonttools>=4.22.0 in c:\users\stasb\appdata\roaming\python\python38\site-packages (from matplotlib) (4.28.1) Requirement already satisfied: python-dateutil>=2.7 in c:\users\stasb\appdata\roaming\python\python38\site-packages\python_dateutil-2.8.1-py3.8.egg (from matplotlib) (2.8.1) Requirement already satisfied: kiwisolver>=1.0.1 in c:\users\stasb\appdata\roaming\python\python38\site-packages\kiwisolver-1.3.1-py3.8-win-amd64.egg (from matplotlib) (1.3.1) Requirement already satisfied: pyparsing>=2.2.1 in c:\users\stasb\appdata\roaming\python\python38\site-packages (from matplotlib) (3.0.6) Requirement already satisfied: cycler>=0.10 in c:\users\stasb\appdata\roaming\python\python38\site-packages\cycler-0.10.0-py3.8.egg (from matplotlib) (0.10.0) Requirement already satisfied: pillow>=6.2.0 in c:\users\stasb\appdata\roaming\python\python38\site-packages\pillow-8.0.1-py3.8-win-amd64.egg (from matplotlib) (8.0.1) Requirement already satisfied: setuptools-scm>=4 in c:\users\stasb\appdata\roaming\python\python38\site-packages (from matplotlib) (6.3.2) Requirement already satisfied: six in c:\users\stasb\appdata\roaming\python\python38\site-packages\six-1.15.0-py3.8.egg (from cycler>=0.10->matplotlib) (1.15.0) Requirement already satisfied: llvmlite<0.38,>=0.37.0rc1 in c:\users\stasb\appdata\roaming\python\python38\site-packages (from numba>=0.51.2->fastrk) (0.37.0) Requirement already satisfied: setuptools in c:\users\stasb\anaconda3\envs\p38\lib\site-packages (from numba>=0.51.2->fastrk) (58.0.4) Requirement already satisfied: tomli>=1.0.0 in c:\users\stasb\appdata\roaming\python\python38\site-packages (from setuptools-scm>=4->matplotlib) (1.2.2) Requirement already satisfied: mpmath>=0.19 in c:\users\stasb\appdata\roaming\python\python38\site-packages\mpmath-1.1.0-py3.8.egg (from sympy>=1.8->fastrk) (1.1.0) ```python import math import numpy as np import matplotlib.pyplot as plt from cycler import cycler from numba import jit, cfunc from fastrk import (ButcherTable, RKCodeGen, EventsCodeGen, default_jitkwargs) import bt_456_custom ``` ### Create python file with Butcher Table elements - matrix A - vectors b_main, b_subs, c - and tuple of orders ```python !type bt_456_custom.py ``` '''Butcher Table for RK4(5)6 from Prince and Dormand. 
Original matlab code at: https://github.com/USNavalResearchLaboratory/TrackerComponentLibrary/blob/master/Mathematical_Functions/Differential_Equations/RungeKStep.m ''' A = [ [0, 0, 0, 0, 0, 0], [1 / 4, 0, 0, 0, 0, 0], [3 / 32, 9 / 32, 0, 0, 0, 0], [1932 / 2197, -7200 / 2197, 7296 / 2197, 0, 0, 0], [439 / 216, -8, 3680 / 513, -845 / 4104, 0, 0], [-8 / 27, 2, -3544 / 2565, 1859 / 4104, -11 / 40, 0] ] c = [0, 1 / 4, 3 / 8, 12 / 13, 1, 1 / 2] b_main = [25 / 216, 0, 1408 / 2565, 2197 / 4104, -1 / 5, 0] b_subs = [16 / 135, 0, 6656 / 12825, 28561 / 56430, -9 / 50, 2 / 55] order = (4, 5) ```python BT456 = ButcherTable("456", bt_456_custom) # rk_456.py will be created and imported here rk_module = RKCodeGen(BT456, autonomous=False).save_and_import() rk_prop = rk_module.rk_prop ``` ```python # using of cfunc decorator instead of jit decorator allows you to compile separately # ODE right part and rk_prop with no recompilation required on next run @cfunc('f8[:](f8, f8[:])') def eq(t, s): ''' Mathematical pendulum equation set. Parameters ---------- t: scalar time s: np.array state vector [theta, omega] Return ------ ds: np.array ds/dt - time derivative of state vector ''' theta, omega = s return np.array([omega, -math.sin(theta)]) ``` ```python N = 100 ws = np.linspace(0, 2, N+2)[1:-1] data = [] for i in range(N): s0 = np.array([0., ws[i]]) trj = rk_prop(eq, s0, 0, 2*np.pi, np.inf, 1e-12, 1e-12) data.append(trj) ``` ```python colors = plt.cm.twilight(np.linspace(0, 1, N)) plt.rc('axes', prop_cycle=cycler(color=colors)) plt.figure(dpi=200) for i in range(N): plt.plot(data[i][:, 1], data[i][:, 2]) plt.xlabel(r'$\theta$') plt.ylabel(r'$\omega$') plt.title('phase portrait near center fixed point') #plt.show() ``` ## Calculate oscillation periods ```python def event_sin_theta(t, s): return math.sin(s[0]) events = [event_sin_theta] values = np.array([0.]) terminals = np.array([True]) directions = np.array([1]) counts = np.array([-1]) accurates = np.array([True]) call_event = EventsCodeGen(events).save_and_import() rk_prop_ev = rk_module.rk_prop_ev ``` ```python trjs = [] evs = [] for i in range(N): s0 = np.array([0., ws[i]]) trj, evarr = rk_prop_ev(eq, s0, 0, 1000, np.inf, 1e-12, 1e-12, values, terminals, directions, counts, accurates, call_event, 1e-12, 1e-12, 100) trjs.append(trj) evs.append(evarr) ``` ```python plt.figure(dpi=200) for i in range(N): plt.plot(trjs[i][:, 1], trjs[i][:, 2]) plt.xlabel(r'$\theta$') plt.ylabel(r'$\omega$') plt.title('phase portrait near center fixed point') ``` ```python periods = np.array(evs)[:, 0, 2] plt.plot(ws, periods, '-k') plt.xlabel(r'$\omega_0$') plt.ylabel(r'period') plt.title('mathematical pendulum oscillation periods\nfor $\omega_0 \in (0, 2)$'); ``` ```python ```
4459c5c32a6d0f1525ecec8152b37a374aa2100c
763,185
ipynb
Jupyter Notebook
examples/ex2_custom_erk_method.ipynb
BoberSA/fastrk
72b313537428bd4c15bdca29c9c4ef0c01039184
[ "MIT" ]
1
2021-07-12T18:36:27.000Z
2021-07-12T18:36:27.000Z
examples/ex2_custom_erk_method.ipynb
BoberSA/fastrk
72b313537428bd4c15bdca29c9c4ef0c01039184
[ "MIT" ]
null
null
null
examples/ex2_custom_erk_method.ipynb
BoberSA/fastrk
72b313537428bd4c15bdca29c9c4ef0c01039184
[ "MIT" ]
null
null
null
1,847.905569
371,759
0.963635
true
2,200
Qwen/Qwen-72B
1. YES 2. YES
0.872347
0.793106
0.691864
__label__eng_Latn
0.419402
0.445763
# Import ```python #source #region #import #region import math from sympy import * import matplotlib.pyplot as plt from numpy import linspace import numpy as np from sympy.codegen.cfunctions import log10 from sympy.abc import x,t,y from sympy.plotting import plot #endregion #symbol declaration #region x, t = symbols('x t') f = symbols('f', cls=Function) #endregion ``` # Read Input ```python #input, output #region def ReadInput(file): f = file.readline() (lowT, upT) = map(lambda s: N(s), file.readline().split(",")) (lowX, upX) = map(lambda s: N(s), file.readline().split(",")) (t0, x0) = map(lambda s: N(s), file.readline().split(",")) epsilon = N(file.readline()) return (f, lowT, upT, lowX, upX, t0, x0, epsilon) #endregion ``` # Main Function ```python def Pica1(f, deltaT, deltaX, t0, x0, M, L, epsilon, mode = ""): N = GetN(M, L, deltaT, deltaX, epsilon) xn = SymbolicIntegrate(f, t0, x0, N, mode) return xn def Pica2(f, deltaT, t0, x0, M, L, epsilon, length = 69, mode = ""): xn = [] segmentLength = 2 * deltaT / length n = (int)(length / 2) for i in range(-n, n + 1): xn.append([t0 + i * segmentLength, x0]) xn = NumericIntegrate(f, xn, x0, segmentLength, epsilon, mode) return xn def Pica(filename, length = None, M = None, L = None, deltaT = None, mode = ""): try: file = open(filename, "r") (f, lowT, upT, lowX, upX, t0, x0, epsilon) = ReadInput(file) f = sympify(f) except: raise ValueError("invalid Pica input") file.close() if not lowX< x0 <upX or not lowT< t0< upT: raise ValueError("invalid Pica input") if M is None: M = GetM(x, lowT, upT, lowX, upX) else: if M <= 0: raise ValueError("invalid Pica input") if L is None: L = GetL(x, lowT, upT, lowX, upX) else: if L < 0: raise ValueError("invalid Pica input") if L == 0: SymbolicIntergrate(f, t0, x0, 1, mode) deltaX = min(x0 - lowX, upX - x0) if deltaT is None: deltaT = min(deltaX / M, 1 / (2 * L), t0 - lowT, upT - t0) interval = (float(t0-deltaT), float(t0+deltaT)) if length is None: return (Pica1(f, deltaT, deltaX, t0, x0, M, L, epsilon, mode), interval) return Pica2(f, deltaT, t0, x0, M, L, epsilon, length, mode) ``` # Main loop (integrate) ```python #region def NumericIntegrate(f, xn, x0, segmentLength, epsilon, mode = ""): n = (int) (len(xn)/2) segmentLength /=2 maxError = -math.inf loop = 0 while abs(maxError) > epsilon: if mode == "test": dx = [] loop += 1 maxError = -math.inf integral = 0 for i in range(n, 0, -1): integral = integral - segmentLength * (f.subs([(t, xn[i][0]), (x, xn[i][1])]) + f.subs([(t, xn[i - 1][0]), (x, xn[i - 1][1])])) newValue = x0 + integral error = abs(xn[i - 1][1] - newValue) xn[i - 1][1] = newValue if(error > maxError): maxError = error if mode == 'test': dx.append((xn[i][0], error)) integral = 0 for i in range(n, 2 * n): integral = integral + segmentLength * (f.subs([(t, xn[i][0]), (x, xn[i][1])]) + f.subs([(t, xn[i + 1][0]), (x, xn[i + 1][1])])) newValue = x0 + integral error = abs(xn[i + 1][1] - newValue) xn[i + 1][1] = x0 + integral if(error > maxError): maxError = error if mode == 'test': dx.append((xn[i][0], error)) if mode == "test": print("Lặp lần ", loop, " với max error = ", maxError) PlotPairs(dx) plt.show() return xn def GetN(M, L, deltaT, deltaX, epsilon, mode = ""): h = deltaT * L N = 1 error = M * deltaT while error > epsilon: N+=1 error = error * h / N return N def SymbolicIntegrate(f, t0, x0, N, mode = ''): xn = x0 for i in range(0,N): if mode == 'test': print(xn.evalf(2)) xn = x0 + integrate(f.replace(x,xn), (t,t0,t)) return xn #endregion ``` # Not implemented supremum finder ```python # sup 
#region def GetM(f, lowT, upT, lowX, upX): #not implemented return 10 def GetL(f, lowT, upT, lowX, upX): #not implemented return 10 #endregion ``` # Plot ```python #plot #region def PlotPairs(pairList): t,x = zip(*pairList) plt.scatter(t,x) def PlotSymbol(symbolOutput): func, interval = symbolOutput #t = linspace(interval[0], interval[1], 1000) #func = t**3/3 + t**7/67 plot((func, (t, interval[0], interval[1]))) def PlotBoth(symbolOutput, pairList): t1, x1 = zip(*pairList) plt.scatter(t1,x1) func, interval = symbolOutput t_vals = linspace(interval[0], interval[1], 1000) lam_x = lambdify(t, func, modules=['numpy']) x_vals = lam_x(t_vals) plt.plot(t_vals, x_vals) def Plot(f, interval, label = ""): t_vals = linspace(interval[0], interval[1], 1000) lam_x = lambdify(t, f, modules = ['numpy']) x_vals = lam_x(t_vals) plt.plot(t_vals, x_vals) #plt.show() #endregion #Program #region ``` # Test ```python filename = "input2.txt" result = Pica(filename, M = 2.5, L = 1) result1 = Pica(filename, M = 2.5, L = 1, length = 31, mode = 'test') print(result[0].evalf(2)) print("Khoảng hội tụ:", result[1]) PlotBoth(result, result1) ``` ```python filename = "input3.txt" result = Pica(filename, M = 5, L = 10) result1 = Pica(filename, length = 31, M = 12, L = 10) print(result[0].evalf(2)) print("Khoảng hội tụ:", result[1]) PlotBoth(result, result1) ``` ```python filename = "input4.txt" #result = Pica(filename, M = 15, L = 1.5) result1 = Pica(filename, length = 31, M = 15, L = 1.5) PlotPairs(result1) #print(result) #PlotBoth(result, result1) ``` ```python filename = "input1.txt" result = Pica(filename, M = 2, L = 1) result1 = Pica(filename, length = 31, M = 2, L = 1, mode = 'test') print(result[0].evalf(2)) print("Khoảng hội tụ:", result[1]) PlotBoth(result, result1) #Plot(sin(10*t) + cos(10*t), result[1]) #plt.show() ``` ```python filename = "input5.txt" result1 = Pica(filename, length = 222, M = 50, L = 1) PlotPairs(result1) interval = (float(result1[0][0]), float(result1[len(result1)-1][0])) Plot(cos(300*t), interval) plt.show() ``` ```python filename = "input6.txt" result = Pica(filename, M = 250, L = 100) result1 = Pica(filename, length = 31, M = 250, L = 100, mode = 'test') print(result[0].evalf(2)) print("Khoảng hội tụ:", result[1]) PlotBoth(result, result1) #interval = (float(result1[0][0]), float(result1[len(result1)-1][0])) #Plot(sin(100*t), interval) ``` ```python ``` ```python ```
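The `NumericIntegrate` routine above performs the same Picard iteration on a grid, using trapezoidal quadrature and sweeping outward from the centre point. The sketch below reproduces the idea in plain NumPy on the forward half-interval only, for the hypothetical test problem $x' = x$, $x(0) = 1$; the problem, grid size, and tolerance are assumptions made for this illustration.

```python
# Plain-NumPy sketch of grid-based Picard iteration with trapezoidal quadrature
# (forward half-interval only; the test problem x' = x, x(0) = 1 is assumed).
import numpy as np

f = lambda t, x: x                      # right-hand side f(t, x)
t0, x0, deltaT, npts = 0.0, 1.0, 0.5, 51
tg = np.linspace(t0, t0 + deltaT, npts)
h = tg[1] - tg[0]
xk = np.full_like(tg, x0)               # x_0(t) = x0 on the whole grid

for k in range(50):
    vals = f(tg, xk)
    # cumulative trapezoid rule: integral of f(t, x_k) from t0 to each grid point
    cumtrap = np.concatenate(([0.0], np.cumsum((vals[:-1] + vals[1:]) * h / 2)))
    xnew = x0 + cumtrap
    if np.max(np.abs(xnew - xk)) < 1e-12:   # same stopping rule as NumericIntegrate
        xk = xnew
        break
    xk = xnew

print(k + 1, np.max(np.abs(xk - np.exp(tg))))  # sweeps used, gap to exp(t)
```

The remaining gap to $e^t$ is the trapezoid discretization error, which shrinks with the grid spacing rather than with further Picard sweeps.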
9a35aa2327de6e3f1883d3138145b53a6c4f7a2b
219,480
ipynb
Jupyter Notebook
Topic 5 - Solving Differential Equations/26.1.Pica/.ipynb_checkpoints/Pica-checkpoint.ipynb
dthanhqhtt/MI3040-Numerical-Analysis
cf38ea7e6dc834b19e7cffef8b867a02ba472eae
[ "MIT" ]
7
2020-11-23T17:00:20.000Z
2022-01-31T06:28:40.000Z
Topic 5 - Solving Differential Equations/26.1.Pica/.ipynb_checkpoints/Pica-checkpoint.ipynb
dthanhqhtt/MI3040-Numerical-Analysis
cf38ea7e6dc834b19e7cffef8b867a02ba472eae
[ "MIT" ]
2
2020-09-22T17:08:05.000Z
2020-12-20T12:00:59.000Z
Topic 5 - Solving Differential Equations/26.1.Pica/.ipynb_checkpoints/Pica-checkpoint.ipynb
dthanhqhtt/MI3040-Numerical-Analysis
cf38ea7e6dc834b19e7cffef8b867a02ba472eae
[ "MIT" ]
5
2020-12-03T05:11:49.000Z
2021-09-28T03:33:35.000Z
269.63145
40,344
0.923569
true
2,239
Qwen/Qwen-72B
1. YES 2. YES
0.831143
0.798187
0.663407
__label__eng_Latn
0.200215
0.379649
# Systems of Equations Imagine you are at a casino, and you have a mixture of £10 and £25 chips. You know that you have a total of 16 chips, and you also know that the total value of chips you have is £250. Is this enough information to determine how many of each denomination of chip you have? Well, we can express each of the facts that we have as an equation. The first equation deals with the total number of chips - we know that this is 16, and that it is the number of £10 chips (which we'll call ***x*** ) added to the number of £25 chips (***y***). The second equation deals with the total value of the chips (£250), and we know that this is made up of ***x*** chips worth £10 and ***y*** chips worth £25. Here are the equations \begin{equation}x + y = 16 \end{equation} \begin{equation}10x + 25y = 250 \end{equation} Taken together, these equations form a *system of equations* that will enable us to determine how many of each chip denomination we have. ## Graphing Lines to Find the Intersection Point One approach is to determine all possible values for x and y in each equation and plot them. A collection of 16 chips could be made up of 16 £10 chips and no £25 chips, no £10 chips and 16 £25 chips, or any combination between these. Similarly, a total of £250 could be made up of 25 £10 chips and no £25 chips, no £10 chips and 10 £25 chips, or a combination in between. Let's plot each of these ranges of values as lines on a graph: ```python %matplotlib inline from matplotlib import pyplot as plt # Get the extremes for number of chips chipsAll10s = [16, 0] chipsAll25s = [0, 16] # Get the extremes for values valueAll10s = [25,0] valueAll25s = [0,10] # Plot the lines plt.plot(chipsAll10s,chipsAll25s, color='blue') plt.plot(valueAll10s, valueAll25s, color="orange") plt.xlabel('x (£10 chips)') plt.ylabel('y (£25 chips)') plt.grid() plt.show() ``` Looking at the graph, you can see that there is only a single combination of £10 and £25 chips that is on both the line for all possible combinations of 16 chips and the line for all possible combinations of £250. The point where the line intersects is (10, 6); or put another way, there are ten £10 chips and six £25 chips. ### Solving a System of Equations with Elimination You can also solve a system of equations mathematically. Let's take a look at our two equations: \begin{equation}x + y = 16 \end{equation} \begin{equation}10x + 25y = 250 \end{equation} We can combine these equations to eliminate one of the variable terms and solve the resulting equation to find the value of one of the variables. Let's start by combining the equations and eliminating the x term. We can combine the equations by adding them together, but first, we need to manipulate one of the equations so that adding them will eliminate the x term. The first equation includes the term ***x***, and the second includes the term ***10x***, so if we multiply the first equation by -10, the two x terms will cancel each other out. So here are the equations with the first one multiplied by -10: \begin{equation}-10(x + y) = -10(16) \end{equation} \begin{equation}10x + 25y = 250 \end{equation} After we apply the multiplication to all of the terms in the first equation, the system of equations look like this: \begin{equation}-10x + -10y = -160 \end{equation} \begin{equation}10x + 25y = 250 \end{equation} Now we can combine the equations by adding them. 
The ***-10x*** and ***10x*** cancel one another, leaving us with a single equation like this: \begin{equation}15y = 90 \end{equation} We can isolate ***y*** by dividing both sides by 15: \begin{equation}y = \frac{90}{15} \end{equation} So now we have a value for ***y***: \begin{equation}y = 6 \end{equation} So how does that help us? Well, now we have a value for ***y*** that satisfies both equations. We can simply use it in either of the equations to determine the value of ***x***. Let's use the first one: \begin{equation}x + 6 = 16 \end{equation} When we work through this equation, we get a value for ***x***: \begin{equation}x = 10 \end{equation} So now we've calculated values for ***x*** and ***y***, and we find, just as we did with the graphical intersection method, that there are ten £10 chips and six £25 chips. You can run the following Python code to verify that the equations are both true with an ***x*** value of 10 and a ***y*** value of 6. ```python x = 10 y = 6 print ((x + y == 16) & ((10*x) + (25*y) == 250)) ``` True ```python ```
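As a further check, you can also hand the whole system to a linear-algebra routine. The short sketch below is not part of the elimination method above; it simply restates the two equations in matrix form and, assuming NumPy is available, calls `numpy.linalg.solve`:

```python
import numpy as np

# Coefficient matrix and right-hand side for:
#    x +  y  = 16
#  10x + 25y = 250
A = np.array([[1, 1],
              [10, 25]])
b = np.array([16, 250])

# Solve the 2x2 system directly
x, y = np.linalg.solve(A, b)
print(x, y)   # 10.0 6.0
```

This gives the same answer as the graphical and elimination approaches: ten £10 chips and six £25 chips.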
e2fd813bf3a15e4f00abb15851564191653ff53c
23,733
ipynb
Jupyter Notebook
Basics Of Algebra by Hiren/01-03-Systems of Equations.ipynb
serkin/Basic-Mathematics-for-Machine-Learning
ac0ae9fad82a9f0429c93e3da744af6e6d63e5ab
[ "Apache-2.0" ]
null
null
null
Basics Of Algebra by Hiren/01-03-Systems of Equations.ipynb
serkin/Basic-Mathematics-for-Machine-Learning
ac0ae9fad82a9f0429c93e3da744af6e6d63e5ab
[ "Apache-2.0" ]
null
null
null
Basics Of Algebra by Hiren/01-03-Systems of Equations.ipynb
serkin/Basic-Mathematics-for-Machine-Learning
ac0ae9fad82a9f0429c93e3da744af6e6d63e5ab
[ "Apache-2.0" ]
null
null
null
140.431953
17,236
0.861754
true
1,227
Qwen/Qwen-72B
1. YES 2. YES
0.880797
0.948155
0.835132
__label__eng_Latn
0.999407
0.778624
<h1 align="center"> TF3101 FINAL PROJECT - SYSTEM DYNAMICS AND SIMULATION </h1> <h2 align="center"> Electrical, Electromechanical, and Thermofluidic Systems</h2> <h3>Group Members:</h3> <body> <ul> <li>Erlant Muhammad Khalfani (13317025)</li> <li>Bernardus Rendy (13317041)</li> </ul> </body> ## 1. Electrical System Modeling ## For the electrical system model, a series RLC circuit with a single voltage source is chosen, as shown in the figure below. ### System Description 1. Input <br> The system has a voltage-source input $v_i$, which is a function of time $v_i(t)$. <br> 2. Output <br> The system output is the current $i_2$, i.e. the current flowing in *mesh* II. The voltages $v_{L1}$ and $v_{R2}$ can also serve as outputs. In this program, only $v_{R2}$ and $v_{L1}$ are plotted. The value of $i_2$ is directly proportional to $v_{R2}$, so the shape of the $i_2$ curve resembles that of the $v_{R2}$ curve. 3. Parameters <br> The system parameters are $R_1$, $R_2$, $L_1$, and $C_1$. The resistors $R_1$ and $R_2$ are *resistance* parameters. The inductor $L_1$ is an *inertance* parameter. The capacitor $C_1$ is a *capacitance* parameter. ### Assumptions 1. The initial current in each *mesh* is zero ($i_1(0) = i_2(0) = 0$). 2. The initial time derivative of the current in each *mesh* is zero ($\frac{di_1(0)}{dt}=\frac{di_2(0)}{dt}=0$). ### Modeling with a *Bond Graph* From the electrical circuit above, the following *bond graph* is obtained. <br> In the figure above, every *junction* satisfies the causality rules, which indicates that the circuit is *causal*. From the *bond graph*, the *Ordinary Differential Equations* (ODEs) can be derived, matching the result of applying *Kirchhoff's Voltage Law* (KVL) to each *mesh*. In *bond graph* modeling the variables are split into *effort* and *flow* variables. Since this is an electrical system, the *effort* variable is the voltage ($v$) and the *flow* variable is the current ($i$). ### Mathematical Model - ODE Analyzing the *effort* balance at the left *1-junction* gives: $$ v_i = v_{R1} + v_{C1} $$ <br> This matches the KVL result for *mesh* I. The values of $v_{R1}$ and $v_{C1}$ are given by: $$ v_{R1} = R_1i_1 $$ <br> $$ v_{C1} = \frac{1}{C_1}\int (i_1 - i_2)dt $$ so the KVL equation for *mesh* I becomes: $$ v_i = R_1i_1 + \frac{1}{C_1}\int (i_1 - i_2)dt $$ Analyzing the right *1-junction* in the same way gives: $$ v_{C1} = v_{R2} + v_{L1} $$ <br> This matches the KVL result for *mesh* II. The values of $v_{R2}$ and $v_{L1}$ are given by: $$ v_{R2} = R_2i_2 $$ <br> $$ v_{L1} = L_1\frac{di_2}{dt} $$ so the KVL equation for *mesh* II becomes: $$ \frac{1}{C_1}\int(i_1-i_2)dt = R_2i_2 + L_1\frac{di_2}{dt} $$ or $$ 0 = L_1\frac{di_2}{dt} + R_2i_2 + \frac{1}{C_1}\int(i_2-i_1)dt $$ ### Mathematical Model - *Transfer Function* Once the ODEs have been obtained from the *bond graph*, the *Laplace transform* can be applied to obtain the system transfer functions.
Applying the *Laplace transform* to the KVL equation for *mesh* I gives: $$ (R_1 + \frac{1}{C_1s})I_1 + (-\frac{1}{C_1s})I_2 = V_i $$ <br> and for the *mesh* II equation: $$ (-\frac{1}{C_1s})I_1 + (L_1s + R_2 + \frac{1}{C_1s})I_2 = 0 $$ <br> Eliminating $I_1$ between the two equations gives the transfer function between $I_2$ and $V_i$: $$ \frac{I_2(s)}{V_i(s)} = \frac{1}{R_1C_1L_1s^2 + (R_1R_2C_1 + L_1)s + R_1 + R_2} $$ <br> From the *Laplace transform* of the *mesh* II equation, $V_{L1}$ is given by $$ V_{L1} = L_1sI_2 $$ <br> so the transfer function between $V_{L1}$ and $V_i$ is $$ \frac{V_{L1}(s)}{V_i(s)} = \frac{L_1s}{R_1C_1L_1s^2 + (R_1R_2C_1 + L_1)s + R_1 + R_2} $$ while the transfer function between $V_{R2}$ and $V_i$ is $$ \frac{V_{R2}(s)}{V_i(s)} = \frac{R_2}{R_1C_1L_1s^2 + (R_1R_2C_1 + L_1)s + R_1 + R_2} $$ ```python #IMPORTS from ipywidgets import interact, interactive, fixed, interact_manual , HBox, VBox, Label, Layout import ipywidgets as widgets import numpy as np import matplotlib.pyplot as plt from scipy import signal ``` ```python #DEFINISI SLIDER-SLIDER PARAMETER #Slider R1 R1_slider = widgets.FloatSlider( value=1., min=1., max=1000., step=1., description='$R_1 (\Omega)$', readout_format='.1f', ) #Slider R2 R2_slider = widgets.FloatSlider( value=1., min=1., max=1000., step=1., description='$R_2 (\Omega)$', readout_format='.1f', ) #Slider C1 C1_slider = widgets.IntSlider( value=1, min=10, max=1000, step=1, description='$C_1 (\mu F)$', ) #Slider L1 L1_slider = widgets.FloatSlider( value=0.1, min=1., max=1000., step=0.1, description='$L_1 (mH)$', readout_format='.1f', ) ``` ```python #DEKLARASI SELECTOR INPUT #Slider selector input vi_select = signal_select = widgets.Dropdown( options=[('Step', 0), ('Impulse', 1)], description='Tipe Sinyal:', ) #DEKLARASI SELECTOR OUTPUT #Output Selector vo_select = widgets.ToggleButtons( options=['v_R2', 'v_L1'], description='Output:', ) ``` ```python #DEKLARASI TAMBAHAN UNTUK INTERFACE #Color button color_select1 = widgets.ToggleButtons( options=['blue', 'red', 'green', 'black'], description='Color:', ) ``` ```python #PENENTUAN NILAI-NILAI PARAMETER R1 = R1_slider.value R2 = R2_slider.value C1 = C1_slider.value L1 = L1_slider.value #PENENTUAN NILAI DAN BENTUK INPUT vform = vi_select.value #PENENTUAN OUTPUT vo = vo_select #PENENTUAN PADA INTERFACE color = color_select1.value ``` ```python #Plot v_L1 menggunakan transfer function def plot_electric (vo, R1, R2, C1, L1, vform, color): #Menyesuaikan nilai parameter dengan satuan R1 = R1 R2 = R2 C1 = C1*(10**-6) L1 = L1*(10**-3) f, ax = plt.subplots(1, 1, figsize=(8, 6)) num1 = [R2] num2 = [L1, 0] den = [R1*C1*L1, R1*R2*C1+L1, R1+R2] if vo=='v_R2': sys_vr =signal.TransferFunction(num1, den) step_vr = signal.step(sys_vr) impl_vr = signal.impulse(sys_vr) if vform == 0: ax.plot(step_vr[0], step_vr[1], color=color, label='Respon Step') elif vform == 1: ax.plot(impl_vr[0], impl_vr[1], color=color, label='Respon Impuls') ax.grid() ax.legend() elif vo=='v_L1': sys_vl = signal.TransferFunction(num2, den) step_vl = signal.step(sys_vl) impl_vl = signal.impulse(sys_vl) #Plot respon if vform == 0: ax.plot(step_vl[0], step_vl[1], color=color, label='Respon Step') elif vform == 1: ax.plot(impl_vl[0], impl_vl[1], color=color, label='Respon Impuls') ax.grid() ax.legend() ``` ```python ui_el = widgets.VBox([vo_select, R1_slider, R2_slider, C1_slider, L1_slider, vi_select, color_select1]) out_el = widgets.interactive_output(plot_electric, 
{'vo':vo_select,'R1':R1_slider,'R2':R2_slider,'C1':C1_slider,'L1':L1_slider,'vform':vi_select,'color':color_select1}) int_el = widgets.HBox([ui_el, out_el]) ``` ```python display(int_el) ``` HBox(children=(VBox(children=(ToggleButtons(description='Output:', options=('v_R2', 'v_L1'), value='v_R2'), Fl… ### Analysis ### <h4>a. Step Response </h4> From the simulation results, the effects of changing the parameter values on the system *output* for a *step* input include: 1. Increasing $R_1$ lowers the *steady-state gain* ($K$) of the system. This is seen in the lower steady-state value of the output $v_{R2}$ and the lower *maximum overshoot* ($M_p$) of the output $v_{L1}$. A change in $R_1$ is also inversely related to the damping ratio $\xi$, since the oscillations become more visible as $R_1$ increases. A change in $R_1$ is also proportional to the *settling time* ($t_s$): the system takes longer to reach a value within 2-5% of the steady-state value. 2. Increasing $R_2$ raises the *steady-state gain* ($K$) for the output $v_{R2}$ but lowers the *steady-state gain* ($K$) for the output $v_{L1}$. A change in $R_2$ is also inversely related to the *settling time* ($t_s$): when $R_2$ increases, the system reaches steady state in a shorter time. Increasing $R_2$ also lowers the *maximum overshoot* ($M_p$). 3. A change in $C_1$ is proportional to the *settling time*, as seen from the longer time the system needs to approach steady state when $C_1$ increases. The value of $C_1$ is also inversely related to the *maximum overshoot*, as seen from the lower *maximum overshoot* when $C_1$ is increased. When $C_1$ increases, the *delay time* ($t_d$), *rise time* ($t_r$), and *peak time* ($t_p$) also increase. 4. Increasing $L_1$ reduces the oscillation frequency and increases the *settling time* of the system. A change in $L_1$ is also proportional to the *steady-state gain* of the system for the output $v_{L1}$. <h4>b. Impulse Response </h4> From the simulation results, the effects of changing the parameter values on the system *output* for an *impulse* input include: 1. A change in $R_1$ is inversely related to the *peak response*. Increasing $R_1$ also increases the *settling time* ($t_s$). 2. Increasing $R_2$ affects the *peak response* of $v_{R2}$, but has no effect on the *peak response* of $v_{L1}$. Increasing $R_2$ also lowers the *settling time* ($t_s$), as the system reaches steady state more quickly. 3. Increasing $C_1$ lowers the *peak response*. Increasing $C_1$ also increases the *settling time* ($t_s$), as seen from the longer time the system needs to approach steady state. 4. Increasing $L_1$ lowers the *peak response*. Increasing $L_1$ also increases the *settling time* ($t_s$), as seen from the longer time the system needs to approach steady state. ## 2. 
Electromechanical System Modeling ### High-Torque Brushed DC Motor with a Motor Driver The system to be modeled consists of a high-current BTS7960 motor driver, as in the first figure, connected to a high-torque brushed motor, as in the second figure. <div> </div> <div> </div> <p style="text-align:center"><b>Image source: KRTMI URO ITB</b></p> ### System Description 1. Input <br> The system has a signal input $V_{in}$, which is a function of time $V_{in}(t)$. This voltage can take the form of a step function, an impulse, or pulse width modulation with a given duty cycle (a typical microcontroller output). <br> 2. Output <br> The system outputs are the angular position $\theta$, the motor angular velocity $\omega$, the motor angular acceleration $\alpha$, and the torque $T$. The output is chosen according to the needs of the robot maneuver. The output variables depend on $\theta$, $\frac {d\theta}{dt}$, and $\frac{d^2\theta}{dt^2}$, so a differential equation is derived for each output. 3. Parameters <br> The system has parameters $J,K_f,K_a,L,R,K_{emf},K_{md}$, derived from the characteristics of the mechanical and electrical subsystems as follows. #### Motor Driver Subsystem First, consider the structure of the motor driver. The motor driver used is the MOSFET-based BTS7960, whose dynamic response rises almost instantaneously. The MOSFETs are arranged so that they can be used for forward/reverse motor control. Assuming the MOSFET rise time is fast relative to the signal and the motor driver is sufficiently linear, the motor driver can be modeled as a zeroth-order system with gain $ K_{md} $. <p style="text-align:center"><b>Image source: BTS7960 datasheet</b></p> <p style="text-align:center"><b>Zeroth-Order Model of the Motor Driver</b></p> The dynamic equation relating the motor driver output to its input is then <br> $ V_m=K_{md}V_{in} $<br> the same as the input-output relation of the static characteristic. #### Motor Subsystem Next, consider the structure of the high-torque motor with a load inertia that cannot be neglected. <p style="text-align:center"><b>Image source: https://www.researchgate.net/figure/The-structure-of-a-DC-motor_fig2_260272509</b></p> <br> The differential equations for the mechanical part can then be derived. <br> <p style="text-align:center"><b>Image source: Chapman - Electric Machinery Fundamentals 4th Edition</b></p> $$ T=K_a i_a $$ where $T$ is the torque and $K_a$ is the torque proportionality constant (the product of K and the flux) for the armature current $i_a$. $$ V_{emf}=K_{emf} \omega $$ where $V_{emf}$ is the back-emf voltage and $K_{emf}$ is the emf proportionality constant (the product of K and the flux under ideal conditions without voltage drop) for the angular speed of the motor. <br> However, the effect of the generated torque is that the load rotates with angular velocity $\omega$ and angular acceleration $\alpha$.
The proportionality factor for the angular acceleration is $J$ (rotational inertia) and for the angular velocity it is $ K_f $ (rotational damping constant), so the following differential equation can be derived (Equation 1): <br> $$ J\alpha + K_f\omega = T $$ $$ J\frac {d^2\theta}{dt^2} + K_f\frac {d\theta}{dt} = K_a i_a $$ $$ J\frac {d\omega}{dt} + K_f \omega = K_a i_a $$ Next, the differential equation for the electrical part of the motor is derived so that $i_a$ can be expressed in terms of the input $V_{in}$ (Equation 2): $$ L \frac{d{i_a}}{dt} + R i_a + K_{emf} \omega = V_m $$ $$ V_m = K_{md} V_{in} $$ $$ L \frac{d{i_a}}{dt} + R i_a + K_{emf} \omega = K_{md} V_{in} $$ ### Modeling with Transfer Functions With these subsystem equations, the system transfer functions can be obtained by transforming to the Laplace (s) domain. The solution is carried out using transfer functions in the Laplace domain; first the equations are transformed under the assumptions <br> $ i_a (0) = 0 $ <br> $ \frac {di_a}{dt} = 0 $ <br> $ \theta (0) = 0 $ <br> $ \omega (0) = 0 $ <br> $ \alpha (0) = 0 $ <br> No separate voltage drop is assumed, since it is already accumulated in $K_{emf}$; the voltage drop is assumed proportional to $\omega$. <br> Equation 1 becomes: $$ J s \omega + K_f \omega = K_a i_a $$ Equation 2 becomes: $$ L s i_a + R i_a + K_{emf} \omega = K_{md} V_{in} $$ $$ i_a=\frac {K_{md} V_{in}-K_{emf} \omega}{L s + R} $$ The overall system equation in terms of $\omega$ is then: $$ J s \omega + K_f \omega = \frac {K_a(K_{md} V_{in} - K_{emf} \omega)}{L s + R} $$ The transfer function for $\omega$ is: $$ \omega = \frac {K_a(K_{md} V_{in}-K_{emf} \omega)}{(L s + R)(J s + K_f)} $$ $$ \omega = \frac {K_a K_{md} V_{in}}{(L s + R)(J s + K_f)(1 + \frac {K_a K_{emf}}{(L s + R)(J s + K_f)})} $$ $$ \frac {\omega (s)}{V_{in}(s)} = \frac {K_a K_{md}}{(L s + R)(J s + K_f)+ K_a K_{emf}} $$ The transfer function for $\theta$ can be derived by changing variables in Equation 1: $$ J s^2 \theta + K_f s \theta = K_a i_a $$ Equation 2: $$ L s i_a + R i_a + K_{emf} s \theta = K_{md} V_{in} $$ $$ i_a=\frac {K_{md} V_{in}-K_{emf} s \theta}{L s + R} $$ The overall system equation in terms of $\theta$ is then: $$ J s^2 \theta + K_f s \theta = \frac {K_a(K_{md} V_{in}-K_{emf} s \theta)}{L s + R} $$ The transfer function for $\theta$ is: $$ \theta = \frac {K_a(K_{md} V_{in}-K_{emf} s \theta)}{(L s + R)(J s^2 + K_f s )} $$ $$ \theta + \frac {K_a K_{emf} s \theta}{(L s + R)(J s^2 + K_f s )}= \frac {K_a K_{md} V_{in}}{(L s + R)(J s^2 + K_f s )} $$ $$ \theta= \frac {K_a K_{md} V_{in}}{(L s + R)(J s^2 + K_f s )(1 + \frac {K_a K_{emf} s}{(L s + R)(J s^2 + K_f s )})} $$ $$ \frac {\theta (s)}{V_{in}(s)}= \frac {K_a K_{md}}{(L s + R)(J s^2 + K_f s )+ K_a K_{emf} s} $$ The transfer functions for $\omega$ and $\theta$ differ only by a factor of $ \frac {1}{s} $, consistent with the relation $$ \omega = s \theta $$ so the transfer function for $\alpha$ satisfies $$ \alpha = s\omega = s^2 \theta $$ and the transfer function for $\alpha$ is: $$ \frac {\alpha (s)}{V_{in}(s)} = \frac {K_a K_{md} s}{(L s + R)(J s + K_f)+ K_a K_{emf}} $$ ### Output From the transfer functions, the output equations for the angular position $\theta$, the motor angular velocity $\omega$, the angular acceleration $\alpha$, and the torque $T$ are formulated as functions of time (t). 
$$ \theta (t) = \mathscr {L^{-1}} \{\frac {K_a K_{md} V_{in}(s)}{(L s + R)(J s^2 + K_f s )+ K_a K_{emf} s}\} $$ <br> $$ \omega (t) = \mathscr {L^{-1}} \{\frac {K_a K_{md} V_{in}(s)}{(L s + R)(J s + K_f)+ K_a K_{emf}}\} $$ <br> $$ \alpha (t)= \mathscr {L^{-1}} \{\frac {K_a K_{md} Vin_{in}(s) s}{(L s + R)(J s + K_f)+ K_a K_{emf}}\} $$ <br> $$ T = \frac {K_a(K_{md} V_{in}-K_{emf} \omega)}{L s + R} $$ ```python # Digunakan penyelesaian numerik untuk output import numpy as np from scipy.integrate import odeint import scipy.signal as sig import matplotlib.pyplot as plt from sympy.physics.mechanics import dynamicsymbols, SymbolicSystem from sympy import * import control as control ``` ```python vin = symbols ('V_{in}') #import symbol input ``` ```python omega, theta, alpha = dynamicsymbols('omega theta alpha') #import symbol output ``` ```python ka,kmd,l,r,j,kf,kemf,s,t = symbols ('K_a K_{md} L R J K_f K_{emf} s t')#import symbol parameter dan s ``` ```python thetaOverVin = (ka*kmd)/((l*s+r)*(j*s**2+kf*s)+ka*kemf*s) #persamaan fungsi transfer theta polyThetaOverVin = thetaOverVin.as_poly() #Penyederhanaan persamaan polyThetaOverVin ``` $\displaystyle \operatorname{Poly}{\left( \frac{1}{J L s^{3} + J R s^{2} + K_{a} K_{emf} s + K_{f} L s^{2} + K_{f} R s}K_{a}K_{md}, \frac{1}{J L s^{3} + J R s^{2} + K_{a} K_{emf} s + K_{f} L s^{2} + K_{f} R s}, K_{a}, K_{md}, domain=\mathbb{Z} \right)}$ ```python omegaOverVin = (ka*kmd)/((l*s+r)*(j*s+kf)+ka*kemf) #persamaan fungsi transfer omega polyOmegaOverVin = omegaOverVin.as_poly() #Penyederhanaan persamaan polyOmegaOverVin ``` $\displaystyle \operatorname{Poly}{\left( \frac{1}{J L s^{2} + J R s + K_{a} K_{emf} + K_{f} L s + K_{f} R}K_{a}K_{md}, \frac{1}{J L s^{2} + J R s + K_{a} K_{emf} + K_{f} L s + K_{f} R}, K_{a}, K_{md}, domain=\mathbb{Z} \right)}$ ```python alphaOverVin = (ka*kmd*s)/((l*s+r)*(j*s+kf)+ka*kemf) polyAlphaOverVin = alphaOverVin.as_poly() #Penyederhanaan persamaan polyAlphaOverVin ``` $\displaystyle \operatorname{Poly}{\left( s\frac{1}{J L s^{2} + J R s + K_{a} K_{emf} + K_{f} L s + K_{f} R}K_{a}K_{md}, s, \frac{1}{J L s^{2} + J R s + K_{a} K_{emf} + K_{f} L s + K_{f} R}, K_{a}, K_{md}, domain=\mathbb{Z} \right)}$ ```python torqueOverVin= ka*(kmd-kemf*((ka*kmd)/((l*s+r)*(j*s+kf)+ka*kemf)))/(l*s+r) #Penyederhanaan persamaan torsi polyTorqueOverVin = torqueOverVin.as_poly() polyTorqueOverVin ``` $\displaystyle \operatorname{Poly}{\left( -\frac{1}{J L^{2} s^{3} + 2 J L R s^{2} + J R^{2} s + K_{a} K_{emf} L s + K_{a} K_{emf} R + K_{f} L^{2} s^{2} + 2 K_{f} L R s + K_{f} R^{2}}K_{a}^{2}K_{emf}K_{md} + \frac{1}{L s + R}K_{a}K_{md}, \frac{1}{J L^{2} s^{3} + 2 J L R s^{2} + J R^{2} s + K_{a} K_{emf} L s + K_{a} K_{emf} R + K_{f} L^{2} s^{2} + 2 K_{f} L R s + K_{f} R^{2}}, \frac{1}{L s + R}, K_{a}, K_{emf}, K_{md}, domain=\mathbb{Z} \right)}$ ```python def plot_elektromekanik(Ka,Kmd,L,R,J,Kf,Kemf,VinType,tMax,dutyCycle,grid): # Parameter diberi value dan model system dibentuk dalam transfer function yang dapat diolah python Ka = Ka Kmd = Kmd L = L R = R J = J Kf = Kf Kemf = Kemf # Pembuatan model transfer function tf = control.tf tf_Theta_Vin = tf([Ka*Kmd],[J*L,(J*R+Kf*L),(Ka*Kemf+Kf*R),0]) tf_Omega_Vin = tf([Ka*Kmd],[J*L,(J*R+Kf*L),(Ka*Kemf+Kf*R)]) tf_Alpha_Vin = tf([Ka*Kmd,0],[J*L,(J*R+Kf*L),(Ka*Kemf+Kf*R)]) tf_Torque_Vin = tf([Ka*Kmd],[L,R]) - tf([Kmd*Kemf*Ka**2],[J*L**2,(2*J*L*R+Kf*L**2),(J*R**2+Ka*Kemf*L+2*Kf*L*R),(Ka*Kemf*R+Kf*R**2)]) f, axs = plt.subplots(4, sharex=True, figsize=(10, 10)) # Fungsi mengatur rentang waktu analisis (harus memiliki 
kelipatan 1 ms) def analysisTime(maxTime): ts=np.linspace(0, maxTime, maxTime*100) return ts t=analysisTime(tMax) if VinType== 2: # Input pwm dalam 1 millisecond def Pwm(dutyCycle,totalTime): trepeat=np.linspace(0, 1, 100) squareWave=(5*sig.square(2 * np.pi * trepeat, duty=dutyCycle)) finalInput=np.zeros(len(totalTime)) for i in range(len(squareWave)): if squareWave[i]<0: squareWave[i]=0 for i in range(len(totalTime)): finalInput[i]=squareWave[i%100] return finalInput pwm=Pwm(dutyCycle,t) tPwmTheta, yPwmTheta, xPwmTheta = control.forced_response(tf_Theta_Vin, T=t, U=pwm, X0=0) tPwmOmega, yPwmOmega, xPwmOmega = control.forced_response(tf_Omega_Vin, t, pwm, X0=0) tPwmAlpha, yPwmAlpha, xPwmAlpha = control.forced_response(tf_Alpha_Vin, t, pwm, X0=0) tPwmTorque, yPwmTorque, xPwmTorque = control.forced_response(tf_Torque_Vin, t, pwm, X0=0) axs[0].plot(tPwmTheta, yPwmTheta, color = 'blue', label ='Theta') axs[1].plot(tPwmOmega, yPwmOmega, color = 'red', label ='Omega') axs[2].plot(tPwmAlpha, yPwmAlpha, color = 'black', label ='Alpha') axs[3].plot(tPwmTorque, yPwmTorque, color = 'green', label ='Torque') axs[0].title.set_text('Theta $(rad)$ (Input PWM)') axs[1].title.set_text('Omega $(\\frac {rad}{ms})$ (Input PWM)') axs[2].title.set_text('Alpha $(\\frac {rad}{ms^2})$ (Input PWM)') axs[3].title.set_text('Torque $(Nm)$ (Input PWM)') elif VinType== 0: tStepTheta, yStepTheta = control.step_response(tf_Theta_Vin,T=t, X0=0) tStepOmega, yStepOmega = control.step_response(tf_Omega_Vin,T=t, X0=0) tStepAlpha, yStepAlpha = control.step_response(tf_Alpha_Vin,T=t, X0=0) tStepTorque, yStepTorque = control.step_response(tf_Torque_Vin, T=t, X0=0) axs[0].plot(tStepTheta, yStepTheta, color = 'blue', label ='Theta') axs[1].plot(tStepOmega, yStepOmega, color = 'red', label ='Omega') axs[2].plot(tStepAlpha, yStepAlpha, color = 'black', label ='Alpha') axs[3].plot(tStepTorque, yStepTorque, color = 'green', label ='Torque') axs[0].title.set_text('Theta $(rad)$ (Input Step)') axs[1].title.set_text('Omega $(\\frac {rad}{ms})$ (Input Step)') axs[2].title.set_text('Alpha $(\\frac {rad}{ms^2})$(Input Step)') axs[3].title.set_text('Torque $(Nm)$ (Input Step)') elif VinType== 1 : tImpulseTheta, yImpulseTheta = control.impulse_response(tf_Theta_Vin,T=t, X0=0) tImpulseOmega, yImpulseOmega = control.impulse_response(tf_Omega_Vin,T=t, X0=0) tImpulseAlpha, yImpulseAlpha = control.impulse_response(tf_Alpha_Vin,T=t, X0=0) tImpulseTorque, yImpulseTorque = control.impulse_response(tf_Torque_Vin, T=t, X0=0) axs[0].plot(tImpulseTheta, yImpulseTheta, color = 'blue', label ='Theta') axs[1].plot(tImpulseOmega, yImpulseOmega, color = 'red', label ='Omega') axs[2].plot(tImpulseAlpha, yImpulseAlpha, color = 'black', label ='Alpha') axs[3].plot(tImpulseTorque, yImpulseTorque, color = 'green', label ='Torque') axs[0].title.set_text('Theta $(rad)$ (Input Impulse)') axs[1].title.set_text('Omega $(\\frac {rad}{ms})$ (Input Impulse)') axs[2].title.set_text('Alpha $(\\frac {rad}{ms^2})$ (Input Impulse)') axs[3].title.set_text('Torque $(Nm)$ (Input Impulse)') axs[0].legend() axs[1].legend() axs[2].legend() axs[3].legend() axs[0].grid(grid) axs[1].grid(grid) axs[2].grid(grid) axs[3].grid(grid) ``` ```python #DEFINISI WIDGETS PARAMETER Ka_slider = widgets.FloatSlider( value=0.1, min=0.1, max=30.0, step=0.1, description='$K_a (\\frac {Nm}{A})$', layout=Layout(width='80%', height='50px'), style={'description_width': '200px'}, ) Kmd_slider = widgets.FloatSlider( value=0.1, min=2, max=20.0, step=0.1, description='$K_{md} (\\frac {V}{V})$', 
layout=Layout(width='80%', height='50px'), style={'description_width': '200px'}, ) L_slider = widgets.FloatSlider( value=0.1, min=0.1, max=100.0, step=0.1, description='$L (mH)$', layout=Layout(width='80%', height='50px'), style={'description_width': '200px'}, ) R_slider = widgets.IntSlider( value=1, min=1, max=1000, step=1, description='$R (\Omega)$', layout=Layout(width='80%', height='50px'), style={'description_width': '200px'}, ) J_slider = widgets.FloatSlider( value=0.1, min=0.1, max=100.0, step=0.1, description='$J (\\frac {Nm(ms)^2}{rad})$', layout=Layout(width='80%', height='50px'), style={'description_width': '200px'}, ) Kf_slider = widgets.FloatSlider( value=0.1, min=0.1, max=100.0, step=0.1, description='$K_{f} (\\frac {Nm(ms)}{rad})$', layout=Layout(width='80%', height='50px'), style={'description_width': '200px'}, ) Kemf_slider = widgets.FloatSlider( value=0.1, min=0.1, max=30, step=0.1, description='$K_{emf} (\\frac {V(ms)}{rad})$', layout=Layout(width='80%', height='50px'), style={'description_width': '200px'}, ) VinType_select = widgets.Dropdown( options=[('Step', 0), ('Impulse', 1),('PWM',2)], description='Tipe Sinyal Input:', layout=Layout(width='80%', height='50px'), style={'description_width': '200px'}, ) tMax_slider = widgets.IntSlider( value=10, min=1, max=500, step=1, description='$t_{max} (ms)$', layout=Layout(width='80%', height='50px'), style={'description_width': '200px'}, ) dutyCycle_slider = widgets.FloatSlider( value=0.5, min=0, max=1.0, step=0.05, description='$Duty Cycle (\%)$', layout=Layout(width='80%', height='50px'), style={'description_width': '200px'}, ) grid_button = widgets.ToggleButton( value=True, description='Grid', icon='check', layout=Layout(width='20%', height='50px',margin='10px 10px 10px 350px'), style={'description_width': '200px'}, ) ui_em = widgets.VBox([Ka_slider,Kmd_slider,L_slider,R_slider,J_slider,Kf_slider,Kemf_slider,VinType_select,tMax_slider,dutyCycle_slider,grid_button]) out_em = widgets.interactive_output(plot_elektromekanik, {'Ka':Ka_slider,'Kmd':Kmd_slider,'L':L_slider,'R':R_slider,'J':J_slider,'Kf':Kf_slider,'Kemf':Kemf_slider,'VinType':VinType_select,'tMax':tMax_slider,'dutyCycle':dutyCycle_slider, 'grid':grid_button}) ``` ```python display(ui_em,out_em) ``` VBox(children=(FloatSlider(value=0.1, description='$K_a (\\frac {Nm}{A})$', layout=Layout(height='50px', width… Output() ### Analysis Because the model's equations are complex enough that the effect of each parameter on the system output cannot be concluded intuitively, experiments are carried out using the sliders to change the parameters and to observe how the parameter changes interact. The input shape is also varied, and the effect of using PWM (as a modulation of a step signal with a maximum of 5V) on the output is analyzed. #### 1. Increasing $K_a$ Increasing $K_a$ increases the oscillation ($\omega_d$) and the gain of the outputs $\omega$ and $\alpha$, and increases the slope of the output $\theta$. The torque gain, however, is unaffected. #### 2. Increasing $K_{md}$ Increasing $K_{md}$ raises the amplitude of $V_{in}$, so the output amplitude grows. #### 3. Increasing $L$ Increasing $L$ makes the rise of the angular velocity $\omega$ and of $T$ slower, and makes the decay of $\alpha$ slower, so the growth of $\theta$ also becomes slower (the rise time increases). #### 4. 
Increasing $R$ Increasing $R$ makes the output oscillation ($\omega_d$) of $\omega$, $\alpha$, and torque smaller and the gain smaller, which reduces the slope of the output $\theta$. #### 5. Increasing $J$ Increasing $J$ increases the torque gain and decreases the gains of $\theta$, $\omega$, and $\alpha$. #### 6. Increasing $K_f$ Increasing $K_f$ increases the torque gain and decreases the gains of $\theta$, $\omega$, and $\alpha$. #### 7. Increasing $K_{emf}$ Increasing $K_{emf}$ decreases the gains of torque, $\theta$, $\omega$, and $\alpha$. #### 8. Interaction between parameters The effect of decreasing $R$ relative to increasing $K_a$ is roughly a factor of three. The effect of increasing $J$ and $K_f$ is limited by the increase in $K_a$. Physically, $K_a$ and $K_{emf}$ increase together and almost proportionally (differing only in the voltage drops across the various components), followed by $L$, so for large $K_a$ and $K_{emf}$ the time to reach steady state also becomes longer. Interestingly, increasing only $K_a$ and $K_{emf}$ gives the system a small gain (energy transfer), but when the increase is accompanied by a larger $V_{in}$, the system transfers more energy than before at steady state. It can therefore be concluded that $K_a$ and $K_{emf}$ must be large enough for the configuration to match the input $V_{in}$ and produce efficient energy transfer. The input $V_{in}$ must also match the given $K_a$ and $K_{emf}$ of the system so that it can turn the motor (this is why a motor has a minimum voltage and a recommended operating voltage). #### 9. Effect of a Step Input Using a step input produces fewer oscillations ($\omega_d$). #### 10. Effect of an Impulse Input Using an impulse input makes $\theta$ reach steady state because the motor stops turning, so $\omega$, $\alpha$, and torque have a steady-state value of 0. #### 11. Effect of a PWM Input Using a PWM input with a given duty cycle produces more oscillations, but as the duty cycle increases the oscillations decrease (the signal approaches a step). The interesting point here is that a PWM signal can be used for control, but when no controller is used, the PWM signal instead introduces oscillations into the system. ## 3. Mechanical System Modeling The following mechanical system is modeled <p style="text-align: center"><b>Simple Mechanical System with a Bond Graph</b></p> ### System Description 1. Input: $F$, the force applied to the mass 2. Output: $x$ the displacement, $v$ the velocity, and $a$ the acceleration of the mass 3. 
Parameters: $k$, $b$, and $m$, obtained from the bond graph derivation ### Transfer Function Modeling The transfer functions are easily derived from the bond graph relation $$ m \frac {d^2 x}{dt^2} = F-kx-b\frac{dx}{dt} $$ <br> The Laplace transform gives <br> $$ s^2 x = \frac {F}{m}-x\frac {k}{m}-sx\frac{b}{m} $$ $$ (s^2+s\frac{b}{m}+\frac {k}{m})x=\frac {F}{m} $$ <br> For x: <br> $$ \frac {x}{F}=\frac {1}{(ms^2+bs+k)} $$ <br> For v: <br> $$ \frac {v}{F}=\frac {s}{(ms^2+bs+k)} $$ <br> For a: <br> $$ \frac {a}{F}=\frac {s^2}{(ms^2+bs+k)} $$ ```python # Digunakan penyelesaian numerik untuk output import numpy as np from scipy.integrate import odeint import scipy.signal as sig import matplotlib.pyplot as plt from sympy.physics.mechanics import dynamicsymbols, SymbolicSystem from sympy import * import control as control ``` ```python def plot_mekanik(M,B,K,VinType,grid): # Parameter diberi value dan model system dibentuk dalam transfer function yang dapat diolah python m=M b=B k=K tf = sig.TransferFunction tf_X_F=tf([1],[m,b,k]) tf_V_F=tf([1,0],[m,b,k]) tf_A_F=tf([1,0,0],[m,b,k]) f, axs = plt.subplots(3, sharex=True, figsize=(10, 10)) if VinType==0: tImpX,xOutImp=sig.impulse(tf_X_F) tImpV,vOutImp=sig.impulse(tf_V_F) tImpA,aOutImp=sig.impulse(tf_A_F) axs[0].plot(tImpX,xOutImp, color = 'blue', label ='x') axs[1].plot(tImpV,vOutImp, color = 'red', label ='v') axs[2].plot(tImpA,aOutImp, color = 'green', label ='a') axs[0].title.set_text('Perpindahan Linear $(m)$ (Input Impuls)') axs[1].title.set_text('Kecepatan Linear $(\\frac {m}{s})$ (Input Impuls)') axs[2].title.set_text('Percepatan Linear $(\\frac {m}{s^2})$ (Input Impuls)') elif VinType==1: tStepX,xOutStep=sig.step(tf_X_F) tStepV,vOutStep=sig.step(tf_V_F) tStepA,aOutStep=sig.step(tf_A_F) axs[0].plot(tStepX,xOutStep, color = 'blue', label ='x') axs[1].plot(tStepV,vOutStep, color = 'red', label ='v') axs[2].plot(tStepA,aOutStep, color = 'green', label ='a') axs[0].title.set_text('Perpindahan Linear $(m)$ (Input Step)') axs[1].title.set_text('Kecepatan Linear $(\\frac {m}{s})$ (Input Step)') axs[2].title.set_text('Percepatan Linear $(\\frac {m}{s^2})$ (Input Step)') axs[0].legend() axs[1].legend() axs[2].legend() axs[0].grid(grid) axs[1].grid(grid) axs[2].grid(grid) ``` ```python M_slider = widgets.FloatSlider( value=0.1, min=0.1, max=30.0, step=0.1, description='Massa $(kg)$', layout=Layout(width='80%', height='50px'), style={'description_width': '200px'}, ) B_slider = widgets.FloatSlider( value=0.1, min=2, max=20.0, step=0.1, description='Konstanta Redaman $(\\frac {Ns}{m})$', layout=Layout(width='80%', height='50px'), style={'description_width': '200px'}, ) K_slider = widgets.FloatSlider( value=0.1, min=0.1, max=100.0, step=0.1, description='Konstanta pegas $(\\frac {N}{m})$', layout=Layout(width='80%', height='50px'), style={'description_width': '200px'}, ) VinType_select = widgets.Dropdown( options=[('Impulse', 0), ('Step', 1)], description='Tipe Sinyal Input:', layout=Layout(width='80%', height='50px'), style={'description_width': '200px'}, ) grid_button = widgets.ToggleButton( value=True, description='Grid', icon='check', layout=Layout(width='20%', height='50px',margin='10px 10px 10px 350px'), style={'description_width': '200px'}, ) ui_mk = widgets.VBox([M_slider,B_slider,K_slider,VinType_select,grid_button]) out_mk = widgets.interactive_output(plot_mekanik, {'M':M_slider,'B':B_slider,'K':K_slider,'VinType':VinType_select,'grid':grid_button}) ``` ```python display(ui_mk,out_mk) ``` VBox(children=(FloatSlider(value=0.1, 
description='Massa $(kg)$', layout=Layout(height='50px', width='80%'), m… Output() ### Analysis Based on these fairly simple equations, the second-order mechanical system has the following characteristics: 1. Effect of increasing the mass The mass behaves like an inertial component: increasing it increases the rise time and the settling time. 2. Effect of increasing the damping constant The damping constant behaves like a resistance component that damps the system, so the maximum overshoot becomes small (due to the higher damping ratio) when the damping constant is increased. The damping constant also affects the settling time: increasing the damping constant increases the settling time. 3. Effect of increasing the spring constant The spring constant behaves like a capacitance component: it reduces the displacement gain, reduces the damping ratio, increases the oscillation frequency of the system, reduces the velocity amplitude, shortens the settling time and the peak time, and increases the maximum overshoot. ## 4. Thermofluidic System Modeling A simple thermofluidic system is modeled: a tank with flowing water, considering conduction only (convection is neglected), with constant density $\rho$, constant specific heat $C$, constant heat transfer coefficient $U$, and constant tank base area $A$. The spatial heat distribution is neglected, the heat flux is assumed to spread evenly through the system, and the tank is treated as isothermal. The ambient temperature is also assumed constant at $T_{env}$ <p style="text-align: center"><b>Water Heater</b></p> <p style="text-align: center"><b>Source: https://en.wikipedia.org/wiki/Water_heating#/media/File:Water_Heater_White.jpg</b></p> <p style="text-align: center"><b>Simple System Diagram</b></p> ### System Description 1. Input: $q_i$, the heat supplied by the electric heater, and $f_i$, the inflow 2. Output: $f_o$, the outflow, and $T_{tank}$, the tank temperature 3. Parameters: obtained from the conservation relations below #### Mass Conservation $$ \frac {dM}{dt}=f_i-f_o $$ $$ \frac {d(\rho A h)}{dt}=f_i-f_o $$ #### Energy Conservation $$ \frac {dE}{dt}=q_i - q_o $$ #### Combined Mass and Energy Conservation $$ \rho C A \frac {d (h T_{tank})}{dt}=q_i-UA(T_{tank}-T_{env}) $$ $$ C \rho A (T_{tank} \frac {dh}{dt} + h \frac {dT_{tank}}{dt})=q_i-UA(T_{tank}-T_{env}) $$ ### Modeling with a Bond Graph ### Transfer Function Modeling
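The two subsections above are left empty in this checkpoint of the notebook. As a minimal sketch of how the transfer-function step could proceed, assume the tank level $h$ is held constant ($f_i = f_o$); the combined balance then reduces to the first-order model $\rho C A h \frac{dT_{tank}}{dt} = q_i - UA(T_{tank}-T_{env})$, i.e. $\frac{\Delta T(s)}{Q_i(s)} = \frac{1}{\rho C A h\,s + UA}$ for the temperature rise above $T_{env}$. The parameter values below are placeholders chosen only for illustration, not values from the original report:

```python
import numpy as np
import scipy.signal as sig
import matplotlib.pyplot as plt

# Placeholder parameters (illustrative only)
rho = 1000.0   # kg/m^3, water density
C   = 4186.0   # J/(kg K), specific heat of water
A   = 0.25     # m^2, tank base area
h   = 0.5      # m, tank level assumed constant (f_i = f_o)
U   = 50.0     # W/(m^2 K), heat transfer coefficient

# First-order transfer function dT(s)/Qi(s) = 1 / (rho*C*A*h*s + U*A)
tf_T_Qi = sig.TransferFunction([1.0], [rho*C*A*h, U*A])

# Step response: temperature rise above T_env for a 1 kW heater input
t = np.linspace(0, 2e5, 500)
t, dT = sig.step(tf_T_Qi, T=t)
plt.plot(t, 1000*dT)
plt.xlabel('t (s)')
plt.ylabel('$T_{tank}-T_{env}$ (K) for $q_i$ = 1 kW')
plt.grid(True)
plt.show()
```

With these placeholder numbers the time constant is $\tau = \rho C A h / (UA)$, so the tank temperature approaches its steady-state rise $q_i/(UA)$ over several hours, which is the qualitative behavior the bond graph of this section is meant to capture.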
287c0312108301c4943a659a14e637e7cfd346ef
53,350
ipynb
Jupyter Notebook
Final_Task_2019/.ipynb_checkpoints/PR5_13317025_13317041-checkpoint.ipynb
bernardusrendy/sds
16e7cea8a2177c3762b5a8014969ae4197626cd7
[ "MIT" ]
1
2019-12-03T15:37:39.000Z
2019-12-03T15:37:39.000Z
Final_Task_2019/.ipynb_checkpoints/PR5_13317025_13317041-checkpoint.ipynb
bernardusrendy/sds
16e7cea8a2177c3762b5a8014969ae4197626cd7
[ "MIT" ]
null
null
null
Final_Task_2019/.ipynb_checkpoints/PR5_13317025_13317041-checkpoint.ipynb
bernardusrendy/sds
16e7cea8a2177c3762b5a8014969ae4197626cd7
[ "MIT" ]
1
2020-01-15T02:38:21.000Z
2020-01-15T02:38:21.000Z
40.203466
1,099
0.565436
true
12,856
Qwen/Qwen-72B
1. YES 2. YES
0.766294
0.752013
0.576262
__label__ind_Latn
0.724517
0.177181
# Mie Scattering Function **Scott Prahl** **April 2021** *If miepython is not installed, uncomment the following cell (i.e., delete the #) and run (shift-enter)* ```python #!pip install --user miepython ``` ```python import numpy as np import matplotlib.pyplot as plt try: import miepython except ModuleNotFoundError: print('miepython not installed. To install, uncomment and run the cell above.') print('Once installation is successful, rerun this cell again.') ``` miepython not installed. To install, uncomment and run the cell above. Once installation is successful, rerun this cell again. Mie scattering describes the special case of the interaction of light passing through a non-absorbing medium with a single embedded spherical object. The sphere itself can be non-absorbing, moderately absorbing, or perfectly absorbing. ## Goals for this notebook: * show how to plot the phase function * explain the units for the scattering phase function * describe normalization of the phase function * show a few examples from classic Mie texts ## Geometry Specifically, the scattering function $p(\theta_i,\phi_i,\theta_o,\phi_o)$ describes the amount of light scattered by a particle for light incident at an angle $(\theta_i,\phi_i)$ and exiting the particle (in the far field) at an angle $(\theta_o,\phi_o)$. For simplicity, the scattering function is often assumed to be rotationally symmetric (it is, obviously, for spherical scatterers) and to depend only on the angle $\theta=\theta_o-\theta_i$. In this case, the scattering function can be written as $p(\theta)$. Finally, the angle is often replaced by $\mu=\cos\theta$ and therefore the phase function becomes just $p(\mu)$. The figure below shows the basic idea. An incoming monochromatic plane wave hits a sphere and produces *in the far field* two separate monochromatic waves — a slightly attenuated unscattered planar wave and an outgoing spherical wave. Obviously, the scattered light will be cylindrically symmetric about the ray passing through the center of the sphere. 
```python t = np.linspace(0,2*np.pi,100) xx = np.cos(t) yy = np.sin(t) fig,ax=plt.subplots(figsize=(10,8)) plt.axes().set_aspect('equal') plt.plot(xx,yy) plt.plot([-5,7],[0,0],'--k') plt.annotate('incoming irradiance', xy=(-4.5,-2.3),ha='left',color='blue',fontsize=14) for i in range(6): y0 = i -2.5 plt.annotate('',xy=(-1.5,y0),xytext=(-5,y0),arrowprops=dict(arrowstyle="->",color='blue')) plt.annotate('unscattered irradiance', xy=(3,-2.3),ha='left',color='blue',fontsize=14) for i in range(6): y0 = i -2.5 plt.annotate('',xy=(7,y0),xytext=(3,y0),arrowprops=dict(arrowstyle="->",color='blue',ls=':')) plt.annotate('scattered\nspherical\nwave', xy=(0,1.5),ha='left',color='red',fontsize=16) plt.annotate('',xy=(2.5,2.5),xytext=(0,0),arrowprops=dict(arrowstyle="->",color='red')) plt.annotate(r'$\theta$',xy=(2,0.7),color='red',fontsize=14) plt.annotate('',xy=(2,2),xytext=(2.7,0),arrowprops=dict(connectionstyle="arc3,rad=0.2", arrowstyle="<->",color='red')) plt.xlim(-5,7) plt.ylim(-3,3) plt.axis('off') plt.show() ``` ## Scattered Wave ```python fig,ax=plt.subplots(figsize=(10,8)) plt.axes().set_aspect('equal') plt.scatter([0],[0],s=30) m = 1.5 x = np.pi/3 theta = np.linspace(-180,180,180) mu = np.cos(theta/180*np.pi) scat = 15 * miepython.i_unpolarized(m,x,mu) plt.plot(scat*np.cos(theta/180*np.pi),scat*np.sin(theta/180*np.pi)) for i in range(12): ii = i*15 xx = scat[ii]*np.cos(theta[ii]/180*np.pi) yy = scat[ii]*np.sin(theta[ii]/180*np.pi) # print(xx,yy) plt.annotate('',xy=(xx,yy),xytext=(0,0),arrowprops=dict(arrowstyle="->",color='red')) plt.annotate('incident irradiance', xy=(-4.5,-2.3),ha='left',color='blue',fontsize=14) for i in range(6): y0 = i -2.5 plt.annotate('',xy=(-1.5,y0),xytext=(-5,y0),arrowprops=dict(arrowstyle="->",color='blue')) plt.annotate('unscattered irradiance', xy=(3,-2.3),ha='left',color='blue',fontsize=14) for i in range(6): y0 = i -2.5 plt.annotate('',xy=(7,y0),xytext=(3,y0),arrowprops=dict(arrowstyle="->",color='blue',ls=':')) plt.annotate('scattered\nspherical wave', xy=(0,1.5),ha='left',color='red',fontsize=16) plt.xlim(-5,7) plt.ylim(-3,3) #plt.axis('off') plt.show() ``` ## Normalization of the scattered light So the scattering function or phase function has at least three reasonable normalizations that involve integrating over all $4\pi$ steradians. Below $d\Omega=\sin\theta d\theta\,d\phi$ is a differential solid angle $$ \begin{align} \int_{4\pi} p(\theta,\phi) \,d\Omega &= 1\\[2mm] \int_{4\pi} p(\theta,\phi) \,d\Omega &= 4\pi \\[2mm] \int_{4\pi} p(\theta,\phi) \,d\Omega &= a \qquad\qquad \mbox{Used by miepython}\\[2mm] \end{align} $$ where $a$ is the single scattering albedo, $$ a = \frac{\sigma_s}{\sigma_s+\sigma_a} $$ and $\sigma_s$ is the scattering cross section, and $\sigma_a$ is the absorption cross section. *The choice of normalization was made because it accounts for light lost through absorption by the sphere.* If the incident light has units of watts, then the values from the scattering function $p(\theta,\phi)$ have units of radiant intensity or W/sr. For example, a circular detector with radius $r_d$ at a distance $R$ will subtend an angle $$ \Omega = \frac{\pi r_d^2}{R^2} $$ (assuming $r_d\ll R$). Now if $P_0$ of light is scattered by a sphere then the scattered power on the detector will be $$ P_d = P_0 \cdot \Omega \cdot p(\theta,\phi) $$ ## Examples ### Unpolarized Scattering Function If unpolarized light hits the sphere, then there are no polarization effects to worry about. It is pretty easy to generate a plot to show how scattering changes with angle. 
```python m = 1.5 x = np.pi/3 theta = np.linspace(-180,180,180) mu = np.cos(theta/180*np.pi) scat = miepython.i_unpolarized(m,x,mu) fig,ax = plt.subplots(1,2,figsize=(12,5)) ax=plt.subplot(121, projection='polar') ax.plot(theta/180*np.pi,scat) ax.set_rticks([0.05, 0.1,0.15]) ax.set_title("m=1.5, Sphere Diameter = $\lambda$/3") plt.subplot(122) plt.plot(theta,scat) plt.xlabel('Exit Angle [degrees]') plt.ylabel('Unpolarized Scattered light [1/sr]') plt.title('m=1.5, Sphere Diameter = $\lambda$/3') plt.ylim(0.00,0.2) plt.show() ``` A similar calculation but using `ez_intensities()` ```python m = 1.33 lambda0 = 632.8 # nm d = 200 # nm theta = np.linspace(-180,180,180) mu = np.cos(theta/180*np.pi) Ipar, Iper = miepython.ez_intensities(m, d, lambda0, mu) fig,ax = plt.subplots(1,2,figsize=(12,5)) ax=plt.subplot(121, projection='polar') ax.plot(theta/180*np.pi,Ipar) ax.plot(theta/180*np.pi,Iper) ax.set_rticks([0.05, 0.1, 0.15, 0.20]) plt.title("m=%.2f, Sphere Diameter = %.0f nm, $\lambda$=%.1f nm" % (m, d, lambda0)) plt.subplot(122) plt.plot(theta,Ipar) plt.plot(theta,Iper) plt.xlabel('Exit Angle [degrees]') plt.ylabel('Unpolarized Scattered light [1/sr]') plt.title("m=%.2f, Sphere Diameter = %.0f nm, $\lambda$=%.1f nm" % (m, d, lambda0)) plt.ylim(0.00,0.2) plt.show() ``` ### Rayleigh Scattering Classic Rayleigh scattering treats small particles with natural (unpolarized) light. The solid black line denotes the total scattered intensity. The red dashed line is light polarized perpendicular to the plane of the graph and the blue dotted line is for light polarized parallel to the plane of the graph. (Compare with van de Hulst, Figure 10) ```python m = 1.3 x = 0.01 theta = np.linspace(-180,180,180) mu = np.cos(theta/180*np.pi) ipar = miepython.i_par(m,x,mu)/2 iper = miepython.i_per(m,x,mu)/2 iun = miepython.i_unpolarized(m,x,mu) fig,ax = plt.subplots(1,2,figsize=(12,5)) ax=plt.subplot(121, projection='polar') ax.plot(theta/180*np.pi,iper,'r--') ax.plot(theta/180*np.pi,ipar,'b:') ax.plot(theta/180*np.pi,iun,'k') ax.set_rticks([0.05, 0.1,0.15]) plt.title('m=%.2f, Sphere Parameter = %.2f' %(m,x)) plt.subplot(122) plt.plot(theta,iper,'r--') plt.plot(theta,ipar,'b:') plt.plot(theta,iun,'k') plt.xlabel('Exit Angle [degrees]') plt.ylabel('Normalized Scattered light [1/sr]') plt.title('m=%.2f, Sphere Parameter = %.2f' %(m,x)) plt.ylim(0.00,0.125) plt.text(130,0.02,r"$0.5I_{per}$",color="blue", fontsize=16) plt.text(120,0.062,r"$0.5I_{par}$",color="red", fontsize=16) plt.text(30,0.11,r"$I_{unpolarized}$",color="black", fontsize=16) plt.show() ``` ## Differential Scattering Cross Section Sometimes one would like the scattering function normalized so that the integral over all $4\pi$ steradians equals the scattering cross section $$ \sigma_{sca} = \frac{\pi d^2}{4} Q_{sca} $$ The *differential scattering cross section* $\frac{d\sigma_{sca}}{d\Omega}$ is defined by $$ \sigma_{sca} = \int_{4\pi} \frac{d\sigma_{sca}}{d\Omega}\,d\Omega $$ Since the unpolarized scattering function is normalized so that its integral is the single scattering albedo, this means that $$ \frac{Q_{sca}}{Q_{ext}} = \int_{4\pi} p(\mu) \sin\theta\,d\theta d\phi $$ and therefore the differential scattering cross section can be obtained from `miepython` using $$ \frac{d\sigma_{sca}}{d\Omega} = \frac{\pi d^2 Q_{ext}}{4} \cdot p(\theta,\phi) $$ Note that this is $Q_{ext}$ and *not* $Q_{sca}$ because of the choice of normalization! 
For example, here is a replica of [figure 4](http://plaza.ufl.edu/dwhahn/Rayleigh%20and%20Mie%20Light%20Scattering.pdf) ```python m = 1.4-0j lambda0 = 532e-9 # m theta = np.linspace(0,180,1000) mu = np.cos(theta* np.pi/180) d = 1700e-9 # m x = 2 * np.pi/lambda0 * d/2 geometric_cross_section = np.pi * d**2/4 * 1e4 # cm**2 qext, qsca, qback, g = miepython.mie(m,x) sigma_sca = geometric_cross_section * qext * miepython.i_unpolarized(m,x,mu) plt.semilogy(theta, sigma_sca*1e-3, color='blue') plt.text(15, sigma_sca[0]*3e-4, "%.0fnm\n(x10$^{-3}$)" % (d*1e9), color='blue') d = 170e-9 # m x = 2 * np.pi/lambda0 * d/2 geometric_cross_section = np.pi * d**2/4 * 1e4 # cm**2 qext, qsca, qback, g = miepython.mie(m,x) sigma_sca = geometric_cross_section * qext * miepython.i_unpolarized(m,x,mu) plt.semilogy(theta, sigma_sca, color='red') plt.text(110, sigma_sca[-1]/2, "%.0fnm" % (d*1e9), color='red') d = 17e-9 # m x = 2 * np.pi/lambda0 * d/2 geometric_cross_section = np.pi * d**2/4 * 1e4 # cm**2 qext, qsca, qback, g = miepython.mie(m,x) sigma_sca = geometric_cross_section * qext * miepython.i_unpolarized(m,x,mu) plt.semilogy(theta, sigma_sca*1e6, color='green') plt.text(130, sigma_sca[-1]*1e6, "(x10$^6$)\n%.0fnm" % (d*1e9), color='green') plt.title("Refractive index m=1.4, $\lambda$=532nm") plt.xlabel("Scattering Angle (degrees)") plt.ylabel("Diff. Scattering Cross Section (cm$^2$/sr)") plt.grid(True) plt.show() ``` ## Normalization revisited ### Evenly spaced $\mu=\cos\theta$ Start with uniformly distributed scattering angles that are evenly spaced over the cosine of the scattered angle. #### Verifying normalization numerically Specifically, to ensure proper normalization, the integral of the scattering function over all solid angles must equal the single scattering albedo $$ a = \int_0^{2\pi}\int_0^\pi \, p(\theta,\phi)\,\sin\theta\,d\theta\,d\phi $$ or, with a change of variables $\mu=\cos\theta$ and using symmetry to do the integral in $\phi$, $$ a = 2\pi \int_{-1}^1 \, p(\mu)\,d\mu $$ This integral can be done numerically by simply summing all the rectangles $$ a = 2\pi \sum_{i=0}^N p(\mu_i)\,\Delta\mu_i $$ and if all the rectangles have the same width $$ a = 2\pi\Delta\mu \sum_{i=0}^N p(\mu_i) $$ #### Case 1. m=1.5, x=1 The total integral `total=` in the title should match the albedo `a=`. For this non-strongly peaked scattering function, the simple integration remains close to the expected value. ```python m = 1.5 x = 1 mu = np.linspace(-1,1,501) intensity = miepython.i_unpolarized(m,x,mu) qext, qsca, qback, g = miepython.mie(m,x) a = qsca/qext #integrate over all angles dmu = mu[1] - mu[0] total = 2 * np.pi * dmu * np.sum(intensity) plt.plot(mu,intensity) plt.xlabel(r'$\cos(\theta)$') plt.ylabel('Unpolarized Scattering Intensity [1/sr]') plt.title('m=%.3f%+.3fj, x=%.2f, a=%.3f, total=%.3f'%(m.real,m.imag,x,a, total)) plt.show() ``` #### Case 2: m=1.5-1.5j, x=1 Again the total integral `total=` in the title should match the albedo `a=`. For this non-strongly peaked scattering function, the simple integration remains close to the expected value. 
```python m = 1.5 - 1.5j x = 1 mu = np.linspace(-1,1,501) intensity = miepython.i_unpolarized(m,x,mu) qext, qsca, qback, g = miepython.mie(m,x) a = qsca/qext #integrate over all angles dmu = mu[1] - mu[0] total = 2 * np.pi * dmu * np.sum(intensity) plt.plot(mu,intensity) plt.xlabel(r'$\cos(\theta)$') plt.ylabel('Unpolarized Scattering Intensity [1/sr]') plt.title('m=%.3f%+.3fj, x=%.2f, a=%.3f, total=%.3f'%(m.real,m.imag,x,a, total)) plt.show() ``` ## Normalization, evenly spaced $\theta$ The total integral total in the title should match the albedo $a$. For this non-strongly peaked scattering function, even spacing in $\theta$ improves the accuracy of the integration. ```python m = 1.5-1.5j x = 1 theta = np.linspace(0,180,361)*np.pi/180 mu = np.cos(theta) intensity = miepython.i_unpolarized(m,x,mu) qext, qsca, qback, g = miepython.mie(m,x) a = qsca/qext #integrate over all angles dtheta = theta[1]-theta[0] total = 2 * np.pi * dtheta * np.sum(intensity* np.sin(theta)) plt.plot(mu,intensity) plt.xlabel(r'$\cos(\theta)$') plt.ylabel('Unpolarized Scattering Intensity [1/sr]') plt.title('m=%.3f%+.3fj, x=%.2f, a=%.3f, total=%.3f'%(m.real,m.imag,x,a, total)) plt.show() ``` ## Comparison to Wiscombe's Mie Program Wiscombe normalizes as $$ \int_{4\pi} p(\theta,\phi) \,d\Omega = \pi x^2 Q_{sca} $$ where $p(\theta)$ is the scattered light. Once corrected for differences in phase function normalization, Wiscombe's test cases match those from `miepython` exactly. ### Wiscombe's Test Case 14 ```python """ MIEV0 Test Case 14: Refractive index: real 1.500 imag -1.000E+00, Mie size parameter = 1.000 Angle Cosine S-sub-1 S-sub-2 Intensity Deg of Polzn 0.00 1.000000 5.84080E-01 1.90515E-01 5.84080E-01 1.90515E-01 3.77446E-01 0.0000 30.00 0.866025 5.65702E-01 1.87200E-01 5.00161E-01 1.45611E-01 3.13213E-01 -0.1336 60.00 0.500000 5.17525E-01 1.78443E-01 2.87964E-01 4.10540E-02 1.92141E-01 -0.5597 90.00 0.000000 4.56340E-01 1.67167E-01 3.62285E-02 -6.18265E-02 1.20663E-01 -0.9574 """ x=1.0 m=1.5-1.0j mu=np.cos(np.linspace(0,90,4) * np.pi/180) qext, qsca, qback, g = miepython.mie(m,x) albedo = qsca/qext unpolar = miepython.i_unpolarized(m,x,mu) # normalized to a unpolar /= albedo # normalized to 1 unpolar_miev = np.array([3.77446E-01,3.13213E-01,1.92141E-01,1.20663E-01]) unpolar_miev /= np.pi * qsca * x**2 # normalized to 1 ratio = unpolar_miev/unpolar print("MIEV0 Test Case 14: m=1.500-1.000j, Mie size parameter = 1.000") print() print(" %9.1f°%9.1f°%9.1f°%9.1f°"%(0,30,60,90)) print("MIEV0 %9.5f %9.5f %9.5f %9.5f"%(unpolar_miev[0],unpolar_miev[1],unpolar_miev[2],unpolar_miev[3])) print("miepython %9.5f %9.5f %9.5f %9.5f"%(unpolar[0],unpolar[1],unpolar[2],unpolar[3])) print("ratio %9.5f %9.5f %9.5f %9.5f"%(ratio[0],ratio[1],ratio[2],ratio[3])) ``` ### Wiscombe's Test Case 10 ```python """ MIEV0 Test Case 10: Refractive index: real 1.330 imag -1.000E-05, Mie size parameter = 100.000 Angle Cosine S-sub-1 S-sub-2 Intensity Deg of Polzn 0.00 1.000000 5.25330E+03 -1.24319E+02 5.25330E+03 -1.24319E+02 2.76126E+07 0.0000 30.00 0.866025 -5.53457E+01 -2.97188E+01 -8.46720E+01 -1.99947E+01 5.75775E+03 0.3146 60.00 0.500000 1.71049E+01 -1.52010E+01 3.31076E+01 -2.70979E+00 8.13553E+02 0.3563 90.00 0.000000 -3.65576E+00 8.76986E+00 -6.55051E+00 -4.67537E+00 7.75217E+01 -0.1645 """ x=100.0 m=1.33-1e-5j mu=np.cos(np.linspace(0,90,4) * np.pi/180) qext, qsca, qback, g = miepython.mie(m,x) albedo = qsca/qext unpolar = miepython.i_unpolarized(m,x,mu) # normalized to a unpolar /= albedo # normalized to 1 unpolar_miev = 
np.array([2.76126E+07,5.75775E+03,8.13553E+02,7.75217E+01]) unpolar_miev /= np.pi * qsca * x**2 # normalized to 1 ratio = unpolar_miev/unpolar print("MIEV0 Test Case 10: m=1.330-0.00001j, Mie size parameter = 100.000") print() print(" %9.1f°%9.1f°%9.1f°%9.1f°"%(0,30,60,90)) print("MIEV0 %9.5f %9.5f %9.5f %9.5f"%(unpolar_miev[0],unpolar_miev[1],unpolar_miev[2],unpolar_miev[3])) print("miepython %9.5f %9.5f %9.5f %9.5f"%(unpolar[0],unpolar[1],unpolar[2],unpolar[3])) print("ratio %9.5f %9.5f %9.5f %9.5f"%(ratio[0],ratio[1],ratio[2],ratio[3])) ``` ### Wiscombe's Test Case 7 ```python """ MIEV0 Test Case 7: Refractive index: real 0.750 imag 0.000E+00, Mie size parameter = 10.000 Angle Cosine S-sub-1 S-sub-2 Intensity Deg of Polzn 0.00 1.000000 5.58066E+01 -9.75810E+00 5.58066E+01 -9.75810E+00 3.20960E+03 0.0000 30.00 0.866025 -7.67288E+00 1.08732E+01 -1.09292E+01 9.62967E+00 1.94639E+02 0.0901 60.00 0.500000 3.58789E+00 -1.75618E+00 3.42741E+00 8.08269E-02 1.38554E+01 -0.1517 90.00 0.000000 -1.78590E+00 -5.23283E-02 -5.14875E-01 -7.02729E-01 1.97556E+00 -0.6158 """ x=10.0 m=0.75 mu=np.cos(np.linspace(0,90,4) * np.pi/180) qext, qsca, qback, g = miepython.mie(m,x) albedo = qsca/qext unpolar = miepython.i_unpolarized(m,x,mu) # normalized to a unpolar /= albedo # normalized to 1 unpolar_miev = np.array([3.20960E+03,1.94639E+02,1.38554E+01,1.97556E+00]) unpolar_miev /= np.pi * qsca * x**2 # normalized to 1 ratio = unpolar_miev/unpolar print("MIEV0 Test Case 7: m=0.75, Mie size parameter = 10.000") print() print(" %9.1f°%9.1f°%9.1f°%9.1f°"%(0,30,60,90)) print("MIEV0 %9.5f %9.5f %9.5f %9.5f"%(unpolar_miev[0],unpolar_miev[1],unpolar_miev[2],unpolar_miev[3])) print("miepython %9.5f %9.5f %9.5f %9.5f"%(unpolar[0],unpolar[1],unpolar[2],unpolar[3])) print("ratio %9.5f %9.5f %9.5f %9.5f"%(ratio[0],ratio[1],ratio[2],ratio[3])) ``` ## Comparison to Bohren & Huffmans's Mie Program Bohren & Huffman normalizes as $$ \int_{4\pi} p(\theta,\phi) \,d\Omega = 4 \pi x^2 Q_{sca} $$ ### Bohren & Huffmans's Test Case 14 ```python """ BHMie Test Case 14, Refractive index = 1.5000-1.0000j, Size parameter = 1.0000 Angle Cosine S1 S2 0.00 1.0000 -8.38663e-01 -8.64763e-01 -8.38663e-01 -8.64763e-01 0.52 0.8660 -8.19225e-01 -8.61719e-01 -7.21779e-01 -7.27856e-01 1.05 0.5000 -7.68157e-01 -8.53697e-01 -4.19454e-01 -3.72965e-01 1.57 0.0000 -7.03034e-01 -8.43425e-01 -4.44461e-02 6.94424e-02 """ x=1.0 m=1.5-1j mu=np.cos(np.linspace(0,90,4) * np.pi/180) qext, qsca, qback, g = miepython.mie(m,x) albedo = qsca/qext unpolar = miepython.i_unpolarized(m,x,mu) # normalized to a unpolar /= albedo # normalized to 1 s1_bh = np.empty(4,dtype=np.complex) s1_bh[0] = -8.38663e-01 - 8.64763e-01*1j s1_bh[1] = -8.19225e-01 - 8.61719e-01*1j s1_bh[2] = -7.68157e-01 - 8.53697e-01*1j s1_bh[3] = -7.03034e-01 - 8.43425e-01*1j s2_bh = np.empty(4,dtype=np.complex) s2_bh[0] = -8.38663e-01 - 8.64763e-01*1j s2_bh[1] = -7.21779e-01 - 7.27856e-01*1j s2_bh[2] = -4.19454e-01 - 3.72965e-01*1j s2_bh[3] = -4.44461e-02 + 6.94424e-02*1j # BHMie seems to normalize their intensities to 4 * pi * x**2 * Qsca unpolar_bh = (abs(s1_bh)**2+abs(s2_bh)**2)/2 unpolar_bh /= np.pi * qsca * 4 * x**2 # normalized to 1 ratio = unpolar_bh/unpolar print("BHMie Test Case 14: m=1.5000-1.0000j, Size parameter = 1.0000") print() print(" %9.1f°%9.1f°%9.1f°%9.1f°"%(0,30,60,90)) print("BHMIE %9.5f %9.5f %9.5f %9.5f"%(unpolar_bh[0],unpolar_bh[1],unpolar_bh[2],unpolar_bh[3])) print("miepython %9.5f %9.5f %9.5f %9.5f"%(unpolar[0],unpolar[1],unpolar[2],unpolar[3])) print("ratio %9.5f %9.5f %9.5f 
%9.5f"%(ratio[0],ratio[1],ratio[2],ratio[3])) print() print("Note that this test is identical to MIEV0 Test Case 14 above.") print() print("Wiscombe's code is much more robust than Bohren's so I attribute errors all to Bohren") ``` ### Bohren & Huffman, water droplets Tiny water droplet (0.26 microns) in clouds has pretty strong forward scattering! A graph of this is figure 4.9 in Bohren and Huffman's *Absorption and Scattering of Light by Small Particles*. A bizarre scaling factor of $16\pi$ is needed to make the `miepython` results match those in the figure 4.9. ```python x=3 m=1.33-1e-8j theta = np.linspace(0,180,181) mu = np.cos(theta*np.pi/180) scaling_factor = 16*np.pi iper = scaling_factor*miepython.i_per(m,x,mu) ipar = scaling_factor*miepython.i_par(m,x,mu) P = (iper-ipar)/(iper+ipar) plt.subplots(2,1,figsize=(8,8)) plt.subplot(2,1,1) plt.semilogy(theta,ipar,label='$i_{par}$') plt.semilogy(theta,iper,label='$i_{per}$') plt.xlim(0,180) plt.xticks(range(0,181,30)) plt.ylabel('i$_{par}$ and i$_{per}$') plt.legend() plt.title('Figure 4.9 from Bohren & Huffman') plt.subplot(2,1,2) plt.plot(theta,P) plt.ylim(-1,1) plt.xticks(range(0,181,30)) plt.xlim(0,180) plt.ylabel('Polarization') plt.plot([0,180],[0,0],':k') plt.xlabel('Angle (Degrees)') plt.show() ``` ## van de Hulst Comparison This graph (see figure 29 in *Light Scattering by Small Particles*) was obviously constructed by hand. In this graph, van de Hulst worked hard to get as much information as possible ```python x=5 m=10000 theta = np.linspace(0,180,361) mu = np.cos(theta*np.pi/180) fig, ax = plt.subplots(figsize=(8,8)) x=10 s1,s2 = miepython.mie_S1_S2(m,x,mu) sone = 2.5*abs(s1) stwo = 2.5*abs(s2) plt.plot(theta,sone,'b') plt.plot(theta,stwo,'--r') plt.annotate('x=%.1f '%x,xy=(theta[-1],sone[-1]),ha='right',va='bottom') x=5 s1,s2 = miepython.mie_S1_S2(m,x,mu) sone = 2.5*abs(s1) + 1 stwo = 2.5*abs(s2) + 1 plt.plot(theta,sone,'b') plt.plot(theta,stwo,'--r') plt.annotate('x=%.1f '%x,xy=(theta[-1],sone[-1]),ha='right',va='bottom') x=3 s1,s2 = miepython.mie_S1_S2(m,x,mu) sone = 2.5*abs(s1) + 2 stwo = 2.5*abs(s2) + 2 plt.plot(theta,sone,'b') plt.plot(theta,stwo,'--r') plt.annotate('x=%.1f '%x,xy=(theta[-1],sone[-1]),ha='right',va='bottom') x=1 s1,s2 = miepython.mie_S1_S2(m,x,mu) sone = 2.5*abs(s1) + 3 stwo = 2.5*abs(s2) + 3 plt.plot(theta,sone,'b') plt.plot(theta,stwo,'--r') plt.annotate('x=%.1f '%x,xy=(theta[-1],sone[-1]),ha='right',va='bottom') x=0.5 s1,s2 = miepython.mie_S1_S2(m,x,mu) sone = 2.5*abs(s1) + 4 stwo = 2.5*abs(s2) + 4 plt.plot(theta,sone,'b') plt.plot(theta,stwo,'--r') plt.annotate('x=%.1f '%x,xy=(theta[-1],sone[-1]),ha='right',va='bottom') plt.xlim(0,180) plt.ylim(0,5.5) plt.xticks(range(0,181,30)) plt.yticks(np.arange(0,5.51,0.5)) plt.title('Figure 29 from van de Hulst, Non-Absorbing Spheres') plt.xlabel('Angle (Degrees)') ax.set_yticklabels(['0','1/2','0','1/2','0','1/2','0','1/2','0','1/2','5',' ']) plt.grid(True) plt.show() ``` ## Comparisons with Kerker, Angular Gain Another interesting graph is figure 4.51 from [*The Scattering of Light* by Kerker](https://www.sciencedirect.com/book/9780124045507/the-scattering-of-light-and-other-electromagnetic-radiation). 
The angular gain is $$ G_1 = \frac{4}{x^2} |S_1(\theta)|^2 \qquad\mbox{and}\qquad G_2 = \frac{4}{x^2} |S_2(\theta)|^2 $$ ```python ## Kerker, Angular Gain x=1 m=10000 theta = np.linspace(0,180,361) mu = np.cos(theta*np.pi/180) fig, ax = plt.subplots(figsize=(8,8)) s1,s2 = miepython.mie_S1_S2(m,x,mu) G1 = 4*abs(s1)**2/x**2 G2 = 4*abs(s2)**2/x**2 plt.plot(theta,G1,'b') plt.plot(theta,G2,'--r') plt.annotate('$G_1$',xy=(50,0.36),color='blue',fontsize=14) plt.annotate('$G_2$',xy=(135,0.46),color='red',fontsize=14) plt.xlim(0,180) plt.xticks(range(0,181,30)) plt.title('Figure 4.51 from Kerker, Non-Absorbing Spheres, x=1') plt.xlabel('Angle (Degrees)') plt.ylabel('Angular Gain') plt.show() ``` ```python ```
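Returning to the normalization check at the top of this notebook: because the unpolarized scattering function for x=1 is smooth, Gauss-Legendre quadrature in $\cos\theta$ integrates it very accurately. The cell below is only a sketch added for illustration (the choice of 64 quadrature points is an arbitrary assumption); it reuses `miepython.i_unpolarized` and `miepython.mie` exactly as above, and the printed total should agree closely with the albedo.

```python
import numpy as np

m = 1.5 - 1.5j
x = 1

# Gauss-Legendre nodes and weights on [-1, 1], i.e. directly in mu = cos(theta)
mu_gl, w_gl = np.polynomial.legendre.leggauss(64)
intensity = miepython.i_unpolarized(m, x, mu_gl)

qext, qsca, qback, g = miepython.mie(m, x)
albedo = qsca/qext

# integrate the unpolarized scattering function over all solid angles
total = 2*np.pi*np.sum(w_gl*intensity)
print('quadrature total = %.5f, albedo = %.5f' % (total, albedo))
```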
977db747d30da78f1b627b1eea9f87da65c98bf8
77,569
ipynb
Jupyter Notebook
docs/03_angular_scattering.ipynb
maulanailyasy/miepythonscot
acc28b5f6e041feef83dbe3bceb711a95c1ae37a
[ "MIT" ]
104
2017-09-06T13:37:14.000Z
2022-03-02T07:09:24.000Z
docs/03_angular_scattering.ipynb
maulanailyasy/miepythonscot
acc28b5f6e041feef83dbe3bceb711a95c1ae37a
[ "MIT" ]
16
2018-03-01T15:38:21.000Z
2022-03-13T16:06:43.000Z
docs/03_angular_scattering.ipynb
maulanailyasy/miepythonscot
acc28b5f6e041feef83dbe3bceb711a95c1ae37a
[ "MIT" ]
42
2017-09-27T14:42:59.000Z
2022-03-14T03:48:27.000Z
72.494393
27,480
0.756165
true
9,058
Qwen/Qwen-72B
1. YES 2. YES
0.782662
0.785309
0.614632
__label__eng_Latn
0.521011
0.266325
# Kernel PCA 1. Pick a kernel(polynomial, sigmoid, rbf) 2. Construct the normalized kernel matrix of the data(dimension: m by m) 3. Solve an eigenvalue problem 4. For any data point(new or old), we can represent it as linear combination form 아이리스 데이터를 불러온다. ```python import pandas as pd import numpy as np df = pd.read_csv( filepath_or_buffer='https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data', header=None, sep=',') df.columns = ['sepal_len', 'sepal_wid', 'petal_len', 'petal_wid', 'class'] df.head() ``` <div> <style> .dataframe thead tr:only-child th { text-align: right; } .dataframe thead th { text-align: left; } .dataframe tbody tr th { vertical-align: top; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>sepal_len</th> <th>sepal_wid</th> <th>petal_len</th> <th>petal_wid</th> <th>class</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>5.1</td> <td>3.5</td> <td>1.4</td> <td>0.2</td> <td>Iris-setosa</td> </tr> <tr> <th>1</th> <td>4.9</td> <td>3.0</td> <td>1.4</td> <td>0.2</td> <td>Iris-setosa</td> </tr> <tr> <th>2</th> <td>4.7</td> <td>3.2</td> <td>1.3</td> <td>0.2</td> <td>Iris-setosa</td> </tr> <tr> <th>3</th> <td>4.6</td> <td>3.1</td> <td>1.5</td> <td>0.2</td> <td>Iris-setosa</td> </tr> <tr> <th>4</th> <td>5.0</td> <td>3.6</td> <td>1.4</td> <td>0.2</td> <td>Iris-setosa</td> </tr> </tbody> </table> </div> X, y 변수 지정 ```python X = df.iloc[:,0:4].values y = df.iloc[:,4].values ``` 커널 행렬을 정의한다. 커널은 polynomial, sigmoid, rbf 중에 선택할 수 있다. \begin{align} \text{polynomial} & =\text{K}(x,y) = (x^{T}y+\text{C})^{d} \\[2.2ex] \text{sigmoid} & = \text{K}(x,y) = \text{tanh}(\sigma x^{T}y + \text{C}) \\[1.2ex] \text{rbf} & =\text{K}(x,y) = \text{exp}\left( -\frac{\|x-y\|_{2}^{2}}{2\sigma^{2}} \right) \end{align} ```python #define kernel matrix def kernel_matrix(x, kernel=None, d=3, sigma=None, C=1.): n = x.shape[0] if sigma is None: sigma = 1./n xxt = x.dot(x.T) if kernel == 'polynomial': return (C + xxt)**d elif kernel == 'sigmoid': return np.tanh(sigma*xxt + C) elif kernel == 'rbf': A = x.dot(x.T) B = np.repeat(np.diag(xxt), n).reshape(n, n) return np.exp(-(B.T - 2*A + B)/(2*sigma**2)) else: return xxt ``` ■ Pick a kernel(polynomial, sigmoid, rbf) 아이리스 데이터의 커널 행렬을 구한다. 커널은 'polynomial' 커널을 선택하였다. ```python K = kernel_matrix(X, kernel='polynomial', sigma=0.2) print(K.shape) print(K) ``` (150, 150) [[ 70240.512376 57022.169049 55002.062627 ..., 143301.984337 145034.127064 118298.461429] [ 57022.169049 46694.890801 44701.078149 ..., 121508.031177 122023.936 99961.946721] [ 55002.062627 44701.078149 43095.878216 ..., 112748.588191 114084.125 93082.856768] ..., [ 143301.984337 121508.031177 112748.588191 ..., 577801.395289 596522.410632 483182.234423] [ 145034.127064 122023.936 114084.125 ..., 596522.410632 623930.478625 501701.826536] [ 118298.461429 99961.946721 93082.856768 ..., 483182.234423 501701.826536 406210.479416]] ■ Construct the normalized kernel matrix of the data(dimension: m by m) 커널 공간에서 데이터를 표준화시키기 위해 커널 행렬을 다음의 식을 이용하여 Gram 행렬로 바꿔준다. \begin{align} \tilde{\mathbf{K}} & = (\mathbf{I}_{n}-\mathbf{1}_{n})\mathbf{K}(\mathbf{I}_{n}-\mathbf{1}_{n}) \\ & = \mathbf{K} - \mathbf{1}_{n}\mathbf{K} - \mathbf{K}\mathbf{1}_{n} + \mathbf{1}_{n}\mathbf{K}\mathbf{1}_{n} \end{align} ■ Solve an eigenvalue problem 이 Gram 행렬을 다음과 같이 eigenvalue decompotison 한다. \begin{align} \tilde{\mathbf{K}} \alpha_{k} = N\lambda_{k}\alpha_{k} \end{align} 여기서 $\alpha_{k}$는 $\tilde{\mathbf{K}}$의 k번째 eigenvector, $\lambda_{k}$는 k번째 eigenvalue이다. 
```python n = K.shape[0] one_mat = np.repeat(1/n, n**2).reshape(n, n) gram = K - one_mat.dot(K) - K.dot(one_mat) + one_mat.dot(K).dot(one_mat) eigen_vals, eigen_vecs = np.linalg.eigh(gram) print("eigen_values \n{}".format(eigen_vals)) print("eigen_vectors \n{}".format(eigen_vecs)) ``` eigen_values [ -1.20420106e-08 -8.64272447e-09 -2.34224220e-09 -2.00051167e-09 -1.89339337e-09 -1.68169782e-09 -1.50570616e-09 -1.49326186e-09 -1.47744975e-09 -1.28091771e-09 -1.16755032e-09 -1.08332575e-09 -1.06142545e-09 -9.81079191e-10 -9.48183243e-10 -9.01508103e-10 -8.33957433e-10 -8.30942910e-10 -7.47916370e-10 -7.17824460e-10 -6.61689413e-10 -6.08569925e-10 -5.70729836e-10 -5.49715396e-10 -5.46918322e-10 -5.33396977e-10 -5.22301752e-10 -4.25666251e-10 -4.23703813e-10 -4.23590482e-10 -4.03510709e-10 -3.57518088e-10 -3.44208594e-10 -3.08730750e-10 -3.06795513e-10 -2.73110651e-10 -2.53612837e-10 -2.42593238e-10 -2.19726129e-10 -1.92463245e-10 -1.86575711e-10 -1.71573164e-10 -1.69107087e-10 -1.50923431e-10 -1.46905157e-10 -1.23247965e-10 -1.22813866e-10 -1.19324900e-10 -9.31018563e-11 -8.86691229e-11 -7.15092201e-11 -5.91711476e-11 -3.94914255e-11 -3.41176300e-11 -2.89869843e-11 -2.88287827e-11 -2.07314158e-11 -1.33584338e-11 -6.60336798e-12 -3.72764149e-12 8.27028425e-14 7.86677457e-12 1.58781549e-11 1.70245602e-11 2.42489058e-11 3.01126784e-11 4.94594099e-11 4.96475002e-11 5.36618994e-11 6.32311880e-11 7.56496287e-11 8.18365041e-11 9.01339176e-11 9.49141425e-11 1.00053268e-10 1.32345959e-10 1.32999074e-10 1.39353706e-10 1.47156476e-10 1.47174997e-10 1.57691221e-10 1.94540678e-10 2.13247486e-10 2.35493867e-10 2.49369422e-10 2.57749330e-10 2.70880591e-10 3.18447237e-10 3.27474291e-10 3.54862128e-10 3.75382637e-10 4.60466052e-10 4.61994638e-10 4.63856842e-10 4.87938144e-10 5.32622662e-10 6.18010829e-10 6.40980070e-10 6.57980761e-10 6.90843093e-10 6.92323430e-10 7.63184548e-10 7.68246675e-10 8.50648841e-10 8.70693381e-10 9.77761769e-10 1.11500340e-09 1.16375242e-09 1.22755755e-09 1.32910241e-09 1.36834077e-09 1.60524301e-09 1.91965326e-09 2.12089784e-09 2.24295174e-09 1.08066228e-08 3.70374719e-04 1.33549165e-03 1.83050647e-03 2.82486878e-03 6.00514944e-03 1.25097245e-02 1.57454921e-02 2.94670123e-02 4.99359681e-02 6.08453692e-02 1.19408062e-01 2.01410768e-01 2.86397474e-01 3.91446560e-01 1.08862924e+00 2.63747714e+00 4.17428746e+00 8.70955411e+00 1.18862979e+01 2.91447061e+01 3.66966888e+01 4.53937230e+01 5.70213374e+01 1.13151941e+02 1.84413578e+02 3.49316751e+02 1.02163087e+03 1.89055155e+03 3.69278071e+03 5.81710949e+04 6.14470202e+04 2.13152207e+05 4.20875913e+05 1.51039336e+07] eigen_vectors [[ 0. 0. 0. ..., -0.00303367 0.05642649 0.08933896] [ 0.40183273 -0.60966562 -0.11060735 ..., 0.02194831 -0.01740526 0.09386439] [-0.05089924 -0.03123819 0.02480582 ..., -0.00335821 -0.01993893 0.09681559] ..., [-0.0384111 -0.07969503 0.08946659 ..., -0.04566547 -0.00631036 -0.06473875] [ 0.08523079 0.09000966 0.07000142 ..., -0.2319608 -0.01693831 -0.07137556] [-0.1767896 -0.03329307 0.07721842 ..., -0.08596493 -0.06621576 -0.03435337]] eigenvalue와 eigenvector로 pair를 정의하고 이 pair를 eigenvalue의 크기가 큰 것부터 작은 것까지 순서대로 정렬한다. ```python eigen_pairs = [(eigen_vals[i], eigen_vecs[:,i]) for i in range(len(eigen_vals))] eigen_pairs.sort(key = lambda x: x[0], reverse=True) ``` 위에서 구한 eigenvector와 scikit learn에 있는 KernelPCA에서 구한 eigenvector를 비교해 보았다. 
```python from sklearn.decomposition import KernelPCA kpca0 = KernelPCA(n_components=3, kernel='poly') kpca0.fit(X) ``` KernelPCA(alpha=1.0, coef0=1, copy_X=True, degree=3, eigen_solver='auto', fit_inverse_transform=False, gamma=None, kernel='poly', kernel_params=None, max_iter=None, n_components=3, n_jobs=1, random_state=None, remove_zero_eig=False, tol=0) eigenvalue가 큰 순서대로 세 개의 eigenvector만 뽑아 플랏을 그리면 다음과 같다. ```python kpca0_eigen_vecs = kpca0.alphas_ eigen_vecs_for_comparison = np.vstack([eigen_pairs[0][1], eigen_pairs[1][1], eigen_pairs[2][1]]).T ``` ```python import matplotlib.pyplot as plt %matplotlib inline #first eigenvector plt.subplot(2,1,1) plt.plot(kpca0_eigen_vecs[:,0], c='blue') plt.subplot(2,1,2) plt.plot(eigen_vecs_for_comparison[:,0], c='green') ``` ```python #Second eigenvector plt.subplot(2,1,1) plt.plot(kpca0_eigen_vecs[:,1], c='blue') plt.subplot(2,1,2) plt.plot(eigen_vecs_for_comparison[:,1], c='green') ``` ```python #Third eigenvector plt.subplot(2,1,1) plt.plot(kpca0_eigen_vecs[:,2], c='blue') plt.subplot(2,1,2) plt.plot(eigen_vecs_for_comparison[:,2], c='green') ``` ■ For any data point(new or old), we can represent it as linear combination form Kernel PCA의 결과는 다음의 식으로 구할 수 있다. \begin{align} y_{k}(x) = \phi(x)^{T}v_{k} = \sum_{i=1}^n \alpha_{ki}\tilde{\mathbf{K}}(x,x_i) \end{align} ```python # transform data n_components = 2 transformed_data = [] for j in range(n): loc = np.zeros(n_components) for k in range(n_components): inner_prod_sum = 0. for i in range(n): inner_prod_sum += eigen_pairs[k][1][i] * gram[j,i] loc[k] = inner_prod_sum/np.sqrt(eigen_pairs[k][0]) transformed_data.append(loc) transformed_data = np.array(transformed_data) ``` feature space에 사영(projection) 된 데이터 포인트를 플랏에 찍어보면 다음과 같다. ```python label = df['class'].unique() print(label) with plt.style.context("seaborn-darkgrid"): for l in zip(label): plt.scatter(transformed_data[y==l,0], transformed_data[y==l,1], label=l) plt.xlabel("PC 1") plt.ylabel("PC 2") plt.legend() plt.show() ``` scikit learn의 KernelPCA로부터 얻은 결과는 다음과 같다. ```python kpca0 = KernelPCA(n_components=2, kernel='poly') Y = kpca0.fit_transform(X) ``` ```python with plt.style.context("seaborn-darkgrid"): for l in label: plt.scatter(Y[y==l,0], Y[y==l,1],label=l) plt.xlabel("PC 1") plt.ylabel("PC 2") plt.legend() plt.show() ``` Kernel을 적용하지 않은 PCA에 의한 구분은 어떻게 나오는지 확인해보자. ```python from sklearn.decomposition import PCA pca = PCA(n_components=2) Y_ = pca.fit_transform(X) with plt.style.context("seaborn-darkgrid"): for l in label: plt.scatter(Y_[y==l,0], Y_[y==l,1],label=l) plt.xlabel("PC 1") plt.ylabel("PC 2") plt.legend() plt.show() ``` ```python ``` ```python ``` ```python ```
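The triple loop in the transformation cell above can be written as a single matrix product. The cell below is a vectorized sketch of the same computation; it reuses `gram`, `eigen_pairs`, `n_components`, and `transformed_data` from the cells above and introduces nothing new. The final check should print `True`.

```python
import numpy as np

# stack the leading eigenvectors as columns and collect their eigenvalues
alphas = np.column_stack([eigen_pairs[k][1] for k in range(n_components)])   # shape (n, k)
lambdas = np.array([eigen_pairs[k][0] for k in range(n_components)])         # shape (k,)

# y_k(x_j) = sum_i alpha_{ki} * gram[j, i] / sqrt(lambda_k), for all j and k at once
transformed_vec = gram.dot(alphas) / np.sqrt(lambdas)

# should agree with the loop-based projection up to floating point error
print(np.allclose(transformed_vec, transformed_data))
```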
11e30e29fa13ffd813ee03268930e72532bce661
191,949
ipynb
Jupyter Notebook
02 Kernel-based Learning/Tutorial 07 - Kernel PCA/Kernel_PCA_Chae_1.ipynb
KateYeon/Business-Anlaytics
454c1cb1b88499e94eeb5e8a7a32309afb7165e5
[ "MIT" ]
null
null
null
02 Kernel-based Learning/Tutorial 07 - Kernel PCA/Kernel_PCA_Chae_1.ipynb
KateYeon/Business-Anlaytics
454c1cb1b88499e94eeb5e8a7a32309afb7165e5
[ "MIT" ]
null
null
null
02 Kernel-based Learning/Tutorial 07 - Kernel PCA/Kernel_PCA_Chae_1.ipynb
KateYeon/Business-Anlaytics
454c1cb1b88499e94eeb5e8a7a32309afb7165e5
[ "MIT" ]
null
null
null
268.46014
39,743
0.903365
true
4,657
Qwen/Qwen-72B
1. YES 2. YES
0.885631
0.760651
0.673656
__label__kor_Hang
0.292154
0.40346
# Computational Astrophysics ## Elliptic PDEs. Examples --- ## Eduard Larrañaga Observatorio Astronómico Nacional\ Facultad de Ciencias\ Universidad Nacional de Colombia --- ### About this notebook In this notebook we present some of the techniques used to solve the Poisson equation. `A. Garcia. Numerical Methods for Physics. (1999). Chapter 6 - 7 ` --- ## Poisson Equation as an ODE Example. Homogenoeous Sphere Consider the Poisson equation \begin{equation} \nabla^2 \Phi = 4\pi G \rho\,\, \end{equation} with spherical symmetry as a second-order ODE \begin{equation} \frac{d^2 \Phi}{d r^2} + \frac{2}{r}\frac{d \Phi}{d r} = 4 \pi G\rho\,\,. \end{equation} This equation is reduced to the system \begin{equation} \begin{aligned} \frac{d \Phi}{dr} &= z\,\,,\\ \frac{dz}{dr} + \frac{2}{r} z &= 4 \pi G \rho\,\,. \end{aligned} \end{equation} As a first example, we will consider the simplified case of an homogeneous sphere (constant density) and we will use the following inner and outer boundary conditions, \begin{equation} \begin{aligned} \left . \frac{d \Phi}{dr} \right| _{r=0} & = 0\,\,,\\ \Phi(R_\mathrm{surface}) &= - \frac{GM}{R_\mathrm{surface}}\,\,. \end{aligned} \end{equation} This system can be integrated using the forward Euler method with an appropriate initial condition at $r=0$. Since the gravitational potential is determined only up to a constant, the problem can be integrated and later, the solution can be adjusted to match the boundary condition at $R_\mathrm{surface}$. We will consider a model with a total mass of 1 solar mass and a surface radius equal to the solar radius. ```python import numpy as np import matplotlib.pyplot as plt %matplotlib inline # Uniform sphere model. rho = const. M = 1.989e33 # Solar mass in g R = 6.955e10 # Solar radius in cm rho = M/((4./3.)*np.pi*R**3) # constant density (gr/cm^3) # Global constants (cgs units) G = 6.67e-8 # ODE definition def ODE(r, q0, rho0): ''' ------------------------------------------ ODE(r,q0) ------------------------------------------ ODEs system for the Poisson Equation Arguments: r: radius q0: numpy array with the initial condition data: q0[0] = Phi q0[1] = dPhi/dr ------------------------------------------ ''' Phi = q0[0] z = q0[1] f = np.zeros(2) f[0] = z f[1] = 4*np.pi*G*rho0 - 2*z/r return f def FEuler(h, r0, q0, rho0): ''' ------------------------------------------ FEuler(h, r0, q0, rho0) ------------------------------------------ Forward Euler's method for solving a ODEs system. Arguments: h: stepsize for the iteration r0: independent parameter initial value q0: numpy array with the initial values of the functions in the ODEs system ------------------------------------------ ''' f = ODE(r0, q0, rho0) q1 = q0 + h*f return q1 # Grid definition n = 1000 # steps in the grid r_0 = 1e-8 # Note that we don't begin at r_0=0 r_f = R # Constant stepsize defined by the number of steps in the grid h = (r_f - r_0)/n # Arrays to store the solution radius = np.linspace(r_0, r_f, n) # Radius information Q = np.zeros([2,n]) # Euler's Method information # Initial Conditions Q[0,0] = 0. # Initial guess for the value of the potential at the center Q[1,0] = 0. 
# Derivative of the potential at the center # Main loops for solving the problem for i in range(1,n): q0 = Q[:,i-1] qf = FEuler(h, radius[i-1], q0, rho) Q[:,i] = qf[:] plt.figure(figsize=(7,5)) plt.plot(radius,Q[0]) plt.xlabel(r'$r$') plt.ylabel(r'$\Phi(r)$') plt.show() ``` Correcting to match the boundary condition at the surface of the star gives: ```python # Adjust of the potential using the boundary condition at R correction = - G*M/R - Q[0,-1] Phi_in = Q[0,:] + correction plt.figure(figsize=(7,5)) plt.plot(radius,Phi_in, label=r'Corrected Solution') plt.plot(radius,Q[0],'--', label=r'Initial Solution') plt.xlabel(r'$r$') plt.ylabel(r'$\Phi(r)$') plt.legend() plt.show() ``` Since $\rho$ is a constant, it is possible to obtain an analytic solution of the Poisson equation, which is given by \begin{equation} \Phi(r) = \frac{2}{3}\pi G \rho (r^2 - 3 R^2)\,\,. \end{equation} A comparison of the numerical and the analytical solution gives ```python def AnalyticPotential(r): return (2*np.pi*G*rho/3)*(r**2 - 3*R**2) plt.figure(figsize=(7,5)) plt.plot(radius,Phi_in, label=r'Numerical Potential') plt.plot(radius,AnalyticPotential(radius),'k--', label=r'Analytic Potential') plt.xlabel(r'$r$') plt.ylabel(r'$\Phi(r)$') plt.legend() plt.show() ``` Finally, we can include the external Newtonian potential to complete the plot ```python def externalPhi(r): return -G*M/r extradius = np.linspace(R, 4*R, n) plt.figure(figsize=(7,5)) plt.plot(radius,Phi_in, label=r'Corrected Internal Potential') plt.plot(extradius,externalPhi(extradius),'r--', label=r'External Potential') plt.xlabel(r'$r$') plt.ylabel(r'$\Phi(r)$') plt.legend() plt.show() ``` --- ## Poisson Equation solved by the Matrix Method. Homogenoeous Sphere In the matrix method, the Poisson Equation is discretized with centered differences. Imposing the boundary condition $\frac{\partial \Phi}{ \partial r} = 0$ gives \begin{equation} \Phi_{-1} = \Phi_{0}\,\,. \end{equation} Then, the Poisson equation is written as linear system \begin{equation} J \mathbf{\Phi} = \mathbf{b}\,\,, \label{eq:pde_poisson2} \end{equation} where $\Phi$ = $(\Phi_0, \cdots , \Phi_{n-1})^T$ (for a grid with $n$ points labeled $0$ to $n-1$) and $\mathbf{b} = 4\pi G (\rho_0, \cdots , \rho_{n-1})^T$. The matrix $J$ has tri-diagonal form and can be explicitely given as 1. $i=j=0$: \begin{equation} J_{00} = - \frac{1}{(\Delta r)^2} - \frac{1}{r_0 \Delta r}\,\,, \end{equation} 2. $i=j$: \begin{equation} J_{ij} = \frac{-2}{(\Delta r)^2}\,\,, \end{equation} 3. $i+1=j$: \begin{equation} J_{ij} = \frac{1}{(\Delta r)^2} + \frac{1}{r_i \Delta r}\,\,, \end{equation} 4. $i-1=j$: \begin{equation} J_{ij} = \frac{1}{(\Delta r)^2} - \frac{1}{r_i \Delta r}\,\,. \end{equation} Now we solve the homogeneous sphere problem using this description. Once again, we obtain the numerical solution and correct it by adjusting the boundary condition at the surface \begin{equation} \Phi(R_\text{surface}) = - \frac{G M}{R_\text{surface}}. \end{equation} ```python import numpy as np import matplotlib.pyplot as plt %matplotlib inline # Uniform sphere model. rho = const. 
M = 1.989e33 # Solar mass in g R = 6.955e10 # Solar radius in cm rho = M/((4./3.)*np.pi*R**3) # constant density (gr/cm^3) # Global constants (cgs units) G = 6.67e-8 # Grid definition n = 1000 # steps in the grid r_0 = 1e-8 r_f = R # Constant stepsize defined by the number of steps in the grid dr = (r_f - r_0)/n # Arrays to store the solution radius = np.linspace(r_0, r_f, n) # Radius information b = (4*np.pi*G*rho)*np.ones(n) J = np.zeros([n,n]) # Definition of the matrix J J[0,0] = -1/dr**2 - 1/(dr*r_0) for i in range(1,n): J[i,i] = -2/dr**2 J[i,i-1] = 1/dr**2 - 1/(dr*radius[i]) for i in range(0,n-1): J[i,i+1] = 1/dr**2 + 1/(dr*radius[i]) # Forward Elimination d = np.zeros(n) w = np.zeros(n) d[0] = J[0,0] w[0] = b[0]/d[0] for i in range(1,n): d[i] = J[i,i] - J[i,i-1]*J[i-1,i]/d[i-1] w[i] = (b[i] - w[i-1]*J[i,i-1])/d[i] # Back-Substitution y = np.zeros(n) y[-1] = w[-1] for i in range(n-2,-1,-1): y[i] = w[i] - y[i+1]*J[i,i+1]/d[i] # Adjust of the potential using the boundary condition at R correction = - G*M/R - y[-1] Phi = y + correction plt.figure(figsize=(7,5)) plt.plot(radius, Phi, label=r'Corrected Solution') plt.plot(radius, y, "--", label=r'Initial Solution') plt.xlabel(r'$r$') plt.ylabel(r'$\Phi(r)$') plt.legend() plt.show() ``` Comparison with the analytic potential gives ```python def AnalyticPotential(r): return (2*np.pi*G*rho/3)*(r**2 - 3*R**2) plt.figure(figsize=(7,5)) plt.plot(radius,Phi, label=r'Numerical Potential') plt.plot(radius,AnalyticPotential(radius),'k--', label=r'Analytic Potential') plt.xlabel(r'$r$') plt.ylabel(r'$\Phi(r)$') plt.legend() plt.show() ``` Finally, including the external potential gives the plot ```python def externalPhi(r): return -G*M/r extradius = np.linspace(R, 4*R, n) plt.figure(figsize=(7,5)) plt.plot(radius,Phi_in, label=r'Corrected Internal Potential') plt.plot(extradius,externalPhi(extradius),'r--', label=r'External Potential') plt.xlabel(r'$r$') plt.ylabel(r'$\Phi(r)$') plt.legend() plt.show() ``` ```python ```
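As a quick sanity check of the hand-coded forward elimination and back-substitution (the Thomas algorithm), the same tridiagonal system can also be handed to NumPy's general dense solver. The cell below is only a sketch for verification and reuses `J`, `b` and `y` from the cells above; the relative difference should be at the level of round-off error.

```python
# Solve J y = b with NumPy's dense solver and compare with the tridiagonal solution above
y_direct = np.linalg.solve(J, b)
rel_diff = np.max(np.abs(y_direct - y))/np.max(np.abs(y))
print('maximum relative difference:', rel_diff)
```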
58a4c429a48de811a19d110e060bd8969252f88d
142,853
ipynb
Jupyter Notebook
18._PDE6/presentation/Example1.ipynb
ashcat2005/ComputationalAstrophysics
edda507d0d0a433dfd674a2451d750cf6ad3f1b7
[ "MIT" ]
2
2020-09-23T02:49:10.000Z
2021-08-21T06:04:39.000Z
18._PDE6/presentation/Example1.ipynb
ashcat2005/ComputationalAstrophysics
edda507d0d0a433dfd674a2451d750cf6ad3f1b7
[ "MIT" ]
null
null
null
18._PDE6/presentation/Example1.ipynb
ashcat2005/ComputationalAstrophysics
edda507d0d0a433dfd674a2451d750cf6ad3f1b7
[ "MIT" ]
2
2020-12-05T14:06:28.000Z
2022-01-25T04:51:58.000Z
251.501761
19,752
0.915781
true
2,763
Qwen/Qwen-72B
1. YES 2. YES
0.92523
0.875787
0.810304
__label__eng_Latn
0.838439
0.720941
<!-- dom:TITLE: FFM234, Klassisk fysik och vektorfält - Veckans tal -->
# FFM234, Klassisk fysik och vektorfält (Classical Physics and Vector Fields) - Problem of the Week
<!-- dom:AUTHOR: [Christian Forssén](http://fy.chalmers.se/subatom/tsp/), Institutionen för fysik, Chalmers -->
<!-- Author: -->
**[Christian Forssén](http://fy.chalmers.se/subatom/tsp/), Department of Physics, Chalmers**

Date: **Aug 10, 2019**

<!-- --- begin exercise --- -->

## Line integral along a complicated ellipse

Compute the integral

<!-- Equation labels as ordinary links -->
<div id="_auto1"></div>

$$
\begin{equation}
\oint_\Gamma \vec{F} \cdot \mbox{d}\vec{r},
\label{_auto1} \tag{1}
\end{equation}
$$

where

<!-- Equation labels as ordinary links -->
<div id="_auto2"></div>

$$
\begin{equation}
\vec{F} = \left[x^2-a\left(y+z\right)\right]\hat{x} + \left(y^2-az\right) \hat{y} + \left[z^2-a\left(x+y\right)\right] \hat{z},
\label{_auto2} \tag{2}
\end{equation}
$$

and $\Gamma$ is the curve formed by the intersection of the cylinder

<!-- Equation labels as ordinary links -->
<div id="_auto3"></div>

$$
\begin{equation}
\left(x-a\right)^2 +y^2 = a^2,\quad z \ge 0,
\label{_auto3} \tag{3}
\end{equation}
$$

and the sphere

<!-- Equation labels as ordinary links -->
<div id="_auto4"></div>

$$
\begin{equation}
x^2 + y^2 + z^2 = R^2, \quad R> 2a,
\label{_auto4} \tag{4}
\end{equation}
$$

where $a$ is a constant with the dimension of length.

<!-- --- begin answer of exercise --- -->
**Answer.**
$\pi a^3$.

<!-- --- end answer of exercise --- -->

<!-- --- begin solution of exercise --- -->
**Solution.**
We first note that the intersection of the cylinder and the sphere is an ellipse whose exact shape is somewhat complicated to pin down. Since the curve $\Gamma$ is closed, it is tempting to use Stokes' theorem, so we compute the curl

$$
\vec{\nabla} \times \vec{F} = \left|\begin{array}{ccc}
\hat{x} & \hat{y} & \hat{z} \\
\frac{\partial}{\partial x} & \frac{\partial}{\partial y} & \frac{\partial}{\partial z} \\
x^2-a\left(y+z\right) & y^2-az & z^2-a\left(x+y\right) \\
\end{array} \right|
\nonumber
$$

<!-- Equation labels as ordinary links -->
<div id="_auto5"></div>

$$
\begin{equation}
= \left(-a+a\right) \hat{x} + \left(-a+a\right) \hat{y} + a\hat{z} = a\hat{z}.
\label{_auto5} \tag{5}
\end{equation}
$$

The curl of $\vec{F}$ is therefore a purely vertical vector. We can now apply Stokes' theorem

<!-- Equation labels as ordinary links -->
<div id="_auto6"></div>

$$
\begin{equation}
\oint_\Gamma \vec{F} \cdot \mbox{d}\vec{r} = \int_S \vec{\nabla} \times \vec{F} \cdot \mbox{d}\vec{S}.
\label{_auto6} \tag{6}
\end{equation}
$$

Note that the surface must be oriented according to the right-hand rule. This means that if we traverse the curve $\Gamma$ counterclockwise, the normal $\hat{n}$ of $S$ must point upwards.

<!-- Equation labels as ordinary links -->
<div id="_auto7"></div>

$$
\begin{equation}
\int_S \vec{\nabla} \times \vec{F} \cdot \mbox{d}\vec{S} = \int_S a \hat{z} \cdot \hat{n} \mbox{d}S = a \int_S \hat{z} \cdot \hat{n} \mbox{d}S.
\label{_auto7} \tag{7}
\end{equation}
$$

The scalar product in the last integral means that we project the area $S$ onto a plane perpendicular to $\hat{z}$, that is, onto the $xy$-plane. In this plane the intersection is the cross-section of the cylinder, a circle of radius $a$, and the integral equals the circle's area $\pi a^2$. The integral therefore becomes

<!-- Equation labels as ordinary links -->
<div id="_auto8"></div>

$$
\begin{equation}
\oint_\Gamma \vec{F} \cdot \mbox{d}\vec{r} = a \pi a^2 = \pi a^3.
\label{_auto8} \tag{8}
\end{equation}
$$

<!-- --- end solution of exercise --- -->

<!-- --- end exercise --- -->
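The result can also be cross-checked numerically. The cell below is only an illustrative sketch that is not part of the original exercise: it parametrizes $\Gamma$ through the cylinder, picks the arbitrary values $a=1$ and $R=3$ (any $R > 2a$ works), and evaluates the line integral with the trapezoidal rule. The printed value should be close to $\pi a^3$.

```python
import numpy as np

a, R = 1.0, 3.0                      # arbitrary choice with R > 2a
t = np.linspace(0.0, 2*np.pi, 20001)

# parametrize Gamma via the cylinder: x = a(1 + cos t), y = a sin t, z from the sphere
x = a*(1.0 + np.cos(t))
y = a*np.sin(t)
z = np.sqrt(R**2 - x**2 - y**2)      # real everywhere since R > 2a

# tangent vector dr/dt
dxdt = -a*np.sin(t)
dydt = a*np.cos(t)
dzdt = a**2*np.sin(t)/z              # d/dt of sqrt(R^2 - 2a^2(1 + cos t))

# the given vector field evaluated along the curve
Fx = x**2 - a*(y + z)
Fy = y**2 - a*z
Fz = z**2 - a*(x + y)

integral = np.trapz(Fx*dxdt + Fy*dydt + Fz*dzdt, t)
print(integral, np.pi*a**3)          # both should be close to 3.14159...
```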
be3432570972e11cb4043a39c20f9ae176013ba9
6,282
ipynb
Jupyter Notebook
doc/src/veckanstal/04-integralsatser-veckanstal/04-integralsatser-veckanstal.ipynb
physics-chalmers/ffm234
b37a744e50604ba0956724883714ea3d87929f81
[ "CC0-1.0" ]
null
null
null
doc/src/veckanstal/04-integralsatser-veckanstal/04-integralsatser-veckanstal.ipynb
physics-chalmers/ffm234
b37a744e50604ba0956724883714ea3d87929f81
[ "CC0-1.0" ]
null
null
null
doc/src/veckanstal/04-integralsatser-veckanstal/04-integralsatser-veckanstal.ipynb
physics-chalmers/ffm234
b37a744e50604ba0956724883714ea3d87929f81
[ "CC0-1.0" ]
2
2020-08-06T06:03:59.000Z
2020-11-03T13:36:07.000Z
26.284519
311
0.485992
true
1,330
Qwen/Qwen-72B
1. YES 2. YES
0.743168
0.731059
0.543299
__label__swe_Latn
0.874248
0.100596
<table border="0"> <tr> <td> </td> <td> </td> </tr> </table> # Double Machine Learning: Use Cases and Examples Double Machine Learning (DML) is an algorithm that applies arbitrary machine learning methods to fit the treatment and response, then uses a linear model to predict the response residuals from the treatment residuals. The EconML SDK implements the following DML classes: * LinearDML: suitable for estimating heterogeneous treatment effects. * SparseLinearDML: suitable for the case when $W$ is high dimensional vector and both the first stage and second stage estimate are linear. In ths notebook, we show the performance of the DML on both synthetic data and observational data. **Notebook contents:** 1. Example usage with single continuous treatment synthetic data 2. Example usage with single binary treatment synthetic data 3. Example usage with multiple continuous treatment synthetic data 4. Example usage with single continuous treatment observational data 5. Example usage with multiple continuous treatment, multiple outcome observational data ```python import econml ``` ```python ## Ignore warnings import warnings warnings.filterwarnings('ignore') ``` ```python # Main imports from econml.dml import DML, LinearDML,SparseLinearDML # Helper imports import numpy as np from itertools import product from sklearn.linear_model import Lasso, LassoCV, LogisticRegression, LogisticRegressionCV,LinearRegression,MultiTaskElasticNet,MultiTaskElasticNetCV from sklearn.ensemble import RandomForestRegressor,RandomForestClassifier from sklearn.preprocessing import PolynomialFeatures import matplotlib.pyplot as plt import matplotlib from sklearn.model_selection import train_test_split %matplotlib inline ``` ## 1. Example Usage with Single Continuous Treatment Synthetic Data and Model Selection ### 1.1. DGP We use the data generating process (DGP) from [here](https://arxiv.org/abs/1806.03467). The DGP is described by the following equations: \begin{align} T =& \langle W, \beta\rangle + \eta, & \;\eta \sim \text{Uniform}(-1, 1)\\ Y =& T\cdot \theta(X) + \langle W, \gamma\rangle + \epsilon, &\; \epsilon \sim \text{Uniform}(-1, 1)\\ W \sim& \text{Normal}(0,\, I_{n_w})\\ X \sim& \text{Uniform}(0,1)^{n_x} \end{align} where $W$ is a matrix of high-dimensional confounders and $\beta, \gamma$ have high sparsity. For this DGP, \begin{align} \theta(x) = \exp(2\cdot x_1). \end{align} ```python # Treatment effect function def exp_te(x): return np.exp(2*x[0]) ``` ```python # DGP constants np.random.seed(123) n = 2000 n_w = 30 support_size = 5 n_x = 1 # Outcome support support_Y = np.random.choice(np.arange(n_w), size=support_size, replace=False) coefs_Y = np.random.uniform(0, 1, size=support_size) epsilon_sample = lambda n: np.random.uniform(-1, 1, size=n) # Treatment support support_T = support_Y coefs_T = np.random.uniform(0, 1, size=support_size) eta_sample = lambda n: np.random.uniform(-1, 1, size=n) # Generate controls, covariates, treatments and outcomes W = np.random.normal(0, 1, size=(n, n_w)) X = np.random.uniform(0, 1, size=(n, n_x)) # Heterogeneous treatment effects TE = np.array([exp_te(x_i) for x_i in X]) T = np.dot(W[:, support_T], coefs_T) + eta_sample(n) Y = TE * T + np.dot(W[:, support_Y], coefs_Y) + epsilon_sample(n) Y_train, Y_val, T_train, T_val, X_train, X_val, W_train, W_val = train_test_split(Y, T, X, W, test_size=.2) # Generate test data X_test = np.array(list(product(np.arange(0, 1, 0.01), repeat=n_x))) ``` ### 1.2. 
Train Estimator We train models in three different ways, and compare their performance. #### 1.2.1. Default Setting ```python est = LinearDML(model_y=RandomForestRegressor(), model_t=RandomForestRegressor(), random_state=123) est.fit(Y_train, T_train, X=X_train, W=W_train) te_pred = est.effect(X_test) ``` #### 1.2.2. Polynomial Features for Heterogeneity ```python est1 = SparseLinearDML(model_y=RandomForestRegressor(), model_t=RandomForestRegressor(), featurizer=PolynomialFeatures(degree=3), random_state=123) est1.fit(Y_train, T_train, X=X_train, W=W_train) te_pred1=est1.effect(X_test) ``` #### 1.2.3. Polynomial Features with regularization ```python est2 = DML(model_y=RandomForestRegressor(), model_t=RandomForestRegressor(), model_final=Lasso(alpha=0.1, fit_intercept=False), featurizer=PolynomialFeatures(degree=10), random_state=123) est2.fit(Y_train, T_train, X=X_train, W=W_train) te_pred2=est2.effect(X_test) ``` #### 1.2.4 Random Forest Final Stage ```python from econml.dml import ForestDML # One can replace model_y and model_t with any scikit-learn regressor and classifier correspondingly # as long as it accepts the sample_weight keyword argument at fit time. est3 = ForestDML(model_y=RandomForestRegressor(), model_t=RandomForestRegressor(), discrete_treatment=False, n_estimators=1000, subsample_fr=.8, min_samples_leaf=10, min_impurity_decrease=0.001, verbose=0, min_weight_fraction_leaf=.01) est3.fit(Y_train, T_train, X=X_train, W=W_train) te_pred3 = est3.effect(X_test) ``` ```python est3.feature_importances_ ``` array([1.]) ### 1.3. Performance Visualization ```python plt.figure(figsize=(10,6)) plt.plot(X_test, te_pred, label='DML default') plt.plot(X_test, te_pred1, label='DML polynomial degree=3') plt.plot(X_test, te_pred2, label='DML polynomial degree=10 with Lasso') plt.plot(X_test, te_pred3, label='ForestDML') expected_te = np.array([exp_te(x_i) for x_i in X_test]) plt.plot(X_test, expected_te, 'b--', label='True effect') plt.ylabel('Treatment Effect') plt.xlabel('x') plt.legend() plt.show() ``` ### 1.4. Model selection For the three different models above, we can use score function to estimate the final model performance. The score is the MSE of the final stage Y residual, which can be seen as a proxy of the MSE of treatment effect. ```python score={} score["DML default"] = est.score(Y_val, T_val, X_val, W_val) score["DML polynomial degree=2"] = est1.score(Y_val, T_val, X_val, W_val) score["DML polynomial degree=10 with Lasso"] = est2.score(Y_val, T_val, X_val, W_val) score["ForestDML"] = est3.score(Y_val, T_val, X_val, W_val) score ``` {'DML default': 1.815769478666336, 'DML polynomial degree=2': 1.6951945911574153, 'DML polynomial degree=10 with Lasso': 2.148988483212333, 'ForestDML': 1.74374363161912} ```python print("best model selected by score: ",min(score,key=lambda x: score.get(x))) ``` best model selected by score: DML polynomial degree=2 ```python mse_te={} mse_te["DML default"] = ((expected_te - te_pred)**2).mean() mse_te["DML polynomial degree=2"] = ((expected_te - te_pred1)**2).mean() mse_te["DML polynomial degree=10 with Lasso"] = ((expected_te - te_pred2)**2).mean() mse_te["ForestDML"] = ((expected_te - te_pred3)**2).mean() mse_te ``` {'DML default': 0.3565984526892961, 'DML polynomial degree=2': 0.2153049849895232, 'DML polynomial degree=10 with Lasso': 0.1966157424558891, 'ForestDML': 0.11973811235711414} ```python print("best model selected by MSE of TE: ", min(mse_te, key=lambda x: mse_te.get(x))) ``` best model selected by MSE of TE: ForestDML ## 2. 
Example Usage with Single Binary Treatment Synthetic Data and Confidence Intervals ### 2.1. DGP We use the following DGP: \begin{align} T \sim & \text{Bernoulli}\left(f(W)\right), &\; f(W)=\sigma(\langle W, \beta\rangle + \eta), \;\eta \sim \text{Uniform}(-1, 1)\\ Y = & T\cdot \theta(X) + \langle W, \gamma\rangle + \epsilon, & \; \epsilon \sim \text{Uniform}(-1, 1)\\ W \sim & \text{Normal}(0,\, I_{n_w}) & \\ X \sim & \text{Uniform}(0,\, 1)^{n_x} \end{align} where $W$ is a matrix of high-dimensional confounders, $\beta, \gamma$ have high sparsity and $\sigma$ is the sigmoid function. For this DGP, \begin{align} \theta(x) = \exp( 2\cdot x_1 ). \end{align} ```python # Treatment effect function def exp_te(x): return np.exp(2 * x[0])# DGP constants np.random.seed(123) n = 1000 n_w = 30 support_size = 5 n_x = 4 # Outcome support support_Y = np.random.choice(range(n_w), size=support_size, replace=False) coefs_Y = np.random.uniform(0, 1, size=support_size) epsilon_sample = lambda n:np.random.uniform(-1, 1, size=n) # Treatment support support_T = support_Y coefs_T = np.random.uniform(0, 1, size=support_size) eta_sample = lambda n: np.random.uniform(-1, 1, size=n) # Generate controls, covariates, treatments and outcomes W = np.random.normal(0, 1, size=(n, n_w)) X = np.random.uniform(0, 1, size=(n, n_x)) # Heterogeneous treatment effects TE = np.array([exp_te(x_i) for x_i in X]) # Define treatment log_odds = np.dot(W[:, support_T], coefs_T) + eta_sample(n) T_sigmoid = 1/(1 + np.exp(-log_odds)) T = np.array([np.random.binomial(1, p) for p in T_sigmoid]) # Define the outcome Y = TE * T + np.dot(W[:, support_Y], coefs_Y) + epsilon_sample(n) # get testing data X_test = np.random.uniform(0, 1, size=(n, n_x)) X_test[:, 0] = np.linspace(0, 1, n) ``` ### 2.2. Train Estimator ```python est = LinearDML(model_y=RandomForestRegressor(), model_t=RandomForestClassifier(min_samples_leaf=10), discrete_treatment=True, linear_first_stages=False, n_splits=6) est.fit(Y, T, X=X, W=W) te_pred = est.effect(X_test) lb, ub = est.effect_interval(X_test, alpha=0.01) ``` ```python est2 = SparseLinearDML(model_y=RandomForestRegressor(), model_t=RandomForestClassifier(min_samples_leaf=10), discrete_treatment=True, featurizer=PolynomialFeatures(degree=2), linear_first_stages=False, n_splits=6) est2.fit(Y, T, X=X, W=W) te_pred2 = est2.effect(X_test) lb2, ub2 = est2.effect_interval(X_test, alpha=0.01) ``` ```python est3 = ForestDML(model_y=RandomForestRegressor(), model_t=RandomForestClassifier(min_samples_leaf=10), discrete_treatment=True, n_estimators=1000, subsample_fr=.8, min_samples_leaf=10, min_impurity_decrease=0.001, verbose=0, min_weight_fraction_leaf=.01, n_crossfit_splits=6) est3.fit(Y, T, X=X, W=W) te_pred3 = est3.effect(X_test) lb3, ub3 = est3.effect_interval(X_test, alpha=0.01) ``` ```python est3.feature_importances_ ``` array([0.89352545, 0.03341642, 0.03505072, 0.0380074 ]) ### 2.3. 
Performance Visualization ```python expected_te=np.array([exp_te(x_i) for x_i in X_test]) plt.figure(figsize=(16,6)) plt.subplot(1, 3, 1) plt.plot(X_test[:, 0], te_pred, label='LinearDML', alpha=.6) plt.fill_between(X_test[:, 0], lb, ub, alpha=.4) plt.plot(X_test[:, 0], expected_te, 'b--', label='True effect') plt.ylabel('Treatment Effect') plt.xlabel('x') plt.legend() plt.subplot(1, 3, 2) plt.plot(X_test[:, 0], te_pred2, label='SparseLinearDML', alpha=.6) plt.fill_between(X_test[:, 0], lb2, ub2, alpha=.4) plt.plot(X_test[:, 0], expected_te, 'b--', label='True effect') plt.ylabel('Treatment Effect') plt.xlabel('x') plt.legend() plt.subplot(1, 3, 3) plt.plot(X_test[:, 0], te_pred3, label='ForestDML', alpha=.6) plt.fill_between(X_test[:, 0], lb3, ub3, alpha=.4) plt.plot(X_test[:, 0], expected_te, 'b--', label='True effect') plt.ylabel('Treatment Effect') plt.xlabel('x') plt.legend() plt.show() ``` ### 2.4. Other Inferences #### 2.4.1 Effect Inferences Other than confidence interval, we could also output other statistical inferences of the effect include standard error, z-test score and p value given each sample $X[i]$. ```python est.effect_inference(X_test[:10,]).summary_frame(alpha=0.1, value=0, decimals=3) ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>point_estimate</th> <th>stderr</th> <th>zstat</th> <th>pvalue</th> <th>ci_lower</th> <th>ci_upper</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>0.381</td> <td>0.146</td> <td>2.605</td> <td>0.009</td> <td>0.140</td> <td>0.621</td> </tr> <tr> <th>1</th> <td>0.409</td> <td>0.163</td> <td>2.512</td> <td>0.012</td> <td>0.141</td> <td>0.677</td> </tr> <tr> <th>2</th> <td>0.308</td> <td>0.170</td> <td>1.811</td> <td>0.070</td> <td>0.028</td> <td>0.588</td> </tr> <tr> <th>3</th> <td>0.416</td> <td>0.150</td> <td>2.780</td> <td>0.005</td> <td>0.170</td> <td>0.662</td> </tr> <tr> <th>4</th> <td>0.562</td> <td>0.152</td> <td>3.700</td> <td>0.000</td> <td>0.312</td> <td>0.811</td> </tr> <tr> <th>5</th> <td>0.538</td> <td>0.139</td> <td>3.879</td> <td>0.000</td> <td>0.310</td> <td>0.766</td> </tr> <tr> <th>6</th> <td>0.436</td> <td>0.124</td> <td>3.531</td> <td>0.000</td> <td>0.233</td> <td>0.639</td> </tr> <tr> <th>7</th> <td>0.589</td> <td>0.154</td> <td>3.825</td> <td>0.000</td> <td>0.336</td> <td>0.842</td> </tr> <tr> <th>8</th> <td>0.446</td> <td>0.126</td> <td>3.530</td> <td>0.000</td> <td>0.238</td> <td>0.653</td> </tr> <tr> <th>9</th> <td>0.487</td> <td>0.162</td> <td>3.011</td> <td>0.003</td> <td>0.221</td> <td>0.753</td> </tr> </tbody> </table> </div> We could also get the population inferences given sample $X$. 
```python est.effect_inference(X_test).population_summary(alpha=0.1, value=0, decimals=3, tol=0.001) ``` <table class="simpletable"> <caption>Uncertainty of Mean Point Estimate</caption> <tr> <th>mean_point</th> <th>stderr_mean</th> <th>zstat</th> <th>pvalue</th> <th>ci_mean_lower</th> <th>ci_mean_upper</th> </tr> <tr> <td>3.369</td> <td>0.14</td> <td>24.056</td> <td>0.0</td> <td>3.138</td> <td>3.599</td> </tr> </table> <table class="simpletable"> <caption>Distribution of Point Estimate</caption> <tr> <th>std_point</th> <th>pct_point_lower</th> <th>pct_point_upper</th> </tr> <tr> <td>1.724</td> <td>0.691</td> <td>6.067</td> </tr> </table> <table class="simpletable"> <caption>Total Variance of Point Estimate</caption> <tr> <th>stderr_point</th> <th>ci_point_lower</th> <th>ci_point_upper</th> </tr> <tr> <td>1.73</td> <td>0.682</td> <td>6.065</td> </tr> </table><br/><br/>Note: The stderr_mean is a conservative upper bound. #### 2.4.2 Coefficient and Intercept Inferences We could also get the coefficient and intercept inference for the final model when it's linear. ```python est.coef__inference().summary_frame() ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>point_estimate</th> <th>stderr</th> <th>zstat</th> <th>pvalue</th> <th>ci_lower</th> <th>ci_upper</th> </tr> </thead> <tbody> <tr> <th>X0</th> <td>5.958</td> <td>0.227</td> <td>26.211</td> <td>0.000</td> <td>5.584</td> <td>6.332</td> </tr> <tr> <th>X1</th> <td>-0.058</td> <td>0.216</td> <td>-0.267</td> <td>0.789</td> <td>-0.413</td> <td>0.297</td> </tr> <tr> <th>X2</th> <td>-0.326</td> <td>0.218</td> <td>-1.491</td> <td>0.136</td> <td>-0.685</td> <td>0.034</td> </tr> <tr> <th>X3</th> <td>0.217</td> <td>0.211</td> <td>1.029</td> <td>0.304</td> <td>-0.130</td> <td>0.564</td> </tr> </tbody> </table> </div> ```python est.intercept__inference().summary_frame() ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>point_estimate</th> <th>stderr</th> <th>zstat</th> <th>pvalue</th> <th>ci_lower</th> <th>ci_upper</th> </tr> </thead> <tbody> <tr> <th>intercept</th> <td>0.47</td> <td>0.218</td> <td>2.154</td> <td>0.031</td> <td>0.111</td> <td>0.829</td> </tr> </tbody> </table> </div> ```python est.summary() ``` <table class="simpletable"> <caption>Coefficient Results</caption> <tr> <td></td> <th>point_estimate</th> <th>stderr</th> <th>zstat</th> <th>pvalue</th> <th>ci_lower</th> <th>ci_upper</th> </tr> <tr> <th>X0</th> <td>5.958</td> <td>0.227</td> <td>26.211</td> <td>0.0</td> <td>5.584</td> <td>6.332</td> </tr> <tr> <th>X1</th> <td>-0.058</td> <td>0.216</td> <td>-0.267</td> <td>0.789</td> <td>-0.413</td> <td>0.297</td> </tr> <tr> <th>X2</th> <td>-0.326</td> <td>0.218</td> <td>-1.491</td> <td>0.136</td> <td>-0.685</td> <td>0.034</td> </tr> <tr> <th>X3</th> <td>0.217</td> <td>0.211</td> <td>1.029</td> <td>0.304</td> <td>-0.13</td> <td>0.564</td> </tr> </table> <table class="simpletable"> <caption>Intercept Results</caption> <tr> <td></td> <th>point_estimate</th> <th>stderr</th> <th>zstat</th> <th>pvalue</th> <th>ci_lower</th> <th>ci_upper</th> </tr> <tr> <th>intercept</th> <td>0.47</td> <td>0.218</td> 
<td>2.154</td> <td>0.031</td> <td>0.111</td> <td>0.829</td> </tr> </table> ## 3. Example Usage with Multiple Continuous Treatment Synthetic Data ### 3.1. DGP We use the data generating process (DGP) from [here](https://arxiv.org/abs/1806.03467), and modify the treatment to generate multiple treatments. The DGP is described by the following equations: \begin{align} T =& \langle W, \beta\rangle + \eta, & \;\eta \sim \text{Uniform}(-1, 1)\\ Y =& T\cdot \theta_{1}(X) + T^{2}\cdot \theta_{2}(X) + \langle W, \gamma\rangle + \epsilon, &\; \epsilon \sim \text{Uniform}(-1, 1)\\ W \sim& \text{Normal}(0,\, I_{n_w})\\ X \sim& \text{Uniform}(0,1)^{n_x} \end{align} where $W$ is a matrix of high-dimensional confounders and $\beta, \gamma$ have high sparsity. For this DGP, \begin{align} \theta_{1}(x) = \exp(2\cdot x_1)\\ \theta_{2}(x) = x_1^{2}\\ \end{align} ```python # DGP constants np.random.seed(123) n = 6000 n_w = 30 support_size = 5 n_x = 5 # Outcome support support_Y = np.random.choice(np.arange(n_w), size=support_size, replace=False) coefs_Y = np.random.uniform(0, 1, size=support_size) epsilon_sample = lambda n: np.random.uniform(-1, 1, size=n) # Treatment support support_T = support_Y coefs_T = np.random.uniform(0, 1, size=support_size) eta_sample = lambda n: np.random.uniform(-1, 1, size=n) # Generate controls, covariates, treatments and outcomes W = np.random.normal(0, 1, size=(n, n_w)) X = np.random.uniform(0, 1, size=(n, n_x)) # Heterogeneous treatment effects TE1 = np.array([x_i[0] for x_i in X]) TE2 = np.array([x_i[0]**2 for x_i in X]).flatten() T = np.dot(W[:, support_T], coefs_T) + eta_sample(n) Y = TE1 * T + TE2 * T**2 + np.dot(W[:, support_Y], coefs_Y) + epsilon_sample(n) # Generate test data X_test = np.random.uniform(0, 1, size=(100, n_x)) X_test[:, 0] = np.linspace(0, 1, 100) ``` ### 3.2. Train Estimator ```python from sklearn.ensemble import GradientBoostingRegressor from sklearn.multioutput import MultiOutputRegressor from sklearn.linear_model import ElasticNetCV est = LinearDML(model_y=GradientBoostingRegressor(n_estimators=100, max_depth=3, min_samples_leaf=20), model_t=MultiOutputRegressor(GradientBoostingRegressor(n_estimators=100, max_depth=3, min_samples_leaf=20)), featurizer=PolynomialFeatures(degree=2, include_bias=False), linear_first_stages=False, n_splits=5) ``` ```python T = T.reshape(-1,1) est.fit(Y, np.concatenate((T, T**2), axis=1), X=X, W=W) ``` <econml.dml.LinearDML at 0x1da9d64bf08> ```python te_pred = est.const_marginal_effect(X_test) ``` ```python lb, ub = est.const_marginal_effect_interval(X_test, alpha=0.01) ``` ### 3.3. Performance Visualization ```python plt.figure(figsize=(10,6)) plt.plot(X_test[:, 0], te_pred[:, 0], label='DML estimate1') plt.fill_between(X_test[:, 0], lb[:, 0], ub[:, 0], alpha=.4) plt.plot(X_test[:, 0], te_pred[:, 1], label='DML estimate2') plt.fill_between(X_test[:, 0], lb[:, 1], ub[:, 1], alpha=.4) expected_te1 = np.array([x_i[0] for x_i in X_test]) expected_te2=np.array([x_i[0]**2 for x_i in X_test]).flatten() plt.plot(X_test[:, 0], expected_te1, '--', label='True effect1') plt.plot(X_test[:, 0], expected_te2, '--', label='True effect2') plt.ylabel("Treatment Effect") plt.xlabel("x") plt.legend() plt.show() ``` ## 4. Example Usage with Single Continuous Treatment Observational Data We applied our technique to Dominick’s dataset, a popular historical dataset of store-level orange juice prices and sales provided by University of Chicago Booth School of Business. 
The dataset is comprised of a large number of covariates $W$, but researchers might only be interested in learning the elasticity of demand as a function of a few variables $x$ such as income or education. We applied the `LinearDML` to estimate orange juice price elasticity as a function of income, and our results, unveil the natural phenomenon that lower income consumers are more price-sensitive. ### 4.1. Data ```python # A few more imports import os import pandas as pd import urllib.request from sklearn.preprocessing import StandardScaler ``` ```python # Import the data file_name = "oj_large.csv" if not os.path.isfile(file_name): print("Downloading file (this might take a few seconds)...") urllib.request.urlretrieve("https://msalicedatapublic.blob.core.windows.net/datasets/OrangeJuice/oj_large.csv", file_name) oj_data = pd.read_csv(file_name) ``` ```python oj_data.head() ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>store</th> <th>brand</th> <th>week</th> <th>logmove</th> <th>feat</th> <th>price</th> <th>AGE60</th> <th>EDUC</th> <th>ETHNIC</th> <th>INCOME</th> <th>HHLARGE</th> <th>WORKWOM</th> <th>HVAL150</th> <th>SSTRDIST</th> <th>SSTRVOL</th> <th>CPDIST5</th> <th>CPWVOL5</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>2</td> <td>tropicana</td> <td>40</td> <td>9.018695</td> <td>0</td> <td>3.87</td> <td>0.232865</td> <td>0.248935</td> <td>0.11428</td> <td>10.553205</td> <td>0.103953</td> <td>0.303585</td> <td>0.463887</td> <td>2.110122</td> <td>1.142857</td> <td>1.92728</td> <td>0.376927</td> </tr> <tr> <th>1</th> <td>2</td> <td>tropicana</td> <td>46</td> <td>8.723231</td> <td>0</td> <td>3.87</td> <td>0.232865</td> <td>0.248935</td> <td>0.11428</td> <td>10.553205</td> <td>0.103953</td> <td>0.303585</td> <td>0.463887</td> <td>2.110122</td> <td>1.142857</td> <td>1.92728</td> <td>0.376927</td> </tr> <tr> <th>2</th> <td>2</td> <td>tropicana</td> <td>47</td> <td>8.253228</td> <td>0</td> <td>3.87</td> <td>0.232865</td> <td>0.248935</td> <td>0.11428</td> <td>10.553205</td> <td>0.103953</td> <td>0.303585</td> <td>0.463887</td> <td>2.110122</td> <td>1.142857</td> <td>1.92728</td> <td>0.376927</td> </tr> <tr> <th>3</th> <td>2</td> <td>tropicana</td> <td>48</td> <td>8.987197</td> <td>0</td> <td>3.87</td> <td>0.232865</td> <td>0.248935</td> <td>0.11428</td> <td>10.553205</td> <td>0.103953</td> <td>0.303585</td> <td>0.463887</td> <td>2.110122</td> <td>1.142857</td> <td>1.92728</td> <td>0.376927</td> </tr> <tr> <th>4</th> <td>2</td> <td>tropicana</td> <td>50</td> <td>9.093357</td> <td>0</td> <td>3.87</td> <td>0.232865</td> <td>0.248935</td> <td>0.11428</td> <td>10.553205</td> <td>0.103953</td> <td>0.303585</td> <td>0.463887</td> <td>2.110122</td> <td>1.142857</td> <td>1.92728</td> <td>0.376927</td> </tr> </tbody> </table> </div> ```python # Prepare data Y = oj_data['logmove'].values T = np.log(oj_data["price"]).values scaler = StandardScaler() W1 = scaler.fit_transform(oj_data[[c for c in oj_data.columns if c not in ['price', 'logmove', 'brand', 'week', 'store','INCOME']]].values) W2 = pd.get_dummies(oj_data[['brand']]).values W = np.concatenate([W1, W2], axis=1) X=scaler.fit_transform(oj_data[['INCOME']].values) ``` ```python ## Generate test data min_income = -1 max_income = 1 delta = (1 - (-1)) / 100 X_test = np.arange(min_income, max_income + delta - 0.001, 
delta).reshape(-1,1) ``` ### 4.2. Train Estimator ```python est = LinearDML(model_y=RandomForestRegressor(),model_t=RandomForestRegressor()) est.fit(Y, T, X=X, W=W) te_pred=est.effect(X_test) ``` ### 4.3. Performance Visualization ```python # Plot Oranje Juice elasticity as a function of income plt.figure(figsize=(10,6)) plt.plot(X_test, te_pred, label="OJ Elasticity") plt.xlabel(r'Scale(Income)') plt.ylabel('Orange Juice Elasticity') plt.legend() plt.title("Orange Juice Elasticity vs Income") plt.show() ``` ### 4.4. Confidence Intervals We can also get confidence intervals around our predictions by passing an additional `inference` argument to `fit`. All estimators support bootstrap intervals, which involves refitting the same estimator repeatedly on subsamples of the original data, but `LinearDML` also supports a more efficient approach which can be achieved by leaving inference set to the default of `'auto'` or by explicitly passing `inference='statsmodels'`. ```python est.fit(Y, T, X=X, W=W) te_pred=est.effect(X_test) te_pred_interval = est.const_marginal_effect_interval(X_test, alpha=0.02) ``` ```python # Plot Oranje Juice elasticity as a function of income plt.figure(figsize=(10,6)) plt.plot(X_test.flatten(), te_pred, label="OJ Elasticity") plt.fill_between(X_test.flatten(), te_pred_interval[0], te_pred_interval[1], alpha=.5, label="1-99% CI") plt.xlabel(r'Scale(Income)') plt.ylabel('Orange Juice Elasticity') plt.title("Orange Juice Elasticity vs Income") plt.legend() plt.show() ``` ## 5. Example Usage with Multiple Continuous Treatment, Multiple Outcome Observational Data We use the same data, but in this case, we want to fit the demand of multiple brand as a function of the price of each one of them, i.e. fit the matrix of cross price elasticities. It can be done, by simply setting as $Y$ to be the vector of demands and $T$ to be the vector of prices. Then we can obtain the matrix of cross price elasticities. \begin{align} Y=[Logmove_{tropicana},Logmove_{minute.maid},Logmove_{dominicks}] \\ T=[Logprice_{tropicana},Logprice_{minute.maid},Logprice_{dominicks}] \\ \end{align} ### 5.1. Data ```python # Import the data oj_data = pd.read_csv(file_name) ``` ```python # Prepare data oj_data['price'] = np.log(oj_data["price"]) # Transform dataset. # For each store in each week, get a vector of logmove and a vector of logprice for each brand. # Other features are store specific, will be the same for all brands. groupbylist = ["store", "week", "AGE60", "EDUC", "ETHNIC", "INCOME", "HHLARGE", "WORKWOM", "HVAL150", "SSTRDIST", "SSTRVOL", "CPDIST5", "CPWVOL5"] oj_data1 = pd.pivot_table(oj_data,index=groupbylist, columns=oj_data.groupby(groupbylist).cumcount(), values=['logmove', 'price'], aggfunc='sum').reset_index() oj_data1.columns = oj_data1.columns.map('{0[0]}{0[1]}'.format) oj_data1 = oj_data1.rename(index=str, columns={"logmove0": "logmove_T", "logmove1": "logmove_M", "logmove2":"logmove_D", "price0":"price_T", "price1":"price_M", "price2":"price_D"}) # Define Y,T,X,W Y = oj_data1[['logmove_T', "logmove_M", "logmove_D"]].values T = oj_data1[['price_T', "price_M", "price_D"]].values scaler = StandardScaler() W=scaler.fit_transform(oj_data1[[c for c in groupbylist if c not in ['week', 'store', 'INCOME']]].values) X=scaler.fit_transform(oj_data1[['INCOME']].values) ``` ```python ## Generate test data min_income = -1 max_income = 1 delta = (1 - (-1)) / 100 X_test = np.arange(min_income, max_income + delta - 0.001, delta).reshape(-1, 1) ``` ### 5.2. 
Train Estimator ```python est = LinearDML(model_y=MultiTaskElasticNetCV(cv=3, tol=1, selection='random'), model_t=MultiTaskElasticNetCV(cv=3), featurizer=PolynomialFeatures(1), linear_first_stages=True) est.fit(Y, T, X=X, W=W) te_pred = est.const_marginal_effect(X_test) ``` ### 5.3. Performance Visualization ```python # Plot Oranje Juice elasticity as a function of income plt.figure(figsize=(18, 10)) dic={0:"Tropicana", 1:"Minute.maid", 2:"Dominicks"} for i in range(3): for j in range(3): plt.subplot(3, 3, 3 * i + j + 1) plt.plot(X_test, te_pred[:, i, j], color="C{}".format(str(3 * i + j)), label="OJ Elasticity {} to {}".format(dic[j], dic[i])) plt.xlabel(r'Scale(Income)') plt.ylabel('Orange Juice Elasticity') plt.legend() plt.suptitle("Orange Juice Elasticity vs Income", fontsize=16) plt.show() ``` **Findings**: Look at the diagonal of the matrix, the TE of OJ prices are always negative to the sales across all the brand, but people with higher income are less price-sensitive. By contrast, for the non-diagonal of the matrix, the TE of prices for other brands are always positive to the sales for that brand, the TE is affected by income in different ways for different competitors. In addition, compare to previous plot, the negative TE of OJ prices for each brand are all larger than the TE considering all brand together, which means we would have underestimated the effect of price changes on demand. ### 5.4. Confidence Intervals ```python est.fit(Y, T, X=X, W=W) te_pred = est.const_marginal_effect(X_test) te_pred_interval = est.const_marginal_effect_interval(X_test, alpha=0.02) ``` ```python # Plot Oranje Juice elasticity as a function of income plt.figure(figsize=(18, 10)) dic={0:"Tropicana", 1:"Minute.maid", 2:"Dominicks"} for i in range(3): for j in range(3): plt.subplot(3, 3, 3 * i + j + 1) plt.plot(X_test, te_pred[:, i, j], color="C{}".format(str(3 * i + j)), label="OJ Elasticity {} to {}".format(dic[j], dic[i])) plt.fill_between(X_test.flatten(), te_pred_interval[0][:, i, j],te_pred_interval[1][:, i,j], color="C{}".format(str(3*i+j)),alpha=.5, label="1-99% CI") plt.xlabel(r'Scale(Income)') plt.ylabel('Orange Juice Elasticity') plt.legend() plt.suptitle("Orange Juice Elasticity vs Income",fontsize=16) plt.show() ```
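To summarize the heterogeneous estimates in a single table, the estimated $3\times 3$ cross-price elasticity matrix can be averaged over the test income grid. The cell below is only a sketch using quantities already computed above: `te_pred` has shape `(n_test, 3, 3)`, with the outcome (demand) index first and the treatment (price) index second, in the order Tropicana, Minute.maid, Dominicks.

```python
# Average each elasticity over the income grid; rows = demand, columns = price
avg_elasticity = pd.DataFrame(te_pred.mean(axis=0),
                              index=['logmove_T', 'logmove_M', 'logmove_D'],
                              columns=['price_T', 'price_M', 'price_D'])
avg_elasticity.round(3)
```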
0e52d53876944fc81044cd32417c4a95da7e1d44
643,002
ipynb
Jupyter Notebook
notebooks/Double Machine Learning Examples.ipynb
jaronowitz/EconML
3df959d120d429537a62ebfb22a84b9b28530457
[ "MIT" ]
1
2021-08-24T14:22:45.000Z
2021-08-24T14:22:45.000Z
notebooks/Double Machine Learning Examples.ipynb
jaronowitz/EconML
3df959d120d429537a62ebfb22a84b9b28530457
[ "MIT" ]
null
null
null
notebooks/Double Machine Learning Examples.ipynb
jaronowitz/EconML
3df959d120d429537a62ebfb22a84b9b28530457
[ "MIT" ]
null
null
null
351.751641
140,720
0.923397
true
10,411
Qwen/Qwen-72B
1. YES 2. YES
0.72487
0.692642
0.502076
__label__eng_Latn
0.522629
0.004819
# CMPE 547 Bayesian Statistics and Machine Learning # CMPE 548 Monte Carlo Methods # SWE 546 Data Mining # SWE 582 Machine Learning for Data Analytics ###Supplementary Notes ###Boğaziçi University, Dept. of Computer Engineering ###Instructor: A. Taylan Cemgil ### Notebook Summary * We review the notation and parametrization of densities of some basic distributions that are often encountered * We show how random numbers are generated using python libraries * We show some basic visualization methods such as displaying histograms # Sampling From Basic Distributions Sampling from basic distribution is easy using the numpy library. Formally we will write $x \sim p(X|\theta)$ where $\theta$ is the _parameter vector_, $p(X| \theta)$ denotes the _density_ of the random variable $X$ and $x$ is a _realization_, a particular draw from the density $p$. The following distributions are building blocks from which more complicated processes may be constructed. It is important to have a basic understanding of these distributions. ### Continuous Univariate * Uniform $\mathcal{U}$ * Univariate Gaussian $\mathcal{N}$ * Gamma $\mathcal{G}$ * Inverse Gamma $\mathcal{IG}$ * Beta $\mathcal{B}$ ### Discrete * Poisson $\mathcal{P}$ * Bernoulli $\mathcal{BE}$ * Binomial $\mathcal{BI}$ * Categorical $\mathcal{M}$ * Multinomial $\mathcal{M}$ ### Continuous Multivariate (todo) * Multivariate Gaussian $\mathcal{N}$ * Dirichlet $\mathcal{D}$ ### Continuous Matrix-variate (todo) * Wishart $\mathcal{W}$ * Inverse Wishart $\mathcal{IW}$ * Matrix Gaussian $\mathcal{N}$ ## Sampling from standard uniform $\mathcal{U}(0,1)$ For generating a single random number in the interval $[0, 1)$ we use the notation $$ x_1 \sim \mathcal{U}(x; 0,1) $$ In python, this is implemented as ```python import numpy as np x_1 = np.random.rand() print(x_1) ``` 0.9944312250274825 We can also generate an array of realizations $x_i$ for $i=1 \dots N$, $$ x_i \sim \mathcal{U}(x; 0,1) $$ ```python import numpy as np N = 5 x = np.random.rand(N) print(x) ``` [ 0.21176772 0.20066043 0.26891193 0.5395703 0.62822017] For large $N$, it is more informative to display an histogram of generated data: ```python %matplotlib inline import matplotlib.pyplot as plt # Number of realizations N = 1000 x = np.random.rand(N) plt.hist(x, bins=20) plt.xlabel('x') plt.ylabel('Count') plt.show() ``` $\newcommand{\indi}[1]{\left[{#1}\right]}$ $\newcommand{\E}[1]{\left\langle{#1}\right\rangle}$ We know that the density of the uniform distribution $\mathcal{U}(0,1)$ is $$ \mathcal{U}(x; 0,1) = \left\{ \begin{array}{cc} 1 & 0 \leq x < 1 \\ 0 & \text{otherwise} \end{array} \right. $$ or using the indicator notation $$ \mathcal{U}(x; 0,1) = \left[ x \in [0,1) \right] $$ #### Indicator function To write and manipulate discrete probability distributions in algebraic expression, the *indicator* function is useful: $$ \left[x\right] = \left\{ \begin{array}{cc} 1 & x\;\;\text{is true} \\ 0 & x\;\;\text{is false} \end{array} \right.$$ This notation is also known as the Iverson's convention. #### How to plot the density and the histogram onto the same plot? In one dimension, the histogram is simply the count of the data points that fall to a given interval. Mathematically, we have $j = 1\dots J$ intervals where $B_j = [b_{j-1}, b_j]$ and $b_j$ are bin boundries such that $b_0 < b_1 < \dots < b_J$. $$ h(x) = \sum_{j=1}^J \sum_{i=1}^N \indi{x \in B_j} \indi{x_i \in B_j} $$ This expression, at the first sight looks somewhat more complicated than it really is. 
The indicator product just encodes the logical condition $x \in B_j$ __and__ $x_i \in B_j$. The sum over $j$ is just a convenient way of writing the result instead of specifying the histogram as a case by case basis for each bin. It is important to get used to such nested sums. When the density $p(x)$ is given, the probability that a single realization is in bin $B_j$ is given by $$ \Pr\left\{x \in B_j\right\} = \int_{B_j} dx p(x) = \int_{-\infty}^{\infty} dx \indi{x\in B_j} p(x) = \E{\indi{x\in B_j}} $$ In other words, the probability is just the expectation of the indicator. The histogram can be written as follows $$ h(x) = \sum_{j=1}^J \indi{x \in B_j} \sum_{i=1}^N \indi{x_i \in B_j} $$ We define the counts at each bin as $$ c_j \equiv \sum_{i=1}^N \indi{x_i \in B_j} $$ If all bins have the same width, i.e., $b_j - b_{j-1} = \Delta$ for $\forall j$, and if $\Delta$ is sufficiently small we have $$ \E{\indi{x\in B_j}} \approx p(b_{j-1}+\Delta/2) \Delta $$ i.e., the probability is roughly the interval width times the density evaluated at the middle point of the bin. The expected value of the counts is $$ \E{c_j} = \sum_{i=1}^N \E{\indi{x_i \in B_j}} \approx N \Delta p(b_{j-1}+\Delta/2) $$ Hence, the density should be roughly $$ p(b_{j-1}+\Delta/2) \approx \frac{\E{c_j} }{N \Delta} $$ The $N$ term is intuitive but the $\Delta$ term is easily forgotten. When plotting the histograms on top of the corresponding densities, we should scale the normalized histogram ${ c_j }/{N}$ by dividing by $\Delta$. ```python N = 1000 # Bin width Delta = 0.02 # Bin edges b = np.arange(0 ,1+Delta, Delta) # Evaluate the density g = np.ones(b.size) # Draw the samples u = np.random.rand(N) counts,edges = np.histogram(u, bins=b) plt.bar(b[:-1], counts/N/Delta, width=Delta) plt.hold(True) plt.plot(b, g, linewidth=3, color='y') plt.hold(False) plt.show() ``` The __plt.hist__ function (calling __np.histogram__) can do this calculation automatically if the option normed=True. However, when the grid is not uniform, it is better to write your own code to be sure what is going on. ```python N = 1000 Delta = 0.05 b = np.arange(0 ,1+Delta, Delta) g = np.ones(b.size) u = np.random.rand(N) plt.hist(u, bins=b, normed=True) plt.hold(True) plt.plot(b, g, linewidth=3, color='y') plt.hold(False) plt.show() ``` # Sampling from Continuous Univariate Distributions * Uniform $\mathcal{U}$ * Univariate Gaussian $\mathcal{N}$ $${\cal N}(x;\mu, v) = \frac{1}{\sqrt{2\pi v}} \exp\left(-\frac12 \frac{(x - \mu)^2}{v}\right) $$ * Gamma $\mathcal{G}$ $${\cal G}(\lambda; a, b) = \frac{b^a \lambda^{a-1}}{\Gamma(a)} \exp( - b \lambda) = \exp((a-1)\log\lambda - b \lambda - \log\Gamma(a) + a\log b) $$ * Inverse Gamma $\mathcal{IG}$ $${\cal IG}(v; \alpha, \beta) = \frac{\beta^\alpha}{\Gamma(\alpha) v^{\alpha+1}} \exp(- \frac{\beta}{v}) = \exp(-(\alpha+1)\log v - \beta /v - \log\Gamma(\alpha) + \alpha \log \beta)$$ * Beta $\mathcal{B}$ $${\cal B}(r; \alpha, \beta) = \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha) \Gamma(\beta) } r^{\alpha-1} (1-r)^{\beta-1}$$ We will illustrate two alternative ways for sampling from continuous distributions. - The first method has minimal dependence on the numpy and scipy libraries. This is initially the preferred method. Only random variable generators and the $\log \Gamma(x)$ (__gammaln__) function is used and nothing more. - The second method uses scipy. This is a lot more practical but requires knowing more about the internals of the library. 
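As a quick sanity check that the two routes agree, the sketch below (an added illustration, not part of the original notes) evaluates a hand-coded gamma log-density in the shape/rate parametrization ${\cal G}(x; a, b)$ used in these notes (the same formula that appears later) and compares it against `scipy.stats.gamma`, which parametrizes the distribution by shape and scale with `scale = 1/b`.

```python
# Added illustration: hand-coded gamma log-density (shape a, rate b) vs scipy.stats
import numpy as np
from scipy.special import gammaln
import scipy.stats as scs

def log_gamma_pdf(x, a, b):
    # G(x; a, b) = b^a x^(a-1) exp(-b x) / Gamma(a)
    return (a - 1)*np.log(x) - b*x - gammaln(a) + a*np.log(b)

a, b = 2.5, 4.0
x = np.linspace(0.05, 3, 10)

manual = np.exp(log_gamma_pdf(x, a, b))
scipy_pdf = scs.gamma(a, scale=1/b).pdf(x)

print(np.allclose(manual, scipy_pdf))   # True: both parametrizations agree
```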
### Aside: Gamma function $\Gamma(x)$ The gamma function $\Gamma(x)$ is the (generalized) factorial. - Defined by $$\Gamma(x) = \int_0^{\infty} t^{x-1} e^{-t}\, dt$$ - For integer $x$, $\Gamma(x) = (x-1)!$. Remember that for positive integers $x$, the factorial function can be defined recursively $x! = (x-1)! x $ for $x\geq 1$. - For real $x>1$, the gamma function satisfies $$ \Gamma(x+1) = \Gamma(x) x $$ - Interestingly, we have $$\Gamma(1/2) = \sqrt{\pi}$$ - Hence $$\Gamma(3/2) = \Gamma(1/2 + 1) = \Gamma(1/2) (1/2) = \sqrt{\pi}/2$$ - It is available in many numerical computation packages, in python it is available as __scipy.special.gamma__. - To compute $\log \Gamma(x)$, you should always use the implementation as __scipy.special.gammaln__. The gamma function blows up super-exponentially so numerically you should never evaluate $\log \Gamma(x)$ as ```python import numpy as np import scipy.special as sps np.log(sps.gamma(x)) # Don't sps.gammaln(x) # Do ``` - A related function is the Beta function $$B(x,y) = \int_0^{1} t^{x-1} (1-t)^{y-1}\, dt$$ - We have $$B(x,y) = \frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}$$ - Both $\Gamma(x)$ and $B(x)$ pop up as normalizing constant of the gamma and beta distributions. #### Derivatives of $\Gamma(x)$ - <span style="color:red"> </span> The derivatives of $\log \Gamma(x)$ pop up quite often when fitting densities. The first derivative has a specific name, often called the digamma function or the psi function. $$ \Psi(x) \equiv \frac{d}{d x} \log \Gamma(x) $$ - It is available as __scipy.special.digamma__ or __scipy.special.psi__ - Higher order derivatives of the $\log \Gamma(x)$ function (including digamma itself) are available as __scipy.special.polygamma__ ```python import numpy as np import scipy.special as sps x = np.arange(0.1,5,0.01) f = sps.gammaln(x) df = sps.psi(x) # First derivative of the digamma function ddf = sps.polygamma(1,x) # sps.psi(x) == sps.polygamma(0,x) plt.figure(figsize=(8,10)) plt.subplot(3,1,1) plt.plot(x, f, 'r') plt.xlabel('x') plt.ylabel('log Gamma(x)') plt.subplot(3,1,2) plt.grid(True) plt.plot(x, df, 'b') plt.xlabel('x') plt.ylabel('Psi(x)') plt.subplot(3,1,3) plt.plot(x, ddf, 'k') plt.xlabel('x') plt.ylabel('Psi\'(x)') plt.show() ``` #### Stirling's approximation An important approximation to the factorial is the famous Stirling's approximation \begin{align} n! 
\sim \sqrt{2 \pi n}\left(\frac{n}{e}\right)^n \end{align} \begin{align} \log \Gamma(x+1) \approx \frac{1}{2}\log(2 \pi) + x \log(x) - \frac{1}{2} \log(x) \end{align} ## Sampling using numpy.random ```python %matplotlib inline import matplotlib.pyplot as plt import numpy as np from scipy.special import gammaln def plot_histogram_and_density(N, c, edges, dx, g, title='Put a title'): ''' N : Number of Datapoints c : Counts, as obtained from np.histogram function edges : bin edges, as obtained from np.histogram dx : The bin width g : Density evaluated at the points given in edges title : for the plot ''' plt.bar(edges[:-1], c/N/dx, width=dx) plt.hold(True) plt.plot(edges, g, linewidth=3, color='y') plt.hold(False) plt.title(title) def log_gaussian_pdf(x, mu, V): return -0.5*np.log(2*np.pi*V) -0.5*(x-mu)**2/V def log_gamma_pdf(x, a, b): return (a-1)*np.log(x) - b*x - gammaln(a) + a*np.log(b) def log_invgamma_pdf(x, a, b): return -(a+1)*np.log(x) - b/x - gammaln(a) + a*np.log(b) def log_beta_pdf(x, a, b): return - gammaln(a) - gammaln(b) + gammaln(a+b) + np.log(x)*(a-1) + np.log(1-x)*(b-1) N = 1000 # Univariate Gaussian mu = 2 # mean V = 1.2 # Variance x_normal = np.random.normal(mu, np.sqrt(V), N) dx = 10*np.sqrt(V)/50 x = np.arange(mu-5*np.sqrt(V) ,mu+5*np.sqrt(V),dx) g = np.exp(log_gaussian_pdf(x, mu, V)) #g = scs.norm.pdf(x, loc=mu, scale=np.sqrt(V)) c,edges = np.histogram(x_normal, bins=x) plt.figure(num=None, figsize=(16, 5), dpi=80, facecolor='w', edgecolor='k') plt.subplot(2,2,1) plot_histogram_and_density(N, c, x, dx, g, 'Gaussian') ## Gamma # Shape a = 1.2 # inverse scale b = 30 # Generate unit scale first than scale with inverse scale parameter b x_gamma = np.random.gamma(a, 1, N)/b dx = np.max(x_gamma)/500 x = np.arange(dx, 250*dx, dx) g = np.exp(log_gamma_pdf(x, a, b)) c,edges = np.histogram(x_gamma, bins=x) plt.subplot(2,2,2) plot_histogram_and_density(N, c, x, dx, g, 'Gamma') ## Inverse Gamma a = 3.5 b = 0.2 x_invgamma = b/np.random.gamma(a, 1, N) dx = np.max(x_invgamma)/500 x = np.arange(dx, 150*dx, dx) g = np.exp(log_invgamma_pdf(x,a,b)) c,edges = np.histogram(x_invgamma, bins=x) plt.subplot(2,2,3) plot_histogram_and_density(N, c, x, dx, g, 'Inverse Gamma') ## Beta a = 0.5 b = 1 x_beta = np.random.beta(a, b, N) dx = 0.01 x = np.arange(dx, 1, dx) g = np.exp(log_beta_pdf(x, a, b)) c,edges = np.histogram(x_beta, bins=x) plt.subplot(2,2,4) plot_histogram_and_density(N, c, x, dx, g, 'Beta') plt.show() ``` ## Sampling using scipy.stats ```python %matplotlib inline import matplotlib.pyplot as plt import numpy as np import scipy.stats as scs N = 2000 # Univariate Gaussian mu = 2 # mean V = 1.2 # Variance rv_normal = scs.norm(loc=mu, scale=np.sqrt(V)) x_normal = rv_normal.rvs(size=N) dx = 10*np.sqrt(V)/50 x = np.arange(mu-5*np.sqrt(V) ,mu+5*np.sqrt(V),dx) g = rv_normal.pdf(x) c,edges = np.histogram(x_normal, bins=x) plt.figure(num=None, figsize=(16, 5), dpi=80, facecolor='w', edgecolor='k') plt.subplot(2,2,1) plot_histogram_and_density(N, c, x, dx, g, 'Gaussian') ## Gamma a = 3.2 b = 30 # The following is equivalent to our parametrization of gamma, note the 1/b term rv_gamma = scs.gamma(a, scale=1/b) x_gamma = rv_gamma.rvs(N) dx = np.max(x_gamma)/500 x = np.arange(0, 250*dx, dx) g = rv_gamma.pdf(x) c,edges = np.histogram(x_gamma, bins=x) plt.subplot(2,2,2) plot_histogram_and_density(N, c, x, dx, g, 'Gamma') ## Inverse Gamma a = 3.5 b = 0.2 # Note the b term rv_invgamma = scs.invgamma(a, scale=b) x_invgamma = rv_invgamma.rvs(N) dx = np.max(x_invgamma)/500 x = np.arange(dx, 150*dx, dx) g = 
rv_invgamma.pdf(x)
c,edges = np.histogram(x_invgamma, bins=x)
plt.subplot(2,2,3)
plot_histogram_and_density(N, c, x, dx, g, 'Inverse Gamma')

## Beta
a = 0.7
b = 0.8
rv_beta = scs.beta(a, b)
x_beta = rv_beta.rvs(N)
dx = 0.02
x = np.arange(0, 1+dx, dx)
g = rv_beta.pdf(x)
c,edges = np.histogram(x_beta, bins=x)
plt.subplot(2,2,4)
plot_histogram_and_density(N, c, x, dx, g, 'Beta')

plt.show()
```

# Sampling from Discrete Densities

* Bernoulli $\mathcal{BE}$

$$ {\cal BE}(r; w) = w^r (1-w)^{1-r} \;\; \text{if} \; r \in \{0, 1\} $$

* Binomial $\mathcal{BI}$

$${\cal BI}(r; L, w) = \binom{L}{r, (L-r)} w^r (1-w)^{L-r} \;\; \text{if} \; r \in \{0, 1, \dots, L\} $$

Here, the binomial coefficient is defined as

$$ \binom{L}{r, (L-r)} = \frac{L!}{r!(L-r)!} $$

Note that

$$ {\cal BE}(r; w) = {\cal BI}(r; L=1, w) $$

* Poisson $\mathcal{PO}$, with intensity $\lambda$

$${\cal PO}(x;\lambda) = \frac{e^{-\lambda} \lambda^x}{x!} = \exp(x \log \lambda - \lambda - \log\Gamma(x+1)) $$

Given samples on nonnegative integers, we can obtain histograms easily using __np.bincount__.

```python
c = np.bincount(samples)
```

The functionality is equivalent to the following snippet, although the actual implementation is possibly different and more efficient.

```python
upper_bound = np.max(samples)
c = np.zeros(upper_bound+1)
for i in samples:
    c[i] += 1
```

```python
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np

def plot_histogram_and_pmf(N, c, domain, dx, g, title='Put a title'):
    '''
    N      : Number of Datapoints
    c      : Counts, as obtained from np.bincount function
    domain : integers for each c, same size as c
    dx     : The bin width
    g      : pmf evaluated at the points given in domain
    title  : for the plot
    '''
    plt.bar(domain-dx/2, c/N, width=dx)
    plt.hold(True)
    plt.plot(domain, g, 'ro:', linewidth=3, color='y')
    plt.hold(False)
    plt.title(title)

def log_poisson_pdf(x, lam):
    return -lam + x*np.log(lam) - gammaln(x+1)

def log_bernoulli_pdf(r, pr):
    return r*np.log(pr) + (1-r)*np.log(1 - pr)

def log_binomial_pdf(r, pr, L):
    return gammaln(L+1) - gammaln(r+1) - gammaln(L-r+1) + r*np.log(pr) + (L-r)*np.log(1 - pr)

N = 100
pr = 0.8

# For plots
bin_width = 0.3

# Bernoulli
L = 1
x_bern = np.random.binomial(n=L, p=pr, size=N)
c = np.bincount(x_bern, minlength=L+1)
g = np.exp(log_bernoulli_pdf(np.arange(L+1), pr))

plt.figure(figsize=(20,4))
plt.subplot(1,3,1)
plot_histogram_and_pmf(N, c, np.arange(L+1), bin_width, g, 'Bernoulli')
plt.xticks([0,1])

# Binomial
L = 10
pr = 0.7
x_binom = np.random.binomial(n=L, p=pr, size=N)
c = np.bincount(x_binom, minlength=L+1)
g = np.exp(log_binomial_pdf(np.arange(L+1), pr, L))

plt.subplot(1,3,2)
plot_histogram_and_pmf(N, c, np.arange(L+1), bin_width, g, 'Binomial')
plt.xticks(np.arange(L+1))

# Poisson
intensity = 10.5
x_poiss = np.random.poisson(intensity, size=N)
c = np.bincount(x_poiss)
x = np.arange(len(c))
g = np.exp(log_poisson_pdf(x, intensity))

plt.subplot(1,3,3)
plot_histogram_and_pmf(N, c, x, bin_width, g, 'Poisson')
```

## Bernoulli, Binomial, Categorical and Multinomial Distributions

The Bernoulli and Binomial distributions are quite simple and well-known distributions on small integers, so it may come as a surprise that they have another, less obvious but arguably more useful representation as discrete multivariate densities. This representation makes the link to categorical distributions where there are more than two possible outcomes. Finally, Bernoulli, Binomial and Categorical distributions are all special cases of the Multinomial distribution.
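Before going into the details, the following minimal sketch (an added illustration, not part of the original notes) shows the idea behind this alternative representation: a scalar outcome $r \in \{0,\dots,K-1\}$ is encoded as a one-hot vector $s$ with $s_k = \indi{r = k}$, and summing such one-hot rows gives exactly the kind of count vector used by the Multinomial distribution below.

```python
# Added sketch: one-hot (positional) encoding of categorical draws
import numpy as np

K, N = 4, 8
pi = np.array([0.1, 0.2, 0.3, 0.4])

r = np.random.choice(K, size=N, p=pi)   # scalar outcomes in {0, ..., K-1}
S = np.eye(K, dtype=int)[r]             # one-hot rows: S[n, k] = [r_n == k]

print(r)
print(S)
print(S.sum(axis=0))                    # bin counts: one draw from Multinomial(N, pi)
```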
### Bernoulli

Recall the Bernoulli distribution for $r \in \{0, 1\}$

$$ {\cal BE}(r; w) = w^r (1-w)^{1-r} $$

We will define $\pi_0 = 1-w$ and $\pi_1 = w$, such that $\pi_0 + \pi_1 = 1$. The parameter vector is $\pi = (\pi_0, \pi_1)$.

We will also introduce a positional encoding such that

\begin{eqnarray}
r = 0 & \Rightarrow & s = (1, 0) \\
r = 1 & \Rightarrow & s = (0, 1)
\end{eqnarray}

In other words, $s = (s_0, s_1)$ is a 2-dimensional vector where

$$s_0, s_1 \in \{0,1\}\;\text{and}\; s_0 + s_1 = 1$$

We can now write the Bernoulli density as

$$ p(s | \pi) = \pi_0^{s_0} \pi_1^{s_1} $$

### Binomial

Similarly, recall the Binomial density where $r \in \{0, 1, \dots, L\}$

$${\cal BI}(r; L, w) = \binom{L}{r, (L-r)} w^r (1-w)^{L-r} $$

We will again define $\pi_0 = 1-w$ and $\pi_1 = w$, such that $\pi_0 + \pi_1 = 1$. The parameter vector is $\pi = (\pi_0, \pi_1)$.

\begin{eqnarray}
r = 0 & \Rightarrow & s = (L, 0) \\
r = 1 & \Rightarrow & s = (L-1, 1)\\
r = 2 & \Rightarrow & s = (L-2, 2)\\
\dots \\
r = L & \Rightarrow & s = (0, L)
\end{eqnarray}

where $s = (s_0, s_1)$ is a 2-dimensional vector where

$$s_0, s_1 \in \{0,\dots,L\} \;\text{and}\; s_0 + s_1 = L$$

We can now write the Binomial density as

$$ p(s | \pi) = \binom{L}{s_0, s_1} \pi_0^{s_0} \pi_1^{s_1} $$

### Categorical

One of the advantages of this new notation is that we can write the density even if the outcomes are not numerical. For example, the result of a single coin flip experiment with $r \in \{$ 'Tail', 'Head' $\}$, where the probability of 'Tail' is $w$, can be written as

$$ p(r | w) = w^{\indi{r=\text{'Tail'}}} (1-w)^{\indi{r=\text{'Head'}}} $$

If we define $s_0 = \indi{r=\text{'Head'}}$ and $s_1 = \indi{r=\text{'Tail'}}$, then the density can be written in the same form as

$$ p(s | \pi) = \pi_0^{s_0} \pi_1^{s_1} $$

where $\pi_0 = 1-w$ and $\pi_1 = w$.

More generally, when $r$ is from a set with $K$ elements, i.e., $r \in R = \{ v_0, v_1, \dots, v_{K-1} \}$ with the probability of the event $r = v_k$ given as $\pi_k$, we define $s = (s_0, s_1, \dots, s_{K-1})$ by setting, for $k=0,\dots, K-1$,

$$ s_k = \indi{r=v_k} $$

Note that by construction we have $\sum_k s_k = 1$. The resulting density, known as the Categorical density, can be written as

$$ p(s|\pi) = \pi_0^{s_0} \pi_1^{s_1} \dots \pi_{K-1}^{s_{K-1}} $$

### Multinomial

When drawing from a categorical distribution, one chooses a single category from $K$ options with given probabilities. A standard model for this is placing a single ball into one of $K$ different bins. The vector $s = (s_0, s_1, \dots,s_k, \dots, s_{K-1})$ represents how many balls each bin $k$ contains. Now, place $L$ balls instead of one into the $K$ bins, placing each ball independently into bin $k$, where $k \in\{0,\dots,K-1\}$, with probability $\pi_k$. The multinomial is the joint distribution of $s$ where $s_k$ is the number of balls placed into bin $k$. The density will be denoted as

$${\cal M}(s; L, \pi) = \binom{L}{s_0, s_1, \dots, s_{K-1}}\prod_{k=0}^{K-1} \pi_k^{s_k} $$

Here $\pi \equiv [\pi_0, \pi_1, \dots, \pi_{K-1} ]$ is the probability vector and $L$ is referred to as the _index parameter_. Clearly, we have the normalization constraint $ \sum_k \pi_k = 1$, and any realization of the counts $s$ satisfies $ \sum_k s_k = L $. Here, the _multinomial_ coefficient is defined as

$$\binom{L}{s_0, s_1, \dots, s_{K-1}} = \frac{L!}{s_0! s_1! \dots s_{K-1}!}$$

Binomial, Bernoulli and Categorical distributions are all special cases of the Multinomial distribution, with a suitable representation.
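As a quick numerical check of this statement (an added snippet, not in the original notes), the multinomial pmf with $K=2$ bins can be evaluated via `gammaln` and compared against the binomial pmf from `scipy.stats`; the two agree for every possible count configuration.

```python
# Added check: the Multinomial with two bins reduces to the Binomial pmf
import numpy as np
from scipy.special import gammaln
from scipy.stats import binom

def log_multinomial_pmf(s, pi):
    s, pi = np.asarray(s, float), np.asarray(pi, float)
    return gammaln(s.sum() + 1) - np.sum(gammaln(s + 1)) + np.sum(s*np.log(pi))

L, w = 10, 0.3
pi = np.array([1 - w, w])               # (pi_0, pi_1)

for r in range(L + 1):
    s = np.array([L - r, r])            # positional encoding of r
    assert np.allclose(np.exp(log_multinomial_pmf(s, pi)), binom.pmf(r, L, w))
print('Multinomial with K = 2 matches the Binomial pmf for r = 0, ..., L')
```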
The picture is as follows: Balls/Bins | $2$ Bins | $K$ Bins -------- | -------- | --------- $1$ Ball | Bernoulli ${\cal BE}$ | Categorical ${\cal C}$ $L$ Balls | Binomial ${\cal BI}$ | Multinomial ${\cal M}$ Murphy calls the categorical distribution ($1$ Ball, $K$ Bins) as the Multinoulli. This is non-standard but logical (and somewhat cute). It is common to think of Bernoulli and Binomial as scalar random variable. However, when we think of them as special case of a Multinomial it is better to think of them as bivariate, albeit degenerate, random variables, as illustrated in the following cell along with an alternative visualization. ```python # The probability parameter pr = 0.3 fig = plt.figure(figsize=(16,50), edgecolor=None) maxL = 12 plt.subplot(maxL-1,2,1) plt.grid(False) # Set up the scalar binomial density as a bivariate density for L in range(1,maxL): r = np.arange(L+1) p = np.exp(log_binomial_pdf(r, pr=pr, L=L)) A = np.zeros(shape=(13,13)) for s in range(L): s0 = s s1 = L-s A[s0, s1] = p[s] #plt.subplot(maxL-1,2,2*L-1) # plt.bar(r-0.25, p, width=0.5) # ax.set_xlim(-1,maxL) # ax.set_xticks(range(0,maxL)) if True: plt.subplot(maxL-1,2,2*L-1) plt.barh(bottom=r-0.25, width=p, height=0.5) ax2 = fig.gca() pos = ax2.get_position() pos2 = [pos.x0, pos.y0, 0.04, pos.height] ax2.set_position(pos2) ax2.set_ylim(-1,maxL) ax2.set_yticks(range(0,maxL)) ax2.set_xlim([0,1]) ax2.set_xticks([0,1]) plt.ylabel('s1') ax2.invert_xaxis() plt.subplot(maxL-1,2,2*L) plt.imshow(A, interpolation='nearest', origin='lower',cmap='gray_r',vmin=0,vmax=0.7) plt.xlabel('s0') ax1 = fig.gca() pos = ax1.get_position() pos2 = [pos.x0-0.45, pos.y0, pos.width, pos.height] ax1.set_position(pos2) ax1.set_ylim(-1,maxL) ax1.set_yticks(range(0,maxL)) ax1.set_xlim(-1,maxL) ax1.set_xticks(range(0,maxL)) plt.show() ``` The following cell illustrates sampling from the Multinomial density. ```python # Number of samples sz = 3 # Multinomial p = np.array([0.3, 0.1, 0.1, 0.5]) K = len(p) # number of Bins L = 20 # number of Balls print('Multinomial with number of bins K = {K} and Number of balls L = {L}'.format(K=K,L=L)) print(np.random.multinomial(L, p, size=sz)) # Categorical L = 1 # number of Balls print('Categorical with number of bins K = {K} and a single ball L=1'.format(K=K)) print(np.random.multinomial(L, p, size=sz)) # Binomial p = np.array([0.3, 0.7]) K = len(p) # number of Bins = 2 L = 20 # number of Balls print('Binomial with two bins K=2 and L={L} balls'.format(L=L)) print(np.random.multinomial(L, p, size=sz)) # Bernoulli L = 1 # number of Balls p = np.array([0.3, 0.7]) K = len(p) # number of Bins = 2 print('Bernoulli, two bins and a single ball') print(np.random.multinomial(L, p, size=sz)) ``` Multinomial with number of bins K = 4 and Number of balls L = 20 [[ 5 4 3 8] [ 7 4 1 8] [ 5 3 0 12]] Categorical with number of bins K = 4 and a single ball L=1 [[1 0 0 0] [0 0 0 1] [0 0 0 1]] Binomial with two bins K=2 and L=20 balls [[10 10] [ 5 15] [ 8 12]] Bernoulli, two bins and a single ball [[0 1] [0 1] [0 1]] ## Probability tables and the categorical distribution The following cell illustrates drawing from a categorical distribution with on an alphabet, not necessarly $0\dots K-1$. 
```python # Sampling from a Categorical Distribution a = np.array(sorted(['blue', 'red', 'black', 'yellow'])) pr = np.array([0.2, 0.55, 0.15, 0.1]) N = 100 x = np.random.choice(a, size=N, replace=True, p=pr) print('Symbols:') print(a) print('Probabilities:') print(pr) print('{N} realizations:'.format(N=N)) print(x) ``` Symbols: ['black' 'blue' 'red' 'yellow'] Probabilities: [ 0.2 0.55 0.15 0.1 ] 100 realizations: ['blue' 'blue' 'blue' 'yellow' 'black' 'blue' 'yellow' 'blue' 'red' 'blue' 'blue' 'black' 'blue' 'yellow' 'blue' 'blue' 'red' 'black' 'blue' 'blue' 'yellow' 'black' 'blue' 'red' 'blue' 'black' 'red' 'red' 'black' 'blue' 'yellow' 'blue' 'blue' 'blue' 'blue' 'blue' 'red' 'blue' 'blue' 'blue' 'blue' 'red' 'blue' 'blue' 'blue' 'blue' 'blue' 'blue' 'blue' 'black' 'yellow' 'blue' 'blue' 'black' 'blue' 'red' 'blue' 'blue' 'blue' 'red' 'red' 'blue' 'blue' 'blue' 'blue' 'blue' 'red' 'blue' 'blue' 'blue' 'blue' 'blue' 'blue' 'yellow' 'black' 'red' 'blue' 'black' 'red' 'blue' 'blue' 'blue' 'blue' 'red' 'blue' 'blue' 'blue' 'black' 'blue' 'blue' 'blue' 'blue' 'blue' 'black' 'red' 'yellow' 'blue' 'black' 'blue' 'blue'] Often we need to opposite of the above process, that is given a list of elements, we need to count the number of occurences of each symbol. The following method creates such a statistics. ```python import collections c = collections.Counter(x) print(c.most_common()) counts = [e[1] for e in c.most_common()] symbols = [e[0] for e in c.most_common()] print('Sorted according to counts') print(counts) print(symbols) # If we require the symbols in sorted order with respect to symbol names, use: counts = [e[1] for e in sorted(c.most_common())] symbols = [e[0] for e in sorted(c.most_common())] print('Sorted according to symbols') print(counts) print(symbols) ``` [('blue', 64), ('red', 15), ('black', 13), ('yellow', 8)] Sorted according to counts [64, 15, 13, 8] ['blue', 'red', 'black', 'yellow'] Sorted according to symbols [13, 64, 15, 8] ['black', 'blue', 'red', 'yellow'] ### Counting letter bigrams in several languages ```python %matplotlib inline from collections import defaultdict from urllib.request import urlopen import string import numpy as np import matplotlib.pyplot as plt # Turkish #"ç","ı","ğ","ö","ş","ü",'â' # German #"ä","ß","ö","ü" # French #"ù","û","ô","â","à","ç","é","è","ê","ë","î","ï","æ" tr_alphabet = ['•','a','b','c','ç','d','e','f', 'g','ğ','h','ı','i','j','k','l', 'm','n','o','ö','p','q','r','s','ş', 't','u','ü','w','v','x','y','z'] # Union of Frequent letters in French, Turkish, German and English my_alphabet = ['•','a','â','ä',"à","æ",'b','c','ç','d','e',"é","è","ê","ë",'f', 'g','ğ','h','ı','i',"î",'ï','j','k','l', 'm','n','o','œ',"ô",'ö','p','q','r','s','ş', 't','u','ù',"û",'ü','w','v','x','y','z','ß'] # Only ascii characters ascii_alphabet = list('•'+string.ascii_lowercase) # Reduction table from my alphabet to ascii my2ascii_table = { ord('â'):"a", ord('ä'):"ae", ord("à"):"a", ord("æ"):"ae", ord('ç'):"c", ord("é"):"e", ord("è"):"e", ord("ê"):"e", ord("ë"):"e", ord('ğ'):"g", ord('ı'):"i", ord("î"):"i", ord('ï'):"i", ord('œ'):"oe", ord("ô"):"o", ord('ö'):"o", ord('ş'):"s", ord('ù'):"u", ord("û"):"u", ord('ü'):"u", ord('ß'):"ss" } # Reduction table from my alphabet to frequent letters in turkish text my2tr_table = { ord('â'):"a", ord('ä'):"ae", ord("à"):"a", ord("æ"):"ae", ord("é"):"e", ord("è"):"e", ord("ê"):"e", ord("ë"):"e", ord("î"):"i", ord('ï'):"i", ord('œ'):"oe", ord("ô"):"o", ord('ù'):"u", ord("û"):"u", ord('ß'):"ss" } def count_transitions(fpp, 
alphabet, tab): #ignore punctuation tb = str.maketrans(".\t\n\r ","•••••", '0123456789!"\'#$%&()*,-/:;<=>?@[\\]^_`{|}~+') #replace other unicode characters with a bullet (alt-8) tbu = { ord("İ"):'i', ord(u"»"):'•', ord(u"«"):'•', ord(u"°"):'•', ord(u"…"):'•', ord(u"”"):'•', ord(u"’"):'•', ord(u"“"):'•', ord(u"\ufeff"):'•', 775: None} # Character pairs D = defaultdict(int) for line in fpp: s = line.decode('utf-8').translate(tb).lower() s = s.translate(tbu) s = s.translate(tab) #print(s) if len(s)>1: for i in range(len(s)-1): D[s[i:i+2]]+=1 M = len(alphabet) a2i = {v: k for k,v in enumerate(alphabet)} DD = np.zeros((M,M)) ky = sorted(D.keys()) for k in D.keys(): i = a2i[k[0]] j = a2i[k[1]] DD[i,j] = D[k] return D, DD, alphabet ``` ## Count and display occurences of letters in text ```python local = 'file:///Users/cemgil/Dropbox/Public/swe546/data/' #local = 'https://dl.dropboxusercontent.com/u/9787379/swe546/data/' #files = ['starwars_4.txt', 'starwars_5.txt', 'starwars_6.txt', 'hamlet.txt', 'hamlet_deutsch.txt', 'hamlet_french.txt', 'juliuscaesar.txt','othello.txt', 'sonnets.txt', 'antoniusandcleopatra.txt'] files = ['hamlet_turkce.txt','hamlet_deutsch.txt', 'hamlet_french.txt', 'hamlet.txt','starwars_4.txt', 'starwars_5.txt','juliuscaesar.txt','othello.txt'] plt.figure(figsize=(16,18)) i = 0 for f in files: url = local+f data = urlopen(url) #D, DD, alphabet = count_transitions(data, my_alphabet, {}) D, DD, alphabet = count_transitions(data, ascii_alphabet, my2ascii_table) #D, DD, alphabet = count_transitions(data, tr_alphabet, my2tr_table) M = len(alphabet) # Ignore space, space transitions DD[0,0] = 1 i+=1 plt.subplot(len(files)/2,2,i) S = np.sum(DD,axis=0) #Subpress spaces S[0] = 0 S = S/np.sum(S) plt.bar(np.arange(M)-0.5, S, width=0.7) plt.xticks(range(M), alphabet) plt.gca().set_ylim((0,0.2)) plt.title(f) plt.show() ``` ## Counting Bigrams ```python local = 'file:///Users/cemgil/Dropbox/Public/swe546/data/' #local = 'https://dl.dropboxusercontent.com/u/9787379/swe546/data/' #files = ['starwars_4.txt', 'starwars_5.txt', 'starwars_6.txt', 'hamlet.txt', 'hamlet_deutsch.txt', 'hamlet_french.txt', 'juliuscaesar.txt','othello.txt', 'sonnets.txt', 'antoniusandcleopatra.txt'] files = ['hamlet_turkce.txt','hamlet_deutsch.txt', 'hamlet_french.txt', 'hamlet.txt','starwars_4.txt', 'starwars_5.txt','juliuscaesar.txt','othello.txt'] plt.figure(figsize=(17,2*17)) i = 0 for f in files: url = local+f data = urlopen(url) #D, DD, alphabet = count_transitions(data, my_alphabet, {}) #D, DD, alphabet = count_transitions(data, ascii_alphabet, my2ascii_table) D, DD, alphabet = count_transitions(data, tr_alphabet, my2tr_table) M = len(alphabet) DD[0,0] = 1 i+=1 plt.subplot(len(files)/2,2,i) plt.imshow(DD, interpolation='nearest', vmin=0) plt.xticks(range(M), alphabet) plt.xlabel('x(t)') plt.yticks(range(M), alphabet) plt.ylabel('x(t-1)') ax = plt.gca() ax.xaxis.tick_top() #ax.set_title(f, va='bottom') plt.xlabel('x(t) '+f) ``` ### Normalized probability table of $p(x_t|x_{t-1})$ ```python def normalize(A, axis=0): Z = np.sum(A, axis=axis,keepdims=True) idx = np.where(Z == 0) Z[idx] = 1 return A/Z local = 'file:///Users/cemgil/Dropbox/Public/swe546/data/' #local = 'https://dl.dropboxusercontent.com/u/9787379/swe546/data/' file = 'hamlet_turkce.txt' data = urlopen(local+file) D, DD, alphabet = count_transitions(data, tr_alphabet, my2tr_table) plt.figure(figsize=(9,9)) T = normalize(DD, axis=1) plt.imshow(T, interpolation='nearest', vmin=0) plt.xticks(range(M), alphabet) plt.yticks(range(M), alphabet) 
plt.gca().xaxis.tick_top() plt.show() ``` ### Is Markov(0), Markov(1) or Markov(2) a better model for English letters in plain text ? # Continuous Multivariate (todo) ### The Multivariate Gaussian distribution \begin{align} \mathcal{N}(x; \mu, \Sigma) &= |2\pi \Sigma|^{-1/2} \exp\left( -\frac{1}{2} (x-\mu)^\top \Sigma^{-1} (x-\mu) \right) \\ & = \exp\left(-\frac{1}{2} x^\top \Sigma^{-1} x + \mu^\top \Sigma^{-1} x - \frac{1}{2} \mu^\top \Sigma^{-1} \mu -\frac{1}{2}\log \det(2\pi \Sigma) \right) \\ \end{align} Draw a vector $x \in \mathbf{R}^N$ where each element $x_i \sim \mathcal{N}(x; 0, 1)$ for $i = 1\dots N$. $\newcommand{\E}[1]{\left\langle#1\right\rangle}$ Construct \begin{align} y = Ax \end{align} The expectation and the variance are obtained by \begin{align} \E{y} = \E{Ax} = 0 \end{align} \begin{align} \E{y y^\top} = A \E{x x^\top} A^\top = A A^\top \end{align} So \begin{align} y \sim \mathcal{N}(y; 0, A A^\top) \end{align} In two dimensions, a bi-variate Gaussian is conveniently represented by an ellipse. The ellipse shows a contour of equal probability. In particular, if we plot the $3\sigma$ ellipse, $99 \%$ of all the data points should be inside the ellipse. ```python %matplotlib inline def ellipse_line(A, mu, col='b'): ''' Creates an ellipse from short line segments y = A x + \mu where x is on the unit circle. ''' N = 18 th = np.arange(0, 2*np.pi+np.pi/N, np.pi/N) X = np.array([np.cos(th),np.sin(th)]) Y = np.dot(A, X) ln = plt.Line2D(mu[0]+Y[0,:],mu[1]+Y[1,:],markeredgecolor='k', linewidth=1, color=col) return ln N = 100 A = np.random.randn(2,2) mu = np.zeros(2) X = np.random.randn(2,N) Y = np.dot(A,X) plt.cla() plt.axis('equal') ax = plt.gca() ax.set_xlim(-8,8) ax.set_ylim(-8,8) col = 'b' ln = ellipse_line(3*A, mu, col) ax.add_line(ln) plt.hold(True) plt.plot(mu[0]+Y[0,:],mu[1]+Y[1,:],'.'+col) plt.show() ``` ```python np.dot(A,A.T) ``` array([[ 1.93017803, 0.84274076], [ 0.84274076, 0.49040691]]) When the covariance matrix $\Sigma$ is given, as is typically the case, we need a factorization of $\Sigma = W W^\top$. The Cholesky factorization is such a factorization. (Another possibility, whilst computationally more costly, is the matrix square root.) ```python Sigma = np.dot(A, A.T) W = np.linalg.cholesky(Sigma) X = np.random.randn(2,N) Y = np.dot(W,X) plt.cla() plt.axis('equal') ax = plt.gca() ax.set_xlim(-8,8) ax.set_ylim(-8,8) col = 'b' ln = ellipse_line(3*W, mu, col) ax.add_line(ln) plt.hold(True) plt.plot(mu[0]+Y[0,:],mu[1]+Y[1,:],'.'+col) plt.show() ``` The numpy function __numpy.random.multivariate_normal__ generates samples from a multivariate Gaussian with the given mean and covariance. ```python N = 100 Sig = np.dot(A, A.T) x = np.random.multivariate_normal(mu, Sig, size=N) plt.cla() plt.axis('equal') ax = plt.gca() ax.set_xlim(-8,8) ax.set_ylim(-8,8) plt.plot(x[:,0], x[:,1], 'b.') ln = ellipse_line(3*A,mu,'b') plt.gca().add_line(ln) plt.show() ``` ### Evaluation of the multivariate Gaussian density The log-density of the multivariate Gaussian has the following exponential form \begin{align} \log \mathcal{N}(x; \mu, \Sigma) &= -\frac{1}{2}\log \det(2\pi \Sigma) -\frac{1}{2} (x-\mu)^\top \Sigma^{-1} (x - \mu) \end{align} It is tempting to implement these expression as written -- indeed it is useful to do so for debugging purposes. However, this direct method is both inefficient and numerically not very stable. This will be a problem when the dimension of $x$ is high. 
A direct implementation might be as follows: ```python def log_mvar_gaussian_inefficient(x, mu, Sig): return -0.5*np.log(np.linalg.det(2*np.pi*Sig)) - 0.5*np.sum((x-mu)*np.dot(np.linalg.inv(Sig), x-mu),axis=0) ``` The evaluation seemingly requires the following steps: - Evaluation of the log of the determinant of the covariance matrix $\Sigma$ - Inversion of the covariance matrix $\Sigma$ - Evaluation of the quadratic form $(x-\mu)^\top \Sigma^{-1} (x - \mu)$ A more efficient implementation uses the following observations: - The covariance matrix $\Sigma$ is positive semidefinite and has a __Cholesky__ factorization \begin{align} \Sigma = W W^\top \end{align} where $W$ is a lower triangular matrix - The determinant satisfies the following identity \begin{align} \det(\Sigma) & = \det(W W^\top) = \det(W) \det(W^\top) = \det(W)^2 \end{align} - The determinant of the triangular matrix $W$ is simply the product of its diagonal elements $W_{i,i}$ so \begin{align} \log \det(\Sigma) & = 2 \log \det(W) & = 2 \sum_i \log W_{i,i} \end{align} - The quadratic form can be evaluated by the inner product $(x - \mu)^\top u$ where $u = \Sigma^{-1} (x - \mu)$. Finding $u$ is equivalent to the solution of the linear system $$ \Sigma u = (x - \mu) $$ and the solution is equivalent to $$ u = (W^\top)^{-1}W^{-1} (x - \mu) $$ and can be solved efficiently by backsubstitution as $W$ is triangular. This can be implemented as follows ```python import scipy as sc import scipy.linalg as la def log_mvar_gaussian_pdf(x, mu, Sig): W = np.linalg.cholesky(Sig) z = -np.sum(np.log(2*np.pi)/2 + np.log(np.diag(W))) - 0.5* np.sum((x-mu)*la.cho_solve((W,True), x-mu),axis=0) return z # Dimension of the problem N = 2 # Generate K points to evaluate the density at K = 10 x = np.random.randn(N,K) # Generate random parameters mu = np.random.randn(N,1) R = np.random.randn(N,N) Sig = np.dot(R, R.T) z1 = log_mvar_gaussian_pdf(x, mu, Sig) z2 = log_mvar_gaussian_inefficient(x, mu, Sig) print(z1) print(z2) ``` [-8.72265771 -6.31543527 -5.4738194 -1.80930712 -2.15963976 -5.54244251 -3.82657297 -4.16592483 -2.09530621 -8.63078085] [-8.72265771 -6.31543527 -5.4738194 -1.80930712 -2.15963976 -5.54244251 -3.82657297 -4.16592483 -2.09530621 -8.63078085] For the solution of $\Sigma u = b$ where $\Sigma = WW^\top$, we use the implementation in _scipy.linalg.cho_solve_. ```python import scipy.linalg as la N = 2 # Construct a random positive definite matrix R = np.random.randn(N,N) Sig = np.dot(R, R.T) b = np.random.randn(N,1) # Direct implementation -- inefficient u_direct = np.matrix(np.linalg.inv(Sig))*b # Efficient implementation W = np.linalg.cholesky(Sig) u_efficient = la.cho_solve((W,True), b) # Verify that both give the same result print(u_direct) print(u_efficient) ``` [[-0.40649925] [ 0.39775871]] [[-0.40649925] [ 0.39775871]] ## Dirichlet Distribution The Dirichlet distribution is a distribution over probability vectors. $$ \mathcal{D}(w_{1:N}; \alpha_{1:N}) = \frac{\Gamma(\sum_i \alpha_i)}{\prod_i \Gamma(\alpha_i) } \prod w_i^{\alpha_i-1} $$ ```python np.random.dirichlet(0.01*np.array([1,2,3])) ``` array([ 1.08260542e-06, 9.99998916e-01, 1.25214364e-09]) Some useful functions from np.random We can get a specific random number state and generate data from it. 
```python import numpy as np u = np.random.RandomState() print(u.permutation(10)) lam = 3; print(u.exponential(lam)) ``` [0 2 8 1 7 6 4 9 3 5] 1.1245667884520933 ```python %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt A = np.random.randn(4,5) plt.figure(figsize=(5,4)) plt.imshow(A, interpolation='nearest') plt.show() ``` ```python ax = plt.gca() ax.set_xlim(-4,4) ax.set_ylim(-4,4) A = np.array([[1, 0.3, 0],[0, 1, 0],[0,0,1]]) for i in range(5): # circ = plt.Circle(np.random.randn(2),radius=np.random.rand(1)*4,alpha=0.1) circ = mpl.patches.Ellipse(np.random.randn(2),width=np.random.rand(1)*4,height=np.random.rand(1)*4, angle=60, alpha=0.1) ax.add_patch(circ) plt.show() ``` ## Importance Sampling Example ```python import numpy as np L = 200 lam = 3.0 rho = 100*L lp = -L/lam def log_w(x, lam, rho): return np.log(rho)-np.log(lam) - x/lam + x/rho N = 10000 Inf = np.float('inf') logS = -Inf S2 = 0 for i in range(N): u = np.random.rand(1) # Sample from the proposal # Generate x ~ Exp(rho) x = -rho*np.log(u) x2 = -lam*np.log(u) if x>L: lw = log_w(x, lam, rho) m = np.max((logS, lw)) logS = m + np.log(np.exp(logS-m) + np.exp(lw-m) ) if x2>L: S2 += 1 lp_est = logS - np.log(N) mc_est = np.log(S2) - np.log(N) print('Ground Truth :', lp) print('IS Estimator :', lp_est) print('NI Estimator :', mc_est) ``` Ground Truth : -66.66666666666667 IS Estimator : [-66.44972943] NI Estimator : -inf /Users/cemgil/anaconda/envs/py33/lib/python3.3/site-packages/IPython/kernel/__main__.py:34: RuntimeWarning: divide by zero encountered in log ### Importance sampling without knowing the normalizing constant ```python import numpy as np L = 200 lam = 3.0 rho = L k = 1.1 lp = -L/lam def log_w(x, lam, rho, k): return np.log(rho) - np.power((x/lam),k) + x/rho N = 10000 Inf = np.float('inf') logS = -Inf logSW = -Inf for i in range(N): u = np.random.rand(1) # Sample from the proposal # Generate x ~ Exp(rho) x = -rho*np.log(u) x2 = -lam*np.log(u) lw = log_w(x, lam, rho, k) m = np.max((logSW, lw)) logSW = m + np.log(np.exp(logSW-m) + np.exp(lw-m) ) if x>L: m = np.max((logS, lw)) logS = m + np.log(np.exp(logS-m) + np.exp(lw-m) ) lp_est = logS - logSW print('IS Estimator :', lp_est) ``` IS Estimator : [-102.18775893] ```python np.max((1,2)) ``` 2 ```python %connect_info ``` { "ip": "127.0.0.1", "stdin_port": 56571, "hb_port": 56573, "signature_scheme": "hmac-sha256", "transport": "tcp", "control_port": 56572, "iopub_port": 56570, "shell_port": 56569, "key": "9998ad8d-5daa-48bd-8307-45ca7c833fd7" } Paste the above JSON into a file, and connect with: $> ipython <app> --existing <file> or, if you are local, you can connect with just: $> ipython <app> --existing kernel-0612b3c2-77a2-4d55-b991-57a41d566995.json or even just: $> ipython <app> --existing if this is the most recent IPython session you have started.
be2a0941844dc8c2906207a5c6960d5eef9db93e
563,802
ipynb
Jupyter Notebook
Sampling.ipynb
tugbatugbatugba/data-mining
e4db929b075bc3cfd5c290b2ccd551da3cba1041
[ "MIT" ]
null
null
null
Sampling.ipynb
tugbatugbatugba/data-mining
e4db929b075bc3cfd5c290b2ccd551da3cba1041
[ "MIT" ]
null
null
null
Sampling.ipynb
tugbatugbatugba/data-mining
e4db929b075bc3cfd5c290b2ccd551da3cba1041
[ "MIT" ]
null
null
null
247.825055
103,180
0.899239
true
14,217
Qwen/Qwen-72B
1. YES 2. YES
0.880797
0.841826
0.741478
__label__eng_Latn
0.781424
0.561033
## Histograms of Oriented Gradients (HOG)

As we saw with the ORB algorithm, we can use keypoints in images to do keypoint-based matching to detect objects in images. These types of algorithms work great when you want to detect objects that have a lot of consistent internal features that are not affected by the background. For example, these algorithms work well for facial detection because faces have a lot of consistent internal features that don't get affected by the image background, such as the eyes, nose, and mouth. However, these types of algorithms don't work so well when attempting to do more general object recognition, say for example, pedestrian detection in images. The reason is that people don't have consistent internal features, like faces do, because the body shape and style of every person is different (see Fig. 1).

<br>
<figure>
<figcaption style = "text-align:left; font-style:italic">Fig. 1. - Pedestrians.</figcaption>
</figure>
<br>

One option is to try to detect pedestrians by their contours instead. Detecting objects in images by their contours (boundaries) is very challenging because we have to deal with the difficulties brought about by the contrast between the background and the foreground. For example, suppose you wanted to detect a pedestrian who is walking in front of a white building while wearing a white coat and black pants (see Fig. 2). We can see in Fig. 2 that, since the background of the image is mostly white, the black pants are going to have very high contrast, but the coat, since it is white as well, is going to have very low contrast. In this case, detecting the edges of the pants is going to be easy, but detecting the edges of the coat is going to be very difficult. This is where **HOG** comes in. HOG stands for **Histograms of Oriented Gradients** and it was first introduced by Navneet Dalal and Bill Triggs in 2005.

<br>
<figure>
<figcaption style = "text-align:left; font-style:italic">Fig. 2. - High and Low Contrast.</figcaption>
</figure>
<br>

The HOG algorithm works by creating histograms of the distribution of gradient orientations in an image and then normalizing them in a very special way. This special normalization is what makes HOG so effective at detecting the edges of objects even in cases where the contrast is very low. These normalized histograms are put together into a feature vector, known as the HOG descriptor, that can be used to train a machine learning algorithm, such as a Support Vector Machine (SVM), to detect objects in images based on their boundaries (edges). Due to its great success and reliability, HOG has become one of the most widely used algorithms in computer vision for object detection.

In this notebook, you will learn:

* How the HOG algorithm works
* How to use OpenCV to create a HOG descriptor
* How to visualize the HOG descriptor.

# The HOG Algorithm

As its name suggests, the HOG algorithm is based on creating histograms from the orientation of image gradients. The HOG algorithm is implemented in a series of steps:

1. Given the image of a particular object, set a detection window (region of interest) that covers the entire object in the image (see Fig. 3).

2. Calculate the magnitude and direction of the gradient for each individual pixel in the detection window.

3.
Divide the detection window into connected *cells* of pixels, with all cells being of the same size (see Fig. 3). The size of the cells is a free parameter and it is usually chosen so as to match the scale of the features that want to be detected. For example, in a 64 x 128 pixel detection window, square cells 6 to 8 pixels wide are suitable for detecting human limbs. 4. Create a Histogram for each cell, by first grouping the gradient directions of all pixels in each cell into a particular number of orientation (angular) bins; and then adding up the gradient magnitudes of the gradients in each angular bin (see Fig. 3). The number of bins in the histogram is a free parameter and it is usually set to 9 angular bins. 5. Group adjacent cells into *blocks* (see Fig. 3). The number of cells in each block is a free parameter and all blocks must be of the same size. The distance between each block (known as the stride) is a free parameter but it is usually set to half the block size, in which case you will get overlapping blocks (*see video below*). The HOG algorithm has been shown empirically to work better with overlapping blocks. 6. Use the cells contained within each block to normalize the cell histograms in that block (see Fig. 3). If you have overlapping blocks this means that most cells will be normalized with respect to different blocks (*see video below*). Therefore, the same cell may have several different normalizations. 7. Collect all the normalized histograms from all the blocks into a single feature vector called the HOG descriptor. 8. Use the resulting HOG descriptors from many images of the same type of object to train a machine learning algorithm, such as an SVM, to detect those type of objects in images. For example, you could use the HOG descriptors from many images of pedestrians to train an SVM to detect pedestrians in images. The training is done with both positive a negative examples of the object you want detect in the image. 9. Once the SVM has been trained, a sliding window approach is used to try to detect and locate objects in images. Detecting an object in the image entails finding the part of the image that looks similar to the HOG pattern learned by the SVM. <br> <figure> <figcaption style = "text-align:left; font-style:italic">Fig. 3. - HOG Diagram.</figcaption> </figure> <br> <figure> <figcaption style = "text-align:left; font-style:italic">Vid. 1. - HOG Animation.</figcaption> </figure> # Why The HOG Algorithm Works As we learned above, HOG creates histograms by adding the magnitude of the gradients in particular orientations in localized portions of the image called *cells*. By doing this we guarantee that stronger gradients will contribute more to the magnitude of their respective angular bin, while the effects of weak and randomly oriented gradients resulting from noise are minimized. In this manner the histograms tell us the dominant gradient orientation of each cell. ### Dealing with contrast Now, the magnitude of the dominant orientation can vary widely due to variations in local illumination and the contrast between the background and the foreground. To account for the background-foreground contrast differences, the HOG algorithm tries to detect edges locally. In order to do this, it defines groups of cells, called **blocks**, and normalizes the histograms using this local group of cells. By normalizing locally, the HOG algorithm can detect the edges in each block very reliably; this is called **block normalization**. 
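To make the block-normalization idea concrete, here is a small numpy sketch (an illustrative addition; the toy values and the plain L2 norm are simplifying assumptions, since OpenCV actually applies the clipped L2-Hys variant described later) that normalizes one block of cell histograms:

```python
# Illustrative sketch: plain L2 normalization of a 2 x 2 block of cell histograms
import numpy as np

num_bins = 9
# Toy block: one row of cells from a bright region, one from a dark region
block = np.random.rand(2, 2, num_bins) * np.array([10.0, 1.0])[:, None, None]

v = block.reshape(-1)                        # concatenate the 4 cell histograms
eps = 1e-6                                   # small constant to avoid division by zero
v_norm = v / np.sqrt(np.sum(v**2) + eps**2)

print(np.linalg.norm(v_norm))                # ~1: overall contrast is factored out
```

Because the entire block is divided by a single norm, a uniformly brighter or darker region produces the same normalized vector, which is exactly why local contrast differences stop mattering.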
In addition to using block normalization, the HOG algorithm also uses overlapping blocks to increase its performance. By using overlapping blocks, each cell contributes several independent components to the final HOG descriptor, where each component corresponds to a cell being normalized with respect to a different block. This may seem redundant but, it has been shown empirically that by normalizing each cell several times with respect to different local blocks, the performance of the HOG algorithm increases dramatically. ### Loading Images and Importing Resources The first step in building our HOG descriptor is to load the required packages into Python and to load our image. We start by using OpenCV to load an image of a triangle tile. Since, the `cv2.imread()` function loads images as BGR we will convert our image to RGB so we can display it with the correct colors. As usual we will convert our BGR image to Gray Scale for analysis. ```python import cv2 import numpy as np import matplotlib.pyplot as plt # Set the default figure size plt.rcParams['figure.figsize'] = [17.0, 7.0] # Load the image image = cv2.imread('./images/triangle_tile.jpeg') # Convert the original image to RGB original_image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) # Convert the original image to gray scale gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) # Print the shape of the original and gray scale images print('The original image has shape: ', original_image.shape) print('The gray scale image has shape: ', gray_image.shape) # Display the images plt.subplot(121) plt.imshow(original_image) plt.title('Original Image') plt.subplot(122) plt.imshow(gray_image, cmap='gray') plt.title('Gray Scale Image') plt.show() ``` # Creating The HOG Descriptor We will be using OpenCV’s `HOGDescriptor` class to create the HOG descriptor. The parameters of the HOG descriptor are setup using the `HOGDescriptor()` function. The parameters of the `HOGDescriptor()` function and their default values are given below: `cv2.HOGDescriptor(win_size = (64, 128), block_size = (16, 16), block_stride = (8, 8), cell_size = (8, 8), nbins = 9, win_sigma = DEFAULT_WIN_SIGMA, threshold_L2hys = 0.2, gamma_correction = true, nlevels = DEFAULT_NLEVELS)` Parameters: * **win_size** – *Size* Size of detection window in pixels (*width, height*). Defines the region of interest. Must be an integer multiple of cell size. * **block_size** – *Size* Block size in pixels (*width, height*). Defines how many cells are in each block. Must be an integer multiple of cell size and it must be smaller than the detection window. The smaller the block the finer detail you will get. * **block_stride** – *Size* Block stride in pixels (*horizontal, vertical*). It must be an integer multiple of cell size. The `block_stride` defines the distance between adjecent blocks, for example, 8 pixels horizontally and 8 pixels vertically. Longer `block_strides` makes the algorithm run faster (because less blocks are evaluated) but the algorithm may not perform as well. * **cell_size** – *Size* Cell size in pixels (*width, height*). Determines the size fo your cell. The smaller the cell the finer detail you will get. * **nbins** – *int* Number of bins for the histograms. Determines the number of angular bins used to make the histograms. With more bins you capture more gradient directions. HOG uses unsigned gradients, so the angular bins will have values between 0 and 180 degrees. * **win_sigma** – *double* Gaussian smoothing window parameter. 
The performance of the HOG algorithm can be improved by smoothing the pixels near the edges of the blocks by applying a Gaussian spatial window to each pixel before computing the histograms. * **threshold_L2hys** – *double* L2-Hys (Lowe-style clipped L2 norm) normalization method shrinkage. The L2-Hys method is used to normalize the blocks and it consists of an L2-norm followed by clipping and a renormalization. The clipping limits the maximum value of the descriptor vector for each block to have the value of the given threshold (0.2 by default). After the clipping the descriptor vector is renormalized as described in *IJCV*, 60(2):91-110, 2004. * **gamma_correction** – *bool* Flag to specify whether the gamma correction preprocessing is required or not. Performing gamma correction slightly increases the performance of the HOG algorithm. * **nlevels** – *int* Maximum number of detection window increases. As we can see, the `cv2.HOGDescriptor()`function supports a wide range of parameters. The first few arguments (`block_size, block_stride, cell_size`, and `nbins`) are probably the ones you are most likely to change. The other parameters can be safely left at their default values and you will get good results. In the code below, we will use the `cv2.HOGDescriptor()`function to set the cell size, block size, block stride, and the number of bins for the histograms of the HOG descriptor. We will then use `.compute(image)`method to compute the HOG descriptor (feature vector) for the given `image`. ```python # Specify the parameters for our HOG descriptor # Cell Size in pixels (width, height). Must be smaller than the size of the detection window # and must be chosen so that the resulting Block Size is smaller than the detection window. cell_size = (6, 6) # Number of cells per block in each direction (x, y). Must be chosen so that the resulting # Block Size is smaller than the detection window num_cells_per_block = (2, 2) # Block Size in pixels (width, height). Must be an integer multiple of Cell Size. # The Block Size must be smaller than the detection window block_size = (num_cells_per_block[0] * cell_size[0], num_cells_per_block[1] * cell_size[1]) # Calculate the number of cells that fit in our image in the x and y directions x_cells = gray_image.shape[1] // cell_size[0] y_cells = gray_image.shape[0] // cell_size[1] # Horizontal distance between blocks in units of Cell Size. Must be an integer and it must # be set such that (x_cells - num_cells_per_block[0]) / h_stride = integer. h_stride = 1 # Vertical distance between blocks in units of Cell Size. Must be an integer and it must # be set such that (y_cells - num_cells_per_block[1]) / v_stride = integer. v_stride = 1 # Block Stride in pixels (horizantal, vertical). Must be an integer multiple of Cell Size block_stride = (cell_size[0] * h_stride, cell_size[1] * v_stride) # Number of gradient orientation bins num_bins = 9 # Specify the size of the detection window (Region of Interest) in pixels (width, height). # It must be an integer multiple of Cell Size and it must cover the entire image. Because # the detection window must be an integer multiple of cell size, depending on the size of # your cells, the resulting detection window might be slightly smaller than the image. # This is perfectly ok. 
win_size = (x_cells * cell_size[0] , y_cells * cell_size[1]) # Print the shape of the gray scale image for reference print('\nThe gray scale image has shape: ', gray_image.shape) print() # Print the parameters of our HOG descriptor print('HOG Descriptor Parameters:\n') print('Window Size:', win_size) print('Cell Size:', cell_size) print('Block Size:', block_size) print('Block Stride:', block_stride) print('Number of Bins:', num_bins) print() # Set the parameters of the HOG descriptor using the variables defined above hog = cv2.HOGDescriptor(win_size, block_size, block_stride, cell_size, num_bins) # Compute the HOG Descriptor for the gray scale image hog_descriptor = hog.compute(gray_image) ``` The gray scale image has shape: (250, 250) HOG Descriptor Parameters: Window Size: (246, 246) Cell Size: (6, 6) Block Size: (12, 12) Block Stride: (6, 6) Number of Bins: 9 # Number of Elements In The HOG Descriptor The resulting HOG Descriptor (feature vector), contains the normalized histograms from all cells from all blocks in the detection window concatenated in one long vector. Therefore, the size of the HOG feature vector will be given by the total number of blocks in the detection window, multiplied by the number of cells per block, times the number of orientation bins: <span class="mathquill"> \begin{equation} \mbox{total_elements} = (\mbox{total_number_of_blocks})\mbox{ } \times \mbox{ } (\mbox{number_cells_per_block})\mbox{ } \times \mbox{ } (\mbox{number_of_bins}) \end{equation} </span> If we don’t have overlapping blocks (*i.e.* the `block_stride`equals the `block_size`), the total number of blocks can be easily calculated by dividing the size of the detection window by the block size. However, in the general case we have to take into account the fact that we have overlapping blocks. To find the total number of blocks in the general case (*i.e.* for any `block_stride` and `block_size`), we can use the formula given below: <span class="mathquill"> \begin{equation} \mbox{Total}_i = \left( \frac{\mbox{block_size}_i}{\mbox{block_stride}_i} \right)\left( \frac{\mbox{window_size}_i}{\mbox{block_size}_i} \right) - \left [\left( \frac{\mbox{block_size}_i}{\mbox{block_stride}_i} \right) - 1 \right]; \mbox{ for } i = x,y \end{equation} </span> Where <span class="mathquill">Total$_x$</span>, is the total number of blocks along the width of the detection window, and <span class="mathquill">Total$_y$</span>, is the total number of blocks along the height of the detection window. This formula for <span class="mathquill">Total$_x$</span> and <span class="mathquill">Total$_y$</span>, takes into account the extra blocks that result from overlapping. After calculating <span class="mathquill">Total$_x$</span> and <span class="mathquill">Total$_y$</span>, we can get the total number of blocks in the detection window by multiplying <span class="mathquill">Total$_x$ $\times$ Total$_y$</span>. The above formula can be simplified considerably because the `block_size`, `block_stride`, and `window_size`are all defined in terms of the `cell_size`. 
By making all the appropriate substitutions and cancellations, the above formula reduces to:

<span class="mathquill">
\begin{equation}
\mbox{Total}_i = \left(\frac{\mbox{cells}_i - \mbox{num_cells_per_block}_i}{N_i}\right) + 1\mbox{ }; \mbox{ for } i = x,y
\end{equation}
</span>

Where <span class="mathquill">cells$_x$</span> is the total number of cells along the width of the detection window, and <span class="mathquill">cells$_y$</span> is the total number of cells along the height of the detection window. <span class="mathquill">$N_x$</span> is the horizontal block stride in units of `cell_size` and <span class="mathquill">$N_y$</span> is the vertical block stride in units of `cell_size`.

Let's calculate what the number of elements in the HOG feature vector should be and check that it matches the shape of the HOG Descriptor calculated above.

```python
# Calculate the total number of blocks along the width of the detection window
tot_bx = np.uint32(((x_cells - num_cells_per_block[0]) / h_stride) + 1)

# Calculate the total number of blocks along the height of the detection window
tot_by = np.uint32(((y_cells - num_cells_per_block[1]) / v_stride) + 1)

# Calculate the total number of elements in the feature vector
tot_els = (tot_bx) * (tot_by) * num_cells_per_block[0] * num_cells_per_block[1] * num_bins

# Print the total number of elements the HOG feature vector should have
print('\nThe total number of elements in the HOG Feature Vector should be: ',
      tot_bx, 'x', tot_by, 'x', num_cells_per_block[0], 'x', num_cells_per_block[1],
      'x', num_bins, '=', tot_els)

# Print the shape of the HOG Descriptor to see that it matches the above
print('\nThe HOG Descriptor has shape:', hog_descriptor.shape)
print()
```

    The total number of elements in the HOG Feature Vector should be:  40 x 40 x 2 x 2 x 9 = 57600

    The HOG Descriptor has shape: (57600, 1)

# Visualizing The HOG Descriptor

We can visualize the HOG Descriptor by plotting the histogram associated with each cell as a collection of vectors. To do this, we will plot each bin in the histogram as a single vector whose magnitude is given by the height of the bin and whose orientation is given by the angular bin it is associated with. Since any given cell might have multiple histograms associated with it, due to the overlapping blocks, we will average all the histograms for each cell to produce a single histogram per cell.

OpenCV has no easy way to visualize the HOG Descriptor, so we have to do some manipulation first in order to visualize it. We will start by reshaping the HOG Descriptor to make our calculations easier. We will then compute the average histogram of each cell and finally convert the histogram bins into vectors. Once we have the vectors, we plot the corresponding vectors for each cell in an image.

The code below produces an interactive plot. The figure contains:

* the grayscale image,
* the HOG Descriptor (feature vector),
* a zoomed-in portion of the HOG Descriptor, and
* the histogram of the selected cell.

**You can click anywhere on the gray scale image or the HOG Descriptor image to select a particular cell**. Once you click on either image, a *magenta* rectangle will appear showing the cell you selected. The Zoom Window will show you a zoomed-in version of the HOG descriptor around the selected cell, and the histogram plot will show you the corresponding histogram for the selected cell.
The interactive window also has buttons at the bottom that provide other functionality, such as panning and the option to save the figure if desired. The home button returns the figure to its default view.

**NOTE**: If you are running this notebook in the Udacity workspace, there is around a 2 second lag in the interactive plot. This means that if you click in the image to zoom in, it will take about 2 seconds for the plot to refresh.

```python
%matplotlib notebook

import copy
import matplotlib.patches as patches

# Set the default figure size
plt.rcParams['figure.figsize'] = [9.8, 9]

# Reshape the feature vector to [blocks_y, blocks_x, num_cells_per_block_x, num_cells_per_block_y, num_bins].
# The blocks_x and blocks_y will be transposed so that the first index (blocks_y) refers to the row number
# and the second index to the column number. This will be useful later when we plot the feature vector, so
# that the feature vector indexing matches the image indexing.
hog_descriptor_reshaped = hog_descriptor.reshape(tot_bx,
                                                 tot_by,
                                                 num_cells_per_block[0],
                                                 num_cells_per_block[1],
                                                 num_bins).transpose((1, 0, 2, 3, 4))

# Print the shape of the feature vector for reference
print('The feature vector has shape:', hog_descriptor.shape)

# Print the reshaped feature vector
print('The reshaped feature vector has shape:', hog_descriptor_reshaped.shape)

# Create an array that will hold the average gradients for each cell
ave_grad = np.zeros((y_cells, x_cells, num_bins))

# Print the shape of the ave_grad array for reference
print('The average gradient array has shape: ', ave_grad.shape)

# Create an array that will count the number of histograms per cell
hist_counter = np.zeros((y_cells, x_cells, 1))

# Add up all the histograms for each cell and count the number of histograms per cell
for i in range(num_cells_per_block[0]):
    for j in range(num_cells_per_block[1]):
        ave_grad[i:tot_by + i,
                 j:tot_bx + j] += hog_descriptor_reshaped[:, :, i, j, :]

        hist_counter[i:tot_by + i,
                     j:tot_bx + j] += 1

# Calculate the average gradient for each cell
ave_grad /= hist_counter

# Calculate the total number of vectors we have in all the cells.
len_vecs = ave_grad.shape[0] * ave_grad.shape[1] * ave_grad.shape[2]

# Create an array with num_bins values equally spaced between 0 and 180 degrees, in radians.
deg = np.linspace(0, np.pi, num_bins, endpoint = False)

# Each cell will have a histogram with num_bins. For each cell, plot each bin as a vector (with its magnitude
# equal to the height of the bin in the histogram, and its angle corresponding to the bin in the histogram).
# To do this, create rank 1 arrays that will hold the (x,y)-coordinates of all the vectors in all the cells in
# the image. Also, create rank 1 arrays that will hold all the (U,V)-components of all the vectors in all the
# cells in the image.
U = np.zeros((len_vecs))
V = np.zeros((len_vecs))
X = np.zeros((len_vecs))
Y = np.zeros((len_vecs))

# Set the counter to zero
counter = 0

# Use the cosine and sine functions to calculate the vector components (U,V) from their magnitudes. Remember the
# cosine and sine functions take angles in radians.
# Calculate the vector positions and magnitudes from the average gradient array
for i in range(ave_grad.shape[0]):
    for j in range(ave_grad.shape[1]):
        for k in range(ave_grad.shape[2]):
            U[counter] = ave_grad[i,j,k] * np.cos(deg[k])
            V[counter] = ave_grad[i,j,k] * np.sin(deg[k])

            X[counter] = (cell_size[0] / 2) + (cell_size[0] * i)
            Y[counter] = (cell_size[1] / 2) + (cell_size[1] * j)

            counter = counter + 1

# Create the bins in degrees to plot our histogram.
angle_axis = np.linspace(0, 180, num_bins, endpoint = False)
angle_axis += ((angle_axis[1] - angle_axis[0]) / 2)

# Create a figure with 4 subplots arranged in 2 x 2
fig, ((a,b),(c,d)) = plt.subplots(2,2)

# Set the title of each subplot
a.set(title = 'Gray Scale Image\n(Click to Zoom)')
b.set(title = 'HOG Descriptor\n(Click to Zoom)')
c.set(title = 'Zoom Window', xlim = (0, 18), ylim = (0, 18), autoscale_on = False)
d.set(title = 'Histogram of Gradients')

# Plot the gray scale image
a.imshow(gray_image, cmap = 'gray')
a.set_aspect(aspect = 1)

# Plot the feature vector (HOG Descriptor)
b.quiver(Y, X, U, V, color = 'white', headwidth = 0, headlength = 0, scale_units = 'inches', scale = 5)
b.invert_yaxis()
b.set_aspect(aspect = 1)
b.set_facecolor('black')

# Define function for interactive zoom
def onpress(event):

    # Unless the left mouse button is pressed do nothing
    if event.button != 1:
        return

    # Only accept clicks for subplots a and b
    if event.inaxes in [a, b]:

        # Get mouse click coordinates
        x, y = event.xdata, event.ydata

        # Select the cell closest to the mouse click coordinates
        cell_num_x = np.uint32(x / cell_size[0])
        cell_num_y = np.uint32(y / cell_size[1])

        # Set the edge coordinates of the rectangle patch
        edgex = x - (x % cell_size[0])
        edgey = y - (y % cell_size[1])

        # Create a rectangle patch that matches the cell selected above
        rect = patches.Rectangle((edgex, edgey),
                                 cell_size[0], cell_size[1],
                                 linewidth = 1,
                                 edgecolor = 'magenta',
                                 facecolor = 'none')

        # A single patch can only be used in a single plot.
        # Create copies of the patch to use in the other subplots
        rect2 = copy.copy(rect)
        rect3 = copy.copy(rect)

        # Update all subplots
        a.clear()
        a.set(title = 'Gray Scale Image\n(Click to Zoom)')
        a.imshow(gray_image, cmap = 'gray')
        a.set_aspect(aspect = 1)
        a.add_patch(rect)

        b.clear()
        b.set(title = 'HOG Descriptor\n(Click to Zoom)')
        b.quiver(Y, X, U, V, color = 'white', headwidth = 0, headlength = 0, scale_units = 'inches', scale = 5)
        b.invert_yaxis()
        b.set_aspect(aspect = 1)
        b.set_facecolor('black')
        b.add_patch(rect2)

        c.clear()
        c.set(title = 'Zoom Window')
        c.quiver(Y, X, U, V, color = 'white', headwidth = 0, headlength = 0, scale_units = 'inches', scale = 1)
        c.set_xlim(edgex - cell_size[0], edgex + (2 * cell_size[0]))
        c.set_ylim(edgey - cell_size[1], edgey + (2 * cell_size[1]))
        c.invert_yaxis()
        c.set_aspect(aspect = 1)
        c.set_facecolor('black')
        c.add_patch(rect3)

        d.clear()
        d.set(title = 'Histogram of Gradients')
        d.grid()
        d.set_xlim(0, 180)
        d.set_xticks(angle_axis)
        d.set_xlabel('Angle')
        d.bar(angle_axis,
              ave_grad[cell_num_y, cell_num_x, :],
              180 // num_bins,
              align = 'center',
              alpha = 0.5,
              linewidth = 1.2,
              edgecolor = 'k')

        fig.canvas.draw()

# Create a connection between the figure and the mouse click
fig.canvas.mpl_connect('button_press_event', onpress)

plt.show()
```

    The feature vector has shape: (57600, 1)
    The reshaped feature vector has shape: (40, 40, 2, 2, 9)
    The average gradient array has shape:  (41, 41, 9)

# Understanding The Histograms

Let's take a look at a couple of snapshots of the above figure to see if the histograms for the selected cell make sense. Let's start by looking at a cell that is inside a triangle and not near an edge:

<br>
<figure>
  <figcaption style = "text-align:center; font-style:italic">Fig. 4. - Histograms Inside a Triangle.</figcaption>
</figure>
<br>

In this case, since the triangle is nearly all of the same color, there shouldn't be any dominant gradient in the selected cell. As we can clearly see in the Zoom Window and the histogram, this is indeed the case. We have many gradients, but none of them clearly dominates over the others.

Now let's take a look at a cell that is near a horizontal edge:

<br>
<figure>
  <figcaption style = "text-align:center; font-style:italic">Fig. 5. - Histograms Near a Horizontal Edge.</figcaption>
</figure>
<br>

Remember that edges are areas of an image where the intensity changes abruptly. In these cases, we will have a high intensity gradient in some particular direction. This is exactly what we see in the corresponding histogram and Zoom Window for the selected cell. In the Zoom Window, we can see that the dominant gradient is pointing up, almost at 90 degrees, since that's the direction in which there is a sharp change in intensity. Therefore, we should expect to see the 90-degree bin in the histogram dominate strongly over the others. This is in fact what we see.

Now let's take a look at a cell that is near a vertical edge:

<br>
<figure>
  <figcaption style = "text-align:center; font-style:italic">Fig. 6. - Histograms Near a Vertical Edge.</figcaption>
</figure>
<br>

In this case we expect the dominant gradient in the cell to be horizontal, close to 180 degrees, since that's the direction in which there is a sharp change in intensity. Therefore, we should expect to see the 170-degree bin in the histogram dominate strongly over the others.
This is what we see in the histogram, but we also see that there is another dominant gradient in the cell, namely the one in the 10-degree bin. The reason for this is that the HOG algorithm uses unsigned gradients, which means 0 degrees and 180 degrees are considered the same. Therefore, when the histograms are being created, angles between 160 and 180 degrees contribute proportionally to both the 10-degree bin and the 170-degree bin. This results in there being two dominant gradients in the cell near the vertical edge instead of just one.

To conclude, let's take a look at a cell that is near a diagonal edge.

<br>
<figure>
  <figcaption style = "text-align:center; font-style:italic">Fig. 7. - Histograms Near a Diagonal Edge.</figcaption>
</figure>
<br>

To understand what we are seeing, let's first remember that gradients have an *x*-component and a *y*-component, just like vectors. Therefore, the resulting orientation of a gradient is given by the vector sum of its components. For this reason, on vertical edges the gradients are horizontal, because they only have an x-component, as we saw in the vertical-edge case (Fig. 6). On horizontal edges the gradients are vertical, because they only have a y-component, as we saw in the horizontal-edge case (Fig. 5). Consequently, on diagonal edges the gradients are also going to be diagonal, because both the *x* and *y* components are non-zero. Since the diagonal edges in the image are close to 45 degrees, we should expect to see a dominant gradient orientation in the 50-degree bin.

This is in fact what we see in the histogram but, just like in the vertical-edge case (Fig. 6), there are two dominant gradients instead of just one. The reason for this is that when the histograms are being created, angles near the boundaries of bins contribute proportionally to the adjacent bins. For example, a gradient with an angle of 40 degrees is right in the middle of the 30-degree and 50-degree bins. Therefore, the magnitude of the gradient is split evenly between the 30-degree and 50-degree bins. This results in there being two dominant gradients in the cell near the diagonal edge instead of just one.

Now that you know how HOG is implemented, in the workspace you will find a notebook named *Examples*. In there, you will be able to set your own parameters for the HOG descriptor for various images. Have fun!
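The proportional, unsigned voting described above is easy to sketch in code. The helper below is only an illustration (it is not OpenCV's internal implementation) of how a single gradient's magnitude could be split between the two nearest orientation bins, assuming the same nine bins centered at 10, 30, ..., 170 degrees used in this notebook:

```python
import numpy as np

def vote_unsigned(angle_deg, magnitude, num_bins = 9):
    """Split one gradient magnitude between the two nearest orientation bins.

    Assumes unsigned gradients (0 and 180 degrees are the same) and bin
    centers at 10, 30, ..., 170 degrees, as in the histograms above.
    """
    bin_width = 180.0 / num_bins                               # 20 degrees
    centers = np.arange(num_bins) * bin_width + bin_width / 2  # 10, 30, ..., 170
    angle = angle_deg % 180.0                                  # fold onto [0, 180)

    # Index of the bin whose center lies at or below the angle (wraps around)
    lower = int(np.floor((angle - bin_width / 2) / bin_width)) % num_bins
    upper = (lower + 1) % num_bins

    # Fraction of the way from the lower bin center to the upper bin center
    frac = ((angle - centers[lower]) % 180.0) / bin_width

    votes = np.zeros(num_bins)
    votes[lower] += magnitude * (1 - frac)
    votes[upper] += magnitude * frac
    return votes

print(vote_unsigned(40, 1.0))   # split evenly between the 30- and 50-degree bins
print(vote_unsigned(175, 1.0))  # split between the 170- and 10-degree bins
```

A gradient at 40 degrees votes equally into the 30- and 50-degree bins, while a gradient at 175 degrees votes into both the 170- and 10-degree bins, which is exactly the behavior seen in the vertical- and diagonal-edge histograms above.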
# Application to Genome Multiple Alignment / Hamiltonian Path

- Sum-of-pairs (SP) alignment is [NP-Complete](https://www.liebertpub.com/doi/abs/10.1089/cmb.1994.1.337)
- We would like to work on this using an annealing approach, as discussed in [this paper](https://arxiv.org/abs/2004.06719)
- In a typical setup, this kind of problem is solved by finding a Hamiltonian path on the overlap-layout-consensus (OLC) graph of raw genome reads (Figure 1 in the paper)
- Solving the Hamiltonian Cycle problem is also discussed in [Mahasinghe et al.](https://dl.acm.org/doi/10.1145/3290688.3290703)
- We focus here on solving the Hamiltonian Cycle problem

## Construct a graph that has a Hamiltonian Path

- Create a line and add random edges

```python
import networkx as nx

nodes = 10

G = nx.Graph()
for i in range(nodes):
    G.add_node(i)
for i in range(nodes - 1):
    G.add_edge(i, i + 1)

nx.draw(G)
```

```python
import random

# Add a few random extra edges on top of the line graph
for i in range(5):
    node1 = random.randrange(0, nodes)
    node2 = random.randrange(0, nodes)
    G.add_edge(node1, node2)

nx.draw(G)
```

### Hamiltonian definition

- We borrow the definition from the [paper](https://dl.acm.org/doi/10.1145/3290688.3290703) but remove the cycle condition from $H$

$F(\mathbf{x})=H(\mathbf{x})+P_{1}(\mathbf{x})+P_{2}(\mathbf{x})$

where

$H(\mathbf{x})=\sum_{\left(i_{1}, i_{2}\right) \in V \times V-E(G)}\sum_{j=0}^{n-2} x_{i_{1}, j} x_{i_{2}, j+1}$

$P_{1}(\mathbf{x})=\sum_{i=0}^{n-1}\left(1-\sum_{j=0}^{n-1} x_{i, j}\right)^{2}$

and

$P_{2}(\mathbf{x})=\sum_{j=0}^{n-1}\left(1-\sum_{i=0}^{n-1} x_{i, j}\right)^{2}$

```python
import numpy as np
from sympy import *
import itertools

# Spin variables sigma_{i,j} arranged as a nodes x nodes matrix, mapped to
# binary variables x_{i,j} = (sigma_{i,j} + 1) / 2
sigmas = symbols("sigma0:100")
sigmas = np.reshape(sigmas, (nodes, nodes))
xs = (sigmas + 1) / 2

# Sum over every pair (i1, i2) that is NOT an edge of G and over
# consecutive path positions j and j+1
H = 0
n = nodes
for i1 in range(nodes):
    for i2 in range(nodes):
        if not G.has_edge(i1, i2):
            for j in range(0, n - 1):
                H += xs[i1, j] * xs[i2, j + 1]
H
```

The output (omitted here for brevity) is the fully expanded symbolic expression for $H$: a long sum of products of the form $\left(\frac{\sigma_{a}}{2} + \frac{1}{2}\right)\left(\frac{\sigma_{b}}{2} + \frac{1}{2}\right)$, with one term for every non-edge $(i_1, i_2)$ of $G$ and every pair of consecutive positions $(j, j+1)$.
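The penalty terms $P_1$ and $P_2$ from the definition above can be built symbolically in the same way as $H$. The cell below is a sketch (it assumes the `xs`, `nodes`, and `H` defined in the previous cells; only the equations themselves come from the original formulation) of how the full cost $F(\mathbf{x}) = H(\mathbf{x}) + P_1(\mathbf{x}) + P_2(\mathbf{x})$ could be assembled:

```python
# Penalty terms from the definition above (assumes xs, nodes, and H from the cells above).
# P1 vanishes only when every node i occupies exactly one position j of the path;
# P2 vanishes only when every position j is occupied by exactly one node i.
P1 = sum((1 - sum(xs[i, j] for j in range(nodes)))**2 for i in range(nodes))
P2 = sum((1 - sum(xs[i, j] for i in range(nodes)))**2 for j in range(nodes))

# Full cost function F(x) = H(x) + P1(x) + P2(x); a spin assignment with F = 0
# encodes a Hamiltonian path of G, which is what an annealer would try to reach.
F = H + P1 + P2
```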
\left(\frac{\sigma_{45}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{94}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{45}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{96}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{46}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{47}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{46}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{5}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{46}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{65}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{46}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{67}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{46}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{7}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{46}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{75}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{46}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{77}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{46}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{85}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{46}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{87}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{46}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{95}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{46}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{97}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{47}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{48}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{47}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{6}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{47}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{66}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{47}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{68}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{47}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{76}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{47}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{78}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{47}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{8}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{47}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{86}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{47}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{88}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{47}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{96}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{47}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{98}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{48}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{49}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{48}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{67}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{48}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{69}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{48}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{7}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{48}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{77}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{48}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{79}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{48}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{87}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{48}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{89}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{48}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{9}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{48}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{97}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{48}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{99}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{49}}{2} + \frac{1}{2}\right) 
\left(\frac{\sigma_{68}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{49}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{78}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{49}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{8}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{49}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{88}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{49}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{98}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{5}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{54}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{5}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{56}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{5}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{6}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{5}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{64}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{5}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{66}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{5}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{74}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{5}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{76}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{5}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{84}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{5}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{86}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{5}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{94}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{5}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{96}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{50}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{51}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{50}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{71}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{51}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{52}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{51}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{70}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{51}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{72}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{52}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{53}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{52}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{71}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{52}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{73}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{53}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{54}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{53}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{72}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{53}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{74}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{54}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{55}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{54}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{73}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{54}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{75}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{55}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{56}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{55}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{6}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{55}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{74}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{55}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{76}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{56}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{57}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{56}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{7}}{2} + \frac{1}{2}\right) + 
\left(\frac{\sigma_{56}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{75}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{56}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{77}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{57}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{58}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{57}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{6}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{57}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{76}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{57}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{78}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{57}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{8}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{58}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{59}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{58}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{7}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{58}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{77}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{58}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{79}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{58}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{9}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{59}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{78}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{59}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{8}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{6}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{65}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{6}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{67}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{6}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{7}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{6}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{75}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{6}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{77}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{6}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{85}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{6}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{87}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{6}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{95}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{6}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{97}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{60}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{61}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{60}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{81}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{60}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{91}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{61}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{62}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{61}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{80}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{61}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{82}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{61}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{90}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{61}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{92}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{62}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{63}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{62}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{81}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{62}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{83}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{62}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{91}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{62}}{2} + \frac{1}{2}\right) 
\left(\frac{\sigma_{93}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{63}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{64}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{63}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{82}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{63}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{84}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{63}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{92}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{63}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{94}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{64}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{65}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{64}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{83}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{64}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{85}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{64}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{93}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{64}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{95}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{65}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{66}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{65}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{84}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{65}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{86}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{65}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{94}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{65}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{96}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{66}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{67}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{66}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{7}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{66}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{85}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{66}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{87}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{66}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{95}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{66}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{97}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{67}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{68}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{67}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{8}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{67}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{86}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{67}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{88}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{67}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{96}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{67}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{98}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{68}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{69}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{68}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{7}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{68}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{87}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{68}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{89}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{68}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{9}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{68}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{97}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{68}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{99}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{69}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{8}}{2} + \frac{1}{2}\right) + 
\left(\frac{\sigma_{69}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{88}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{69}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{98}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{7}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{76}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{7}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{78}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{7}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{8}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{7}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{86}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{7}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{88}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{7}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{96}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{7}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{98}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{70}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{71}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{70}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{91}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{71}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{72}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{71}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{90}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{71}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{92}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{72}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{73}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{72}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{91}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{72}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{93}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{73}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{74}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{73}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{92}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{73}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{94}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{74}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{75}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{74}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{93}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{74}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{95}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{75}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{76}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{75}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{94}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{75}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{96}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{76}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{77}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{76}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{95}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{76}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{97}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{77}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{78}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{77}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{8}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{77}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{96}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{77}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{98}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{78}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{79}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{78}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{9}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{78}}{2} + \frac{1}{2}\right) 
\left(\frac{\sigma_{97}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{78}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{99}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{79}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{8}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{79}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{98}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{8}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{87}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{8}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{89}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{8}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{9}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{8}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{97}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{8}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{99}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{80}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{81}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{81}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{82}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{82}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{83}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{83}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{84}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{84}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{85}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{85}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{86}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{86}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{87}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{87}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{88}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{88}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{89}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{88}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{9}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{9}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{98}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{90}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{91}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{91}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{92}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{92}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{93}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{93}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{94}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{94}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{95}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{95}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{96}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{96}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{97}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{97}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{98}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{98}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{99}}{2} + \frac{1}{2}\right)$ ```python P1 = 0 for i in range(n): temp = 0 for j in range(n): temp += xs[i, j] P1 += (1 - temp) ** 2 P1 ``` $\displaystyle \left(- \frac{\sigma_{0}}{2} - \frac{\sigma_{1}}{2} - \frac{\sigma_{2}}{2} - \frac{\sigma_{3}}{2} - \frac{\sigma_{4}}{2} - \frac{\sigma_{5}}{2} - \frac{\sigma_{6}}{2} - \frac{\sigma_{7}}{2} - \frac{\sigma_{8}}{2} - \frac{\sigma_{9}}{2} - 4\right)^{2} + \left(- \frac{\sigma_{10}}{2} - \frac{\sigma_{11}}{2} - \frac{\sigma_{12}}{2} - \frac{\sigma_{13}}{2} - \frac{\sigma_{14}}{2} - \frac{\sigma_{15}}{2} - \frac{\sigma_{16}}{2} - \frac{\sigma_{17}}{2} - \frac{\sigma_{18}}{2} - \frac{\sigma_{19}}{2} - 4\right)^{2} + \left(- \frac{\sigma_{20}}{2} - \frac{\sigma_{21}}{2} - 
```python
P2 = 0
for j in range(n):
    temp = 0
    for i in range(n):
        temp += xs[i, j]
    P2 += (1 - temp) ** 2
P2
```

$\displaystyle \left(- \frac{\sigma_{0}}{2} - \frac{\sigma_{10}}{2} - \cdots - \frac{\sigma_{90}}{2} - 4\right)^{2} + \cdots + \left(- \frac{\sigma_{9}}{2} - \frac{\sigma_{19}}{2} - \cdots - \frac{\sigma_{99}}{2} - 4\right)^{2}$
```python
F = H + P1 + P2
```

```python
import re

def separate_to_coeff_and_symbols(expr):
    syms = []
    coeffs = []
    for elem in Mul.make_args(expr):
        if type(elem) == Symbol:
            syms.append(elem)
        else:
            coeffs.append(elem)
    if coeffs:
        coeff = coeffs[0]
    else:
        coeff = 1
    return coeff, syms

def pow_to_mul(expr):
    """
    Convert integer powers in an expression to Muls, like a**2 => a*a.
    """
    pows = list(expr.atoms(Pow))
    if any(not e.is_Integer for b, e in (i.as_base_exp() for i in pows)):
        raise ValueError("A power contains a non-integer exponent")
    repl = zip(pows, (Mul(*[b]*e, evaluate=False) for b, e in (i.as_base_exp() for i in pows)))
    return expr.subs(repl)

def reconstruct_interaction_tensors_2_local(hamiltonian_expr):
    coeffs = []
    indices = []
    max_index = 0
    # Split into terms that only contain multiplication
    for term in Add.make_args(expand(hamiltonian_expr)):
        coeff, syms = separate_to_coeff_and_symbols(pow_to_mul(term))
        this_indices = []
        for sym in syms:
            name = sym.name
            # Regex match to find the index of the variable
            m = re.match(r"sigma(\d+)", name)
            index = int(m.groups(0)[0])
            this_indices.append(index)
        if this_indices:
            # Remember the maximum index so that we know the shape of h/J
            max_index = max(max_index, max(this_indices))
        this_indices = np.array(np.sort(this_indices), dtype=int)
        coeffs.append(coeff)
        indices.append(this_indices)
    dim = max_index + 1
    h = np.zeros(dim)
    J = np.zeros((dim, dim))
    E0 = 0
    for i, index in enumerate(indices):
        if len(index) == 1:
            h[index[0]] = coeffs[i]
        elif len(index) == 2:
            J[index[0], index[1]] = coeffs[i]
        else:
            E0 = coeffs[i]
    return E0, h, J
```
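As a quick illustration of what this helper extracts, here is a hypothetical toy check (the two-spin expression and the names `toy`, `E0_toy`, `h_toy`, `J_toy` are ours, not part of the original run); the expected tensors are shown in the comments:

```python
from sympy import symbols

# hypothetical toy expression: constant 3, field 2*sigma0, coupling -sigma0*sigma1
s0, s1 = symbols("sigma0 sigma1")
toy = 3 + 2 * s0 - s0 * s1

E0_toy, h_toy, J_toy = reconstruct_interaction_tensors_2_local(toy)
print(E0_toy)  # 3
print(h_toy)   # [2. 0.]
print(J_toy)   # [[ 0. -1.]
               #  [ 0.  0.]]
```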
```python
E0, h, J = reconstruct_interaction_tensors_2_local(F)
```

```python
import pandas as pd
import numpy as np
from abstract_ising import AbstractIsing
import matplotlib.pyplot as plt
%matplotlib inline

def spinfield_1d(index, spin_count):
    spins = [1 if digit == '1' else -1 for digit in bin(index)[2:]]
    res = -np.ones(spin_count)
    res[spin_count - len(spins):spin_count] = spins
    return res

class IsingModel2D(AbstractIsing):
    def __init__(self, E0, h, J, seed=0):
        super().__init__()
        self.E0 = E0
        self.h = h
        self.J = J
        np.random.seed(seed)
        self.num_spins = h.shape[0]
        self.spins = 2 * (np.random.rand(self.num_spins) < 0.5) - 1

    def energy(self, spins=None):
        spins = self.spins if spins is None else spins
        interaction1d = spins.dot(self.h)
        interaction2d = spins.dot(self.J.dot(spins))
        return self.E0 + interaction1d + interaction2d

    def exact(self):
        # brute-force minimum over all 2**n spin configurations
        n = len(self.spins)
        all_one = 2 ** n - 1
        E = np.inf
        for i in range(all_one + 1):
            spins = spinfield_1d(i, n)
            E = min(E, self.energy(spins))
        return float(E)

    def energy_diff(self, i):
        diff1d = self.h[i]
        diff2d = self.J[i, :].dot(self.spins) + self.J[:, i].dot(self.spins)
        return -2 * self.spins[i] * (diff1d + diff2d)

    def rand_site(self):
        return (np.random.randint(self.num_spins),)
```

```python
def exp_schedule(T_i, T_f, N):
    t = np.arange(N + 1)
    Ts = T_i * ((T_f / T_i) ** (t / N))
    return Ts

def anneal(ising, Ts):
    Es = np.zeros_like(Ts)
    min_spins = np.copy(ising.spins)   # keep the best configuration seen so far
    min_E = None
    for i, t in enumerate(Ts):
        Es[i] = ising.mc_step(T=t)
        if min_E is None:
            min_E = Es[i]
        elif min_E > Es[i]:
            min_E = Es[i]
            min_spins = np.copy(ising.spins)
    return Es, min_spins

def calc_once(ising, Ts, include_exact=True, plot_title=None):
    ising.method = 'metropolis'
    Es, spins = anneal(ising, Ts)
    Ea = min(Es)
    plt.plot(np.arange(len(Es)), Es)
    Et = None
    plt.plot(np.arange(len(Ts)), np.repeat(Ea, len(Ts)), color='g')
    if include_exact:
        Et = ising.exact()
        print("Exact: ", Et)
        plt.plot(np.arange(len(Ts)), np.repeat(Et, len(Ts)), color='r')
    if plot_title:
        plt.title(plot_title)
    plt.show()
    return spins

F = H + P1 + P2
E0, h, J = reconstruct_interaction_tensors_2_local(F)
ising = IsingModel2D(E0, h, J, seed=1)
spins = calc_once(ising, exp_schedule(400, 0.1, 10000), include_exact=False)
solution = np.reshape(spins, (nodes, nodes))
solution
```

- Each row corresponds to a node, and each column corresponds to the position at which that node appears in the path
- A 1 marks an appearance
- The one-node-per-position (cycle) condition is satisfied
- We draw the path on the graph (a quick constraint check follows the plot below)

```python
pos = nx.spring_layout(G)
ax = nx.draw(G, pos=pos)

def build_color_list(nodes, accent_nodes):
    node_color_list = []
    for i in nodes:
        if i in accent_nodes:
            node_color_list.append('#00FF00')
        else:
            node_color_list.append('#0000FF')
    return node_color_list

def build_node_path_label_dictionary(nodes, path):
    node_label_dict = {}
    for i in nodes:
        if i in path:
            node_label_dict[i] = path.index(i)
        else:
            node_label_dict[i] = ""
    return node_label_dict

def find_path(solution):
    path = []
    for col in range(solution.shape[1]):
        path.append(np.where(solution[:, col] == 1)[0][0])
    return path

def draw_graph(graph, node_color_list=None, node_label_dict=None, edge_color_list=None):
    nx.draw_networkx_edges(graph, pos, edge_color=edge_color_list)
    nx.draw_networkx_nodes(graph, pos, node_color=node_color_list)
    if node_label_dict:
        nx.draw_networkx_labels(graph, pos, node_label_dict)
```

```python
path = find_path(solution)
node_color_list = build_color_list(G.nodes, path)
node_label_dict = build_node_path_label_dictionary(G.nodes, path)
draw_graph(G, node_color_list, node_label_dict)
```
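As a quick sanity check, the three conditions can be verified directly; this is a minimal sketch, where `check_solution` is our own hypothetical helper reusing the `solution` matrix and graph `G` from above:

```python
import numpy as np

def check_solution(solution, graph):
    """Check one-hot rows/columns of a +/-1 solution matrix and adjacency of consecutive nodes."""
    x = (solution + 1) / 2                                   # back to 0/1 variables
    rows_ok = bool(np.all(x.sum(axis=1) == 1))               # each node used exactly once
    cols_ok = bool(np.all(x.sum(axis=0) == 1))               # one node per path position
    path = [int(np.argmax(x[:, col])) for col in range(x.shape[1])]
    edges_ok = all(graph.has_edge(path[k], path[k + 1]) for k in range(len(path) - 1))
    return rows_ok, cols_ok, edges_ok

check_solution(solution, G)
```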
- It does not seem to find the optimal result well
- With a weaker penalty on the single-appearance constraint, that constraint is not satisfied
- Weakening the penalty on non-existent edges is unreasonable, as the problem here is node 9, which does not connect

### Using non-QUBO optimization

- This simple DFS is very fast and also returns the correct result (ref. [Stackoverflow](https://stackoverflow.com/questions/47982604/hamiltonian-path-using-python))
- A Hamiltonian path solver is not implemented in any commercial Python package I found.

```python
def hamilton(G, size, pt, path=[]):
    print('hamilton called with pt={}, path={}'.format(pt, path))
    if pt not in set(path):
        path.append(pt)
        if len(path) == size:
            return path
        for pt_next in G.neighbors(pt):
            res_path = [i for i in path]
            candidate = hamilton(G, size, pt_next, res_path)
            if candidate is not None:  # skip loop or dead end
                return candidate
        print('path {} is a dead end'.format(path))
    else:
        print('pt {} already in path {}'.format(pt, path))
    # loop or dead end, None is implicitly returned

path = hamilton(G, 10, 0)
node_color_list = build_color_list(G.nodes, path)
node_label_dict = build_node_path_label_dictionary(G.nodes, path)
draw_graph(G, node_color_list, node_label_dict)
```

### QUBO using D-Wave hybrid

- The hybrid solver solves the problem using the same QUBO
- From this it seems that global updates such as [Swendsen-Wang](https://en.wikipedia.org/wiki/Swendsen–Wang_algorithm) and/or [replica exchange](https://en.wikipedia.org/wiki/Parallel_tempering), as well as importance sampling, are used in the hybrid solver

```python
import dimod

def build_bqm_from_interaction_tensors(h, J):
    linear_coeffs = {}
    for i in range(h.shape[0]):
        linear_coeffs[i] = h[i]
    quadratic_coeffs = {}
    for i in range(J.shape[0]):
        for j in range(J.shape[1]):
            quadratic_coeffs[(i, j)] = J[i, j]
    bqm = dimod.BinaryQuadraticModel(linear_coeffs, quadratic_coeffs, vartype='SPIN')
    return bqm

bqm = build_bqm_from_interaction_tensors(h, J)

from dwave.system import LeapHybridSampler
sampler = LeapHybridSampler()   # doctest: +SKIP
answer = sampler.sample(bqm)    # doctest: +SKIP
```

       0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 ... 99  energy num_oc.
    0 +1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 +1 -1 -1 -1 -1 -1 -1 ... -1  -541.0       1
    ['SPIN', 1 rows, 1 samples, 100 variables]

```python
lowest = answer.lowest()
```

```python
solution = lowest.to_pandas_dataframe().iloc[0, 0:100].to_numpy().reshape(10, 10)
path = find_path(solution)
node_color_list = build_color_list(G.nodes, path)
node_label_dict = build_node_path_label_dictionary(G.nodes, path)
draw_graph(G, node_color_list, node_label_dict)
```

### QUBO using D-Wave Tabu sampler

- This does not solve the problem with the default settings
- Increasing num_shots to 100000 does not help

```python
from tabu import TabuSampler
sampler = TabuSampler()
answer = sampler.sample(bqm)
lowest = answer.lowest()
```

```python
solution = lowest.to_pandas_dataframe().iloc[0, 0:100].to_numpy().reshape(10, 10)
path = find_path(solution)
node_color_list = build_color_list(G.nodes, path)
node_label_dict = build_node_path_label_dictionary(G.nodes, path)
draw_graph(G, node_color_list, node_label_dict)
```

### QUBO using D-Wave Neal

- This does not solve the problem with the default settings

```python
import neal
sampler = neal.SimulatedAnnealingSampler()
answer = sampler.sample(bqm)
lowest = answer.lowest()
```

```python
solution = lowest.to_pandas_dataframe().iloc[0, 0:100].to_numpy().reshape(10, 10)
path = find_path(solution)
node_color_list = build_color_list(G.nodes, path)
node_label_dict = build_node_path_label_dictionary(G.nodes, path)
draw_graph(G, node_color_list, node_label_dict)
```

### QUBO using D-Wave QPU

- A fully connected problem on 100 variables does not fit onto a single QPU
- 8 nodes will fit, so we re-define the problem here and re-solve it with our SA implementation (a sketch of a direct QPU submission follows below)
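For reference, once the smaller problem defined below has been turned into a BQM in the same way as before (call it `bqm2`; the name is ours and purely illustrative), a direct submission to the QPU might look roughly like this sketch:

```python
# Sketch only: assumes bqm2 is a BinaryQuadraticModel built from the 8-node h/J,
# and that D-Wave Leap credentials are configured in the environment.
from dwave.system import DWaveSampler, EmbeddingComposite

qpu_sampler = EmbeddingComposite(DWaveSampler())   # minor-embeds the problem onto the QPU graph
answer = qpu_sampler.sample(bqm2, num_reads=1000)
lowest = answer.lowest()
```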
```python
import networkx as nx

nodes = 8
G2 = nx.Graph()
for i in range(nodes):
    G2.add_node(i)
for i in range(nodes - 1):
    G2.add_edge(i, i + 1)

import random
for i in range(3):
    node1 = random.randrange(0, nodes)
    node2 = random.randrange(0, nodes)
    G2.add_edge(node1, node2)

nx.draw(G2)
```

```python
sigmas = symbols("sigma0:64")
sigmas = np.reshape(sigmas, (nodes, nodes))
xs = (sigmas + 1) / 2

# penalize consecutive path positions assigned to non-adjacent nodes of G2
H = 0
n = nodes
for i1 in range(nodes):
    for i2 in range(nodes):
        if not G2.has_edge(i1, i2):
            for j in range(0, n - 1):
                H += xs[i1, j] * xs[i2, j + 1]
H
```

$\displaystyle \left(\frac{\sigma_{0}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{1}}{2} + \frac{1}{2}\right) + \cdots +
\left(\frac{\sigma_{11}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{36}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{11}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{42}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{11}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{44}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{11}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{50}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{11}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{52}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{11}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{58}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{11}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{60}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{12}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{13}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{12}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{35}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{12}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{37}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{12}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{43}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{12}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{45}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{12}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{51}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{12}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{53}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{12}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{59}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{12}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{61}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{13}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{14}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{13}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{36}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{13}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{38}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{13}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{44}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{13}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{46}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{13}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{52}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{13}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{54}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{13}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{60}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{13}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{62}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{14}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{15}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{14}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{37}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{14}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{39}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{14}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{45}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{14}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{47}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{14}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{53}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{14}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{55}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{14}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{61}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{14}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{63}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{15}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{38}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{15}}{2} + \frac{1}{2}\right) 
\left(\frac{\sigma_{46}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{15}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{54}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{15}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{62}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{16}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{17}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{16}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{33}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{16}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{41}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{16}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{49}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{16}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{57}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{17}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{18}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{17}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{2}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{17}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{32}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{17}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{34}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{17}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{40}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{17}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{42}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{17}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{48}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{17}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{50}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{17}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{56}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{17}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{58}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{18}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{19}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{18}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{3}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{18}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{33}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{18}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{35}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{18}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{41}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{18}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{43}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{18}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{49}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{18}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{51}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{18}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{57}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{18}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{59}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{19}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{2}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{19}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{20}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{19}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{34}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{19}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{36}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{19}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{4}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{19}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{42}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{19}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{44}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{19}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{50}}{2} + \frac{1}{2}\right) + 
\left(\frac{\sigma_{19}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{52}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{19}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{58}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{19}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{60}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{2}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{25}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{2}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{27}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{2}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{3}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{2}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{33}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{2}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{35}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{2}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{41}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{2}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{43}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{2}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{49}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{2}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{51}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{2}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{57}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{2}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{59}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{20}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{21}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{20}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{3}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{20}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{35}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{20}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{37}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{20}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{43}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{20}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{45}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{20}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{5}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{20}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{51}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{20}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{53}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{20}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{59}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{20}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{61}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{21}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{22}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{21}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{36}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{21}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{38}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{21}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{4}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{21}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{44}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{21}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{46}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{21}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{52}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{21}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{54}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{21}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{6}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{21}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{60}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{21}}{2} + \frac{1}{2}\right) 
\left(\frac{\sigma_{62}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{22}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{23}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{22}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{37}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{22}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{39}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{22}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{45}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{22}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{47}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{22}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{5}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{22}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{53}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{22}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{55}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{22}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{61}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{22}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{63}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{22}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{7}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{23}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{38}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{23}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{46}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{23}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{54}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{23}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{6}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{23}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{62}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{24}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{25}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{24}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{41}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{24}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{49}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{24}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{57}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{25}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{26}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{25}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{40}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{25}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{42}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{25}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{48}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{25}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{50}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{25}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{56}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{25}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{58}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{26}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{27}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{26}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{3}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{26}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{41}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{26}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{43}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{26}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{49}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{26}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{51}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{26}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{57}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{26}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{59}}{2} + \frac{1}{2}\right) + 
\left(\frac{\sigma_{27}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{28}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{27}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{4}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{27}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{42}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{27}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{44}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{27}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{50}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{27}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{52}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{27}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{58}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{27}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{60}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{28}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{29}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{28}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{3}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{28}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{43}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{28}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{45}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{28}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{5}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{28}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{51}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{28}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{53}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{28}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{59}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{28}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{61}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{29}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{30}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{29}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{4}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{29}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{44}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{29}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{46}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{29}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{52}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{29}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{54}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{29}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{6}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{29}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{60}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{29}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{62}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{3}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{34}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{3}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{36}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{3}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{4}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{3}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{42}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{3}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{44}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{3}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{50}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{3}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{52}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{3}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{58}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{3}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{60}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{30}}{2} + \frac{1}{2}\right) 
\left(\frac{\sigma_{31}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{30}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{45}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{30}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{47}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{30}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{5}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{30}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{53}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{30}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{55}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{30}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{61}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{30}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{63}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{30}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{7}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{31}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{46}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{31}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{54}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{31}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{6}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{31}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{62}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{32}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{33}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{32}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{49}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{32}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{57}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{32}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{9}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{33}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{34}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{33}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{48}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{33}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{50}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{33}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{56}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{33}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{58}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{33}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{8}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{34}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{35}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{34}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{49}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{34}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{51}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{34}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{57}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{34}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{59}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{34}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{9}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{35}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{36}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{35}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{4}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{35}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{50}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{35}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{52}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{35}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{58}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{35}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{60}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{36}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{37}}{2} + \frac{1}{2}\right) + 
\left(\frac{\sigma_{36}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{5}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{36}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{51}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{36}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{53}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{36}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{59}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{36}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{61}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{37}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{38}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{37}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{4}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{37}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{52}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{37}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{54}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{37}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{6}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{37}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{60}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{37}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{62}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{38}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{39}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{38}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{5}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{38}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{53}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{38}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{55}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{38}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{61}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{38}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{63}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{38}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{7}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{39}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{54}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{39}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{6}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{39}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{62}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{4}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{43}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{4}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{45}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{4}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{5}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{4}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{51}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{4}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{53}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{4}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{59}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{4}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{61}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{40}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{41}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{40}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{57}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{40}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{9}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{41}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{42}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{41}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{56}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{41}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{58}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{41}}{2} + \frac{1}{2}\right) 
\left(\frac{\sigma_{8}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{42}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{43}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{42}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{57}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{42}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{59}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{42}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{9}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{43}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{44}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{43}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{58}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{43}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{60}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{44}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{45}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{44}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{5}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{44}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{59}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{44}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{61}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{45}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{46}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{45}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{6}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{45}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{60}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{45}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{62}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{46}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{47}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{46}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{5}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{46}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{61}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{46}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{63}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{46}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{7}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{47}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{6}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{47}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{62}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{48}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{49}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{48}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{9}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{49}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{50}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{49}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{8}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{5}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{52}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{5}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{54}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{5}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{6}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{5}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{60}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{5}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{62}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{50}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{51}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{50}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{9}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{51}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{52}}{2} + \frac{1}{2}\right) + \left(\frac{\sigma_{52}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{53}}{2} + \frac{1}{2}\right) + 
\left(\frac{\sigma_{53}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{54}}{2} + \frac{1}{2}\right) + \dots + \left(\frac{\sigma_{8}}{2} + \frac{1}{2}\right) \left(\frac{\sigma_{9}}{2} + \frac{1}{2}\right)$

*(output abridged: the remaining terms of the expanded Hamiltonian follow the same pattern)*

```python
P1 = 0
for i in range(n):
    temp = 0
    for j in range(n):
        temp += xs[i, j]
    P1 += (1 - temp) ** 2
P1
```

$\displaystyle \left(- \frac{\sigma_{0}}{2} - \frac{\sigma_{1}}{2} - \frac{\sigma_{2}}{2} - \frac{\sigma_{3}}{2} - \frac{\sigma_{4}}{2} - \frac{\sigma_{5}}{2} - \frac{\sigma_{6}}{2} - \frac{\sigma_{7}}{2} - 3\right)^{2} + \dots + \left(- \frac{\sigma_{56}}{2} - \frac{\sigma_{57}}{2} - \frac{\sigma_{58}}{2} - \frac{\sigma_{59}}{2} - \frac{\sigma_{60}}{2} - \frac{\sigma_{61}}{2} - \frac{\sigma_{62}}{2} - \frac{\sigma_{63}}{2} - 3\right)^{2}$

*(output abridged: one squared row-constraint term per row)*

```python
P2 = 0
for j in range(n):
    temp = 0
    for i in range(n):
        temp += xs[i, j]
    P2 += (1 - temp) ** 2
P2
```

$\displaystyle \left(- \frac{\sigma_{0}}{2} - \frac{\sigma_{16}}{2} - \frac{\sigma_{24}}{2} - \frac{\sigma_{32}}{2} - \frac{\sigma_{40}}{2} - \frac{\sigma_{48}}{2} - \frac{\sigma_{56}}{2} - \frac{\sigma_{8}}{2} - 3\right)^{2} + \dots + \left(- \frac{\sigma_{15}}{2} - \frac{\sigma_{23}}{2} - \frac{\sigma_{31}}{2} - \frac{\sigma_{39}}{2} - \frac{\sigma_{47}}{2} - \frac{\sigma_{55}}{2} - \frac{\sigma_{63}}{2} - \frac{\sigma_{7}}{2} - 3\right)^{2}$

*(output abridged: one squared column-constraint term per column)*

```python
F = H + P1 + P2
```

```python
E0, h, J = reconstruct_interaction_tensors_2_local(F)
ising = IsingModel2D(E0, h, J, seed=1)
spins = calc_once(ising, exp_schedule(400, 0.1, 10000), include_exact=False)
solution = np.reshape(spins, (nodes, nodes))
```

```python
path = find_path(solution)
node_color_list = build_color_list(G2.nodes, path)
node_label_dict = build_node_path_label_dictionary(G2.nodes, path)
draw_graph(G2, node_color_list, node_label_dict)
```

- Actually, in this case we can solve the problem with our manual simulated annealing.
- Let's now run the same problem on the QPU.

```python
from dwave.system import DWaveSampler, EmbeddingComposite

bqm = build_bqm_from_interaction_tensors(h, J)
sampler = EmbeddingComposite(DWaveSampler())
answer = sampler.sample(bqm, num_reads=10000)
lowest = answer.lowest()
```

```python
solution = lowest.to_pandas_dataframe().iloc[0, 0:64].to_numpy().reshape(8, 8)
solution
```

    array([[-1., -1., -1., -1., -1., -1., -1.,  1.],
           [-1., -1., -1., -1., -1., -1., -1., -1.],
           [ 1., -1., -1., -1.,  1., -1., -1., -1.],
           [-1.,  1., -1.,  1., -1., -1., -1., -1.],
           [-1., -1.,  1., -1., -1., -1., -1., -1.],
           [-1., -1., -1., -1., -1., -1., -1., -1.],
           [-1., -1., -1., -1., -1., -1.,  1., -1.],
           [-1., -1., -1., -1., -1., -1., -1., -1.]])

```python
path = find_path(solution)
node_color_list = build_color_list(G2.nodes, path)
node_label_dict = build_node_path_label_dictionary(G2.nodes, path)
draw_graph(G2, node_color_list, node_label_dict)
```

### QPU does not work

- This sample does not give a satisfactory path: `find_path` raises an error because there is no 1 in the 7th column of the solution matrix.
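A follow-up sketch (added here, not one of the notebook's helper functions): before decoding a sampled spin matrix with `find_path`, one can test whether it encodes a valid tour at all, i.e. whether every row and every column contains exactly one $+1$ spin. The function `is_valid_tour` below is hypothetical and only assumes `solution` is the 8×8 array of ±1 spins shown above.

```python
# Minimal validity check for a sampled tour matrix (illustrative sketch).
import numpy as np

def is_valid_tour(solution: np.ndarray) -> bool:
    """Return True if each row and each column has exactly one +1 entry."""
    ones = (solution == 1).astype(int)
    rows_ok = np.all(ones.sum(axis=1) == 1)
    cols_ok = np.all(ones.sum(axis=0) == 1)
    return bool(rows_ok and cols_ok)

# For the QPU sample printed above, several rows contain no +1 entry,
# so this check fails before find_path is even called.
```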
5c55420367cb2c316a479050b96654fe91504633
357,309
ipynb
Jupyter Notebook
Project_4_Ising_Annealer/Challenge2.ipynb
Anand270294/CohortProject_2020
b62e53afa0ea05a3d119889bacae38fa9ab0292c
[ "MIT" ]
27
2020-06-22T05:14:01.000Z
2021-07-28T22:18:16.000Z
Project_4_Ising_Annealer/Challenge2.ipynb
Anand270294/CohortProject_2020
b62e53afa0ea05a3d119889bacae38fa9ab0292c
[ "MIT" ]
30
2020-07-13T01:22:18.000Z
2020-08-09T20:43:45.000Z
Project_4_Ising_Annealer/Challenge2.ipynb
Anand270294/CohortProject_2020
b62e53afa0ea05a3d119889bacae38fa9ab0292c
[ "MIT" ]
82
2020-06-18T18:01:42.000Z
2021-07-18T07:50:33.000Z
291.919118
75,114
0.714116
true
54,413
Qwen/Qwen-72B
1. YES 2. YES
0.833325
0.757794
0.631489
__label__zho_Hans
0.178451
0.30549
```python
from sympy import *
from sympy import init_printing; init_printing(use_latex="mathjax")
import numpy as np
import matplotlib.pyplot as plt
# command so that the plots are rendered inline in the notebook
%matplotlib inline
```

```python
x,y,z = symbols("x y z")
```

```python
exp(x)/factorial(y)**z**2
```

$$e^{x} y!^{- z^{2}}$$

```python
symbols("a:d, I:K")
```

$$\left ( a, \quad b, \quad c, \quad d, \quad I, \quad J, \quad K\right )$$

```python
symbols("X:6")
```

$$\left ( X_{0}, \quad X_{1}, \quad X_{2}, \quad X_{3}, \quad X_{4}, \quad X_{5}\right )$$

```python
x+2*y+x-3*y
```

$$2 x - y$$

```python
z**6+5*y-x-x**2*y+z
```

$$- x^{2} y - x + 5 y + z^{6} + z$$

```python
sin(2)
```

$$\sin{\left (2 \right )}$$

```python
sin(2).evalf()
```

$$0.909297426825682$$

### From string expressions to SymPy expressions

```python
ex="x**2+3"
type(ex)
```

    str

```python
sim_ex = simplify(ex)
sim_ex
```

$$x^{2} + 3$$

```python
sim_ex.subs(x,7)
```

$$52$$

```python
str_fun = lambdify([x],sim_ex,"numpy")
x_vec = np.linspace(0,10,100)
plt.plot(x_vec,str_fun(x_vec));
```

```python
import pandas as pd
d = {"x":x_vec, "y":str_fun(x_vec)}
df = pd.DataFrame(data=d)
df[:5]
```

|   | x       | y        |
|---|---------|----------|
| 0 | 0.00000 | 3.000000 |
| 1 | 0.10101 | 3.010203 |
| 2 | 0.20202 | 3.040812 |
| 3 | 0.30303 | 3.091827 |
| 4 | 0.40404 | 3.163249 |

## Limits

```python
limit(exp(-y),y,oo)
```

$$0$$

```python
Limit(exp(-y),y,oo)
```

$$\lim_{y \to \infty} e^{- y}$$

```python
express = Limit((cos(y)-1)/y,y,0)
express
```

$$\lim_{y \to 0^+}\left(\frac{1}{y} \left(\cos{\left (y \right )} - 1\right)\right)$$

```python
express.doit()
```

$$0$$

```python
n=Symbol("n")
suma1 = Sum(1/n**2,(n,1,oo))
suma1
```

$$\sum_{n=1}^{\infty} \frac{1}{n^{2}}$$

```python
suma1.doit()
```

$$\frac{\pi^{2}}{6}$$

```python
Product(n,(n,4,10))
```

$$\prod_{n=4}^{10} n$$

# Integrals

```python
I=Integral(exp(-y**2-z**2),(y,-oo,-oo),(z,-oo,oo))
I
```

$$\int_{-\infty}^{\infty}\int_{-\infty}^{-\infty} e^{- y^{2} - z^{2}}\, dy\, dz$$

```python
I.doit()
```

$$0$$
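One detail worth noticing in the last cells: the inner limits are $(y, -\infty, -\infty)$, which is why the result is $0$. As an added aside (assuming the intent was to integrate over the whole plane), the full two-dimensional Gaussian integral evaluates to $\pi$; the snippet below is an illustrative check, not part of the original notebook.

```python
# Sketch: the 2D Gaussian integral over the whole plane equals pi.
from sympy import Integral, exp, oo, symbols

y, z = symbols("y z")
I_full = Integral(exp(-y**2 - z**2), (y, -oo, oo), (z, -oo, oo))
print(I_full.doit())   # pi
```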
1d37d3ca6aa2cd9b43628e9180f29ba8d88f347e
23,598
ipynb
Jupyter Notebook
Modulo1/SympyClase4_Douglas.ipynb
douglasparism/SimulacionM2018
85953efb86c7ebf2f398474608dfda18cb4cf5b8
[ "MIT" ]
null
null
null
Modulo1/SympyClase4_Douglas.ipynb
douglasparism/SimulacionM2018
85953efb86c7ebf2f398474608dfda18cb4cf5b8
[ "MIT" ]
null
null
null
Modulo1/SympyClase4_Douglas.ipynb
douglasparism/SimulacionM2018
85953efb86c7ebf2f398474608dfda18cb4cf5b8
[ "MIT" ]
null
null
null
36.304615
11,502
0.658276
true
1,102
Qwen/Qwen-72B
1. YES 2. YES
0.912436
0.880797
0.803671
__label__yue_Hant
0.2463
0.70553
## Expected value of random variable Expected value of random variable is generalization of taking average of numbers. It is similar to taking weighted average, where each value of random variable is multiplied by it's probability. $$\mathbb{E}[X] = \sum_{x \in \mathcal{X}} x \cdot p_X(x) $$ Also in terms of conditional probability, $$\mathbb{E}[X \mid Y=y] = \sum_{x \in \mathcal{X}} x \cdot p_{X\mid Y}(x\mid y)$$ In general, let $f$ any function from $\mathbb{R}$ to $\mathbb{R}$, then $$ \mathbb{E}[f(X)] = \sum_{x \in \mathcal{X}} f(x) \cdot p_X(x) $$ Thus expectection gives a single number associated with a probability table. ### Exercise: Expected Value Suppose that a student's score on a test will be $100$ if she studies the week before, and $75$ if she does not. Suppose also that the student's probability of studying the week before is $0.8$. What is her expected score? (Please provide an exact answer.) ```python X = {'S': 100, 'N':75}; p_X = {'S': 0.80, 'N': 0.20} E_X = sum([X[i] * p_X[i] for i in X]); E_X ``` 95.0 Let's look at why the expected value of a random variable is in some sense a “good" average value. Let $X$ be the result of a single fair six-sided die with faces $1$ up through $6$. Simulate 10,000 rolls of the die roll in Python and take the average of the faces that appeared. What do you get? (Just make a note of it. There's no answer box to enter this in.) What is $\mathbb{E}[X]$? (Please provide an exact answer.) ```python E_X = sum([i * 1/6 for i in range(1,7)]); E_X ``` 3.5 You should notice that the average you get in simulation should be very close to E[X], and in fact, if you increase the number of rolls, it will tend to get closer (it doesn't necessarily have to get closer when you do each additional roll but the trend is there as you just keep increasing the number of rolls). ```python import sys import numpy as np sys.path.append('../comp_prob_inference') import comp_prob_inference p_X = {i: 1/6 for i in range(1, 7)} num_samples = 10000 print(np.mean([comp_prob_inference.sample_from_finite_probability_space(p_X) for n in range(num_samples)])) ``` 3.497 ```python import matplotlib.pyplot as plt plt.figure(figsize=(8, 4)) n = 5000 x = list(range(1, n+1)) y = [] for i in x: if i == 1: y.append(comp_prob_inference.sample_from_finite_probability_space(p_X)) if i > 1: y.append((y[i-2] * (i-1) + comp_prob_inference.sample_from_finite_probability_space(p_X)) / i) plt.xlabel('No of dice rolled') plt.ylabel('Expected value') plt.plot(x,y) plt.show() ``` We can observe that as the no of dice roll increases the become closer to $3.5$. ## Variance This exercise explores the important concept of variance, which measures how much a random variable deviates from its expectation. This can be thought of as a measure of uncertainty. Higher variance means more uncertainty. The variance of a real-valued random variable $X$ is defined as $$\text {var}(X) \triangleq \mathbb {E}[ (X - \mathbb {E}[X])^2 ].$$ Note that as we saw previously, $\mathbb{E}[X]$ is just a single number. To keep the variance of $X$, what you could do is first compute the expectation of $X$. For example, if $X$ takes on each of the values $3$, $5$, and $10$ with equal probability $1/3$, then first we compute $\mathbb{E}[X]$ to get $6$, and then we compute $\mathbb{E}[(X−6)^2]$, where we remember to use the result that for a function $f$, if $f(X)$ is a real-valued random variable, then $\mathbb{E}[f(X)]=\sum_x xf(x)pX(x)$. Here, $f$ is given by $f(x)=(x−6)^2$. 
So $$\text {var}(X) = (3 - 6)^2 \cdot \frac13 + (5 - 6)^2 \cdot \frac13 + (10 - 6)^2 \cdot \frac13 = \frac{26}{3}.$$ ```python def E(p_X): return sum([key * value for key, value in p_X.items()]) def VAR(p_X): avg = E(p_X) p_Xt = {(key - avg)**2 : value for key, value in p_X.items()} return E(p_Xt) ``` ### Exercise Let's return to the three lotteries from earlier. Here, random variables $L_1$, $L_2$, and $L_3$ represent the amount won (accounting for having to pay \$1): |$L_1$ | $p$ | $L_2$ | $p$ | $L_3$ | $p$ | |----------:|:------------------------:|-------------:|:------------------------:|--------:|:--------------:| | -1 | $\frac{999999}{1000000}$ | -1 | $\frac{999999}{1000000}$ | -1 | $\frac{9}{10}$ | | -1+1000 | $\frac{1}{1000000}$ | -1+1000000 | $\frac{1}{1000000}$ | -1+10 | $\frac{1}{10}$ | Compute the variance for each of these three random variables. (Please provide the exact answer for each of these.) - var($L_1$)= {{V_1}} - var($L_2$)= {{V_2}} - var($L_3$)= {{V_3}} ```python p_L1 = {-1: 999999/1000000, 999 : 1/1000000} p_L2 = {-1: 999999/1000000, 999999: 1/1000000} p_L3 = {-1: 9/10 , 9 : 1/10 } V_1 = VAR(p_L1) V_2 = VAR(p_L2) V_3 = VAR(p_L3) ``` What units is variance in? Notice that we started with dollars, and then variance is looking at the expectation of a dollar amount squared. Thus, specifically for the lottery example $\text {var}(L_1)$, $\text {var}(L_2)$, and $\text {var}(L_3)$ are each in squared dollars. ## Standard Deviation Some times, people prefer keeping the units the same as the original units (i.e., without squaring), which you can get by computing what's called the standard deviation of a real-valued random variable $X$: $$\text {std}(X) \triangleq \sqrt {\text {var}(X)}.$$ ```python def STD(p_X): from sympy import sqrt return sqrt(VAR(p_X)) ``` ### Exercise Compute the following standard deviations, which are in units of dollars. (Please be precise with at least 3 decimal places, unless of course the answer doesn't need that many decimal places. You could also put a fraction.) - std($L_1$) = {{print(S_1)}} - std($L_2$) = {{print(S_2)}} - std($L_3$) = {{print(S_3)}} ```python S_1 = STD(p_L1) S_2 = STD(p_L2) S_3 = STD(p_L3) ``` !Note When we first introduced the three lotteries and computed average winnings, we didn't account for the uncertainty in the average winnings. Here, it's clear that the third lottery has far smaller standard deviation and variance than the second lottery.<br> As a remark, often in financial applications (e.g., choosing a portfolio of stocks to invest in), accounting for uncertainty is extremely important. For example, you may want to maximize profit while ensuring that the amount of uncertainty is not too high as to not be reckless in investing. In the case of the three lotteries, to decide between them, you could for example use a score that is of the form $$\mathbb {E}[L_ i] - \lambda \cdot \text {std}(L_ i) \qquad \text {for }i = 1,2,3,$$ where $λ≥0$ is some parameter that you choose for how much you want to penalize uncertainty in the lottery outcome. Then you could choose the lottery with the highest score. Finally, a quick sanity check (this is more for you to think about the definition of variance rather than to compute anything out): **Question:** Can variance be negative? If yes, give a specific distribution as a Python dictionary for which the variance is negative. If no, enter the text "no" (all lowercase, one word, no spaces). **Answer:** NO ## The Law of Total Expectation Remember the law of total probability? 
For a set of events $\mathcal{B}_{1},\dots ,\mathcal{B}_{n}$ that partition the sample space $Ω$ (so the Bi's don't overlap and together they fully cover the full space of possible outcomes), $$\mathbb {P}(\mathcal{A})=\sum _{i=1}^{n}\mathbb {P}(\mathcal{A}\cap \mathcal{B}_{i})=\sum _{i=1}^{n}\mathbb {P}(\mathcal{A}\mid \mathcal{B}_{i})\mathbb {P}(\mathcal{B}_{i}),$$ where the second equality uses the product rule. A similar statement is true for the expected value of a random variable, called the law of total expectation: for a random variable $X$ (with alphabet $\mathcal{X}$) and a partition $\mathcal{B}_1,\dots ,\mathcal{B}_ n$ of the sample space, $$\mathbb {E}[X]=\sum _{i=1}^{n}\mathbb {E}[X\mid \mathcal{B}_{i}]\mathbb {P}(\mathcal{B}_{i}),$$ where $$\mathbb {E}[X\mid \mathcal{B}_{i}] = \sum _{x\in \mathcal{X}}xp_{X\mid \mathcal{B}_{i}}(x) = \sum _{x\in \mathcal{X}}x\frac{\mathbb {P}(X=x,\mathcal{B}_{i})}{\mathbb {P}(\mathcal{B}_{i})}.$$ We will be using this result in the section “Towards Infinity in Modeling Uncertainty". Show that the law of total expectation is true. **Solution:** There are different ways to prove the law of total expectation. We take a fairly direct approach here, first writing everything in terms of outcomes in the sample space. The main technical hurdle is that the events $\mathcal{B}_1, \dots , \mathcal{B}_ n$ are specified directly in the sample space, whereas working with values that $X$ takes on requires mapping from the sample space to the alphabet of $X$. We will derive the law of total expectation starting from the right-hand side of the equation above, i.e., $\sum _{i=1}^{n}\mathbb {E}[X\mid \mathcal{B}_{i}]\mathbb {P}(\mathcal{B}_{i})$. We first write $\mathbb {E}[X\mid \mathcal{B}_{i}]$ in terms of a summation over outcomes in $\Omega$: $$\begin{align} \mathbb {E}[X\mid \mathcal{B}_{i}] =& \sum _{x\in \mathcal{X}}x\frac{\mathbb {P}(X=x,\mathcal{B}_{i})}{\mathbb {P}(\mathcal{B}_{i})}\\ =& \sum _{x\in \mathcal{X}}x\frac{\mathbb {P}(\{ \omega \in \Omega \; :\; X(\omega )=x\} \cap \mathcal{B}_{i})}{\mathbb {P}(\mathcal{B}_{i})}\\ =& \sum _{x\in \mathcal{X}}x\frac{\mathbb {P}(\{ \omega \in \Omega \; :\; X(\omega )=x\text { and }\omega \in \mathcal{B}_{i}\} )}{\mathbb {P}(\mathcal{B}_{i})}\\ =& \sum _{x\in \mathcal{X}}x\frac{\mathbb {P}(\{ \omega \in \mathcal{B}_{i}\; :\; X(\omega )=x\} )}{\mathbb {P}(\mathcal{B}_{i})}\\ =& \sum _{x\in \mathcal{X}}x\cdot \frac{\sum _{\omega \in \mathcal{B}_{i}\text { such that }X(\omega )=x}\mathbb {P}(\{ \omega \} )}{\mathbb {P}(\mathcal{B}_{i})} \\ =& \frac{1}{\mathbb {P}(\mathcal{B}_{i})}\sum _{x\in \mathcal{X}}x\sum _{\omega \in \mathcal{B}_{i}\text { such that }X(\omega )=x}\mathbb {P}(\{ \omega \} )\\ =& \frac{1}{\mathbb {P}(\mathcal{B}_{i})}\sum _{\omega \in \mathcal{B}_{i}}X(\omega )\mathbb {P}(\{ \omega \} ). \end{align}$$ Thus, $$\begin{align} \sum _{i=1}^{n}\mathbb {E}[X\mid \mathcal{B}_{i}]\mathbb {P}(\mathcal{B}_{i})=& \sum _{i=1}^{n}\bigg(\frac{1}{\mathbb {P}(\mathcal{B}_{i})}\sum _{\omega \in \mathcal{B}_{i}}X(\omega )\mathbb {P}(\{ \omega \} )\bigg)\mathbb {P}(\mathcal{B}_{i})\\ =& \sum _{i=1}^{n}\sum _{\omega \in \mathcal{B}_{i}}X(\omega )\mathbb {P}(\{ \omega \} )\\ =& \sum _{\omega \in \Omega }X(\omega )\mathbb {P}(\{ w\} )\\ =& \sum _{x\in \mathcal{X}}x\mathbb {P}(\{ \omega \in \Omega \text { such that }X(\omega )=x\} )\\ =& \sum _{x\in \mathcal{X}}xp_{X}(x)\\ =&\mathbb {E}[X]. \end{align}$$ ```python ```
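As a quick numerical illustration of two results above (the variance of the {3, 5, 10} example and the law of total expectation), here is a small self-contained check; it is an added sketch, not part of the original exercises.

```python
# Added sketch: check the worked variance example and the law of total expectation.

# Variance of X taking the values 3, 5, 10 with probability 1/3 each: expect 26/3.
p = {3: 1/3, 5: 1/3, 10: 1/3}
E = sum(x * px for x, px in p.items())                 # 6.0
var = sum((x - E) ** 2 * px for x, px in p.items())    # 8.666... = 26/3
print(E, var)

# Law of total expectation: E[X] = sum_i E[X | B_i] P(B_i), fair die split even/odd.
p_X = {x: 1/6 for x in range(1, 7)}
E_X = sum(x * px for x, px in p_X.items())             # 3.5

total = 0.0
for outcomes in ([2, 4, 6], [1, 3, 5]):                # partition of the sample space
    P_B = sum(p_X[x] for x in outcomes)                # P(B_i)
    E_given_B = sum(x * p_X[x] for x in outcomes) / P_B  # E[X | B_i]
    total += E_given_B * P_B
print(E_X, total)                                      # both equal 3.5
```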
95b6006f5e19631a8e79b179ae13879152d7bb24
36,290
ipynb
Jupyter Notebook
week04/01 Expected Value.ipynb
infimath/Computational-Probability-and-Inference
e48cd52c45ffd9458383ba0f77468d31f781dc77
[ "MIT" ]
1
2019-04-04T03:07:47.000Z
2019-04-04T03:07:47.000Z
week04/01 Expected Value.ipynb
infimath/Computational-Probability-and-Inference
e48cd52c45ffd9458383ba0f77468d31f781dc77
[ "MIT" ]
null
null
null
week04/01 Expected Value.ipynb
infimath/Computational-Probability-and-Inference
e48cd52c45ffd9458383ba0f77468d31f781dc77
[ "MIT" ]
1
2021-02-27T05:33:49.000Z
2021-02-27T05:33:49.000Z
76.079665
19,276
0.755029
true
3,566
Qwen/Qwen-72B
1. YES 2. YES
0.9659
0.943348
0.911179
__label__eng_Latn
0.977022
0.955308
```julia using CSV using DataFrames using PyPlot using ScikitLearn # machine learning package using StatsBase using Random using LaTeXStrings # for L"$x$" to work instead of needing to do "\$x\$" using Printf using PyCall sns = pyimport("seaborn") # (optional)change settings for all plots at once, e.g. font size rcParams = PyPlot.PyDict(PyPlot.matplotlib."rcParams") rcParams["font.size"] = 16 # (optional) change the style. see styles here: https://matplotlib.org/3.1.1/gallery/style_sheets/style_sheets_reference.html PyPlot.matplotlib.style.use("seaborn-white") ``` ## classifying breast tumors as malignant or benign source: [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+(Diagnostic)) > Features are computed from a digitized image of a fine needle aspirate (FNA) of a breast mass. They describe characteristics of the cell nuclei present in the image. The mean radius and smoothness of the cell nuclei (the two features) and the outcome (M = malignant, B = benign) of the tumor are in the `breast_cancer_data.csv`. ```julia df = CSV.read("breast_cancer_data.csv") df[!, :class] = map(row -> row == "B" ? 0 : 1, df[:, :outcome]) first(df, 5) ``` <table class="data-frame"><thead><tr><th></th><th>mean_radius</th><th>mean_smoothness</th><th>outcome</th><th>class</th></tr><tr><th></th><th>Float64</th><th>Float64</th><th>String</th><th>Int64</th></tr></thead><tbody><p>5 rows × 4 columns</p><tr><th>1</th><td>13.85</td><td>1.495</td><td>B</td><td>0</td></tr><tr><th>2</th><td>9.668</td><td>2.275</td><td>B</td><td>0</td></tr><tr><th>3</th><td>9.295</td><td>2.388</td><td>B</td><td>0</td></tr><tr><th>4</th><td>19.69</td><td>4.585</td><td>M</td><td>1</td></tr><tr><th>5</th><td>9.755</td><td>1.243</td><td>B</td><td>0</td></tr></tbody></table> ## visualize the two classes distributed in feature space Where SVM just computes a dividing plane, logistic regression calculates a probability that each point is in a partaicular class ```julia markers = Dict("M" => "x", "B" => "o") fig, ax = subplots(figsize=(8, 8)) ax.set_xlabel("mean radius") ax.set_ylabel("mean smoothness") ax.set_facecolor("#efefef") for df_c in groupby(df, :outcome) outcome = df_c[1, :outcome] ax.scatter(df_c[:, :mean_radius], df_c[:, :mean_smoothness], label="$outcome", marker=markers[outcome], alpha=0.5) end legend() axis("equal") sns.despine() ``` ## get data ready for classifiation in scikitlearn scikitlearn takes as input: * a feature matrix `X`, which must be `n_samples` by `n_features` * a target vector `y`, which must be `n_samples` long (of course) ```julia n_tumors = nrow(df) X = zeros(n_tumors, 2) y = zeros(n_tumors) for (i, tumor) in enumerate(eachrow(df)) X[i, 1] = tumor[:mean_radius] X[i, 2] = tumor[:mean_smoothness] y[i] = tumor[:class] end X # look at y too! ``` 300×2 Array{Float64,2}: 13.85 1.495 9.668 2.275 9.295 2.388 19.69 4.585 9.755 1.243 16.11 4.533 14.78 2.45 15.78 3.598 15.71 1.972 14.68 3.195 13.71 3.856 21.09 4.414 11.31 1.831 ⋮ 11.08 1.719 18.94 5.486 15.32 4.061 14.25 5.373 20.6 5.772 8.671 1.435 11.64 2.155 12.06 1.171 13.88 1.709 14.9 3.466 19.59 2.916 14.81 1.677 ## logistic regression let $\mathbf{x} \in \mathbb{R}^2$ be the feature vector describing a tumor. let $T$ be the random variable that denotes whether the tumor is benign (0) or malignant (1). 
the logistic model is a probabilistic model for the probability that a tumor is malignant given its feature vector:

\begin{equation}
\log \frac{Pr(T=1 | \mathbf{x})}{1-Pr(T=1 | \mathbf{x})} = \beta_0 + \boldsymbol \beta^\intercal \mathbf{x}
\end{equation}

where $\beta_0$ is the intercept and $\boldsymbol \beta \in \mathbb{R}^2$ are the weights for the two features.

we will use scikitlearn to learn the $\beta_0$ and $\boldsymbol \beta$ that maximize the likelihood.

```julia
@sk_import linear_model : LogisticRegression
```

    PyObject <class 'sklearn.linear_model.logistic.LogisticRegression'>

the maximum likelihood fit solves

$$\vec{\nabla}_{\boldsymbol\beta}\,\ell = \vec{0}$$

```julia
# the default LogisticRegression in sklearn applies regularization,
# so we set penalty to "none" to fit the plain maximum likelihood model.
# the solver finds the root of the gradient of the log-likelihood.
lr = LogisticRegression(penalty="none", solver="newton-cg")
lr.fit(X, y)
println("β = ", lr.coef_)
println("β₀ = ", lr.intercept_)
```

    β = [1.168660778217552 0.9420681231447384]
    β₀ = [-19.387890643955803]

prediction of the probability that a new tumor is 0 (benign) or 1 (malignant)

```julia
# x = [20.0 5.0]
x = [15.0 2.5]
lr.predict(x)          # class prediction; should be malignant for x = [20.0 5.0]
lr.predict_proba(x)    # [Pr(y=0|x)  Pr(y=1|x)]
```

    1×2 Array{Float64,2}:
     0.378201  0.621799

## visualize the learned model $Pr(T=1|\mathbf{x})$

```julia
radius = 5:0.25:30
smoothness = 0.0:0.25:20.0

lr_prediction = zeros(length(smoothness), length(radius))
for i = 1:length(radius)
    for j = 1:length(smoothness)
        # consider this feature vector
        x = [radius[i] smoothness[j]]
        # use logistic regression to predict P(y=1|x)
        lr_prediction[j, i] = lr.predict_proba(x)[2]   # second element because we want y=1
    end
end
```

```julia
fig, ax = subplots(figsize=(8, 8))
ax.set_xlabel("mean radius")
ax.set_ylabel("mean smoothness")
heatmap = ax.pcolor(radius, smoothness, lr_prediction, cmap="viridis", vmin=0.0, vmax=1.0)
colorbar(heatmap, label="Pr(y=1|x)")
sns.despine()
```

TODO: add the data points to the plot above.

## making decisions: the ROC curve

where to put the decision boundary depends on the cost of a false positive versus a false negative (here, "positive" is defined as testing positive for "malignant").

> "I equally value minimizing (1) false positives and (2) false negatives." $\implies$ choose $Pr(T=1|\mathbf{x})=0.5$ as the decision boundary.

> "I'd rather predict that a benign tumor is malignant (false positive) than predict that a malignant tumor is benign (false negative)." $\implies$ choose $Pr(T=1|\mathbf{x})=0.2$ as the decision boundary. Even if there is a relatively small chance that the tumor is malignant, we still take action and classify it as malignant...

the receiver operating characteristic (ROC) curve is a way we can evaluate a classification algorithm without imposing our values and specifying where the decision boundary should be.

```julia
@sk_import metrics : roc_curve
@sk_import metrics : auc
```

    PyObject <function auc at 0x7f4f562ef378>

    WARNING: both StatsBase and ScikitLearn export "predict"; uses of it in module Main must be qualified

DIY

```julia
prob_pred = lr.predict_log_proba(X)[:, 2]   # log Pr(Y = 1 | x)
p_star = 0.2                                # choose some threshold
y_pred = prob_pred .> p_star
nb_positive_examples = sum(y)
FP = sum((y_pred .== 1) .& (y .== 0))       # elementwise .& is needed here
# calculate TPR, FPR, and sweep through p*
```

Using sklearn

# NOTE: Something is off here; the returned
P stars should be on 0, 1 ```julia fpr, tpr, p_stars = roc_curve(y, prob_pred) ``` ([0.0, 0.0, 0.0, 0.005555555555555556, 0.005555555555555556, 0.011111111111111112, 0.011111111111111112, 0.027777777777777776, 0.027777777777777776, 0.03333333333333333 … 0.37222222222222223, 0.43333333333333335, 0.43333333333333335, 0.4722222222222222, 0.4722222222222222, 0.5055555555555555, 0.5055555555555555, 0.7666666666666667, 0.7666666666666667, 1.0], [0.0, 0.008333333333333333, 0.7416666666666667, 0.7416666666666667, 0.7583333333333333, 0.7583333333333333, 0.7666666666666667, 0.7666666666666667, 0.8083333333333333, 0.8083333333333333 … 0.9583333333333334, 0.9583333333333334, 0.975, 0.975, 0.9833333333333333, 0.9833333333333333, 0.9916666666666667, 0.9916666666666667, 1.0, 1.0], [0.9999999999999254, -7.460698725481331e-14, -0.2281773184684447, -0.23795226108194242, -0.25716530360843015, -0.2714716737794504, -0.2725272737389853, -0.36203897786093064, -0.4648327329708211, -0.5295512979795379 … -2.3136622282804, -2.7295957892469147, -2.81149663007954, -2.944834369630702, -2.9587770993003786, -3.134366327666348, -3.1359311568511523, -4.777592079038611, -4.827750830178593, -9.038550910571699]) ```julia figure() title("ROC Curve") xlabel("FPR") ylabel("TPR") plot([0, 1], [0, 1], c="k", label="Pr(Y=1|x)=uniform(0, 1)") plot(fpr, tpr, c="darkorange", label="LR Model") scatter(fpr, tpr, c=p_stars, cmap="viridis")#, vmin=0.0, vmax=1.0) colorbar(label="threshold") legend() println("AUC = ", auc(fpr, tpr)) ``` tradeoff: * threshold too small: classify all of the tumors as malignant, false positive rate very high * threshold too large: classify all of the tumors as benign, false negative rate very high somewhere in the middle (but still depending on the cost of a false positive versus false negative) is where we should operate. the `auc`, area under the curve, has a probabilistic interpretation: > the area under the curve (often referred to as simply the AUC) is equal to the probability that a classifier will rank a randomly chosen positive instance higher than a randomly chosen negative one (assuming 'positive' ranks higher than 'negative') -[Wikipedia](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) **warning**: always split your data into test or train or do cross-validation to assess model performance. we trained on all data here to see the mechanics of fitting a logistic regression model to data, visualizing the model, and creating an ROC curve.
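The "DIY" cell earlier only sketched the threshold sweep behind an ROC curve. Below is an added illustration of that sweep written in Python (the notebook above uses Julia); the function name and the toy labels/scores are made up for the example, and it is a sketch rather than a drop-in replacement for `roc_curve`.

```python
# Threshold sweep behind an ROC curve: for each threshold, classify by score >= t
# and record the true positive rate and false positive rate.
import numpy as np

def roc_points(y_true, scores):
    thresholds = np.sort(np.unique(scores))[::-1]   # sweep from strict to lenient
    P = np.sum(y_true == 1)                         # number of positive examples
    N = np.sum(y_true == 0)                         # number of negative examples
    fpr, tpr = [], []
    for t in thresholds:
        y_pred = scores >= t
        tpr.append(np.sum((y_pred == 1) & (y_true == 1)) / P)
        fpr.append(np.sum((y_pred == 1) & (y_true == 0)) / N)
    return np.array(fpr), np.array(tpr)

# tiny usage example with made-up labels and scores
y_true = np.array([0, 0, 1, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8])
fpr, tpr = roc_points(y_true, scores)
```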
ded7d79a8fbcd6ffc6a5f3a503cf0a03d03901df
201,494
ipynb
Jupyter Notebook
In-Class Notes/Logistic Regression/logistic regression_sparse.ipynb
cartemic/CHE-599-intro-to-data-science
a2afe72b51a3b9e844de94d59961bedc3534a405
[ "MIT" ]
null
null
null
In-Class Notes/Logistic Regression/logistic regression_sparse.ipynb
cartemic/CHE-599-intro-to-data-science
a2afe72b51a3b9e844de94d59961bedc3534a405
[ "MIT" ]
null
null
null
In-Class Notes/Logistic Regression/logistic regression_sparse.ipynb
cartemic/CHE-599-intro-to-data-science
a2afe72b51a3b9e844de94d59961bedc3534a405
[ "MIT" ]
2
2019-10-02T16:11:36.000Z
2019-10-15T20:10:40.000Z
330.860427
75,962
0.925745
true
3,207
Qwen/Qwen-72B
1. YES 2. YES
0.908618
0.884039
0.803254
__label__eng_Latn
0.808669
0.704561
# Estimating the mass of exoplanets When a star has planetary companions, it describes an epicyclic motion around the center of mass of the system. When the motion has a component in the direction of the observer (the direction of the line of sight), this allows to detect the planetary companions. Indeed, as the distance between an observer and a star changes, because of the Doppler effect, the light received by the observer shifts in frequency. By measuring spectra at different times and measuring how the spectral absorption lines shift in wavelength, it is possible to obtain the star velocity in the direction of the line of sight. We then have a time series of measurements, with a nominal error on each of them. If one detects a periodic signal in the time series, then this one might be due to a planet. Determining if a periodic signal is present and if it originates from a planet is difficult, and will not be the object of the present exercise. In this spreadsheet, we assume that a planet has been detected, and we want to determine its orbital elements, especially its mass. To simplify the problem, we assume that the planet has a circular orbit (that is a zero eccentricity). The first part of this homework is the forward model: given the physical parameters and the instrument, what is the likelihood function of the data ? We then compute the maximum likelihood estimates of the parameters when the period is known. Finally, we compute the posterior distribution of the parameters, and interval estimates of the mass. This part requires to code a Metropolis-Hastings algorithm. ```python import numpy as np import matplotlib.pyplot as plt import emcee import corner from tqdm import tqdm ``` ## I. The forward model ---- ---- Let us suppose that a planet of mass $m$ orbits a star of mass $M$ at a semi-major axis $a$ and eccentricity 0. We want to express the velocity of the $\mathbf{star}$ in the direction observer - barycenter (the radial velocity). To facilitate the calculation of the radial velocity, we define two inertial frame: the observing frame $(x,y,z)$ and the orbital frame $(x',y',z')$. Both frames have their origin $O$ at the barycenter of the system { star + planet }. The observing frame $z$ axis is oriented in the direction observer - star. The plane perpendicular to $z$ is called the sky plane. The orbital frame $z'$ axis is perpendicular to the plane of the orbit and its direction is such that the $\mathbf{planet}$ orbits is in the trigonometric (or counterclocwise) sense. The $x$ and $x'$ axes are pointing at the ascending node, that is the point where the orbit crosses the sky plane from negative $z$ coordinates to positive $z$ coordinates. The $y$ and $y'$ axes are such that $(x,y,z)$ and $(x',y',z')$ are direct orthonormal bases. The semi-major axis of the planet is denoted by $a$. The angle $(z,z')$ is denoted by $i$. ------------ ----- I.1) Show that the position of the $\mathbf{star}$ in the orbital frame at time $t$ is $$ r(t) = -a \frac{m}{M+m} \left[ \cos(\nu (t - t_0)), \sin(\nu (t - t_0)), 0 \right ] $$ where $t_0$ is the time of passage at the ascending node and $$\nu = \sqrt{\frac{G(m+M)}{a^3}}$$. <font color='blue'> We assume that the planet and the star are point masses. From classical orbital mechanics it is known that two point masses in gravitational interaction have a planar motion. We here assume that their orbit is circular. 
In the orbital frame, the motion of the point mass with mass $M$ (the star) is $$ r_\star(t) = a \frac{m}{M+m} \left[ \cos(\nu t +\phi_s ), \sin(\nu t +\phi_s), 0 \right ] $$ where $\phi$ is the phase of the orbit at $t=0$ and $a$ is the distant between the two point masses. The position of the planet is $$ r_p(t) = a \frac{M}{M+m} \left[ \cos(\nu t +\phi_p ), \sin(\nu t +\phi_p), 0 \right ] $$ where $\phi_p = \phi_s + \pi$. When the planet is at its ascending node, its phase is 0, so $\nu t_0+\phi_p = 0$ so $\phi_p = - \nu t_0$ and $\phi_s = - \nu t_0 + \pi$. Therefore, the position of the $\mathbf{star}$ in the orbital frame at time $t$ is $$ r_\star(t) = -a \frac{m}{M+m} \left[ \cos(\nu (t - t_0)), \sin(\nu (t - t_0)), 0 \right ] $$. By Kepler's second law, $$\nu = \sqrt{\frac{G(m+M)}{a^3}}$$ I.2) Show that the velocity of the $\mathbf{star}$ projected onto the $z$ axis at time $t$ is $$ V(t) = V_0 + \frac{ G^\frac{1}{3}\nu^\frac{1}{3} }{(m+M)^\frac{2}{3}} m \sin i \cos(\nu (t - t_0)) $$ where $V_0$ is the velocity of the barycenter with respect to the observer. <font color='blue'> Let us express the position of the star in the reference frame $(x,y,z)$. Denoting by $M$ the matrix of coordinate change $$ M = \left( \begin{matrix} 1 & 0 & 0 \\ 0 & \cos i & -\sin i \\ 0 & \sin i& \cos i \end{matrix} \right) $$ The position of the star in the reference frame $(x,y,z)$, $q(t)$, is then $q(t) = Mr(t)$. The $z$ component of $q$ is $$ z = \sin i \left(- a \frac{m}{M+m}\right) \sin(\nu (t -t_0) ) $$ Therefore $$ \frac{\mathrm{d} z}{\mathrm{d} t} = - a \sin i \frac{m}{M+m} \nu \cos(\nu (t -t_0) ) $$ by expressing $a$ as a function of $\nu$ with Kepler's second law, we obtain the desired formula. I.3) We assume that we have $N$ radial velocity measurements at time $(t_k)_{k=1..N}$. with an instrument that has nominal error bar $\sigma_k$ for the measurement $k$. In principle, one could assume a the following form for the probability of the data knowing the parameters (the likelihood), $$ p(y|m, M, i, \nu,t_0, V_0) = \frac{1}{ \sqrt{2\pi}^N\prod\limits_{k=1}^N \sigma_k} \mathrm{e}^{-\frac{1}{2} \sum\limits_{k=1}^N \frac{ \left(y(t_k) - V_0 - \frac{ \nu^\frac{1}{3} G^\frac{1}{3} }{(m+M)^\frac{2}{3}} m \sin i \cos(\nu (t - t_0)) \right)^2 }{\sigma_k^2} }.$$ However, in the rest of this spreadsheet we will assume the likelihood is $$p(y|A,B,C, \nu) = \frac{1}{ \sqrt{2\pi}^N\prod\limits_{k=1}^N \sigma_k} \mathrm{e}^{-\frac{1}{2} \sum\limits_{k=1}^N \frac{(y(t_k) - A \cos \nu t_k - B \sin \nu t_k - C)^2}{\sigma_k^2} } \;\;\;\;\;\;\;\;\; (1)$$ Give the advantage of the chosen formulation, with parmeters $A,B,C$ over the formulation with parameters $m, M, i, \nu,v_0$ and express $A,B,C$ as a function $m, M, i, t_0,\nu,V_0$. 
<font color='blue'> Let $\frac{mv^{1/3}G^{1/3}}{(m+M)^{2/3}} = K$, then, \begin{equation} V_0 + K\cdot \sin{i}\cos({\nu(t-t_0)}) = A\cos{\nu t_k} + B\sin{\nu t_k} +C \end{equation} That means $C=V_0$, and \begin{equation} K\sin{i}\cos({\nu t - \nu t_0}) = A\cos{\nu t_k} + B\sin{\nu t_k} \\ \Rightarrow K\sin{i} (\cos{\nu t}\cos{\nu t_0} + \sin{\nu t}\sin{\nu t_0}) = A\cos{\nu t_k} + B\sin{\nu t_k}\\ \Rightarrow K\sin{i}\cos{\nu t_0} \cos{\nu t} + K\sin{i}\sin{\nu t_0}\sin{\nu t} = A\cos{\nu t_k} + B\sin{\nu t_k} \end{equation} Comparing the coefficients of the last equations, \begin{equation} \begin{split} A &= K\sin{i}\cos{\nu t_0} \\ \Rightarrow A &= m\frac{v^{1/3}G^{1/3}}{(m+M)^{2/3}}\sin{i}\cos{\nu t_0} \end{split} \end{equation} (Note: Thus, we can find $m$ using $A$, $$m = \frac{AM^{2/3}}{\nu^{1/3} G^{1/3} \sin{i} \cos{\nu t_0}}$$ which can be useful in later derivation.) and, \begin{equation} \begin{split} B &= K\sin{i}\sin{\nu t_0} \\ \Rightarrow B &= m\frac{v^{1/3}G^{1/3}}{(m+M)^{2/3}}\sin{i}\sin{\nu t_0} \end{split} \end{equation} ## II. Point estimates --- --- In this part of the homework, we assume that we have radial velocity data $y = (y(t_i))_{i=1..N}$. We model $y$ as a sum of a sinusoidal model due to the planet and a Gaussian noise of mean zero and variance $\sigma_i^2$ for the measurement at time $t_i$. $$y(t_i) = A\cos \nu t_i + B\sin \nu t_i + C + \epsilon_i$$ where $$\epsilon_i \sim G(0, \sigma_i^2)$$ We assume that the uncertainties $(\sigma_i)_{i=1..N}$ are known. ----- ----- II.1) Justify that the likelihood of the model is given by equation (1). <font color='blue'> The $\epsilon_i$ are Gaussian and independent, therefore their covariance matrix is diagonal with $i$-th element $\sigma_i^2$. Knowing the parameters $\theta$, when the noise is white (i.e. uncorrelated), for a model $f(\theta)$, a Gaussian likelihood has the form $$p(y | \theta ) = \frac{1}{ \sqrt{2\pi}^N\prod\limits_{k=1}^N \sigma_k} \mathrm{e}^{-\frac{1}{2} \sum\limits_{k=1}^N \frac{(y(t_k) - f(\theta))^2}{\sigma_k^2} } \;\;\;\;\;\;\;\;\; (1)$$ Here, $f(\theta) =A\cos \nu t_i + B\sin \nu t_i + C $. By replacing in the equation above we get the expression of equation 1. II.2) We first assume that the frequency $\nu$ is known. Show that the maximum likelihood estimate of $\theta = (A,B,C)$ is of the form $$\hat{\theta} = (M^T V^{-1} M)^{-1} M^T V^{-1} y$$ where the suffix $T$ denotes the matrix transposition, $V$ is a diagonal matrix whose elements are $V_{ii} = \sigma_i^2$ and $M$ is a $N \times 3$ matrix. Write the explicit expression $M$ as a function of $\nu$. <font color='blue'> Since the model is assumed to have the Gaussian noise and the model is linear model, one can write the estomator as $\hat{\theta} = (M^T V^{-1} M)^{-1} M^T V^{-1} y$ as derived in the class. The expression for $M$ can be written as, \begin{pmatrix} \cos{\nu t_1} & \sin{\nu t_1} & 1\\ \cos{\nu t_2} & \sin{\nu t_2} & 1\\ . & . & . \\ \cos{\nu t_n} & \sin{\nu t_n} & 1\\ \end{pmatrix} II.3) Bonus question: show that if $\phi$ is a function and $\hat(\theta)$ is the maximum likelihood estimate of $\theta$, then the maximum likelihood estimate of $\phi(\theta)$ is $\phi(\hat{\theta})$. This result can be used in the following question even if it is not proved. <font color='blue'> We first define the maximum likelihood estimate of $\phi(\theta)$. 
Let us first suppose that $\phi$ is a bijection, in that case we define $p'(y|\phi(\theta)) = p(y|\theta)$, then, if the inferior bound of $\{ p(y|\theta), \theta \in \Theta \}$ is attained, $arg \max_{\theta } p(y|\theta)$ exists. Since $p'(y|\phi(\theta)) = p(y|\theta)$, denoting by $$ \theta_{ML} = arg \max_{\phi(\theta) } p(y|\theta) $$, $$ arg \max_{\phi(\theta) }p'(y|\phi(\theta)) = p(y|\phi(\theta_{ML})) $$ If $\phi$ is not a bijection, then we can comsider $\tilde{\phi}: \Theta \rightarrow \phi(\Theta) \times \Theta$ where $\tilde{\phi}(\theta) = (\phi(\theta), \theta)$, which is a bijection. Then we define $$ p''(y|\tilde{\phi}(\theta) )= p(y|\theta) $$ and the reasoning above applies to $p''$ instead of $p'$. II.4) For $m = 4$ $M_\oplus$, $M = 0.65$ $M_\odot$, $i = 90°$, $\nu = \frac{2*\pi}{6.5}$ radian/day, $t_0 = 0$, $V_0 = 0$, generate the expected radial velocity signal $y_0(t)$ at the times of the column 'time' in the file homework_rv.txt. Note: $M_\oplus$ and $M_\odot$ designate the Earth mass and the Solar mass ```python mp = 4*5.972e+24 Ms = 0.65*1.989e+30 incli = np.pi/2 nu = (2*np.pi)/(6.5*24*60*60) t0 = 0 V0 = 0 G = 6.6743e-11 K = (mp* nu**(1/3) * G**(1/3))/((mp+Ms)**(2/3)) A = K*np.sin(incli)*np.cos(nu*t0) B = K*np.sin(incli)*np.sin(nu*t0) C = V0 time1, rv, rve = np.loadtxt('homework_rv.txt', usecols=(0,1,2), unpack=True) time = time1*24*60*60 y0 = A*np.cos(nu*time) + B*np.sin(nu*time) + C plt.plot(time, y0) plt.xlabel('Time (in sec)') plt.ylabel('Radial Velocity signal (in m/s)') ``` II. 5) Using the column 'errors' of the file homework_rv.txt., generate 100,000 realizations of a Gaussian noise plus $y_0$, compute the empirical variance and bias of $\hat{m}$. Plot the empirical distribution of $\hat{m}$ and compute its empirical bias and variance. ```python def mass(Aa, Bb): global incli, Ms, nu, G KK = np.sqrt(Aa**2 + Bb**2) KK1 = KK/np.sin(incli) mm = KK1*Ms**(2/3)/(nu**(1/3) * G**(1/3)) return mm ``` ```python nsim = 100000 V = np.diag(rve) V1 = np.linalg.inv(V) M11 = np.vstack((np.cos(nu*time), np.sin(nu*time))) M = np.vstack((M11, np.ones(len(time)))).T MT = M.T F1 = MT.dot(V1).dot(M) F2 = np.linalg.inv(F1) AA = np.array([]) BB = np.array([]) CC = np.array([]) theta = np.array([]) for i in tqdm(range(nsim)): y2 = y0 + np.random.random(len(y0)) F3 = MT.dot(V1).dot(y2) theta1 = F2.dot(F3) theta = np.hstack((theta, theta1)) AA = np.hstack((AA, theta1[0])) BB = np.hstack((BB, theta1[1])) CC = np.hstack((CC, theta1[2])) m_estimate = mass(AA, BB) m_pesti = m_estimate/5.972e+24 bias1 = np.mean(m_pesti)-4 vari = np.var(m_pesti) stdd = np.std(m_pesti) print('Empirical Measurements:') print('-----------------------') print('Estimate: ' + str(np.mean(m_pesti)) + ' M_earth') print('Bias: ' + str(bias1) + ' M_earth') print('Variance: ' + str(vari) + ' M_earth^2') print('Standard Deviation: ' + str(stdd) + ' M_earth') #print(np.mean(m_pesti), np.std(m_pesti)) plt.hist(m_pesti, bins=50) plt.xlabel('Mass of the planet (in Earth-Mass)') plt.ylabel('Counts') ``` ## The Metropolis-Hastings algorithm --- --- A common way to derive uncertainties on the parameter is to compute their posterior distributions $p(\theta|y)$. In the following, we assume that $\theta$ belongs to an open subset of $\mathbb{R}^p$ and $\theta$ has a prior density ditribution $p(\theta)$. 
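As a quick sanity check of the cell above (an illustrative addition, not part of the original notebook), the semi-amplitude of the injected sinusoid is $\sqrt{A^2+B^2}$, which sets the vertical scale of the plot; it reuses the `A` and `B` computed in the previous cell.

```python
# Illustrative check: semi-amplitude of the generated radial velocity signal.
K_check = np.sqrt(A**2 + B**2)
print(f"semi-amplitude of the injected signal: {K_check:.3f} m/s")
```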
The density of the posterior distribution of the parameters is then
$$ p(\theta|y) = \frac{p(y|\theta) p(\theta)}{p(y)} $$
where, according to the total probability formula,
$$p(y) = \int\limits_{\theta \in \Theta} p(y|\theta) p(\theta) \mathrm{d}\theta $$
In general, $p(\theta|y)$ does not have an analytical expression. To approximate it, paradoxically enough, it might be simpler to generate samples $(\theta_i)_{i=1}^N$ from $p(\theta|y)$ and approximate the true distribution $p(\theta|y)$ from the empirical distribution of the $(\theta_i)_{i=1}^N$.

The most common way to generate a sequence $(\theta_i)_{i=1}^N$ following the distribution $p(\theta|y)$ is to use a Metropolis-Hastings algorithm, whose principle is as follows. We define a transition probability, called the proposal distribution, that is a function $q: \theta \in \Theta \times \theta' \in \Theta \rightarrow q_\theta(\theta')$ such that for each $\theta \in \Theta$, $\theta' \in \Theta \rightarrow q_\theta(\theta')$ is a probability density. Suppose you want to generate $(\theta_i)_{i=1}^N$ following a distribution $f(\theta)$, and assume the state at iteration $i$ is $\theta_i$.

1. Generate $\theta' \sim q_{\theta_i}(\theta')$
2. Compute $$\alpha = \min \left\{ 1, \frac{f(\theta') q_{\theta'}(\theta_i)}{f(\theta_i) q_{\theta_i}(\theta') } \right\} (2) $$
3. Generate $u$ with a uniform distribution between 0 and 1. If $u \leqslant \alpha$, assign $\theta_{i+1} \leftarrow \theta'$; the proposal is said to be accepted. Else $\theta_{i+1} \leftarrow \theta_{i}$; the proposal is said to be rejected.
4. Repeat for $i \leftarrow i+1$

The fraction of iterations where the proposal is accepted is called the acceptance rate. The series of $\theta_i$ obtained is called the "chain". This algorithm is part of a larger class of algorithms called Markov chain Monte Carlo (MCMC). Note that when the proposal distribution is symmetric, that is $q_{\theta}(\theta') = q_{\theta'}(\theta)$ for all $\theta, \theta' \in \Theta$, expression (2) simplifies.

The key to sampling $f(\theta)$ correctly is to choose an efficient proposal distribution and a good starting point $\theta_0$. If the proposal distribution is too wide, then the proposed values are almost always rejected. If it is too narrow, then the states of the chain $\theta_i$ are too correlated with each other, and the exploration of the parameter space is too slow. A good rule of thumb is to aim at an acceptance rate of 23%.

Our aim here is to generate samples from $f(\theta) = p(\theta|y)$, where $\theta = (A,B,C,\nu)$ and $y$ is the column 'radial velocity' in the file homework_rv.txt.
---
---
III.1) Express the criterion (2) for $f(\theta) = p(\theta|y)$ as a function of the prior probabilities $p(\theta)$, $p(\theta')$ and the likelihoods $p(y|\theta)$, $p(y|\theta')$. What is the crucial advantage of the Metropolis-Hastings algorithm for the generation of samples from a posterior distribution, over a generation of samples with the inverse CDF?

<font color='blue'> From Bayes' theorem,
$$p(\theta|y) = \frac{p(y|\theta)p(\theta)}{p(y)}$$
which is exactly $f(\theta)$. Similarly, $f(\theta') = \frac{p(y|\theta')p(\theta')}{p(y)}$. Their ratio (which is all the algorithm needs) is therefore
$$ \frac{f(\theta')}{f(\theta)} = \frac{p(y|\theta')p(\theta')}{p(y|\theta)p(\theta)}$$
Here $p(\theta')$ and $p(\theta)$ are the prior probabilities; $p(y|\theta')$ and $p(y|\theta)$ are the likelihood functions. The crucial advantage is that only this ratio is required, so the evidence $p(y)$ — which is generally an intractable integral and would be needed to normalise the posterior and build its CDF for inverse-CDF sampling — cancels and never has to be computed.
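As a side illustration of the algorithm described above (not part of the homework solution), here is a minimal random-walk Metropolis sketch for a one-dimensional toy target. The function names and the toy Gaussian target are chosen only for this sketch; the point is that the sampler only ever uses ratios of $f$, never its normalising constant.

```python
# Minimal random-walk Metropolis on an unnormalised 1-D target (toy example only)
import numpy as np

def log_f(x):
    # unnormalised log-density of a standard Gaussian (the 1/sqrt(2*pi) factor is deliberately omitted)
    return -0.5 * x**2

def metropolis(log_f, x0, n_steps, step_size, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    chain = np.empty(n_steps)
    x, accepted = x0, 0
    for i in range(n_steps):
        x_prop = x + step_size * rng.standard_normal()   # symmetric proposal -> the q terms cancel
        log_alpha = min(0.0, log_f(x_prop) - log_f(x))
        if np.log(rng.random()) <= log_alpha:            # accept with probability alpha
            x, accepted = x_prop, accepted + 1
        chain[i] = x                                     # on rejection the chain repeats the current state
    return chain, accepted / n_steps

chain, rate = metropolis(log_f, x0=0.0, n_steps=50_000, step_size=2.4)
print(rate, chain.mean(), chain.std())   # acceptance rate a few tens of percent, mean ~ 0, std ~ 1
```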
III.2) For numerical stability, the fraction $\frac{f(\theta') q_{\theta'}(\theta_i)}{f(\theta_i) q_{\theta_i}(\theta') }$ is not computed directly; we compute its logarithm instead. Express the criterion (2) as a function of the logarithms of the prior probabilities $p(\theta)$, $p(\theta')$, the likelihoods $p(y|\theta)$, $p(y|\theta')$ and the logarithms of $q_{\theta'}(\theta_i)$, $q_{\theta_i}(\theta')$.

<font color='blue'> First of all we want to write the fraction $\frac{f(\theta') q_{\theta'}(\theta_i)}{f(\theta_i) q_{\theta_i}(\theta') }$ in terms of the prior probabilities and the likelihood functions:
$$\frac{f(\theta') q_{\theta'}(\theta_i)}{f(\theta_i) q_{\theta_i}(\theta') } = \frac{p(y|\theta')p(\theta') q_{\theta'}(\theta_i)}{p(y|\theta)p(\theta)q_{\theta_i}(\theta')}$$
Taking the logarithm of both sides of the above equation,
\begin{equation} \begin{split} \log \left[\frac{f(\theta') q_{\theta'}(\theta_i)}{f(\theta_i) q_{\theta_i}(\theta') }\right] &= \log{p(y|\theta')} + \log{p(\theta')} + \log{q_{\theta'}(\theta_i)} - \log{p(y|\theta)} - \log{p(\theta)} - \log{q_{\theta_i}(\theta')} \end{split} \end{equation}
Now, criterion (2) states
$$\alpha = \min \left\{ 1, \frac{f(\theta') q_{\theta'}(\theta_i)}{f(\theta_i) q_{\theta_i}(\theta') } \right\} $$
Taking the logarithm of both sides (which preserves the minimum since $\log$ is increasing),
\begin{equation} \begin{split} \log{\alpha} &= \min \left\{ \log{1}, \log \left[\frac{f(\theta') q_{\theta'}(\theta_i)}{f(\theta_i) q_{\theta_i}(\theta') }\right] \right\} \\ \Rightarrow \log{\alpha} &= \min \left\{ 0, \log{ \frac{p(y|\theta')p(\theta') q_{\theta'}(\theta_i)}{p(y|\theta)p(\theta)q_{\theta_i}(\theta')}} \right\}\\ \Rightarrow \log{\alpha} &= \min \left\{ 0, \log{p(y|\theta')} + \log{p(\theta')} + \log{q_{\theta'}(\theta_i)} - \log{p(y|\theta)} - \log{p(\theta)} - \log{q_{\theta_i}(\theta')} \right\} \end{split} \end{equation}

III.3) We assume that the prior distribution on $A,B,C,\nu$ is
$$p(A,B,C,\nu) = p(A)p(B)p(C)p(\nu) $$
where $A, B$ and $C$ $\sim G(0,\sigma^2)$ with $\sigma$ = 100 m/s and $\nu$ has a uniform distribution. We use a proposal distribution of the form $q(A',B',C',\nu')_{(A,B,C,\nu)} = g(A-A', \sigma_A)\, g(B-B', \sigma_B)\, g(C-C', \sigma_C)\, g(\nu-\nu', \sigma_\nu)$, where $g(x, \sigma) = \frac{1}{\sqrt{2\pi} \sigma} \mathrm{e}^{-\frac{x^2}{2\sigma^2}}$. In this expression, $\sigma_A$, $\sigma_B$, $\sigma_C$ and $\sigma_\nu$ are to be tuned to find an efficient proposal distribution.

Using question III.2), write a Metropolis-Hastings code starting at $\nu_0 = 2\pi/6.5$ and $A_0, B_0, C_0$ given by the least-squares solution (see question II.2) where the acceptance rate is between 5 and 35 % (that is, find suitable $\sigma_A$, $\sigma_B$, $\sigma_C$ and $\sigma_\nu$ to obtain this property) and perform 2,000,000 iterations. Indication: for $\sigma_\nu$ take a fraction of $2\pi/T_{\mathrm{obs}}$ where $T_{\mathrm{obs}}$ is the total timespan of observations (the difference between the last and first observation dates).

<font color='blue'> First we simplify the equation for $\log \alpha$ by noting that the given proposal distribution is symmetric (a Gaussian centred on the current state).
Thus $q_{\theta}(\theta') = q_{\theta'}(\theta)$, and the equation for $\log \alpha$ becomes
$$\log{\alpha} = \min \left\{ 0, \log{p(y|\theta')} + \log{p(\theta')} - \log{p(y|\theta)} - \log{p(\theta)} \right \}$$
Now, we know that
$$p(y|\theta) = \frac{1}{ \sqrt{2\pi}^N\prod\limits_{k=1}^N \sigma_k} \mathrm{e}^{-\frac{1}{2} \sum\limits_{k=1}^N \frac{(y(t_k) - f(\theta))^2}{\sigma_k^2} }$$
where $f(\theta) = A \cos \nu t_k + B \sin \nu t_k + C$. Using this equation, the log-likelihood follows as
\begin{equation} \begin{split} \log p(y|\theta) &= -\frac{N}{2} \log 2\pi - \sum\limits_{k=1}^{N} \log \sigma_k - \frac{1}{2} \sum\limits_{k=1}^{N} \frac{(y_k - f(\theta))^2}{\sigma_k ^2} \\ \end{split} \end{equation}
Here, again, $f(\theta) = A \cos \nu t_k + B \sin \nu t_k + C$ and $f(\theta') = A' \cos \nu t_k + B' \sin \nu t_k + C'$.

Similarly, we can do the same for the prior probabilities, which have the following (Gaussian) form,
$$p(\theta) = \frac{1}{\sqrt{2\pi}\sigma} \exp{\left(-\frac{\theta^2}{2\sigma^2}\right)}$$
We also know that
$$p(A,B,C,\nu) = p(A)p(B)p(C)p(\nu)$$
with $p(A)$, $p(B)$ and $p(C)$ Gaussian and $p(\nu)$ uniform. Using this information (and writing $\sigma_A=\sigma_B=\sigma_C=\sigma=100$ m/s for the prior widths of question III.3, not to be confused with the proposal widths), we can write the above equation as
$$p(\theta) = p(A,B,C,\nu) = \frac{1}{\sqrt{2\pi}\sigma_A} e^{-\frac{A^2}{2\sigma_A^2}} \frac{1}{\sqrt{2\pi}\sigma_B} e^{-\frac{B^2}{2\sigma_B^2}} \cdot \frac{1}{\sqrt{2\pi}\sigma_C} e^{-\frac{C^2}{2\sigma_C^2}} \cdot p(\nu)$$
$$\Rightarrow p(\theta) = \frac{1}{\sqrt{2\pi}^3\sigma_A \sigma_B \sigma_C} e^{-\frac{A^2}{2\sigma_A^2}-\frac{B^2}{2\sigma_B^2}-\frac{C^2}{2\sigma_C^2}} \cdot p(\nu) $$
$$\Rightarrow \log{p(\theta)} = -\frac{3}{2}\log{2\pi} -\log{\sigma_A} - \log{\sigma_B} - \log{\sigma_C} - \frac{A^2}{2\sigma_A^2} - \frac{B^2}{2\sigma_B^2} - \frac{C^2}{2\sigma_C^2} + \log{p(\nu)}$$
Since $p(\nu)$ is uniform, $\log p(\nu)$ is a constant and cancels between $\log p(\theta')$ and $\log p(\theta)$ in the acceptance ratio; in this notebook $\nu$ is in any case held fixed at its known value.

For reference, the criterion is
$$\log{\alpha} = \min \left\{ 0, \log{p(y|\theta')} + \log{p(\theta')} - \log{p(y|\theta)} - \log{p(\theta)} \right \}$$
Again, $f(\theta) = A \cos \nu t_k + B \sin \nu t_k + C$ and $f(\theta') = A' \cos \nu t_k + B' \sin \nu t_k + C'$. We want to use this formula in our code.
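A quick numerical aside on why the ratio is evaluated in log space rather than directly: with of order a hundred data points, the exponent of the likelihood is easily of order $-10^2$ to $-10^3$, and exponentiating it underflows to zero in double precision. The numbers below are purely illustrative.

```python
# Illustration of the underflow that the log-space formulation avoids (made-up values)
import numpy as np

log_like_prop, log_like_curr = -1000.0, -1001.0   # hypothetical log-likelihoods

naive_ratio = np.exp(log_like_prop) / np.exp(log_like_curr)   # 0.0 / 0.0 -> nan
log_ratio = log_like_prop - log_like_curr                     # perfectly well behaved: 1.0

print(naive_ratio)         # nan: both exponentials underflow to zero
print(np.exp(log_ratio))   # e^1 ~ 2.718, the correct acceptance ratio
```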
```python
# Retrieving data
time1, rv, rve = np.loadtxt('homework_rv.txt', usecols=(0,1,2), unpack=True)
time = time1*24*60*60
```

```python
# Visualizing the data
plt.errorbar(time1, rv, yerr=rve, fmt='k.-')
plt.xlabel('Time (in days)')
plt.ylabel('Radial Velocity')
```

```python
# Least-squares estimate of mass from the data
# Borrowing M, MT and V1 from the Point estimate section
FF = MT.dot(V1).dot(M)
cov = np.linalg.inv(FF)
theta_theoretical = cov.dot(MT).dot(V1).dot(rv)

# draw parameter samples around the least-squares solution;
# the standard deviation is the square root of the diagonal covariance element
AA_the = np.random.normal(theta_theoretical[0], np.sqrt(cov[0][0]), 10000)
BB_the = np.random.normal(theta_theoretical[1], np.sqrt(cov[1][1]), 10000)
CC_the = np.random.normal(theta_theoretical[2], np.sqrt(cov[2][2]), 10000)

mass_the1 = mass(AA_the, BB_the)
mass_the = mass_the1/5.972e+24

print('Theoretical Analysis:')
print('---------------------')
print('Estimate of mass: ' + str(np.mean(mass_the)) + ' M_earth')
print('Standard deviation in estimate: ' + str(np.std(mass_the)) + ' M_earth')
```

Theoretical Analysis:
---------------------
Estimate of mass: 4.712992076056859 M_earth
Standard deviation in estimate: 0.07922239234066192 M_earth

```python
# Defining model
def ftheta(theta):
    global nu, time
    t1 = time
    f1 = theta[0]*np.cos(nu*t1) + theta[1]*np.sin(nu*t1) + theta[2]
    return f1

# Defining log(p(theta))
def logpt(theta, sig_the):
    aa = -(3/2)*np.log(2*np.pi)
    bb = -np.log(sig_the[0])-np.log(sig_the[1])-np.log(sig_the[2])
    cc = (-0.5*(theta[0]**2/sig_the[0]**2)) - (0.5*(theta[1]**2/sig_the[1]**2)) - (0.5*(theta[2]**2/sig_the[2]**2))
    lgthe = aa + bb + cc
    return lgthe

# Defining log(likelihood)
def loglikelihood(theta):
    global time, rv, rve
    ff = ftheta(theta)
    nn = ((rv-ff)**2)/rve**2
    expp = np.sum(nn)
    errr = np.sum(np.log(rve))
    ll = -0.5*len(time)*np.log(2*np.pi) - errr - 0.5*expp
    return ll

# Defining proposal distribution
def theta_proposal(theta, sigma_prop):
    the_prop = theta + np.random.randn(3)*sigma_prop
    return the_prop

# Defining log(alpha)
def log_alpha(theta, sig_theta, theta_prop):
    alpha = loglikelihood(theta_prop) + logpt(theta_prop, sig_theta) \
            - loglikelihood(theta) - logpt(theta, sig_theta)
    return alpha
```

```python
def monte_carlo(sA, sB, sC):
    N = 2000000
    # prior widths for A, B, C (100 m/s, as specified in III.3)
    sigA = 100
    sigB = 100
    sigC = 100
    # starting point: the least-squares solution
    A0 = np.mean(AA_the)
    B0 = np.mean(BB_the)
    C0 = np.mean(CC_the)
    nu0 = (2*np.pi)/(6.5*24*60*60)
    acc = 0
    Tobs = time[-1]
    sN = 2*np.pi/Tobs/10   # proposal width for nu (unused here: nu is held fixed in this notebook)
    sigma2 = np.array([sigA, sigB, sigC])
    sigma1 = np.array([sA, sB, sC])
    sig_prop = sigma1
    the_i = np.array([A0, B0, C0])
    An = np.array([])
    Bn = np.array([])
    Cn = np.array([])
    for _ in tqdm(range(N)):
        theta_prop = theta_proposal(the_i, sig_prop)
        fff = log_alpha(the_i, sigma2, theta_prop)
        logalpha = np.minimum(0, fff)
        u1 = np.log(np.random.random())
        if u1 <= logalpha:
            the_i = theta_prop
            acc = acc + 1
        # record the state at every iteration: on rejection the chain repeats the
        # current state (step 3 of the algorithm described above)
        An = np.hstack((An, the_i[0]))
        Bn = np.hstack((Bn, the_i[1]))
        Cn = np.hstack((Cn, the_i[2]))
    rate = acc/N
    print('Acceptance rate is ', rate)
    return An, Bn, Cn
```

```python
A12, B12, C12 = monte_carlo(0.29,0.29,0.29)
```

100%|██████████| 2000000/2000000 [12:24<00:00, 2685.92it/s]
Acceptance rate is 0.1866755

```python
mass_mcmc = mass(A12,B12)
mass_mcmc1 = mass_mcmc/5.972e+24

vari2 = np.var(mass_mcmc1)
stdd2 = np.std(mass_mcmc1)

print('Empirical Measurements:')
print('-----------------------')
print('Estimate: ' + str(np.mean(mass_mcmc1)) + ' M_earth')
print('Variance: ' + str(vari2) + ' M_earth^2')
print('Standard Deviation: ' + str(stdd2) + ' M_earth')
#print(np.mean(m_pesti), np.std(m_pesti))

plt.hist(mass_mcmc1, bins=50)
plt.xlabel('Mass of the planet (in Earth-Mass)')
plt.ylabel('Counts')
```
III.4) Plot the posterior distributions of $A,B,C$ and $\nu$. Compute the posterior mean and posterior median of $A,B,C$ and $\nu$.

```python
print('Empirical Analysis of A:')
print('------------------------')
print('Posterior mean: ', str(np.mean(A12)))
print('Posterior median: ', str(np.median(A12)))

plt.hist(A12, bins=50, density=True)
plt.xlabel('Value of A')
plt.ylabel('Count')
plt.title('Posterior distribution of A')
```

```python
print('Empirical Analysis of B:')
print('------------------------')
print('Posterior mean: ', str(np.mean(B12)))
print('Posterior median: ', str(np.median(B12)))

plt.hist(B12, bins=50, density=True)
plt.xlabel('Value of B')
plt.ylabel('Count')
plt.title('Posterior distribution of B')
```

```python
print('Empirical Analysis of C:')
print('------------------------')
print('Posterior mean: ', str(np.mean(C12)))
print('Posterior median: ', str(np.median(C12)))

plt.hist(C12, bins=50, density=True)
plt.xlabel('Value of C')
plt.ylabel('Count')
plt.title('Posterior distribution of C')
```

III.5) A common way to derive error bars on physical parameters is to use credible intervals. A credible interval for a parameter $\theta_k$ is an interval $C$ such that
$$\mathrm{Pr}\{\theta_k \in C \,|\, y \} = \alpha$$
Find $C_{95}$, defined as the smallest interval such that $\mathrm{Pr}\{m \in C_{95} \,|\, y \} = 0.95$.

```python
# Defining a function that returns the shortest interval containing alpha per cent of the samples
# (a highest-density credible interval)
def confidence_interval(array, alpha):
    aa1 = np.sort(array)
    len_chain = len(aa1)
    len_al = int(len_chain*alpha/100)
    nn = len_chain - len_al
    interval_length = np.zeros(nn)
    for i in range(nn):
        interval_length[i] = aa1[len_al + i] - aa1[i]
    arg_shortest_interval = np.argmin(interval_length)
    ll = aa1[arg_shortest_interval]
    ul = aa1[arg_shortest_interval + len_al]
    return ll, ul
```

```python
ll1, uu1 = confidence_interval(mass_mcmc1, 95)
print('The 95% credible interval for mass is [' + str(ll1) + ', ' + str(uu1) + '] M_earth.')
```

The 95% credible interval for mass is [3.9854221963395506, 5.369990737388258] M_earth.

III.6) Bonus question: it is not obvious whether an MCMC chain has converged. Search for MCMC convergence tests in the literature and perform one on your chain.

```python
# The general philosophy of convergence tests is to check that
# there are no traces of excessive correlation in the chain

# First check: trace plots. Simply plot the values taken by the chain.
xs = np.arange(1, len(A12)+1, 1)/len(A12)

plt.plot(xs, A12)
plt.xlabel('Normalized number of iterations')
plt.ylabel('Value of A')
plt.title('Trace plot of A')
plt.show()

plt.plot(xs, B12)
plt.xlabel('Normalized number of iterations')
plt.ylabel('Value of B')
plt.title('Trace plot of B')
plt.show()

plt.plot(xs, C12)
plt.xlabel('Normalized number of iterations')
plt.ylabel('Value of C')
plt.title('Trace plot of C')
plt.show()
```

### Gelman-Rubin Test

We try the Gelman-Rubin test to check whether the MCMC has converged. (Strictly speaking, the diagnostic is designed for several independent chains of the same parameter started from dispersed points; here it is applied across the A, B and C chains only as a rough check.) Below is a basic formulation (an algorithm) of the Gelman-Rubin test.

1. Consider the $m$-th chain: $\theta_1^m$, $\theta_2^m$, ..., $\theta_{N_m}^m$.
2. For each parameter $\theta$, compute the posterior mean $\hat{\theta}_m = \frac{1}{N_m} \sum_i^{N_m} \theta_i^m$.
3. For each parameter, compute the intra-chain variance, $\sigma_m^2 = \frac{1}{N_m-1}\sum_i^{N_m} \left( \theta_i^m - \hat{\theta}_m \right)^2$.
4. Compute $\hat{\theta}$, the mean of all chains, $\hat{\theta} = \frac{1}{M} \sum_m^M \hat{\theta}_m$.
5. Compute how the individual chain means scatter around the joint mean, $$B = \frac{N}{M-1} \sum_{m=1}^{M} \left( \hat{\theta}_m - \hat{\theta} \right)^2$$ Here, $N$ is the length of each chain.
6. Compute the averaged variance of the chains, $$W = \frac{1}{M} \sum_{m=1}^{M} \sigma_m^2$$
7. Compute $$\hat{V} = \frac{N-1}{N} W + \frac{M+1}{MN} B$$
8. Test whether $$R=\sqrt{\frac{\hat{V}}{W}} \sim 1$$ If it is not close to 1, then convergence has not been reached.

Below, we apply this test to our chains of A, B and C.

```python
def gelman_rubin(ch_a, ch_b, ch_c):
    NN = len(ch_a)
    MM = 3
    # For a
    mean_ch_a = np.sum(ch_a)/NN
    var_ch_a = np.sum((ch_a - mean_ch_a)**2)/(NN-1)
    # For b
    mean_ch_b = np.sum(ch_b)/NN
    var_ch_b = np.sum((ch_b - mean_ch_b)**2)/(NN-1)
    # For c
    mean_ch_c = np.sum(ch_c)/NN
    var_ch_c = np.sum((ch_c - mean_ch_c)**2)/(NN-1)   # squared deviations (intra-chain variance)
    # Mean of means
    means = np.array([mean_ch_a, mean_ch_b, mean_ch_c])
    mean_fin = (mean_ch_a + mean_ch_b + mean_ch_c)/MM
    # B: squared scatter of the chain means around the joint mean
    BB = NN*np.sum((means - mean_fin)**2)/(MM-1)
    # W
    WW = (1/MM)*(var_ch_a + var_ch_b + var_ch_c)
    # V
    VV = ((NN-1)/NN)*WW + ((MM+1)/(MM*NN))*BB
    RR = np.sqrt(VV/WW)
    return RR
```

```python
R1 = gelman_rubin(A12, B12, C12)
tol = 0.01
print(R1)
if np.abs(R1-1)<tol:
    print('The Chains are converged')
else:
    print('The chains are not converged')
```

0.9999986607767574
The Chains are converged

### Autocorrelation

<font color='blue'> Second check: the autocorrelation function. When computing the uncertainty on a value estimated by an empirical mean,
$$ \mu = \frac{1}{N}\sum\limits_{k=1}^N X_k $$
where all the $X_k$ have a standard deviation $\sigma$, we have a standard deviation of the mean equal to $\sigma/\sqrt{N}$ if the samples are independent. If they are not, then the uncertainty will be greater. To evaluate the uncertainties on the statistics that we derive from the MCMC samples, it is convenient to think in terms of an effective number of samples: how many equivalent independent samples of each parameter are in the chain?

```python
def autocorrelation_function(chain, lags):
    mu = np.mean(chain)
    llags = len(lags)
    rhos = np.zeros(llags)
    variance = np.var(chain)
    N = len(chain)
    for i in range(llags):
        k = lags[i]
        X0 = chain[:N-k] - mu
        Xk = chain[k:] - mu
        rhoi = np.sum(X0*Xk)/variance
        rhos[i] = rhoi/(N-k)
    return(rhos)

thetas = np.vstack((A12, B12))
thetas = np.vstack((thetas, C12))
acfs = []
for i in range(3):
    acf = autocorrelation_function(thetas[i,:], np.arange(2000))
    acfs.append(acf)
```

```python
labels = ['A', 'B','C']
plt.figure()
for i in range(3):
    plt.plot(acfs[i],label = labels[i])
plt.xlabel('lag', fontsize = 16)
plt.ylabel('sample correlation', fontsize = 16)
plt.suptitle('Autocorrelation of the samples', fontsize = 18)
plt.legend(fontsize = 16)
```

```python
# Effective sample size (ESS)
# defined as in Markov Chain Monte Carlo in Practice: A Roundtable Discussion.
# Robert E. Kass, Bradley P. Carlin, Andrew Gelman and Radford M. Neal.
# The American Statistician. Vol. 52, No. 2 (May, 1998), pp. 93-100
N = thetas.shape[1]
for i in range(3):
    ESS = N / (1 + 2*np.sum(acfs[i][:750]))
    print('Number of effective samples of theta_{} = {}'.format(i, ESS))

# There are many other definitions of the effective sample size
```

Number of effective samples of theta_0 = 85157.47198701472
Number of effective samples of theta_1 = 69355.22412546021
Number of effective samples of theta_2 = 91609.36876388262

## Using emcee to implement MCMC

```python
def log_probability(th1, tt1, rr1, re1):
    # tt1, rr1 and re1 are passed in by emcee, but the helper functions above
    # read the global time, rv and rve arrays directly
    # Sigmas for prior distributions
    sig_the = np.array([100, 100, 100])
    # Prior and Likelihood
    lpri = logpt(th1, sig_the)
    likl = loglikelihood(th1)
    return lpri + likl

A0 = np.mean(AA_the)
B0 = np.mean(BB_the)
C0 = np.mean(CC_the)

nwalkers, ndim = 32, 3
# walkers initialised by scaling the least-squares solution with random factors
posInit = np.array([A0, B0, C0]) * np.random.randn(nwalkers,ndim)

sampler = emcee.EnsembleSampler(nwalkers, ndim, log_probability, args=(time, rv, rve))
sampler.run_mcmc(posInit, 10000, progress=True)
```

100%|██████████| 10000/10000 [00:19<00:00, 504.40it/s]

State([final positions of the 32 walkers, with A ≈ 1.85–2.33, B ≈ −0.22–0.70, C ≈ −0.35–0.24], log_prob=[32 values between ≈ −75.9 and −70.1], blobs=None, random_state=('MT19937', array([... generator state ...], dtype=uint32), 460, 0, 0.0))
```python
fig, axes = plt.subplots(3, figsize=(10, 7), sharex=True)
samples = sampler.get_chain()
labels = ['A', 'B', 'C']
for i in range(ndim):
    ax = axes[i]
    ax.plot(samples[:, :, i], "k", alpha=0.3)
    ax.set_xlim(0, len(samples))
    ax.set_ylabel(labels[i])
    ax.yaxis.set_label_coords(-0.1, 0.5)

axes[-1].set_xlabel("step number");
```

```python
flat_samples = sampler.get_chain(discard=100, thin=15, flat=True)
```

```python
fig = corner.corner(
    flat_samples, labels=labels, truths=[np.mean(A12), np.mean(B12), np.mean(C12)]
);
```

```python
A_emcee = flat_samples[:,0]
B_emcee = flat_samples[:,1]

mass_emcee = mass(A_emcee, B_emcee)
mass_emcee1 = mass_emcee/5.972e+24

vari2_e = np.var(mass_emcee1)
stdd2_e = np.std(mass_emcee1)

print('Empirical Measurements:')
print('-----------------------')
print('Estimate: ' + str(np.mean(mass_emcee1)) + ' M_earth')
print('Variance: ' + str(vari2_e) + ' M_earth^2')
print('Standard Deviation: ' + str(stdd2_e) + ' M_earth')

plt.hist(mass_emcee1, bins=50)
plt.xlabel('Mass of the planet (in Earth-Mass)')
plt.ylabel('Counts')
```
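As an optional extra check (a sketch only, not required by the homework), the fixed `discard=100, thin=15` used above can be replaced by values based on the integrated autocorrelation time. This assumes the emcee 3 `get_autocorr_time` API, and the factors 3 and 1/2 below are common rules of thumb, not prescribed values.

```python
# Optional: choose burn-in and thinning from the estimated autocorrelation time (sketch)
try:
    tau = sampler.get_autocorr_time()            # one autocorrelation time per parameter, in steps
    print('autocorrelation times (steps):', tau)
    discard = int(3 * tau.max())                 # drop a few autocorrelation times as burn-in
    thin = max(1, int(tau.max() / 2))            # keep roughly independent samples
    flat = sampler.get_chain(discard=discard, thin=thin, flat=True)
    print(flat.shape[0], 'roughly independent samples after discard/thin')
except Exception as err:                         # raised if the chain is too short for a reliable estimate
    print('autocorrelation time could not be estimated reliably:', err)
```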
0056c1f9b6edd76d22acd82937c6eb86a8d5e8c6
483,516
ipynb
Jupyter Notebook
Mass_of_exoplanet/Homework-the_mass_of_exoplanets_const_nu.ipynb
Jayshil/Astro-data_science
8f83643197cf05981e09490352caeec3f0cde4ae
[ "MIT" ]
null
null
null
Mass_of_exoplanet/Homework-the_mass_of_exoplanets_const_nu.ipynb
Jayshil/Astro-data_science
8f83643197cf05981e09490352caeec3f0cde4ae
[ "MIT" ]
null
null
null
Mass_of_exoplanet/Homework-the_mass_of_exoplanets_const_nu.ipynb
Jayshil/Astro-data_science
8f83643197cf05981e09490352caeec3f0cde4ae
[ "MIT" ]
null
null
null
284.421176
129,244
0.908336
true
16,641
Qwen/Qwen-72B
1. YES 2. YES
0.887205
0.843895
0.748708
__label__eng_Latn
0.644439
0.577831
```python
from gpgLabs.DC import *
from IPython.display import display
%matplotlib inline
```

# 1. Understanding currents, fields, charges and potentials

## Cylinder app

- **survey**: Type of survey
- **A**: (+) Current electrode location
- **B**: (-) Current electrode location
- **M**: (+) Potential electrode location
- **N**: (-) Potential electrode location
- **r**: radius of cylinder
- **xc**: x location of cylinder center
- **zc**: z location of cylinder center
- **$\rho_1$**: Resistivity of the halfspace
- **$\rho_2$**: Resistivity of the cylinder
- **Field**: Field to visualize
- **Type**: which part of the field
- **Scale**: Linear or Log Scale visualization

```python
app = cylinder_app();
display(app)
```

MyApp(children=(ToggleButtons(description='survey', options=('Dipole-Dipole', 'Dipole-Pole', 'Pole-Dipole', 'P…

# 2. Potential differences and Apparent Resistivities

Using the widgets contained in this notebook you will develop a better understanding of what values are actually measured in a DC resistivity survey and how these measurements can be processed, plotted, inverted, and interpreted.

## Computing Apparent Resistivity

In practice we cannot measure the potentials everywhere; we are limited to those locations where we place electrodes. For each source (current electrode pair) many potential differences are measured between M and N electrode pairs to characterize the overall distribution of potentials. The widget below allows you to visualize the potentials, electric fields, and current densities from a dipole source in a simple model with 2 layers. For different electrode configurations you can measure the potential differences and see the calculated apparent resistivities.

In a uniform halfspace the potential differences can be computed by summing up the potentials at each measurement point from the different current sources based on the following equations:

\begin{align}
    V_M = \frac{\rho I}{2 \pi} \left[ \frac{1}{AM} - \frac{1}{MB} \right] \\
    V_N = \frac{\rho I}{2 \pi} \left[ \frac{1}{AN} - \frac{1}{NB} \right]
\end{align}

where $AM$, $MB$, $AN$, and $NB$ are the distances between the corresponding electrodes.

The potential difference $\Delta V_{MN}$ in a dipole-dipole survey can therefore be expressed as follows,

\begin{equation}
    \Delta V_{MN} = V_M - V_N = \rho I \underbrace{\frac{1}{2 \pi} \left[ \frac{1}{AM} - \frac{1}{MB} - \frac{1}{AN} + \frac{1}{NB} \right]}_{G}
\end{equation}

and the resistivity of the halfspace $\rho$ is equal to,

$$ \rho = \frac{\Delta V_{MN}}{IG} $$

In this equation $G$ is often referred to as the geometric factor.

In the case where we are not in a uniform halfspace, the above equation is used to compute the apparent resistivity ($\rho_a$), which is the resistivity of the uniform halfspace that best reproduces the measured potential difference.

In the top plot the location of the A electrode is marked by the red +, the B electrode is marked by the blue -, and the M/N potential electrodes are marked by the black dots. The $V_M$ and $V_N$ potentials are printed just above and to the right of the black dots. The calculated apparent resistivity is shown in the grey box to the right. The bottom plot can show the resistivity model, the electric fields (e), potentials, or current densities (j) depending on which toggle button is selected. Some patience may be required for the plots to update after parameters have been changed.
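As a small standalone illustration of the geometric factor and apparent resistivity defined above (not part of the gpgLabs widgets), the following helper computes $G$ and $\rho_a$ for colinear surface electrodes. The function names and the example electrode positions are chosen only for this sketch.

```python
# Apparent resistivity from the geometric factor G defined above (colinear surface electrodes, positions in metres)
import numpy as np

def geometric_factor(xa, xb, xm, xn):
    AM, MB = abs(xm - xa), abs(xb - xm)
    AN, NB = abs(xn - xa), abs(xb - xn)
    return (1.0 / (2.0 * np.pi)) * (1.0 / AM - 1.0 / MB - 1.0 / AN + 1.0 / NB)

def apparent_resistivity(dV, I, xa, xb, xm, xn):
    return dV / (I * geometric_factor(xa, xb, xm, xn))

# Example: a Wenner-type spread A M N B at 0, 10, 20, 30 m with dV = 0.1 V and I = 1 A;
# this reproduces the Wenner result rho_a = 2*pi*a*dV/I ~ 6.28 ohm-m for a = 10 m
print(apparent_resistivity(0.1, 1.0, xa=0.0, xb=30.0, xm=10.0, xn=20.0))
```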
## Two layer app

- **A**: (+) Current electrode location
- **B**: (-) Current electrode location
- **M**: (+) Potential electrode location
- **N**: (-) Potential electrode location
- **$\rho_1$**: Resistivity of the top layer
- **$\rho_2$**: Resistivity of the bottom layer
- **h**: thickness of the first layer
- **Plot**: Field to visualize
- **Type**: which part of the field

```python
app = plot_layer_potentials_app()
display(app)
```

MyApp(children=(FloatSlider(value=-30.0, continuous_update=False, description='A', max=40.0, min=-40.0, step=1…

# 3. Building Pseudosections

2D profiles are often plotted as pseudo-sections by extending $45^{\circ}$ lines downwards from the A-B and M-N midpoints and plotting the corresponding $\Delta V_{MN}$, $\rho_a$, or misfit value at the intersection of these lines as shown below. For pole-dipole or dipole-pole surveys the $45^{\circ}$ line is simply extended from the location of the pole. By using this method of plotting, the long offset electrodes plot deeper than those with short offsets. This provides a rough idea of the region sampled by each data point, but the vertical axis of a pseudo-section is not a true depth.

In the widget below the red dot marks the midpoint of the current dipole or the location of the A electrode in a pole-dipole array, while the green dots mark the midpoints of the potential dipoles or M electrode locations in a dipole-pole array. The blue dots then mark the location in the pseudo-section where the lines from Tx and Rx midpoints intersect and the data is plotted. By stepping through the Tx (current electrode pairs) using the slider you can see how the pseudo-section is built up.

The figures shown below show how the points in a pseudo-section are plotted for pole-dipole, dipole-pole, and dipole-dipole arrays. The color coding of the dots matches those shown in the widget.

<center>Basic schematic for a uniformly spaced pole-dipole array.</center>

<center>Basic schematic for a uniformly spaced dipole-pole array.</center>

<center>Basic schematic for a uniformly spaced dipole-dipole array.</center>

## Pseudo-section app

```python
app = MidpointPseudoSectionWidget();
display(app)
```

MyApp(children=(IntSlider(value=0, description='i', max=17), Output()), layout=Layout(align_items='stretch', d…

## DC pseudo-section app

- **$\rho_1$**: Resistivity of the first layer (thickness of the first layer is 5m)
- **$\rho_2$**: Resistivity of the cylinder
- resistivity of the second layer is 1000 $\Omega$m
- **xc**: x location of cylinder center
- **zc**: z location of cylinder center
- **r**: radius of cylinder
- **surveyType**: Type of survey

```python
app = DC2DPseudoWidget()
display(app)
```

interactive(children=(FloatText(value=1000.0, description='$\\rho_1$'), FloatText(value=1000.0, description='$…
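Before moving on, the plotting geometry described at the start of this section can be written down in two lines: the two $45^{\circ}$ lines dropped from the source and receiver midpoints meet halfway between them, at a pseudo-depth equal to half their separation. The helper below is only an illustration of that geometry (the names and example positions are not part of the gpgLabs code).

```python
# Geometry of a single pseudo-section plotting point (positions in metres)
def pseudosection_point(x_src_mid, x_rx_mid):
    x_plot = 0.5 * (x_src_mid + x_rx_mid)           # halfway between the two midpoints
    pseudo_depth = 0.5 * abs(x_rx_mid - x_src_mid)  # two 45-degree lines meet at half the separation
    return x_plot, pseudo_depth

# Example: current-dipole midpoint at 5 m, potential-dipole midpoint at 35 m
print(pseudosection_point(5.0, 35.0))   # (20.0, 15.0) -> plotted at x = 20 m, pseudo-depth 15 m
```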
By systematically varying the model parameters and comparing the plots of observed vs. predicted apparent resistivity a parametric inversion can be preformed by hand to find the "best" fitting model. Normalized data misfits, which provide a numerical measure of the difference between the observed and predicted data, are useful for quantifying how well and inversion model fits the observed data. The manual inversion process can be difficult and time consuming even with small examples sure as the one presented here. Therefore, numerical optimization algorithms are typically utilized to minimized the data misfit and a model objective function, which provides information about the model structure and complexity, in order to find an optimal solution. ## Parametric DC inversion app Definition of variables: - **$\rho_1$**: Resistivity of the first layer - **$\rho_2$**: Resistivity of the cylinder - **xc**: x location of cylinder center - **zc**: z location of cylinder center - **r**: radius of cylinder - **predmis**: toggle which allows you to switch the bottom pannel from predicted apparent resistivity to normalized data misfit - **suveyType**: toggle which allows you to switch between survey types. Knonw information - resistivity of the second layer is 1000 $\Omega$m - thickness of the first layer is known: 5m Unknowns are: $\rho_1$, $\rho_2$, xc, zc, and r ```python app = DC2DfwdWidget() display(app) ``` MyApp(children=(FloatText(value=1000.0, description='$\\rho_1$'), FloatText(value=1000.0, description='$\\rho_…
432d34c7f6586e5b168fbf74123d43d47917c835
13,793
ipynb
Jupyter Notebook
Notebooks/DC_SurveyDataInversion.ipynb
AlainPlattner/gpgLabs
2423f0f2a845a5e44304da5e683881c65a9e4792
[ "MIT" ]
null
null
null
Notebooks/DC_SurveyDataInversion.ipynb
AlainPlattner/gpgLabs
2423f0f2a845a5e44304da5e683881c65a9e4792
[ "MIT" ]
null
null
null
Notebooks/DC_SurveyDataInversion.ipynb
AlainPlattner/gpgLabs
2423f0f2a845a5e44304da5e683881c65a9e4792
[ "MIT" ]
null
null
null
38.744382
766
0.619735
true
2,064
Qwen/Qwen-72B
1. YES 2. YES
0.800692
0.785309
0.62879
__label__eng_Latn
0.997607
0.299221
# Measuring qubits in qoqo

This notebook is designed to demonstrate the use of measurements in qoqo. We will look at several examples of measuring qubits, from single and multi-qubit registers. To learn about the effect of measurement, we will look at the state vectors before and after measurement.

```python
from qoqo_quest import Backend
from qoqo import Circuit
from qoqo import operations as ops
```

## Measuring a single qubit

Here we first prepare the qubit in a superposition state,
\begin{equation} |+ \rangle = \frac{1}{\sqrt{2}} \big ( |0 \rangle + |1 \rangle \big ). \end{equation}
We look at the state after preparation, then do a measurement in the Z basis, and finally look again at the state after measurement. We see that the state after measurement has been projected onto either $|0\rangle$ or $|1\rangle$, consistent with the measurement outcome. Running this code many times should result in a random distribution of 'True' and 'False' outcomes.

```python
state_init = Circuit()
state_init += ops.Hadamard(qubit=0)  # prepare |+> state

# write state before measuring to readout register 'psi_in'
read_input = Circuit()
read_input += ops.DefinitionComplex(name='psi_in', length=2, is_output=True)
read_input += ops.PragmaGetStateVector(readout='psi_in', circuit=Circuit())

# measure qubit in Z basis and write result to classical register 'M1'
meas_circ = Circuit()
meas_circ += ops.DefinitionBit(name='M1', length=1, is_output=True)
meas_circ += ops.MeasureQubit(qubit=0,readout='M1',readout_index=0)

# write state after measuring to readout register 'psi_out'
read_output = Circuit()
read_output += ops.DefinitionComplex(name='psi_out', length=2, is_output=True)
read_output += ops.PragmaGetStateVector(readout='psi_out', circuit=Circuit())

# put each step of the circuit together
circuit = state_init + read_input + meas_circ + read_output

# run the circuit and collect output
backend = Backend(number_qubits=1)
(result_bit_registers, result_float_registers, result_complex_registers) = backend.run_circuit(circuit)

print('Input state: \n', result_complex_registers['psi_in'][0], '\n')
print('Measurement result: ', result_bit_registers['M1'][0][0], '\n')
print('State after measurement: \n', result_complex_registers['psi_out'][0])
```

Input state:
[(0.7071067811865475+0j), (0.7071067811865475+0j)]

Measurement result: False

State after measurement:
[(1+0j), 0j]

## Measuring a single qubit in the X basis

Instead of measuring in the Z basis, we can measure the qubit in the X basis by applying a Hadamard gate before the measurement. This time we see that the measurement result is always 'False', since we are measuring the $|+ \rangle$ state in the X basis, and it is an eigenvector of the X operator.
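A quick way to see why the outcome is deterministic (using the usual convention for the Hadamard gate):
\begin{align}
H|+\rangle &= \frac{1}{\sqrt{2}}\big(H|0\rangle + H|1\rangle\big) \\
&= \frac{1}{2}\big(|0\rangle + |1\rangle\big) + \frac{1}{2}\big(|0\rangle - |1\rangle\big) = |0\rangle ,
\end{align}
so after the basis-change Hadamard the register is exactly $|0\rangle$, and the Z-basis measurement returns 'False' with probability 1.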
```python
# add Hadamard operator to change from Z to X basis
meas_X_circ = Circuit()
meas_X_circ += ops.DefinitionBit(name='M1', length=1, is_output=True)
meas_X_circ += ops.Hadamard(qubit=0)
meas_X_circ += ops.MeasureQubit(qubit=0,readout='M1',readout_index=0)

# perform additional Hadamard after measurement to read out in Z basis
read_output = Circuit()
read_output += ops.DefinitionComplex(name='psi_out', length=2, is_output=True)
read_output += ops.Hadamard(qubit=0)
read_output += ops.PragmaGetStateVector(readout='psi_out', circuit=Circuit())

circuit = state_init + read_input + meas_X_circ + read_output

# run the circuit and collect output
backend = Backend(number_qubits=1)
(result_bit_registers, result_float_registers, result_complex_registers) \
    = backend.run_circuit(circuit)

print('Input state: \n', result_complex_registers['psi_in'][0], '\n')
print('Measurement result: ', result_bit_registers['M1'][0][0], '\n')
print('State after measurement: \n', result_complex_registers['psi_out'][0])
```

Input state:
[(0.7071067811865475+0j), (0.7071067811865475+0j)]

Measurement result: False

State after measurement:
[(0.7071067811865475+0j), (0.7071067811865475+0j)]

## Measuring a multi-qubit register

Here we first prepare a multi-qubit register and demonstrate how it is possible to measure the entire register. As an example we prepare the multi-qubit register in the state,
\begin{equation} |\psi \rangle = \frac{1}{\sqrt{2}} |010 \rangle + \frac{i}{\sqrt{2}} |101 \rangle. \end{equation}
After preparation we read out the simulated state, before measurement. Next we measure each qubit of the state, and finally we read out the post-measurement state.

```python
number_of_qubits = 3

state_init = Circuit()
state_init += ops.PauliX(qubit=1)
state_init += ops.Hadamard(qubit=0)
state_init += ops.CNOT(control=0, target=1)
state_init += ops.CNOT(control=0, target=2)
state_init += ops.SGate(qubit=0)

# write state before measuring to readout register 'psi_in'
read_input = Circuit()
read_input += ops.DefinitionComplex(name='psi_in', length=2**number_of_qubits, is_output=True)
read_input += ops.PragmaGetStateVector(readout='psi_in', circuit=Circuit())

# measure qubits in Z basis and write result to classical register 'M1M2M3'
meas_circ = Circuit()
meas_circ += ops.DefinitionBit(name='M1M2M3', length=3, is_output=True)
meas_circ += ops.MeasureQubit(qubit=0,readout='M1M2M3',readout_index=0)
meas_circ += ops.MeasureQubit(qubit=1,readout='M1M2M3',readout_index=1)
meas_circ += ops.MeasureQubit(qubit=2,readout='M1M2M3',readout_index=2)

# write state after measuring to readout register 'psi_out'
read_output = Circuit()
read_output += ops.DefinitionComplex(name='psi_out', length=2**number_of_qubits, is_output=True)
read_output += ops.PragmaGetStateVector(readout='psi_out', circuit=Circuit())

circuit = state_init + read_input + meas_circ + read_output

# run the circuit and collect output
backend = Backend(number_qubits=number_of_qubits)
(result_bit_registers, result_float_registers, result_complex_registers) \
    = backend.run_circuit(circuit)

print('Input state: \n', result_complex_registers['psi_in'][0], '\n')
print('Measurement results: ', result_bit_registers['M1M2M3'][0], '\n')
print('State after measurement: \n', result_complex_registers['psi_out'][0])
```

Input state:
[0j, 0j, (0.7071067811865475+0j), 0j, 0j, 0.7071067811865475j, 0j, 0j]

Measurement results: [False, True, False]

State after measurement:
[0j, 0j, (1+0j), 0j, 0j, 0j, 0j, 0j]

## Measuring one qubit from a multi-qubit register
Measuring only a single qubit from a multi-qubit register is an almost identical process to measuring the entire register, except we only add a single measurement in this case. Here we again prepare the input state,
\begin{equation} |\psi \rangle = \frac{1}{\sqrt{2}} |010 \rangle + \frac{i}{\sqrt{2}} |101 \rangle. \end{equation}
After preparation we read out the simulated state, before measurement. Next we measure the first qubit of the state, and finally we read out the post-measurement state.

```python
number_of_qubits = 3

state_init = Circuit()
state_init += ops.PauliX(qubit=1)
state_init += ops.Hadamard(qubit=0)
state_init += ops.CNOT(control=0, target=1)
state_init += ops.CNOT(control=0, target=2)
state_init += ops.SGate(qubit=0)

# write state before measuring to readout register 'psi_in'
read_input = Circuit()
read_input += ops.DefinitionComplex(name='psi_in', length=2**number_of_qubits, is_output=True)
read_input += ops.PragmaGetStateVector(readout='psi_in', circuit=Circuit())

# measure qubit in Z basis and write result to classical register 'M1'
meas_circ = Circuit()
meas_circ += ops.DefinitionBit(name='M1', length=1, is_output=True)
meas_circ += ops.MeasureQubit(qubit=0,readout='M1',readout_index=0)

# write state after measuring to readout register 'psi_out'
read_output = Circuit()
read_output += ops.DefinitionComplex(name='psi_out', length=2**number_of_qubits, is_output=True)
read_output += ops.PragmaGetStateVector(readout='psi_out', circuit=Circuit())

circuit = state_init + read_input + meas_circ + read_output

# run the circuit and collect output
backend = Backend(number_qubits=number_of_qubits)
(result_bit_registers, result_float_registers, result_complex_registers) \
    = backend.run_circuit(circuit)

print('Input state: \n', result_complex_registers['psi_in'][0], '\n')
print('Measurement results: ', result_bit_registers['M1'][0], '\n')
print('State after measurement: \n', result_complex_registers['psi_out'][0])
```

Input state:
[0j, 0j, (0.7071067811865475+0j), 0j, 0j, 0.7071067811865475j, 0j, 0j]

Measurement results: [True]

State after measurement:
[0j, 0j, 0j, 0j, 0j, 0.9999999999999998j, 0j, 0j]
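The post-measurement state printed above can also be reproduced by hand with the usual projection rule: obtaining 'True' for qubit 0 keeps only the basis states whose qubit-0 bit is 1 and renormalizes. The sketch below is library-independent (pure numpy) and uses the same qubit-0-as-least-significant-bit ordering as the state vectors printed above.

```python
# Library-independent sketch of the projection rule for measuring qubit 0 and obtaining 'True'
import numpy as np

psi = np.zeros(8, dtype=complex)
psi[2] = 1 / np.sqrt(2)        # |010>
psi[5] = 1j / np.sqrt(2)       # |101>

outcome = 1                    # the 'True' result for qubit 0
mask = np.array([(k >> 0) & 1 == outcome for k in range(8)])

projected = np.where(mask, psi, 0.0)          # keep only amplitudes compatible with the outcome
post = projected / np.linalg.norm(projected)  # renormalize
print(post)                                   # only the |101> amplitude survives, renormalized to 1j
```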
da5a8f49b7c66b4c8414cc383a809ca70f93eb4e
12,345
ipynb
Jupyter Notebook
qoqo/examples/Measurement_Example.ipynb
kbarkhqs/qoqo
87677a6a649b181cd7212612345cff1faa82225d
[ "Apache-2.0" ]
8
2021-02-09T19:35:55.000Z
2022-03-09T19:57:39.000Z
qoqo/examples/Measurement_Example.ipynb
kbarkhqs/qoqo
87677a6a649b181cd7212612345cff1faa82225d
[ "Apache-2.0" ]
128
2021-03-29T14:48:17.000Z
2022-03-30T22:30:23.000Z
qoqo/examples/Measurement_Example.ipynb
kbarkhqs/qoqo
87677a6a649b181cd7212612345cff1faa82225d
[ "Apache-2.0" ]
7
2021-02-04T16:28:18.000Z
2022-01-24T08:49:17.000Z
38.219814
279
0.607371
true
2,435
Qwen/Qwen-72B
1. YES 2. YES
0.847968
0.785309
0.665916
__label__eng_Latn
0.860702
0.385478
## SurfinPy

#### Tutorial 1 - Generating a phase diagram as a function of chemical potential and surface energy at 0 K

In this tutorial we will learn how to generate a basic phase diagram from DFT energies. This example will consider a series of surfaces that contain differing amounts of surface oxygen and adsorbed water species.

The physical quantity that is used to define the stability of a surface with a given composition is its surface energy $\gamma$ (J $m^{-2}$). Going forward in this tutorial we will use the example of water adsorbing onto defective Ti$O_2$ surfaces.

\begin{align}
\gamma_{Surf} & = \frac{1}{2A} \Bigg( E_{TiO_2}^{slab} - \frac{nTi_{slab}}{nTi_{Bulk}} E_{TiO_2}^{Bulk} \Bigg) - \Gamma_O \mu_O - \Gamma_{H_2O} \mu_{H_2O} ,
\end{align}

where A is the surface area, $E_{TiO_2}^{slab}$ is the DFT energy of the slab, $nTi_{Slab}$ is the number of cations in the slab, $nTi_{Bulk}$ is the number of cations in the bulk unit cell, $E_{TiO_2}^{Bulk}$ is the DFT energy of the bulk unit cell, and

\begin{align}
\Gamma_O & = \frac{1}{2A} \Bigg( nO_{Slab} - \frac{nO_{Bulk}}{nTi_{Bulk}}nTi_{Slab} \Bigg) ,
\end{align}

\begin{align}
\Gamma_{H_2O} & = \frac{nH_2O}{2A} ,
\end{align}

where $nO_{Slab}$ is the number of anions in the slab, $nO_{Bulk}$ is the number of anions in the bulk and $nH_2O$ is the number of adsorbing water molecules. $\Gamma_O$ / $\Gamma_{H_2O}$ is the excess oxygen / water at the surface and $\mu_O$ / $\mu_{H_2O}$ is the oxygen / water chemical potential. Clearly $\Gamma$ and $\mu$ will only matter when the surface is non-stoichiometric.

So now let's work through an example.

```python
from surfinpy import mu_vs_mu
```

The first thing to do is input the data that we have generated from our DFT simulations. The input data needs to be contained within a dictionary. First we have created the dictionary for the bulk data, where 'Cation' is the number of cations, 'Anion' is the number of anions, 'Energy' is the DFT energy and 'F-Units' is the number of formula units.

```
bulk = {'Cation' : Cations in Bulk Unit Cell,
        'Anion' : Anions in Bulk Unit Cell,
        'Energy' : Energy of Bulk Calculation,
        'F-Units' : Formula units in Bulk Calculation}
```

```python
bulk = {'Cation' : 1, 'Anion' : 2, 'Energy' : -780.0, 'F-Units' : 4}
```

Next we create the surface dictionaries - one for each surface or "phase". 'Cation' is the number of cations, 'X' is, in this case, the number of oxygen species (corresponding to the X axis of the phase diagram), 'Y' is, in this case, the number of water molecules (corresponding to the Y axis of our phase diagram), 'Area' is the surface area, 'Energy' is the DFT energy, 'Label' is the label for the surface (appears on the phase diagram) and finally 'nSpecies' is the number of adsorbing species.
```
surface = {'Cation': Cations in Slab,
           'X': Number of Species X in Slab,
           'Y': Number of Species Y in Slab,
           'Area': Surface area in the slab,
           'Energy': Energy of Slab,
           'Label': Label for phase,
           'nSpecies': How many species are non stoichiometric}
```

```python
pure = {'Cation': 24, 'X': 48, 'Y': 0, 'Area': 60.0, 'Energy': -575.0, 'Label': 'Stoich', 'nSpecies': 1}
H2O = {'Cation': 24, 'X': 48, 'Y': 2, 'Area': 60.0, 'Energy': -612.0, 'Label': '1 Water', 'nSpecies': 1}
H2O_2 = {'Cation': 24, 'X': 48, 'Y': 4, 'Area': 60.0, 'Energy': -640.0, 'Label': '2 Water', 'nSpecies': 1}
H2O_3 = {'Cation': 24, 'X': 48, 'Y': 8, 'Area': 60.0, 'Energy': -676.0, 'Label': '3 Water', 'nSpecies': 1}
Vo = {'Cation': 24, 'X': 46, 'Y': 0, 'Area': 60.0, 'Energy': -558.0, 'Label': 'Vo', 'nSpecies': 1}
H2O_Vo = {'Cation': 24, 'X': 46, 'Y': 2, 'Area': 60.0, 'Energy': -594.0, 'Label': 'Vo + 1 Water', 'nSpecies': 1}
H2O_Vo_2 = {'Cation': 24, 'X': 46, 'Y': 4, 'Area': 60.0, 'Energy': -624.0, 'Label': 'Vo + 2 Water', 'nSpecies': 1}
H2O_Vo_3 = {'Cation': 24, 'X': 46, 'Y': 6, 'Area': 60.0, 'Energy': -640.0, 'Label': 'Vo + 3 Water', 'nSpecies': 1}
H2O_Vo_4 = {'Cation': 24, 'X': 46, 'Y': 8, 'Area': 60.0, 'Energy': -670.0, 'Label': 'Vo + 4 Water', 'nSpecies': 1}
```

Next we need to create a list of our data. Don't worry about the order, surfinpy will sort that out for you.

```python
data = [pure, H2O_2, H2O_Vo, H2O, H2O_Vo_2, H2O_3, H2O_Vo_3, H2O_Vo_4, Vo]
```

We now need to generate our X and Y axes, or more appropriately, our chemical potential values. Again these exist in a dictionary. 'Range' corresponds to the range of chemical potential values to be considered and 'Label' is the axis label.

```
deltaX = {'Range': Range of Chemical Potential,
          'Label': Species Label}
```

```python
deltaX = {'Range': [ -12, -6], 'Label': 'O'}
deltaY = {'Range': [ -19, -12], 'Label': 'H_2O'}
```

And finally we can generate our plot using these four pieces of data.

```python
system = mu_vs_mu.calculate(data, bulk, deltaX, deltaY)
system.plot_phase()
```

This plot is a good start in that the relative stability of the surfaces has been evaluated. However, the chemical potential values are essentially meaningless and can be dependent on the pseudopotentials. The first thing that we can do is add in the DFT energy of species X/Y; this will give us a 0 K phase diagram.

```python
Zero_K = mu_vs_mu.calculate(data, bulk, deltaX, deltaY, x_energy=-4.54, y_energy=-14.84)
Zero_K.plot_phase(output="seaborn_rdybu.png", set_style="seaborn-dark-palette", colourmap="RdYlBu")
```
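As a small check of the excess formulas introduced at the top of this tutorial, the snippet below evaluates $\Gamma_O$ and $\Gamma_{H_2O}$ for a few of the example dictionaries above. This is purely illustrative arithmetic (the `excesses` helper is not part of surfinpy), and the units of $\Gamma$ are simply one over whatever units 'Area' is given in.

```python
# Illustrative evaluation of Gamma_O and Gamma_H2O using the example dictionaries above
def excesses(surface, bulk):
    gamma_O = (surface['X'] - (bulk['Anion'] / bulk['Cation']) * surface['Cation']) / (2 * surface['Area'])
    gamma_H2O = surface['Y'] / (2 * surface['Area'])
    return gamma_O, gamma_H2O

print(excesses(pure, bulk))      # (0.0, 0.0)        -> stoichiometric, no adsorbed water
print(excesses(Vo, bulk))        # (~-0.0167, 0.0)   -> oxygen deficient
print(excesses(H2O_Vo, bulk))    # (~-0.0167, ~0.0167)
```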
f2b2338f72c3da200a337b84fbfb336f8ee480f8
8,379
ipynb
Jupyter Notebook
examples/.ipynb_checkpoints/Tutorial_1-checkpoint.ipynb
awvwgk/SurfinPy
b094d8af592b79b73cb31a42f4be6b5cd0ac38f3
[ "MIT" ]
30
2019-01-28T17:47:24.000Z
2022-03-22T03:26:00.000Z
examples/.ipynb_checkpoints/Tutorial_1-checkpoint.ipynb
awvwgk/SurfinPy
b094d8af592b79b73cb31a42f4be6b5cd0ac38f3
[ "MIT" ]
14
2018-09-03T15:49:06.000Z
2022-02-08T22:09:51.000Z
examples/.ipynb_checkpoints/Tutorial_1-checkpoint.ipynb
awvwgk/SurfinPy
b094d8af592b79b73cb31a42f4be6b5cd0ac38f3
[ "MIT" ]
19
2019-02-11T09:11:29.000Z
2022-03-11T08:47:24.000Z
35.655319
503
0.566774
true
1,832
Qwen/Qwen-72B
1. YES 2. YES
0.839734
0.692642
0.581635
__label__eng_Latn
0.976246
0.189663
# Atomic Hydrogen and Helium Photoionization Cross-Sections

Figure 4.1 from Chapter 4 of *Interstellar and Intergalactic Medium* by Ryden & Pogge, 2021, Cambridge University Press.

Plot the ground state photoionization cross sections for H<sup>0</sup>, He<sup>0</sup>, and He<sup>+</sup>. For the hydrogenic species (H<sup>0</sup> and He<sup>+</sup>) we use the approximation formulae given in Osterbrock & Ferland, *Astrophysics of Gaseous Nebulae & Active Galactic Nuclei*. Calculations for He<sup>0</sup> use the data of [Samson & Stolte 2002, JESRP, 123, 265](https://www.sciencedirect.com/science/article/abs/pii/S0368204802000269), [doi:10.1016/S0368-2048(02)00026-9](https://doi.org/10.1016/S0368-2048(02)00026-9), fit using the power-law function presented by Osterbrock & Ferland to update the parameters.

```python
%matplotlib inline

import math
import numpy as np
import pandas as pd

import matplotlib
import matplotlib.pyplot as plt
from matplotlib.ticker import MultipleLocator, LogLocator, NullFormatter

import warnings
warnings.filterwarnings('ignore',category=UserWarning, append=True)
warnings.filterwarnings('ignore',category=RuntimeWarning, append=True)
```

## Standard Plot Format

Setup the standard plotting format and make the plot. Fonts and resolution adopted follow CUP style.

```python
figName = 'Fig4_1'

# graphic aspect ratio = width/height
aspect = 4.0/3.0 # 4:3

# Text width in inches - don't change, this is defined by the print layout
textWidth = 6.0 # inches

# output format and resolution
figFmt = 'png'
dpi = 600

# Graphic dimensions
plotWidth = dpi*textWidth
plotHeight = plotWidth/aspect
axisFontSize = 10
labelFontSize = 6
lwidth = 0.5
axisPad = 5
wInches = textWidth
hInches = wInches/aspect

# Plot filename
plotFile = f'{figName}.{figFmt}'

# LaTeX is used throughout for markup of symbols, Times-Roman serif font
plt.rc('text', usetex=True)
plt.rc('font', **{'family':'serif','serif':['Times-Roman'],'weight':'bold','size':'16'})

# Font and line weight defaults for axes
matplotlib.rc('axes',linewidth=lwidth)
matplotlib.rcParams.update({'font.size':axisFontSize})

# axis and label padding
plt.rcParams['xtick.major.pad'] = f'{axisPad}'
plt.rcParams['ytick.major.pad'] = f'{axisPad}'
plt.rcParams['axes.labelpad'] = f'{axisPad}'
```

## Photoionization cross-section calculations

Perform the calculation for three ions: H<sup>0</sup>, He<sup>0</sup>, and He<sup>+</sup>.

### Hydrogenic ions (H<sup>0</sup>, He<sup>+</sup>, etc.)

For hydrogenic ions with nuclear charge $Z$, the photoionization threshold and cross-section at threshold scale with $Z$ as

 * photoionization threshold frequency: $\nu_0(Z) = Z^2cR_{\infty}$
 * photoionization cross-section at threshold: $\sigma_0(Z) = Z^{-2}\sigma_0(H^0)$, where $\sigma_0(H^0)=6.3\times10^{-18}$cm$^2$

The photoionization cross-section as a function of frequency and nuclear charge for a hydrogenic ion is:
\begin{equation}
\sigma_H(\nu,Z) = \sigma_0(Z)\left(\frac{\nu}{\nu_0(Z)}\right)^{-4} \left[\frac{\exp\left[4-\left(4\frac{\tan^{-1}\epsilon(\nu)}{\epsilon(\nu)}\right)\right]}{1-\exp\left(-\frac{2\pi}{\epsilon(\nu)}\right)}\right]
\end{equation}
where $\epsilon(\nu)$ is a dimensionless frequency coefficient:
\begin{equation}
\epsilon(\nu) = \left(\frac{\nu}{\nu_0}-1\right)^{1/2}
\end{equation}
and $\sigma_H(\nu,Z)=0$ for $\nu<\nu_0(Z)$.
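For reference, the hydrogenic formula above can be packaged as a single standalone function; the same expression is evaluated inline for H<sup>0</sup> and He<sup>+</sup> in the calculation cell further below, so this helper is just an illustrative restatement (the function name is not used elsewhere in the notebook).

```python
# Standalone sketch of the hydrogenic photoionization cross-section formula above
import numpy as np

def sigma_hydrogenic(nu, Z):
    """Photoionization cross-section in cm^2 for a hydrogenic ion of charge Z at frequency nu [Hz]."""
    h = 4.135667696e-15          # Planck constant, eV s (NIST CODATA 2018)
    Ryd = 13.605693122994        # Rydberg energy, eV
    nu0 = (Z * Z) * Ryd / h      # threshold frequency
    sigma0 = 6.30e-18 / (Z * Z)  # threshold cross-section
    nu = np.atleast_1d(np.asarray(nu, dtype=float))
    sigma = np.zeros_like(nu)    # zero below threshold
    above = nu > nu0             # strictly above threshold (the limit at nu = nu0 is sigma0)
    eps = np.sqrt(nu[above] / nu0 - 1.0)
    sigma[above] = (sigma0 * (nu0 / nu[above])**4
                    * np.exp(4.0 - 4.0 * np.arctan(eps) / eps)
                    / (1.0 - np.exp(-2.0 * np.pi / eps)))
    return sigma

print(sigma_hydrogenic(1.001 * 13.605693122994 / 4.135667696e-15, Z=1))  # just above threshold, ~6.3e-18 cm^2
```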
### Neutral Helium

For neutral helium we adopt the approximation formula given by Osterbrock & Ferland

* $\sigma_{He^0}(\nu) = \sigma_0\left[\beta\epsilon(\nu)^{-s} + (1-\beta)\epsilon(\nu)^{-s-1}\right]$

where $\sigma_0$ is the photoionization cross-section at the threshold frequency, and $\beta$ and $s$ are fit coefficients derived from the data. In this formula $\epsilon(\nu)\equiv\nu/\nu_0$ is simply the frequency in units of the threshold frequency (note that this differs from the hydrogenic $\epsilon(\nu)$ defined above), so that $\sigma_{He^0}(\nu_0)=\sigma_0$ at threshold.

From the experimental data of Samson & Stolte 2002, we derive fit coefficients:

* $\beta$=2.07
* $s$=2.45

### Physical Constants:

Hydrogenic Atoms (H<sup>0</sup> and He<sup>+</sup>):
* $h\nu_0$ = Z<sup>2</sup> $\times$ 13.605693122994 eV
* $\sigma_0$ = 6.30$\times$10<sup>-18</sup> cm<sup>2</sup>/Z<sup>2</sup>

Neutral Helium (He<sup>0</sup>):
* $h\nu_0$ = 24.587 eV
* $\sigma_0$ = 7.40$\times$10<sup>-18</sup> cm<sup>2</sup>

Physical constants are from the [NIST CODATA2018 database](https://physics.nist.gov/cuu/Constants/index.html).

```python
# Physical Constants

h = 4.135667696e-15   # Planck constant in eV-s - NIST CODATA 2018
Ryd = 13.605693122994 # Rydberg energy in eV - NIST CODATA 2018

# Hydrogenic species (H0, He+)

sig0H = 6.30e-18 # photoionization cross-section at threshold for H in cm^2

ZH0 = 1 # nuclear charge of H0
hnu1H0 = (ZH0*ZH0)*Ryd
nu1H0 = hnu1H0/h
sig0Z = sig0H/(ZH0*ZH0)

# note: eps=0 at the threshold frequency makes the first array element NaN (0/0 in the
# arctan term); the plotting cell below bridges the threshold point explicitly
nuH0 = np.linspace(nu1H0,3.5e16,101)
eps = np.sqrt((nuH0/nu1H0)-1)
sigH0 = (sig0Z*(nu1H0/nuH0)**4)*(np.exp(4-((4*np.arctan(eps))/eps)))/(1-np.exp(-2.0*math.pi/eps))
hnuH0 = h*nuH0

ZHeP=2
hnu1HeP = (ZHeP*ZHeP)*Ryd
nu1HeP = hnu1HeP/h
sig0Z = sig0H/(ZHeP*ZHeP)

nuHeP = np.linspace(nu1HeP,5.0e16,101)
eps = np.sqrt((nuHeP/nu1HeP)-1)
sigHeP = (sig0Z*(nu1HeP/nuHeP)**4)*(np.exp(4-((4*np.arctan(eps))/eps)))/(1-np.exp(-2.0*math.pi/eps))
hnuHeP = h*nuHeP

# He0 - Osterbrock approximation using parameters from a fit to the data of Samson & Stolte 2002

beta = 2.07
s = 2.45
sig0He = 7.40e-18 # from Samson & Stolte 2002

hnu1He0 = 24.587 # eV
nu1He0 = hnu1He0/h # Hz

nuHe0 = np.linspace(nu1He0,5.8e16,101)
eps = nuHe0/nu1He0   # for He0, eps is nu/nu0 (see the formula above)
sigHe0 = sig0He*(beta*np.power(eps,-s) + (1.0-beta)*np.power(eps,-s-1))
hnuHe0 = h*nuHe0

# plotting limits

xMin = 0.0 # eV
xMax = 120.0

yMin = 0.0 # x10^-18 cm^2
yMax = 8.0
```

### Make the Plot

Plot the photoionization cross-sections, adding a vertical line at each ionization threshold.
```python fig,ax = plt.subplots() fig.set_dpi(dpi) fig.set_size_inches(wInches,hInches,forward=True) ax.tick_params('both',length=6,width=lwidth,which='major',direction='in',top='on',right='on') ax.tick_params('both',length=3,width=lwidth,which='minor',direction='in',top='on',right='on') plt.xlim(xMin,xMax) ax.xaxis.set_major_locator(MultipleLocator(20)) ax.xaxis.set_minor_locator(MultipleLocator(5)) plt.xlabel(r'$h\nu$ [eV]',fontsize=axisFontSize) plt.ylim(yMin,yMax) ax.yaxis.set_major_locator(MultipleLocator(2.0)) ax.yaxis.set_minor_locator(MultipleLocator(0.5)) plt.ylabel(r'$\sigma_{pho}(\nu)$ [10$^{-18}$\,cm$^2$]',fontsize=axisFontSize) # plot the curves sigScale = 1.0e18 # scale in units of 10^-18 cm^2 lwC = 1.0 # H0 plt.plot(hnuH0,sigScale*sigH0,lw=lwC,color='black',zorder=10) plt.plot([hnu1H0,hnu1H0],[0.0,sigScale*sig0H],lw=lwC,color='black',zorder=10) plt.plot([hnu1H0,hnuH0[1]],[sigScale*sig0H,sigScale*sigH0[1]],lw=lwC,color='black',zorder=10) plt.text(hnu1H0,sigScale*sig0H,r'H$^0$',color='black',zorder=10,ha='center',va='bottom', fontsize=axisFontSize) # He+ plt.plot(hnuHeP,sigScale*sigHeP,lw=lwC,color='black',zorder=10) plt.plot([hnu1HeP,hnu1HeP],[0.0,sigScale*sig0H/(ZHeP*ZHeP)],lw=lwC,color='black',zorder=10) plt.plot([hnu1HeP,hnuHeP[1]],[sigScale*sig0H/(ZHeP*ZHeP),sigScale*sigHeP[1]], lw=lwC,color='black',zorder=10) plt.text(hnu1HeP-0.01,sigScale*sig0H/(ZHeP*ZHeP),r'He$^+$',color='black',zorder=10,ha='right',va='top', fontsize=axisFontSize) # He0 plt.plot(hnuHe0,sigScale*sigHe0,lw=lwC,color='black',zorder=10) plt.plot([hnu1He0,hnu1He0],[0.0,sigScale*sig0He],lw=lwC,color='black',zorder=10) plt.plot([hnu1He0,hnuHe0[1]],[sigScale*sig0He,sigScale*sigHe0[1]],lw=lwC,color='black',zorder=10) plt.text(hnu1He0,sigScale*sig0He,r'He$^0$',color='black',zorder=10,ha='center',va='bottom', fontsize=axisFontSize) # plot and file plt.plot() plt.savefig(plotFile,bbox_inches='tight',facecolor='white') ```
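As a quick sanity check on the numbers that go into the figure, the cell below (not part of the original figure code) prints the threshold energies and threshold cross-sections defined above; from the $Z^2$ scaling, the He<sup>+</sup> threshold should come out at $4\times13.606\simeq54.4$ eV and its threshold cross-section at one quarter of the hydrogen value.

```python
# Print the threshold energies and threshold cross-sections used above
# (relies on the variables defined in the previous cells; the printout itself is just a check).
for label, hnu_t, sig_t in [('H0 ', hnu1H0,  sig0H),
                            ('He0', hnu1He0, sig0He),
                            ('He+', hnu1HeP, sig0H/(ZHeP*ZHeP))]:
    print(f'{label}: threshold = {hnu_t:6.2f} eV, sigma_0 = {sig_t/1.0e-18:5.3f} x 10^-18 cm^2')
```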
60202c95ddb2c8c8b8f370a157bec4c66cf152cd
10,857
ipynb
Jupyter Notebook
Chapter4/Fig4_1_PI_Cross.ipynb
CambridgeUniversityPress/Interstellar-and-Intergalactic-Medium
6d19cd4a517126e0f4737ba0f338117098224d92
[ "CC0-1.0", "CC-BY-4.0" ]
10
2021-04-20T07:26:10.000Z
2022-02-24T11:02:47.000Z
Chapter4/Fig4_1_PI_Cross.ipynb
CambridgeUniversityPress/Interstellar-and-Intergalactic-Medium
6d19cd4a517126e0f4737ba0f338117098224d92
[ "CC0-1.0", "CC-BY-4.0" ]
null
null
null
Chapter4/Fig4_1_PI_Cross.ipynb
CambridgeUniversityPress/Interstellar-and-Intergalactic-Medium
6d19cd4a517126e0f4737ba0f338117098224d92
[ "CC0-1.0", "CC-BY-4.0" ]
null
null
null
35.713816
252
0.572442
true
2,741
Qwen/Qwen-72B
1. YES 2. YES
0.857768
0.785309
0.673613
__label__eng_Latn
0.47576
0.403359
# 1. DQN Algorithms

DQN (Deep Q-Network) addresses the following question: how can we learn an estimate of the optimal action-value function $Q^*$ from the experience the agent collects, and then act greedily with respect to that estimate? The main ideas are to approximate $Q^*$ with a neural network, to train it on transitions sampled from an experience replay buffer, and to stabilize the bootstrapped targets with a separate target network that is only updated periodically.

### Quick Facts

* DQN is an off-policy algorithm.
* DQN can only be used for environments with discrete action spaces.
* The Stable Baselines implementation of DQN does not support multiprocessing; the `DummyVecEnv` wrapper used below simply holds a single environment.

### Key Equations

Our aim will be to train a policy that tries to maximize the discounted, cumulative reward $R_{t_0} = \sum_{t=t_0}^{\infty} \gamma^{t - t_0} r_t$, where $R_{t_0}$ is also known as the *return*. The discount, $\gamma$, should be a constant between $0$ and $1$ that ensures the sum converges. It makes rewards from the uncertain far future less important for our agent than the ones in the near future that it can be fairly confident about.

The main idea behind Q-learning is that if we had a function $Q^*: State \times Action \rightarrow \mathbb{R}$ that could tell us what our return would be if we were to take an action in a given state, then we could easily construct a policy that maximizes our rewards:

\begin{equation}
\pi^*(s) = \arg\!\max_a \ Q^*(s, a)
\end{equation}

However, we don't know everything about the world, so we don't have access to $Q^*$. But, since neural networks are universal function approximators, we can simply create one and train it to resemble $Q^*$.

For our training update rule, we'll use the fact that every $Q$ function for some policy obeys the Bellman equation:

\begin{equation}
Q^{\pi}(s, a) = r + \gamma Q^{\pi}(s', \pi(s'))
\end{equation}

The difference between the two sides of the equality is known as the temporal difference error, $\delta$:

\begin{equation}
\delta = Q(s, a) - (r + \gamma \max_a Q(s', a))
\end{equation}

To minimize this error, we will use the [Huber loss](https://en.wikipedia.org/wiki/Huber_loss). The Huber loss acts like the mean squared error when the error is small, but like the mean absolute error when the error is large, which makes it more robust to outliers when the estimates of $Q$ are very noisy. We calculate it over a batch of transitions, $B$, sampled from the replay memory:

\begin{equation}
\mathcal{L} = \frac{1}{|B|}\sum_{(s, a, s', r) \ \in \ B} \mathcal{L}(\delta)
\end{equation}

where

\begin{equation}
\quad \mathcal{L}(\delta) = \begin{cases} \frac{1}{2}{\delta^2} & \text{for } |\delta| \le 1, \\ |\delta| - \frac{1}{2} & \text{otherwise.} \end{cases}
\end{equation}

### Replay Buffer

We'll be using experience replay memory for training our DQN. It stores the transitions that the agent observes, allowing us to reuse this data later. By sampling from it randomly, the transitions that build up a batch are decorrelated. It has been shown that this greatly stabilizes and improves the DQN training procedure.

### Exploration vs. Exploitation

The Stable Baselines DQN implementation uses a trick to improve exploration at the start of training. For a fixed number of steps at the beginning, the agent takes actions sampled from a uniform random distribution over the valid actions. After that, it returns to the normal DQN exploration schedule.

### Implementation Notes (Stable Baselines)

By default, the DQN class has the double Q-learning and dueling extensions enabled.
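To make the replay buffer and the Huber temporal-difference loss above concrete, here is a minimal NumPy sketch. It is illustrative only and is not the Stable Baselines implementation: the names `ReplayBuffer`, `huber` and `td_loss` are ours, and the `(1 - done)` masking of terminal transitions is a standard practical detail that is not written out in the equations above.

```python
import random
from collections import deque

import numpy as np

class ReplayBuffer:
    """Minimal FIFO experience replay memory (illustrative, not the Stable Baselines class)."""

    def __init__(self, capacity):
        self.storage = deque(maxlen=capacity)      # oldest transitions are dropped when full

    def add(self, s, a, r, s_next, done):
        self.storage.append((s, a, r, s_next, done))

    def sample(self, batch_size):
        batch = random.sample(self.storage, batch_size)   # uniform sampling decorrelates transitions
        s, a, r, s_next, done = map(np.array, zip(*batch))
        return s, a, r, s_next, done

def huber(delta):
    """Huber loss: quadratic for |delta| <= 1, linear otherwise."""
    return np.where(np.abs(delta) <= 1.0, 0.5 * delta**2, np.abs(delta) - 0.5)

def td_loss(q_values, q_next_values, actions, rewards, dones, gamma=0.99):
    """Mean Huber loss over a batch of TD errors delta = Q(s,a) - (r + gamma * max_a' Q(s',a'))."""
    q_sa = q_values[np.arange(len(actions)), actions]                      # Q(s, a) for the actions taken
    targets = rewards + gamma * (1.0 - dones) * q_next_values.max(axis=1)  # no bootstrap past terminal states
    return huber(q_sa - targets).mean()
```

In DQN proper, `q_next_values` would come from the separate target network, which the Stable Baselines agent configured below synchronizes every `target_network_update_freq` steps.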
#### Can I use?

- Recurrent policies: ❌
- Multi processing: ❌
- Gym spaces:

| Space | Action | Observation |
| --- | --- | --- |
| Discrete | ✔️ | ✔️ |
| Box | ❌ | ✔️ |
| MultiDiscrete | ❌ | ✔️ |
| MultiBinary | ❌ | ✔️ |

## 1.1 Import Required Libraries

```python
import warnings
warnings.filterwarnings("ignore")

import gym
import time
import numpy as np

from stable_baselines.common.vec_env import DummyVecEnv
from stable_baselines.deepq.policies import MlpPolicy
from stable_baselines import DQN

import matplotlib.pyplot as plt
```

WARNING:tensorflow:
The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
* https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
* https://github.com/tensorflow/addons
* https://github.com/tensorflow/io (for I/O related ops)
If you depend on functionality not listed there, please file an issue.

## 1.2 Training Configuration

### Parameters

* **ENV_NAME**: the name of the Gym environment used for training
* **EVAL_STEPS**: number of samples to be used for policy evaluation. Here, the horizon $H$ is 200 steps, so the evaluation covers 2000 / 200 = 10 episodes
* **ITERS**: number of simulation time-steps during training. Here, 50000 steps corresponds to 250 full-length (200-step) episodes

```python
ENV_NAME = 'CartPole-v0'
EVAL_STEPS = 2000
ITERS = 50000
```

## 1.3 Logging

### Logging Directories

* **DQN_Cartpole**: the directory used for the TensorBoard logs
* **Results**: the directory where the intermediate trained agents are stored

```python
def logs_gen(net_size):
    tensorboard_log = "./DQN_Cartpole/2x" + str(net_size)
    log_dir = "./Results/"
    return log_dir, tensorboard_log
```

## 1.4 Environment Definition

### Vectorized Environments

Vectorized environments are a method for stacking multiple independent environments into a single environment. Instead of training an RL agent on 1 environment per step, it allows us to train it on n environments per step. Because of this, actions passed to the environment are now a vector (of dimension n). It is the same for observations, rewards and end-of-episode signals (dones). In the case of non-array observation spaces such as Dict or Tuple, where different sub-spaces may have different shapes, the sub-observations are vectors (of dimension n).

```python
env = gym.make('CartPole-v0')
env = DummyVecEnv([lambda: env])
```

## 1.5 Agent Definition

### Agent Parameters

* **MlpPolicy**: the policy model to use (MlpPolicy, CnnPolicy, CnnLstmPolicy, …)
* **policy_kwargs**: the network architecture (here 2 hidden layers of size 256)
* **env**: the Gym environment
* **learning_rate**: learning rate for the Adam optimizer
* **buffer_size**: size of the replay buffer
* **gamma**: discount factor
* **exploration_fraction**: fraction of the entire training period over which the exploration rate is annealed
* **exploration_final_eps**: final value of the random action probability
* **exploration_initial_eps**: initial value of the random action probability
* **train_freq**: update the model every train_freq steps
* **batch_size**: size of a batch sampled from the replay buffer for training
* **double_q**: whether to enable Double-Q learning or not (always do this!)
* **learning_starts**: how many steps of the model to collect transitions for before learning starts
* **target_network_update_freq**: update the target network every target_network_update_freq steps
* **prioritized_replay**: if True, a prioritized replay buffer will be used
* **tensorboard_log**: the log location for TensorBoard

```python
hidden_layer_size = 256
policy_kwargs = dict(layers=[hidden_layer_size, hidden_layer_size])
log_dir, tensorboard_log = logs_gen(hidden_layer_size)

model = DQN(MlpPolicy, env, gamma=0.99, learning_rate=0.001, buffer_size=50000,
            policy_kwargs=policy_kwargs, exploration_fraction=0.1, exploration_final_eps=0.02,
            exploration_initial_eps=1.0, train_freq=1, batch_size=32, double_q=True,
            learning_starts=1000, target_network_update_freq=500, prioritized_replay=False,
            verbose=1, tensorboard_log=tensorboard_log)
```

WARNING:tensorflow:From /Users/mut/workspace/anaconda3/envs/py36/lib/python3.6/site-packages/stable_baselines/common/tf_util.py:191: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.

WARNING:tensorflow:From /Users/mut/workspace/anaconda3/envs/py36/lib/python3.6/site-packages/stable_baselines/common/tf_util.py:200: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

WARNING:tensorflow:From /Users/mut/workspace/anaconda3/envs/py36/lib/python3.6/site-packages/stable_baselines/deepq/dqn.py:129: The name tf.train.AdamOptimizer is deprecated. Please use tf.compat.v1.train.AdamOptimizer instead.

WARNING:tensorflow:From /Users/mut/workspace/anaconda3/envs/py36/lib/python3.6/site-packages/stable_baselines/deepq/build_graph.py:358: The name tf.variable_scope is deprecated. Please use tf.compat.v1.variable_scope instead.

WARNING:tensorflow:From /Users/mut/workspace/anaconda3/envs/py36/lib/python3.6/site-packages/stable_baselines/deepq/build_graph.py:359: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.

WARNING:tensorflow:From /Users/mut/workspace/anaconda3/envs/py36/lib/python3.6/site-packages/stable_baselines/deepq/build_graph.py:139: The name tf.get_variable is deprecated. Please use tf.compat.v1.get_variable instead.

WARNING:tensorflow:From /Users/mut/workspace/anaconda3/envs/py36/lib/python3.6/site-packages/stable_baselines/deepq/policies.py:109: flatten (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.flatten instead.
WARNING:tensorflow:From /Users/mut/workspace/anaconda3/envs/py36/lib/python3.6/site-packages/tensorflow_core/python/layers/core.py:332: Layer.apply (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.
Instructions for updating:
Please use `layer.__call__` method instead.
WARNING:tensorflow:From /Users/mut/workspace/anaconda3/envs/py36/lib/python3.6/site-packages/stable_baselines/deepq/build_graph.py:147: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.

WARNING:tensorflow:From /Users/mut/workspace/anaconda3/envs/py36/lib/python3.6/site-packages/stable_baselines/deepq/build_graph.py:149: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
WARNING:tensorflow:From /Users/mut/workspace/anaconda3/envs/py36/lib/python3.6/site-packages/stable_baselines/deepq/build_graph.py:372: The name tf.get_collection is deprecated.
Please use tf.compat.v1.get_collection instead.

WARNING:tensorflow:From /Users/mut/workspace/anaconda3/envs/py36/lib/python3.6/site-packages/stable_baselines/deepq/build_graph.py:372: The name tf.GraphKeys is deprecated. Please use tf.compat.v1.GraphKeys instead.

WARNING:tensorflow:From /Users/mut/workspace/anaconda3/envs/py36/lib/python3.6/site-packages/stable_baselines/deepq/build_graph.py:372: The name tf.get_variable_scope is deprecated. Please use tf.compat.v1.get_variable_scope instead.

WARNING:tensorflow:From /Users/mut/workspace/anaconda3/envs/py36/lib/python3.6/site-packages/stable_baselines/deepq/build_graph.py:415: The name tf.summary.scalar is deprecated. Please use tf.compat.v1.summary.scalar instead.

WARNING:tensorflow:From /Users/mut/workspace/anaconda3/envs/py36/lib/python3.6/site-packages/stable_baselines/deepq/build_graph.py:449: The name tf.summary.merge_all is deprecated. Please use tf.compat.v1.summary.merge_all instead.

WARNING:tensorflow:From /Users/mut/workspace/anaconda3/envs/py36/lib/python3.6/site-packages/stable_baselines/common/tf_util.py:241: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.

WARNING:tensorflow:From /Users/mut/workspace/anaconda3/envs/py36/lib/python3.6/site-packages/stable_baselines/common/tf_util.py:242: The name tf.variables_initializer is deprecated. Please use tf.compat.v1.variables_initializer instead.

## 1.6 Define a callback function

We can define a custom callback function that will be called inside the agent. This is useful when we want to monitor training, for instance to display live learning curves in TensorBoard or to save the best agent.

```python
best_mean_reward, n_steps = -np.inf, 0
evaluation_rewards = []

def callback(_locals, _globals):
    """
    Callback called at each step
    :param _locals: (dict)
    :param _globals: (dict)
    """
    global n_steps, best_mean_reward
    # Print stats every 1000 calls
    if (n_steps + 1) % 1000 == 0:
        print("-----------------------------------------------------")
        print("Evaluating Model: " + str(n_steps))
        rew = evaluate(model, num_steps=EVAL_STEPS, render=False)
        print("Best mean reward: {:.15f} - Last mean reward per episode: {:.15f}".format(best_mean_reward, rew))
        _locals['self'].save(log_dir + 'model_' + str(n_steps))
        if (rew > best_mean_reward):
            best_mean_reward = rew
            print("Saving new best model")
            _locals['self'].save(log_dir + 'best_model')
    n_steps += 1
    return True
```

## 1.7 Define an evaluation function

During training, action selection is subject to the exploration scheme defined above. This implies that policy A could be better than policy B but appear to perform worse due to the effect of the exploration. For this reason, in order to determine the best policy throughout the entire training phase, each callback evaluates the current policy without exploration (deterministic actions); if the current policy is better than the best policy found so far, we simply overwrite the saved best policy.
```python def evaluate(model, num_steps=300, render=False): global evaluation_rewards print("EVALUATION!!!") """ Evaluate a RL agent :param model: (BaseRLModel object) the RL Agent :param num_steps: (int) number of timesteps to evaluate it :return: (float) Mean reward """ episode_rewards = [[0.0] for _ in range(env.num_envs)] obs = env.reset() # print(obs) for i in range(num_steps - 1): # _states are only useful when using LSTM policies actions, _ = model.predict(obs, deterministic=True) # here, action, rewards and dones are arrays # because we are using vectorized env obs, rewards, dones, info = env.step(actions) if (render): env.render() for j in range(env.num_envs): episode_rewards[j][-1] += rewards[j] if dones[j]: episode_rewards[j].append(0.0) mean_rewards = [0.0 for _ in range(env.num_envs)] n_episodes = 0 for i in range(env.num_envs): mean_rewards[i] = np.mean(episode_rewards[i]) n_episodes += len(episode_rewards[i]) # Compute mean reward mean_reward = round(np.mean(mean_rewards), 1) print("Mean reward:", mean_reward, "Num episodes:", n_episodes) evaluation_rewards.append(mean_reward) return mean_reward ``` ## 1.8 Evaluate the Agent before training ```python rew = evaluate(model, num_steps=EVAL_STEPS, render=False) ``` EVALUATION!!! Mean reward: 29.0 Num episodes: 69 ## 1.9 Train the DQN Agent ```python start_time = time.time() model.learn(total_timesteps=ITERS, callback=callback) elapsed_time = time.time() - start_time print("-----------------------------------------------------") print("Training Time: " + str(elapsed_time)) ``` WARNING:tensorflow:From /Users/mut/workspace/anaconda3/envs/py36/lib/python3.6/site-packages/stable_baselines/common/base_class.py:1143: The name tf.summary.FileWriter is deprecated. Please use tf.compat.v1.summary.FileWriter instead. WARNING:tensorflow:From /Users/mut/workspace/anaconda3/envs/py36/lib/python3.6/site-packages/stable_baselines/common/tf_util.py:322: The name tf.get_default_session is deprecated. Please use tf.compat.v1.get_default_session instead. WARNING:tensorflow:From /Users/mut/workspace/anaconda3/envs/py36/lib/python3.6/site-packages/stable_baselines/common/tf_util.py:502: The name tf.Summary is deprecated. Please use tf.compat.v1.Summary instead. ----------------------------------------------------- Evaluating Model: 999 EVALUATION!!! Mean reward: 28.2 Num episodes: 71 Best mean reward: -inf - Last mean reward per episode: 28.199999999999999 Saving new best model WARNING:tensorflow:From /Users/mut/workspace/anaconda3/envs/py36/lib/python3.6/site-packages/stable_baselines/deepq/dqn.py:283: The name tf.RunOptions is deprecated. Please use tf.compat.v1.RunOptions instead. WARNING:tensorflow:From /Users/mut/workspace/anaconda3/envs/py36/lib/python3.6/site-packages/stable_baselines/deepq/dqn.py:284: The name tf.RunMetadata is deprecated. Please use tf.compat.v1.RunMetadata instead. ----------------------------------------------------- Evaluating Model: 1999 EVALUATION!!! Mean reward: 9.3 Num episodes: 215 Best mean reward: 28.199999999999999 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 58 | | episodes | 100 | | mean 100 episode reward | 21.6 | | steps | 2139 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 2999 EVALUATION!!! 
Mean reward: 105.2 Num episodes: 19 Best mean reward: 28.199999999999999 - Last mean reward per episode: 105.200000000000003 Saving new best model ----------------------------------------------------- Evaluating Model: 3999 EVALUATION!!! Mean reward: 90.9 Num episodes: 22 Best mean reward: 105.200000000000003 - Last mean reward per episode: 90.900000000000006 ----------------------------------------------------- Evaluating Model: 4999 EVALUATION!!! Mean reward: 133.3 Num episodes: 15 Best mean reward: 105.200000000000003 - Last mean reward per episode: 133.300000000000011 Saving new best model ----------------------------------------------------- Evaluating Model: 5999 EVALUATION!!! Mean reward: 111.1 Num episodes: 18 Best mean reward: 133.300000000000011 - Last mean reward per episode: 111.099999999999994 ----------------------------------------------------- Evaluating Model: 6999 EVALUATION!!! Mean reward: 105.2 Num episodes: 19 Best mean reward: 133.300000000000011 - Last mean reward per episode: 105.200000000000003 ----------------------------------------------------- Evaluating Model: 7999 EVALUATION!!! Mean reward: 199.9 Num episodes: 10 Best mean reward: 133.300000000000011 - Last mean reward per episode: 199.900000000000006 Saving new best model ----------------------------------------------------- Evaluating Model: 8999 EVALUATION!!! Mean reward: 90.9 Num episodes: 22 Best mean reward: 199.900000000000006 - Last mean reward per episode: 90.900000000000006 ----------------------------------------------------- Evaluating Model: 9999 EVALUATION!!! Mean reward: 133.3 Num episodes: 15 Best mean reward: 199.900000000000006 - Last mean reward per episode: 133.300000000000011 -------------------------------------- | % time spent exploring | 2 | | episodes | 200 | | mean 100 episode reward | 87.6 | | steps | 10899 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 10999 EVALUATION!!! Mean reward: 105.2 Num episodes: 19 Best mean reward: 199.900000000000006 - Last mean reward per episode: 105.200000000000003 ----------------------------------------------------- Evaluating Model: 11999 EVALUATION!!! Mean reward: 153.8 Num episodes: 13 Best mean reward: 199.900000000000006 - Last mean reward per episode: 153.800000000000011 ----------------------------------------------------- Evaluating Model: 12999 EVALUATION!!! Mean reward: 111.1 Num episodes: 18 Best mean reward: 199.900000000000006 - Last mean reward per episode: 111.099999999999994 ----------------------------------------------------- Evaluating Model: 13999 EVALUATION!!! Mean reward: 105.2 Num episodes: 19 Best mean reward: 199.900000000000006 - Last mean reward per episode: 105.200000000000003 ----------------------------------------------------- Evaluating Model: 14999 EVALUATION!!! Mean reward: 95.2 Num episodes: 21 Best mean reward: 199.900000000000006 - Last mean reward per episode: 95.200000000000003 ----------------------------------------------------- Evaluating Model: 15999 EVALUATION!!! Mean reward: 95.2 Num episodes: 21 Best mean reward: 199.900000000000006 - Last mean reward per episode: 95.200000000000003 ----------------------------------------------------- Evaluating Model: 16999 EVALUATION!!! Mean reward: 20.8 Num episodes: 96 Best mean reward: 199.900000000000006 - Last mean reward per episode: 20.800000000000001 ----------------------------------------------------- Evaluating Model: 17999 EVALUATION!!! 
Mean reward: 60.6 Num episodes: 33 Best mean reward: 199.900000000000006 - Last mean reward per episode: 60.600000000000001 ----------------------------------------------------- Evaluating Model: 18999 EVALUATION!!! Mean reward: 105.2 Num episodes: 19 Best mean reward: 199.900000000000006 - Last mean reward per episode: 105.200000000000003 ----------------------------------------------------- Evaluating Model: 19999 EVALUATION!!! Mean reward: 60.6 Num episodes: 33 Best mean reward: 199.900000000000006 - Last mean reward per episode: 60.600000000000001 ----------------------------------------------------- Evaluating Model: 20999 EVALUATION!!! Mean reward: 95.2 Num episodes: 21 Best mean reward: 199.900000000000006 - Last mean reward per episode: 95.200000000000003 ----------------------------------------------------- Evaluating Model: 21999 EVALUATION!!! Mean reward: 124.9 Num episodes: 16 Best mean reward: 199.900000000000006 - Last mean reward per episode: 124.900000000000006 -------------------------------------- | % time spent exploring | 2 | | episodes | 300 | | mean 100 episode reward | 113 | | steps | 22170 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 22999 EVALUATION!!! Mean reward: 117.6 Num episodes: 17 Best mean reward: 199.900000000000006 - Last mean reward per episode: 117.599999999999994 ----------------------------------------------------- Evaluating Model: 23999 EVALUATION!!! Mean reward: 58.8 Num episodes: 34 Best mean reward: 199.900000000000006 - Last mean reward per episode: 58.799999999999997 ----------------------------------------------------- Evaluating Model: 24999 EVALUATION!!! Mean reward: 74.0 Num episodes: 27 Best mean reward: 199.900000000000006 - Last mean reward per episode: 74.000000000000000 ----------------------------------------------------- Evaluating Model: 25999 EVALUATION!!! Mean reward: 35.1 Num episodes: 57 Best mean reward: 199.900000000000006 - Last mean reward per episode: 35.100000000000001 ----------------------------------------------------- Evaluating Model: 26999 EVALUATION!!! Mean reward: 95.2 Num episodes: 21 Best mean reward: 199.900000000000006 - Last mean reward per episode: 95.200000000000003 ----------------------------------------------------- Evaluating Model: 27999 EVALUATION!!! Mean reward: 199.9 Num episodes: 10 Best mean reward: 199.900000000000006 - Last mean reward per episode: 199.900000000000006 ----------------------------------------------------- Evaluating Model: 28999 EVALUATION!!! Mean reward: 181.7 Num episodes: 11 Best mean reward: 199.900000000000006 - Last mean reward per episode: 181.699999999999989 ----------------------------------------------------- Evaluating Model: 29999 EVALUATION!!! Mean reward: 95.2 Num episodes: 21 Best mean reward: 199.900000000000006 - Last mean reward per episode: 95.200000000000003 ----------------------------------------------------- Evaluating Model: 30999 EVALUATION!!! Mean reward: 117.6 Num episodes: 17 Best mean reward: 199.900000000000006 - Last mean reward per episode: 117.599999999999994 -------------------------------------- | % time spent exploring | 2 | | episodes | 400 | | mean 100 episode reward | 90.8 | | steps | 31253 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 31999 EVALUATION!!! 
Mean reward: 90.9 Num episodes: 22 Best mean reward: 199.900000000000006 - Last mean reward per episode: 90.900000000000006 ----------------------------------------------------- Evaluating Model: 32999 EVALUATION!!! Mean reward: 83.3 Num episodes: 24 Best mean reward: 199.900000000000006 - Last mean reward per episode: 83.299999999999997 ----------------------------------------------------- Evaluating Model: 33999 EVALUATION!!! Mean reward: 181.7 Num episodes: 11 Best mean reward: 199.900000000000006 - Last mean reward per episode: 181.699999999999989 ----------------------------------------------------- Evaluating Model: 34999 EVALUATION!!! Mean reward: 105.2 Num episodes: 19 Best mean reward: 199.900000000000006 - Last mean reward per episode: 105.200000000000003 ----------------------------------------------------- Evaluating Model: 35999 EVALUATION!!! Mean reward: 181.7 Num episodes: 11 Best mean reward: 199.900000000000006 - Last mean reward per episode: 181.699999999999989 ----------------------------------------------------- Evaluating Model: 36999 EVALUATION!!! Mean reward: 74.0 Num episodes: 27 Best mean reward: 199.900000000000006 - Last mean reward per episode: 74.000000000000000 ----------------------------------------------------- Evaluating Model: 37999 EVALUATION!!! Mean reward: 199.9 Num episodes: 10 Best mean reward: 199.900000000000006 - Last mean reward per episode: 199.900000000000006 ----------------------------------------------------- Evaluating Model: 38999 EVALUATION!!! Mean reward: 37.7 Num episodes: 53 Best mean reward: 199.900000000000006 - Last mean reward per episode: 37.700000000000003 ----------------------------------------------------- Evaluating Model: 39999 EVALUATION!!! Mean reward: 18.3 Num episodes: 109 Best mean reward: 199.900000000000006 - Last mean reward per episode: 18.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 500 | | mean 100 episode reward | 92.8 | | steps | 40531 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 40999 EVALUATION!!! Mean reward: 117.6 Num episodes: 17 Best mean reward: 199.900000000000006 - Last mean reward per episode: 117.599999999999994 ----------------------------------------------------- Evaluating Model: 41999 EVALUATION!!! Mean reward: 199.9 Num episodes: 10 Best mean reward: 199.900000000000006 - Last mean reward per episode: 199.900000000000006 ----------------------------------------------------- Evaluating Model: 42999 EVALUATION!!! Mean reward: 199.9 Num episodes: 10 Best mean reward: 199.900000000000006 - Last mean reward per episode: 199.900000000000006 ----------------------------------------------------- Evaluating Model: 43999 EVALUATION!!! Mean reward: 199.9 Num episodes: 10 Best mean reward: 199.900000000000006 - Last mean reward per episode: 199.900000000000006 ----------------------------------------------------- Evaluating Model: 44999 EVALUATION!!! Mean reward: 199.9 Num episodes: 10 Best mean reward: 199.900000000000006 - Last mean reward per episode: 199.900000000000006 ----------------------------------------------------- Evaluating Model: 45999 EVALUATION!!! Mean reward: 199.9 Num episodes: 10 Best mean reward: 199.900000000000006 - Last mean reward per episode: 199.900000000000006 ----------------------------------------------------- Evaluating Model: 46999 EVALUATION!!! 
Mean reward: 199.9 Num episodes: 10 Best mean reward: 199.900000000000006 - Last mean reward per episode: 199.900000000000006 ----------------------------------------------------- Evaluating Model: 47999 EVALUATION!!! Mean reward: 199.9 Num episodes: 10 Best mean reward: 199.900000000000006 - Last mean reward per episode: 199.900000000000006 ----------------------------------------------------- Evaluating Model: 48999 EVALUATION!!! Mean reward: 199.9 Num episodes: 10 Best mean reward: 199.900000000000006 - Last mean reward per episode: 199.900000000000006 ----------------------------------------------------- Evaluating Model: 49999 EVALUATION!!! Mean reward: 16.9 Num episodes: 118 Best mean reward: 199.900000000000006 - Last mean reward per episode: 16.899999999999999 ----------------------------------------------------- Training Time: 215.4886450767517 ## 1.10 Plot the total reward of the policies evaluated in the Callback (deterministic) ```python plt.plot(evaluation_rewards) plt.ylabel('Total Reward') plt.xlabel('Evaluation Iteration') plt.show() ``` ## 1.11 Evaluate the best Agent after training ```python model = DQN.load(log_dir + 'best_model', env=env) r = evaluate(model, num_steps=EVAL_STEPS, render=True) ``` EVALUATION!!! Mean reward: 199.9 Num episodes: 10 ## 1.12 Close environment ```python env.close() ``` ## 1.13 How each parameter affects the performance of the Algorithm? ```python hid_layer_height_list = [2, 64, 128, 256] hid_layer_list = [1,2] lr_list = [0.00001, 0.001, 0.1] buf_list = [32, 5000, 50000] solved = [] ``` ```python # network layer tests for num_layers in hid_layer_list: for hidden_layer_size in hid_layer_height_list: layer_str = str(num_layers) + "x" + str(hidden_layer_size) print("---------------------------------------------") print("Layer Size = " + layer_str) print("---------------------------------------------") _layers = [] for l in range(num_layers): _layers.append(hidden_layer_size) policy_kwargs = dict(layers=_layers) tensorboard_log = "./DQN_Cartpole_Param/network_size_" + layer_str log_dir = "./Results/" model = DQN(MlpPolicy, env, gamma=0.99, learning_rate=0.001, buffer_size=50000, policy_kwargs=policy_kwargs, exploration_fraction=0.1, exploration_final_eps=0.02, exploration_initial_eps=1.0, train_freq=1, batch_size=32, double_q=True, learning_starts=1000, target_network_update_freq=500, prioritized_replay=False, verbose=1, tensorboard_log=tensorboard_log) best_mean_reward, n_steps = -np.inf, 0 evaluation_rewards = [] start_time = time.time() model.learn(total_timesteps=ITERS, callback=callback) elapsed_time = time.time() - start_time print("Training Time: " + str(elapsed_time)) model = DQN.load(log_dir + 'best_model', env=env) r = evaluate(model, num_steps=EVAL_STEPS, render=False) solved.append(r) ``` --------------------------------------------- Layer Size = 1x2 --------------------------------------------- ----------------------------------------------------- Evaluating Model: 999 EVALUATION!!! Mean reward: 9.3 Num episodes: 215 Best mean reward: -inf - Last mean reward per episode: 9.300000000000001 Saving new best model -------------------------------------- | % time spent exploring | 61 | | episodes | 100 | | mean 100 episode reward | 19.9 | | steps | 1966 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 1999 EVALUATION!!! 
Mean reward: 9.4 Num episodes: 213 Best mean reward: 9.300000000000001 - Last mean reward per episode: 9.400000000000000 Saving new best model ----------------------------------------------------- Evaluating Model: 2999 EVALUATION!!! Mean reward: 9.3 Num episodes: 215 Best mean reward: 9.400000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 34 | | episodes | 200 | | mean 100 episode reward | 13.7 | | steps | 3333 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 3999 EVALUATION!!! Mean reward: 9.3 Num episodes: 214 Best mean reward: 9.400000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 13 | | episodes | 300 | | mean 100 episode reward | 11.1 | | steps | 4438 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 4999 EVALUATION!!! Mean reward: 9.3 Num episodes: 215 Best mean reward: 9.400000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 400 | | mean 100 episode reward | 9.6 | | steps | 5395 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 5999 EVALUATION!!! Mean reward: 9.9 Num episodes: 202 Best mean reward: 9.400000000000000 - Last mean reward per episode: 9.900000000000000 Saving new best model -------------------------------------- | % time spent exploring | 2 | | episodes | 500 | | mean 100 episode reward | 10.6 | | steps | 6454 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 6999 EVALUATION!!! Mean reward: 10.1 Num episodes: 198 Best mean reward: 9.900000000000000 - Last mean reward per episode: 10.100000000000000 Saving new best model -------------------------------------- | % time spent exploring | 2 | | episodes | 600 | | mean 100 episode reward | 9.6 | | steps | 7413 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 7999 EVALUATION!!! Mean reward: 9.4 Num episodes: 213 Best mean reward: 10.100000000000000 - Last mean reward per episode: 9.400000000000000 -------------------------------------- | % time spent exploring | 2 | | episodes | 700 | | mean 100 episode reward | 9.7 | | steps | 8378 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 8999 EVALUATION!!! Mean reward: 9.3 Num episodes: 214 Best mean reward: 10.100000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 800 | | mean 100 episode reward | 9.5 | | steps | 9331 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 9999 EVALUATION!!! Mean reward: 9.3 Num episodes: 214 Best mean reward: 10.100000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 900 | | mean 100 episode reward | 9.5 | | steps | 10280 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 10999 EVALUATION!!! 
Mean reward: 9.4 Num episodes: 213 Best mean reward: 10.100000000000000 - Last mean reward per episode: 9.400000000000000 -------------------------------------- | % time spent exploring | 2 | | episodes | 1000 | | mean 100 episode reward | 9.5 | | steps | 11233 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 11999 EVALUATION!!! Mean reward: 9.3 Num episodes: 215 Best mean reward: 10.100000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 1100 | | mean 100 episode reward | 9.3 | | steps | 12162 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 12999 EVALUATION!!! Mean reward: 9.3 Num episodes: 215 Best mean reward: 10.100000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 1200 | | mean 100 episode reward | 9.5 | | steps | 13111 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 13999 EVALUATION!!! Mean reward: 9.3 Num episodes: 214 Best mean reward: 10.100000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 1300 | | mean 100 episode reward | 9.4 | | steps | 14051 | -------------------------------------- -------------------------------------- | % time spent exploring | 2 | | episodes | 1400 | | mean 100 episode reward | 9.4 | | steps | 14995 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 14999 EVALUATION!!! Mean reward: 9.3 Num episodes: 214 Best mean reward: 10.100000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 1500 | | mean 100 episode reward | 9.5 | | steps | 15944 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 15999 EVALUATION!!! Mean reward: 9.4 Num episodes: 213 Best mean reward: 10.100000000000000 - Last mean reward per episode: 9.400000000000000 -------------------------------------- | % time spent exploring | 2 | | episodes | 1600 | | mean 100 episode reward | 9.6 | | steps | 16900 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 16999 EVALUATION!!! Mean reward: 9.3 Num episodes: 214 Best mean reward: 10.100000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 1700 | | mean 100 episode reward | 9.5 | | steps | 17852 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 17999 EVALUATION!!! Mean reward: 9.3 Num episodes: 215 Best mean reward: 10.100000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 1800 | | mean 100 episode reward | 9.7 | | steps | 18817 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 18999 EVALUATION!!! 
Mean reward: 9.3 Num episodes: 214 Best mean reward: 10.100000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 1900 | | mean 100 episode reward | 9.6 | | steps | 19778 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 19999 EVALUATION!!! Mean reward: 9.3 Num episodes: 214 Best mean reward: 10.100000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 2000 | | mean 100 episode reward | 9.6 | | steps | 20735 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 20999 EVALUATION!!! Mean reward: 9.3 Num episodes: 214 Best mean reward: 10.100000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 2100 | | mean 100 episode reward | 9.5 | | steps | 21685 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 21999 EVALUATION!!! Mean reward: 9.3 Num episodes: 215 Best mean reward: 10.100000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 2200 | | mean 100 episode reward | 9.5 | | steps | 22636 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 22999 EVALUATION!!! Mean reward: 9.4 Num episodes: 213 Best mean reward: 10.100000000000000 - Last mean reward per episode: 9.400000000000000 -------------------------------------- | % time spent exploring | 2 | | episodes | 2300 | | mean 100 episode reward | 9.3 | | steps | 23564 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 23999 EVALUATION!!! Mean reward: 9.4 Num episodes: 213 Best mean reward: 10.100000000000000 - Last mean reward per episode: 9.400000000000000 -------------------------------------- | % time spent exploring | 2 | | episodes | 2400 | | mean 100 episode reward | 9.5 | | steps | 24513 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 24999 EVALUATION!!! Mean reward: 9.3 Num episodes: 216 Best mean reward: 10.100000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 2500 | | mean 100 episode reward | 9.5 | | steps | 25459 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 25999 EVALUATION!!! Mean reward: 9.3 Num episodes: 214 Best mean reward: 10.100000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 2600 | | mean 100 episode reward | 9.4 | | steps | 26396 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 26999 EVALUATION!!! Mean reward: 9.4 Num episodes: 213 Best mean reward: 10.100000000000000 - Last mean reward per episode: 9.400000000000000 -------------------------------------- | % time spent exploring | 2 | | episodes | 2700 | | mean 100 episode reward | 9.4 | | steps | 27334 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 27999 EVALUATION!!! 
Mean reward: 9.3 Num episodes: 214 Best mean reward: 10.100000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 2800 | | mean 100 episode reward | 9.4 | | steps | 28274 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 28999 EVALUATION!!! Mean reward: 9.3 Num episodes: 214 Best mean reward: 10.100000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 2900 | | mean 100 episode reward | 9.4 | | steps | 29210 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 29999 EVALUATION!!! Mean reward: 9.4 Num episodes: 213 Best mean reward: 10.100000000000000 - Last mean reward per episode: 9.400000000000000 -------------------------------------- | % time spent exploring | 2 | | episodes | 3000 | | mean 100 episode reward | 9.3 | | steps | 30140 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 30999 EVALUATION!!! Mean reward: 9.3 Num episodes: 215 Best mean reward: 10.100000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 3100 | | mean 100 episode reward | 9.4 | | steps | 31076 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 31999 EVALUATION!!! Mean reward: 9.3 Num episodes: 214 Best mean reward: 10.100000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 3200 | | mean 100 episode reward | 9.4 | | steps | 32017 | -------------------------------------- -------------------------------------- | % time spent exploring | 2 | | episodes | 3300 | | mean 100 episode reward | 9.5 | | steps | 32969 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 32999 EVALUATION!!! Mean reward: 9.3 Num episodes: 216 Best mean reward: 10.100000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 3400 | | mean 100 episode reward | 9.4 | | steps | 33912 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 33999 EVALUATION!!! Mean reward: 9.4 Num episodes: 212 Best mean reward: 10.100000000000000 - Last mean reward per episode: 9.400000000000000 -------------------------------------- | % time spent exploring | 2 | | episodes | 3500 | | mean 100 episode reward | 9.5 | | steps | 34864 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 34999 EVALUATION!!! Mean reward: 9.3 Num episodes: 214 Best mean reward: 10.100000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 3600 | | mean 100 episode reward | 9.4 | | steps | 35805 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 35999 EVALUATION!!! 
Mean reward: 9.3 Num episodes: 214 Best mean reward: 10.100000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 3700 | | mean 100 episode reward | 9.4 | | steps | 36742 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 36999 EVALUATION!!! Mean reward: 9.3 Num episodes: 214 Best mean reward: 10.100000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 3800 | | mean 100 episode reward | 9.4 | | steps | 37682 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 37999 EVALUATION!!! Mean reward: 9.3 Num episodes: 214 Best mean reward: 10.100000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 3900 | | mean 100 episode reward | 9.4 | | steps | 38618 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 38999 EVALUATION!!! Mean reward: 9.3 Num episodes: 215 Best mean reward: 10.100000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 4000 | | mean 100 episode reward | 9.5 | | steps | 39564 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 39999 EVALUATION!!! Mean reward: 9.4 Num episodes: 213 Best mean reward: 10.100000000000000 - Last mean reward per episode: 9.400000000000000 -------------------------------------- | % time spent exploring | 2 | | episodes | 4100 | | mean 100 episode reward | 9.4 | | steps | 40506 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 40999 EVALUATION!!! Mean reward: 9.4 Num episodes: 213 Best mean reward: 10.100000000000000 - Last mean reward per episode: 9.400000000000000 -------------------------------------- | % time spent exploring | 2 | | episodes | 4200 | | mean 100 episode reward | 9.3 | | steps | 41441 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 41999 EVALUATION!!! Mean reward: 9.3 Num episodes: 214 Best mean reward: 10.100000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 4300 | | mean 100 episode reward | 9.3 | | steps | 42375 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 42999 EVALUATION!!! Mean reward: 9.3 Num episodes: 214 Best mean reward: 10.100000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 4400 | | mean 100 episode reward | 9.4 | | steps | 43313 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 43999 EVALUATION!!! Mean reward: 9.3 Num episodes: 214 Best mean reward: 10.100000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 4500 | | mean 100 episode reward | 9.4 | | steps | 44252 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 44999 EVALUATION!!! 
Training log (condensed): a DQN layer-size sweep in which each network configuration is trained for 50,000 steps, evaluated every 1,000 steps (roughly 10 to 200 episodes per evaluation, depending on episode length), with the checkpoint saved whenever an evaluation beats the previous best, and with periodic logger tables reporting exploration rate, episode count, mean 100-episode reward, and total steps. The per-configuration results recoverable from this portion of the log are:

| Layer Size | Best eval mean reward | Training time (s) | Final evaluation |
|---|---|---|---|
| (preceding run; label not in this excerpt) | 10.1 | 119.9 | 10.0 over 200 episodes |
| 1x64 | 181.7 | 118.5 | 181.7 over 11 episodes |
| 1x128 | 199.9 | 124.8 | 199.9 over 10 episodes |
| 1x256 | 181.7 | 127.4 | 181.7 over 11 episodes |
| 2x2 | 50.0 | 128.1 | 55.5 over 36 episodes |
| 2x64 | 199.9 | 137.0 | 199.9 over 10 episodes |
| 2x128 | 142.8 so far (log truncated near step 9,000) | not in excerpt | not in excerpt |

The preceding run and the 2x2 configuration never learned: their mean 100-episode reward stayed near 9.5 for the full 50,000 steps. The larger single- and double-layer networks all reached peak evaluation rewards between roughly 140 and 200, although individual evaluations varied considerably from one checkpoint to the next.
Mean reward: 124.9 Num episodes: 16 Best mean reward: 142.800000000000011 - Last mean reward per episode: 124.900000000000006 ----------------------------------------------------- Evaluating Model: 9999 EVALUATION!!! Mean reward: 153.8 Num episodes: 13 Best mean reward: 142.800000000000011 - Last mean reward per episode: 153.800000000000011 Saving new best model ----------------------------------------------------- Evaluating Model: 10999 EVALUATION!!! Mean reward: 124.9 Num episodes: 16 Best mean reward: 153.800000000000011 - Last mean reward per episode: 124.900000000000006 -------------------------------------- | % time spent exploring | 2 | | episodes | 200 | | mean 100 episode reward | 98.2 | | steps | 11614 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 11999 EVALUATION!!! Mean reward: 166.6 Num episodes: 12 Best mean reward: 153.800000000000011 - Last mean reward per episode: 166.599999999999994 Saving new best model ----------------------------------------------------- Evaluating Model: 12999 EVALUATION!!! Mean reward: 95.2 Num episodes: 21 Best mean reward: 166.599999999999994 - Last mean reward per episode: 95.200000000000003 ----------------------------------------------------- Evaluating Model: 13999 EVALUATION!!! Mean reward: 60.6 Num episodes: 33 Best mean reward: 166.599999999999994 - Last mean reward per episode: 60.600000000000001 ----------------------------------------------------- Evaluating Model: 14999 EVALUATION!!! Mean reward: 95.2 Num episodes: 21 Best mean reward: 166.599999999999994 - Last mean reward per episode: 95.200000000000003 ----------------------------------------------------- Evaluating Model: 15999 EVALUATION!!! Mean reward: 111.1 Num episodes: 18 Best mean reward: 166.599999999999994 - Last mean reward per episode: 111.099999999999994 ----------------------------------------------------- Evaluating Model: 16999 EVALUATION!!! Mean reward: 83.3 Num episodes: 24 Best mean reward: 166.599999999999994 - Last mean reward per episode: 83.299999999999997 ----------------------------------------------------- Evaluating Model: 17999 EVALUATION!!! Mean reward: 100.0 Num episodes: 20 Best mean reward: 166.599999999999994 - Last mean reward per episode: 100.000000000000000 ----------------------------------------------------- Evaluating Model: 18999 EVALUATION!!! Mean reward: 117.6 Num episodes: 17 Best mean reward: 166.599999999999994 - Last mean reward per episode: 117.599999999999994 ----------------------------------------------------- Evaluating Model: 19999 EVALUATION!!! Mean reward: 100.0 Num episodes: 20 Best mean reward: 166.599999999999994 - Last mean reward per episode: 100.000000000000000 -------------------------------------- | % time spent exploring | 2 | | episodes | 300 | | mean 100 episode reward | 86.6 | | steps | 20273 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 20999 EVALUATION!!! Mean reward: 105.2 Num episodes: 19 Best mean reward: 166.599999999999994 - Last mean reward per episode: 105.200000000000003 ----------------------------------------------------- Evaluating Model: 21999 EVALUATION!!! Mean reward: 90.9 Num episodes: 22 Best mean reward: 166.599999999999994 - Last mean reward per episode: 90.900000000000006 ----------------------------------------------------- Evaluating Model: 22999 EVALUATION!!! 
Mean reward: 90.9 Num episodes: 22 Best mean reward: 166.599999999999994 - Last mean reward per episode: 90.900000000000006 ----------------------------------------------------- Evaluating Model: 23999 EVALUATION!!! Mean reward: 90.9 Num episodes: 22 Best mean reward: 166.599999999999994 - Last mean reward per episode: 90.900000000000006 ----------------------------------------------------- Evaluating Model: 24999 EVALUATION!!! Mean reward: 105.2 Num episodes: 19 Best mean reward: 166.599999999999994 - Last mean reward per episode: 105.200000000000003 ----------------------------------------------------- Evaluating Model: 25999 EVALUATION!!! Mean reward: 181.7 Num episodes: 11 Best mean reward: 166.599999999999994 - Last mean reward per episode: 181.699999999999989 Saving new best model ----------------------------------------------------- Evaluating Model: 26999 EVALUATION!!! Mean reward: 90.9 Num episodes: 22 Best mean reward: 181.699999999999989 - Last mean reward per episode: 90.900000000000006 ----------------------------------------------------- Evaluating Model: 27999 EVALUATION!!! Mean reward: 100.0 Num episodes: 20 Best mean reward: 181.699999999999989 - Last mean reward per episode: 100.000000000000000 ----------------------------------------------------- Evaluating Model: 28999 EVALUATION!!! Mean reward: 90.9 Num episodes: 22 Best mean reward: 181.699999999999989 - Last mean reward per episode: 90.900000000000006 ----------------------------------------------------- Evaluating Model: 29999 EVALUATION!!! Mean reward: 95.2 Num episodes: 21 Best mean reward: 181.699999999999989 - Last mean reward per episode: 95.200000000000003 -------------------------------------- | % time spent exploring | 2 | | episodes | 400 | | mean 100 episode reward | 101 | | steps | 30357 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 30999 EVALUATION!!! Mean reward: 68.9 Num episodes: 29 Best mean reward: 181.699999999999989 - Last mean reward per episode: 68.900000000000006 ----------------------------------------------------- Evaluating Model: 31999 EVALUATION!!! Mean reward: 76.9 Num episodes: 26 Best mean reward: 181.699999999999989 - Last mean reward per episode: 76.900000000000006 ----------------------------------------------------- Evaluating Model: 32999 EVALUATION!!! Mean reward: 52.6 Num episodes: 38 Best mean reward: 181.699999999999989 - Last mean reward per episode: 52.600000000000001 ----------------------------------------------------- Evaluating Model: 33999 EVALUATION!!! Mean reward: 95.2 Num episodes: 21 Best mean reward: 181.699999999999989 - Last mean reward per episode: 95.200000000000003 ----------------------------------------------------- Evaluating Model: 34999 EVALUATION!!! Mean reward: 34.5 Num episodes: 58 Best mean reward: 181.699999999999989 - Last mean reward per episode: 34.500000000000000 ----------------------------------------------------- Evaluating Model: 35999 EVALUATION!!! Mean reward: 199.9 Num episodes: 10 Best mean reward: 181.699999999999989 - Last mean reward per episode: 199.900000000000006 Saving new best model ----------------------------------------------------- Evaluating Model: 36999 EVALUATION!!! Mean reward: 133.3 Num episodes: 15 Best mean reward: 199.900000000000006 - Last mean reward per episode: 133.300000000000011 ----------------------------------------------------- Evaluating Model: 37999 EVALUATION!!! 
Mean reward: 181.7 Num episodes: 11 Best mean reward: 199.900000000000006 - Last mean reward per episode: 181.699999999999989 ----------------------------------------------------- Evaluating Model: 38999 EVALUATION!!! Mean reward: 58.8 Num episodes: 34 Best mean reward: 199.900000000000006 - Last mean reward per episode: 58.799999999999997 -------------------------------------- | % time spent exploring | 2 | | episodes | 500 | | mean 100 episode reward | 91.5 | | steps | 39511 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 39999 EVALUATION!!! Mean reward: 90.9 Num episodes: 22 Best mean reward: 199.900000000000006 - Last mean reward per episode: 90.900000000000006 ----------------------------------------------------- Evaluating Model: 40999 EVALUATION!!! Mean reward: 105.2 Num episodes: 19 Best mean reward: 199.900000000000006 - Last mean reward per episode: 105.200000000000003 ----------------------------------------------------- Evaluating Model: 41999 EVALUATION!!! Mean reward: 124.9 Num episodes: 16 Best mean reward: 199.900000000000006 - Last mean reward per episode: 124.900000000000006 ----------------------------------------------------- Evaluating Model: 42999 EVALUATION!!! Mean reward: 105.2 Num episodes: 19 Best mean reward: 199.900000000000006 - Last mean reward per episode: 105.200000000000003 ----------------------------------------------------- Evaluating Model: 43999 EVALUATION!!! Mean reward: 100.0 Num episodes: 20 Best mean reward: 199.900000000000006 - Last mean reward per episode: 100.000000000000000 ----------------------------------------------------- Evaluating Model: 44999 EVALUATION!!! Mean reward: 117.6 Num episodes: 17 Best mean reward: 199.900000000000006 - Last mean reward per episode: 117.599999999999994 ----------------------------------------------------- Evaluating Model: 45999 EVALUATION!!! Mean reward: 100.0 Num episodes: 20 Best mean reward: 199.900000000000006 - Last mean reward per episode: 100.000000000000000 ----------------------------------------------------- Evaluating Model: 46999 EVALUATION!!! Mean reward: 142.8 Num episodes: 14 Best mean reward: 199.900000000000006 - Last mean reward per episode: 142.800000000000011 ----------------------------------------------------- Evaluating Model: 47999 EVALUATION!!! Mean reward: 95.2 Num episodes: 21 Best mean reward: 199.900000000000006 - Last mean reward per episode: 95.200000000000003 ----------------------------------------------------- Evaluating Model: 48999 EVALUATION!!! Mean reward: 181.7 Num episodes: 11 Best mean reward: 199.900000000000006 - Last mean reward per episode: 181.699999999999989 ----------------------------------------------------- Evaluating Model: 49999 EVALUATION!!! Mean reward: 181.7 Num episodes: 11 Best mean reward: 199.900000000000006 - Last mean reward per episode: 181.699999999999989 Training Time: 145.1732199192047 EVALUATION!!! Mean reward: 199.9 Num episodes: 10 --------------------------------------------- Layer Size = 2x256 --------------------------------------------- ----------------------------------------------------- Evaluating Model: 999 EVALUATION!!! Mean reward: 30.3 Num episodes: 66 Best mean reward: -inf - Last mean reward per episode: 30.300000000000001 Saving new best model ----------------------------------------------------- Evaluating Model: 1999 EVALUATION!!! 
Mean reward: 9.3 Num episodes: 214 Best mean reward: 30.300000000000001 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 58 | | episodes | 100 | | mean 100 episode reward | 21.2 | | steps | 2098 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 2999 EVALUATION!!! Mean reward: 124.9 Num episodes: 16 Best mean reward: 30.300000000000001 - Last mean reward per episode: 124.900000000000006 Saving new best model ----------------------------------------------------- Evaluating Model: 3999 EVALUATION!!! Mean reward: 105.2 Num episodes: 19 Best mean reward: 124.900000000000006 - Last mean reward per episode: 105.200000000000003 ----------------------------------------------------- Evaluating Model: 4999 EVALUATION!!! Mean reward: 105.2 Num episodes: 19 Best mean reward: 124.900000000000006 - Last mean reward per episode: 105.200000000000003 ----------------------------------------------------- Evaluating Model: 5999 EVALUATION!!! Mean reward: 76.9 Num episodes: 26 Best mean reward: 124.900000000000006 - Last mean reward per episode: 76.900000000000006 ----------------------------------------------------- Evaluating Model: 6999 EVALUATION!!! Mean reward: 95.2 Num episodes: 21 Best mean reward: 124.900000000000006 - Last mean reward per episode: 95.200000000000003 ----------------------------------------------------- Evaluating Model: 7999 EVALUATION!!! Mean reward: 105.2 Num episodes: 19 Best mean reward: 124.900000000000006 - Last mean reward per episode: 105.200000000000003 ----------------------------------------------------- Evaluating Model: 8999 EVALUATION!!! Mean reward: 95.2 Num episodes: 21 Best mean reward: 124.900000000000006 - Last mean reward per episode: 95.200000000000003 ----------------------------------------------------- Evaluating Model: 9999 EVALUATION!!! Mean reward: 100.0 Num episodes: 20 Best mean reward: 124.900000000000006 - Last mean reward per episode: 100.000000000000000 -------------------------------------- | % time spent exploring | 2 | | episodes | 200 | | mean 100 episode reward | 81.9 | | steps | 10290 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 10999 EVALUATION!!! Mean reward: 133.3 Num episodes: 15 Best mean reward: 124.900000000000006 - Last mean reward per episode: 133.300000000000011 Saving new best model ----------------------------------------------------- Evaluating Model: 11999 EVALUATION!!! Mean reward: 105.2 Num episodes: 19 Best mean reward: 133.300000000000011 - Last mean reward per episode: 105.200000000000003 ----------------------------------------------------- Evaluating Model: 12999 EVALUATION!!! Mean reward: 100.0 Num episodes: 20 Best mean reward: 133.300000000000011 - Last mean reward per episode: 100.000000000000000 ----------------------------------------------------- Evaluating Model: 13999 EVALUATION!!! Mean reward: 117.6 Num episodes: 17 Best mean reward: 133.300000000000011 - Last mean reward per episode: 117.599999999999994 ----------------------------------------------------- Evaluating Model: 14999 EVALUATION!!! Mean reward: 117.6 Num episodes: 17 Best mean reward: 133.300000000000011 - Last mean reward per episode: 117.599999999999994 ----------------------------------------------------- Evaluating Model: 15999 EVALUATION!!! 
Mean reward: 95.2 Num episodes: 21 Best mean reward: 133.300000000000011 - Last mean reward per episode: 95.200000000000003 ----------------------------------------------------- Evaluating Model: 16999 EVALUATION!!! Mean reward: 117.6 Num episodes: 17 Best mean reward: 133.300000000000011 - Last mean reward per episode: 117.599999999999994 ----------------------------------------------------- Evaluating Model: 17999 EVALUATION!!! Mean reward: 100.0 Num episodes: 20 Best mean reward: 133.300000000000011 - Last mean reward per episode: 100.000000000000000 ----------------------------------------------------- Evaluating Model: 18999 EVALUATION!!! Mean reward: 90.9 Num episodes: 22 Best mean reward: 133.300000000000011 - Last mean reward per episode: 90.900000000000006 ----------------------------------------------------- Evaluating Model: 19999 EVALUATION!!! Mean reward: 100.0 Num episodes: 20 Best mean reward: 133.300000000000011 - Last mean reward per episode: 100.000000000000000 ----------------------------------------------------- Evaluating Model: 20999 EVALUATION!!! Mean reward: 29.4 Num episodes: 68 Best mean reward: 133.300000000000011 - Last mean reward per episode: 29.399999999999999 -------------------------------------- | % time spent exploring | 2 | | episodes | 300 | | mean 100 episode reward | 110 | | steps | 21333 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 21999 EVALUATION!!! Mean reward: 83.3 Num episodes: 24 Best mean reward: 133.300000000000011 - Last mean reward per episode: 83.299999999999997 ----------------------------------------------------- Evaluating Model: 22999 EVALUATION!!! Mean reward: 105.2 Num episodes: 19 Best mean reward: 133.300000000000011 - Last mean reward per episode: 105.200000000000003 ----------------------------------------------------- Evaluating Model: 23999 EVALUATION!!! Mean reward: 111.1 Num episodes: 18 Best mean reward: 133.300000000000011 - Last mean reward per episode: 111.099999999999994 ----------------------------------------------------- Evaluating Model: 24999 EVALUATION!!! Mean reward: 199.9 Num episodes: 10 Best mean reward: 133.300000000000011 - Last mean reward per episode: 199.900000000000006 Saving new best model ----------------------------------------------------- Evaluating Model: 25999 EVALUATION!!! Mean reward: 68.9 Num episodes: 29 Best mean reward: 199.900000000000006 - Last mean reward per episode: 68.900000000000006 ----------------------------------------------------- Evaluating Model: 26999 EVALUATION!!! Mean reward: 105.2 Num episodes: 19 Best mean reward: 199.900000000000006 - Last mean reward per episode: 105.200000000000003 ----------------------------------------------------- Evaluating Model: 27999 EVALUATION!!! Mean reward: 80.0 Num episodes: 25 Best mean reward: 199.900000000000006 - Last mean reward per episode: 80.000000000000000 ----------------------------------------------------- Evaluating Model: 28999 EVALUATION!!! Mean reward: 181.7 Num episodes: 11 Best mean reward: 199.900000000000006 - Last mean reward per episode: 181.699999999999989 ----------------------------------------------------- Evaluating Model: 29999 EVALUATION!!! Mean reward: 86.9 Num episodes: 23 Best mean reward: 199.900000000000006 - Last mean reward per episode: 86.900000000000006 ----------------------------------------------------- Evaluating Model: 30999 EVALUATION!!! 
Mean reward: 124.9 Num episodes: 16 Best mean reward: 199.900000000000006 - Last mean reward per episode: 124.900000000000006 ----------------------------------------------------- Evaluating Model: 31999 EVALUATION!!! Mean reward: 181.7 Num episodes: 11 Best mean reward: 199.900000000000006 - Last mean reward per episode: 181.699999999999989 -------------------------------------- | % time spent exploring | 2 | | episodes | 400 | | mean 100 episode reward | 116 | | steps | 32923 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 32999 EVALUATION!!! Mean reward: 199.9 Num episodes: 10 Best mean reward: 199.900000000000006 - Last mean reward per episode: 199.900000000000006 ----------------------------------------------------- Evaluating Model: 33999 EVALUATION!!! Mean reward: 199.9 Num episodes: 10 Best mean reward: 199.900000000000006 - Last mean reward per episode: 199.900000000000006 ----------------------------------------------------- Evaluating Model: 34999 EVALUATION!!! Mean reward: 86.9 Num episodes: 23 Best mean reward: 199.900000000000006 - Last mean reward per episode: 86.900000000000006 ----------------------------------------------------- Evaluating Model: 35999 EVALUATION!!! Mean reward: 18.5 Num episodes: 108 Best mean reward: 199.900000000000006 - Last mean reward per episode: 18.500000000000000 ----------------------------------------------------- Evaluating Model: 36999 EVALUATION!!! Mean reward: 199.9 Num episodes: 10 Best mean reward: 199.900000000000006 - Last mean reward per episode: 199.900000000000006 ----------------------------------------------------- Evaluating Model: 37999 EVALUATION!!! Mean reward: 166.6 Num episodes: 12 Best mean reward: 199.900000000000006 - Last mean reward per episode: 166.599999999999994 ----------------------------------------------------- Evaluating Model: 38999 EVALUATION!!! Mean reward: 14.2 Num episodes: 141 Best mean reward: 199.900000000000006 - Last mean reward per episode: 14.199999999999999 ----------------------------------------------------- Evaluating Model: 39999 EVALUATION!!! Mean reward: 199.9 Num episodes: 10 Best mean reward: 199.900000000000006 - Last mean reward per episode: 199.900000000000006 ----------------------------------------------------- Evaluating Model: 40999 EVALUATION!!! Mean reward: 199.9 Num episodes: 10 Best mean reward: 199.900000000000006 - Last mean reward per episode: 199.900000000000006 ----------------------------------------------------- Evaluating Model: 41999 EVALUATION!!! Mean reward: 199.9 Num episodes: 10 Best mean reward: 199.900000000000006 - Last mean reward per episode: 199.900000000000006 ----------------------------------------------------- Evaluating Model: 42999 EVALUATION!!! Mean reward: 105.2 Num episodes: 19 Best mean reward: 199.900000000000006 - Last mean reward per episode: 105.200000000000003 -------------------------------------- | % time spent exploring | 2 | | episodes | 500 | | mean 100 episode reward | 101 | | steps | 43048 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 43999 EVALUATION!!! Mean reward: 199.9 Num episodes: 10 Best mean reward: 199.900000000000006 - Last mean reward per episode: 199.900000000000006 ----------------------------------------------------- Evaluating Model: 44999 EVALUATION!!! 
Mean reward: 199.9 Num episodes: 10 Best mean reward: 199.900000000000006 - Last mean reward per episode: 199.900000000000006 ----------------------------------------------------- Evaluating Model: 45999 EVALUATION!!! Mean reward: 199.9 Num episodes: 10 Best mean reward: 199.900000000000006 - Last mean reward per episode: 199.900000000000006 ----------------------------------------------------- Evaluating Model: 46999 EVALUATION!!! Mean reward: 199.9 Num episodes: 10 Best mean reward: 199.900000000000006 - Last mean reward per episode: 199.900000000000006 ----------------------------------------------------- Evaluating Model: 47999 EVALUATION!!! Mean reward: 166.6 Num episodes: 12 Best mean reward: 199.900000000000006 - Last mean reward per episode: 166.599999999999994 ----------------------------------------------------- Evaluating Model: 48999 EVALUATION!!! Mean reward: 199.9 Num episodes: 10 Best mean reward: 199.900000000000006 - Last mean reward per episode: 199.900000000000006 ----------------------------------------------------- Evaluating Model: 49999 EVALUATION!!! Mean reward: 153.8 Num episodes: 13 Best mean reward: 199.900000000000006 - Last mean reward per episode: 153.800000000000011 Training Time: 191.87267184257507 EVALUATION!!! Mean reward: 199.9 Num episodes: 10 ```python # learning rate tests for ii in lr_list: hidden_layer_size = 128 policy_kwargs = dict(layers=[hidden_layer_size]) learning_rate = ii print("---------------------------------------------") print("Learning Rate = " + str(learning_rate)) print("---------------------------------------------") tensorboard_log = "./DQN_Cartpole_Param/learning_rate_" + str(learning_rate) log_dir = "./Results/" model = DQN(MlpPolicy, env, gamma=0.99, learning_rate=learning_rate, buffer_size=50000, policy_kwargs=policy_kwargs, exploration_fraction=0.1, exploration_final_eps=0.02, exploration_initial_eps=1.0, train_freq=1, batch_size=32, double_q=True, learning_starts=1000, target_network_update_freq=500, prioritized_replay=False, verbose=1, tensorboard_log=tensorboard_log) best_mean_reward, n_steps = -np.inf, 0 evaluation_rewards = [] start_time = time.time() model.learn(total_timesteps=ITERS, callback=callback) elapsed_time = time.time() - start_time print("Training Time: " + str(elapsed_time)) model = DQN.load(log_dir + 'best_model', env=env) r = evaluate(model, num_steps=EVAL_STEPS, render=False) solved.append(r) ``` --------------------------------------------- Learning Rate = 1e-05 --------------------------------------------- ----------------------------------------------------- Evaluating Model: 999 EVALUATION!!! Mean reward: 21.3 Num episodes: 94 Best mean reward: -inf - Last mean reward per episode: 21.300000000000001 Saving new best model ----------------------------------------------------- Evaluating Model: 1999 EVALUATION!!! Mean reward: 10.2 Num episodes: 196 Best mean reward: 21.300000000000001 - Last mean reward per episode: 10.199999999999999 -------------------------------------- | % time spent exploring | 57 | | episodes | 100 | | mean 100 episode reward | 22.1 | | steps | 2183 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 2999 EVALUATION!!! 
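The loop above reuses several names defined earlier in the notebook and not shown in this excerpt: `lr_list`, `ITERS`, `EVAL_STEPS`, `solved`, the training `callback`, and the `evaluate` helper that produces the `Mean reward: ... Num episodes: ...` lines in the logs. Purely as a hedged, hypothetical sketch (the notebook's actual definition may differ; for instance, it is called without an `env` argument and presumably reads the global `env`), an evaluation helper with that output format could look roughly like this, assuming a plain, non-vectorized Gym environment and an assumed default of 2,000 evaluation steps:

```python
import numpy as np


def evaluate_sketch(model, env, num_steps=2000, render=False):
    """Hypothetical stand-in for the notebook's evaluate() helper (sketch only).

    Runs the trained model for a fixed number of environment steps and reports
    total reward divided by the number of episodes started, matching the
    "Mean reward: X Num episodes: Y" lines printed in the logs.
    """
    episode_rewards = [0.0]
    obs = env.reset()
    for _ in range(num_steps):
        # Action from the trained policy (stable-baselines v2 predict API).
        action, _states = model.predict(obs)
        obs, reward, done, _info = env.step(action)
        episode_rewards[-1] += reward
        if render:
            env.render()
        if done:
            # Start a new episode and a new reward accumulator.
            obs = env.reset()
            episode_rewards.append(0.0)
    mean_reward = round(float(np.sum(episode_rewards)) / len(episode_rewards), 1)
    print("Mean reward:", mean_reward, "Num episodes:", len(episode_rewards))
    return mean_reward
```

Whatever the exact implementation, the numbers in the output are consistent with this shape: each mean reward is essentially a fixed total of about 2,000 step-rewards divided by the number of episodes completed in the evaluation window, which is why values like 199.9 (10 episodes), 181.7 (11), 166.6 (12), and 9.3 (214) recur throughout the sweep output that follows.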
---------------------------------------------
Learning Rate = 1e-05
---------------------------------------------
-----------------------------------------------------
Evaluating Model: 999
EVALUATION!!!
Mean reward: 21.3 Num episodes: 94
Best mean reward: -inf - Last mean reward per episode: 21.300000000000001
Saving new best model
[... evaluations every 1,000 steps through step 49,999 omitted: no later checkpoint beats 21.3; every subsequent evaluation scores between 9.2 and 10.2 ...]
Training Time: 125.50925898551941
EVALUATION!!!
Mean reward: 14.9 Num episodes: 134

---------------------------------------------
Learning Rate = 0.001
---------------------------------------------
[... evaluations every 1,000 steps omitted: the best saved model improves quickly, from 12.8 at step 999 to 142.8 at step 2,999 and 181.7 at step 5,999, reaching 199.9 at step 15,999; later evaluations range between 76.9 and 199.9 ...]
Training Time: 123.7341251373291
EVALUATION!!!
Mean reward: 199.9 Num episodes: 10
Mean reward: 199.9 Num episodes: 10 --------------------------------------------- Learning Rate = 0.1 --------------------------------------------- ----------------------------------------------------- Evaluating Model: 999 EVALUATION!!! Mean reward: 9.2 Num episodes: 217 Best mean reward: -inf - Last mean reward per episode: 9.199999999999999 Saving new best model -------------------------------------- | % time spent exploring | 61 | | episodes | 100 | | mean 100 episode reward | 20 | | steps | 1980 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 1999 EVALUATION!!! Mean reward: 9.3 Num episodes: 215 Best mean reward: 9.199999999999999 - Last mean reward per episode: 9.300000000000001 Saving new best model ----------------------------------------------------- Evaluating Model: 2999 EVALUATION!!! Mean reward: 9.4 Num episodes: 212 Best mean reward: 9.300000000000001 - Last mean reward per episode: 9.400000000000000 Saving new best model -------------------------------------- | % time spent exploring | 26 | | episodes | 200 | | mean 100 episode reward | 17.8 | | steps | 3763 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 3999 EVALUATION!!! Mean reward: 9.3 Num episodes: 214 Best mean reward: 9.400000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 300 | | mean 100 episode reward | 12.2 | | steps | 4988 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 4999 EVALUATION!!! Mean reward: 9.3 Num episodes: 214 Best mean reward: 9.400000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 400 | | mean 100 episode reward | 10 | | steps | 5989 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 5999 EVALUATION!!! Mean reward: 9.4 Num episodes: 213 Best mean reward: 9.400000000000000 - Last mean reward per episode: 9.400000000000000 -------------------------------------- | % time spent exploring | 2 | | episodes | 500 | | mean 100 episode reward | 10 | | steps | 6992 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 6999 EVALUATION!!! Mean reward: 9.3 Num episodes: 215 Best mean reward: 9.400000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 600 | | mean 100 episode reward | 9.8 | | steps | 7976 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 7999 EVALUATION!!! Mean reward: 9.3 Num episodes: 214 Best mean reward: 9.400000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 700 | | mean 100 episode reward | 9.6 | | steps | 8936 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 8999 EVALUATION!!! 
Mean reward: 9.3 Num episodes: 215 Best mean reward: 9.400000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 800 | | mean 100 episode reward | 9.6 | | steps | 9897 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 9999 EVALUATION!!! Mean reward: 9.4 Num episodes: 213 Best mean reward: 9.400000000000000 - Last mean reward per episode: 9.400000000000000 -------------------------------------- | % time spent exploring | 2 | | episodes | 900 | | mean 100 episode reward | 9.6 | | steps | 10859 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 10999 EVALUATION!!! Mean reward: 9.5 Num episodes: 211 Best mean reward: 9.400000000000000 - Last mean reward per episode: 9.500000000000000 Saving new best model -------------------------------------- | % time spent exploring | 2 | | episodes | 1000 | | mean 100 episode reward | 9.4 | | steps | 11797 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 11999 EVALUATION!!! Mean reward: 9.4 Num episodes: 213 Best mean reward: 9.500000000000000 - Last mean reward per episode: 9.400000000000000 -------------------------------------- | % time spent exploring | 2 | | episodes | 1100 | | mean 100 episode reward | 9.4 | | steps | 12741 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 12999 EVALUATION!!! Mean reward: 9.4 Num episodes: 213 Best mean reward: 9.500000000000000 - Last mean reward per episode: 9.400000000000000 -------------------------------------- | % time spent exploring | 2 | | episodes | 1200 | | mean 100 episode reward | 9.4 | | steps | 13683 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 13999 EVALUATION!!! Mean reward: 9.4 Num episodes: 213 Best mean reward: 9.500000000000000 - Last mean reward per episode: 9.400000000000000 -------------------------------------- | % time spent exploring | 2 | | episodes | 1300 | | mean 100 episode reward | 9.5 | | steps | 14633 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 14999 EVALUATION!!! Mean reward: 9.3 Num episodes: 214 Best mean reward: 9.500000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 1400 | | mean 100 episode reward | 9.3 | | steps | 15562 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 15999 EVALUATION!!! Mean reward: 9.4 Num episodes: 213 Best mean reward: 9.500000000000000 - Last mean reward per episode: 9.400000000000000 -------------------------------------- | % time spent exploring | 2 | | episodes | 1500 | | mean 100 episode reward | 9.2 | | steps | 16483 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 16999 EVALUATION!!! 
Mean reward: 9.4 Num episodes: 213 Best mean reward: 9.500000000000000 - Last mean reward per episode: 9.400000000000000 -------------------------------------- | % time spent exploring | 2 | | episodes | 1600 | | mean 100 episode reward | 9.3 | | steps | 17418 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 17999 EVALUATION!!! Mean reward: 9.2 Num episodes: 217 Best mean reward: 9.500000000000000 - Last mean reward per episode: 9.199999999999999 -------------------------------------- | % time spent exploring | 2 | | episodes | 1700 | | mean 100 episode reward | 9.2 | | steps | 18340 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 18999 EVALUATION!!! Mean reward: 9.4 Num episodes: 213 Best mean reward: 9.500000000000000 - Last mean reward per episode: 9.400000000000000 -------------------------------------- | % time spent exploring | 2 | | episodes | 1800 | | mean 100 episode reward | 9.4 | | steps | 19279 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 19999 EVALUATION!!! Mean reward: 9.3 Num episodes: 214 Best mean reward: 9.500000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 1900 | | mean 100 episode reward | 9.5 | | steps | 20230 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 20999 EVALUATION!!! Mean reward: 9.3 Num episodes: 214 Best mean reward: 9.500000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 2000 | | mean 100 episode reward | 9.5 | | steps | 21177 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 21999 EVALUATION!!! Mean reward: 9.3 Num episodes: 215 Best mean reward: 9.500000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 2100 | | mean 100 episode reward | 9.6 | | steps | 22137 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 22999 EVALUATION!!! Mean reward: 9.3 Num episodes: 215 Best mean reward: 9.500000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 2200 | | mean 100 episode reward | 9.5 | | steps | 23086 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 23999 EVALUATION!!! Mean reward: 9.4 Num episodes: 212 Best mean reward: 9.500000000000000 - Last mean reward per episode: 9.400000000000000 -------------------------------------- | % time spent exploring | 2 | | episodes | 2300 | | mean 100 episode reward | 9.3 | | steps | 24015 | -------------------------------------- -------------------------------------- | % time spent exploring | 2 | | episodes | 2400 | | mean 100 episode reward | 9.5 | | steps | 24968 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 24999 EVALUATION!!! 
Mean reward: 9.3 Num episodes: 214 Best mean reward: 9.500000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 2500 | | mean 100 episode reward | 9.5 | | steps | 25919 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 25999 EVALUATION!!! Mean reward: 9.4 Num episodes: 213 Best mean reward: 9.500000000000000 - Last mean reward per episode: 9.400000000000000 -------------------------------------- | % time spent exploring | 2 | | episodes | 2600 | | mean 100 episode reward | 9.3 | | steps | 26850 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 26999 EVALUATION!!! Mean reward: 9.3 Num episodes: 214 Best mean reward: 9.500000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 2700 | | mean 100 episode reward | 9.4 | | steps | 27790 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 27999 EVALUATION!!! Mean reward: 9.3 Num episodes: 214 Best mean reward: 9.500000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 2800 | | mean 100 episode reward | 9.6 | | steps | 28748 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 28999 EVALUATION!!! Mean reward: 9.3 Num episodes: 214 Best mean reward: 9.500000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 2900 | | mean 100 episode reward | 9.5 | | steps | 29696 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 29999 EVALUATION!!! Mean reward: 9.4 Num episodes: 213 Best mean reward: 9.500000000000000 - Last mean reward per episode: 9.400000000000000 -------------------------------------- | % time spent exploring | 2 | | episodes | 3000 | | mean 100 episode reward | 9.4 | | steps | 30639 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 30999 EVALUATION!!! Mean reward: 9.4 Num episodes: 213 Best mean reward: 9.500000000000000 - Last mean reward per episode: 9.400000000000000 -------------------------------------- | % time spent exploring | 2 | | episodes | 3100 | | mean 100 episode reward | 9.6 | | steps | 31596 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 31999 EVALUATION!!! Mean reward: 9.3 Num episodes: 216 Best mean reward: 9.500000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 3200 | | mean 100 episode reward | 9.6 | | steps | 32551 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 32999 EVALUATION!!! Mean reward: 9.4 Num episodes: 213 Best mean reward: 9.500000000000000 - Last mean reward per episode: 9.400000000000000 -------------------------------------- | % time spent exploring | 2 | | episodes | 3300 | | mean 100 episode reward | 9.4 | | steps | 33494 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 33999 EVALUATION!!! 
Mean reward: 9.2 Num episodes: 217 Best mean reward: 9.500000000000000 - Last mean reward per episode: 9.199999999999999 -------------------------------------- | % time spent exploring | 2 | | episodes | 3400 | | mean 100 episode reward | 9.6 | | steps | 34456 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 34999 EVALUATION!!! Mean reward: 9.3 Num episodes: 214 Best mean reward: 9.500000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 3500 | | mean 100 episode reward | 9.5 | | steps | 35405 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 35999 EVALUATION!!! Mean reward: 9.3 Num episodes: 214 Best mean reward: 9.500000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 3600 | | mean 100 episode reward | 9.2 | | steps | 36329 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 36999 EVALUATION!!! Mean reward: 9.3 Num episodes: 215 Best mean reward: 9.500000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 3700 | | mean 100 episode reward | 9.5 | | steps | 37276 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 37999 EVALUATION!!! Mean reward: 9.3 Num episodes: 214 Best mean reward: 9.500000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 3800 | | mean 100 episode reward | 9.6 | | steps | 38234 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 38999 EVALUATION!!! Mean reward: 9.3 Num episodes: 216 Best mean reward: 9.500000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 3900 | | mean 100 episode reward | 9.4 | | steps | 39179 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 39999 EVALUATION!!! Mean reward: 9.4 Num episodes: 212 Best mean reward: 9.500000000000000 - Last mean reward per episode: 9.400000000000000 -------------------------------------- | % time spent exploring | 2 | | episodes | 4000 | | mean 100 episode reward | 9.5 | | steps | 40127 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 40999 EVALUATION!!! Mean reward: 9.3 Num episodes: 214 Best mean reward: 9.500000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 4100 | | mean 100 episode reward | 9.4 | | steps | 41068 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 41999 EVALUATION!!! 
Mean reward: 9.3 Num episodes: 214 Best mean reward: 9.500000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 4200 | | mean 100 episode reward | 9.6 | | steps | 42028 | -------------------------------------- -------------------------------------- | % time spent exploring | 2 | | episodes | 4300 | | mean 100 episode reward | 9.4 | | steps | 42973 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 42999 EVALUATION!!! Mean reward: 9.3 Num episodes: 215 Best mean reward: 9.500000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 4400 | | mean 100 episode reward | 9.6 | | steps | 43932 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 43999 EVALUATION!!! Mean reward: 9.3 Num episodes: 215 Best mean reward: 9.500000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 4500 | | mean 100 episode reward | 9.4 | | steps | 44872 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 44999 EVALUATION!!! Mean reward: 9.4 Num episodes: 212 Best mean reward: 9.500000000000000 - Last mean reward per episode: 9.400000000000000 -------------------------------------- | % time spent exploring | 2 | | episodes | 4600 | | mean 100 episode reward | 9.3 | | steps | 45806 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 45999 EVALUATION!!! Mean reward: 9.3 Num episodes: 215 Best mean reward: 9.500000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 4700 | | mean 100 episode reward | 9.3 | | steps | 46739 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 46999 EVALUATION!!! Mean reward: 9.3 Num episodes: 214 Best mean reward: 9.500000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 4800 | | mean 100 episode reward | 9.5 | | steps | 47690 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 47999 EVALUATION!!! Mean reward: 9.3 Num episodes: 216 Best mean reward: 9.500000000000000 - Last mean reward per episode: 9.300000000000001 -------------------------------------- | % time spent exploring | 2 | | episodes | 4900 | | mean 100 episode reward | 9.3 | | steps | 48621 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 48999 EVALUATION!!! Mean reward: 9.4 Num episodes: 212 Best mean reward: 9.500000000000000 - Last mean reward per episode: 9.400000000000000 -------------------------------------- | % time spent exploring | 2 | | episodes | 5000 | | mean 100 episode reward | 9.5 | | steps | 49570 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 49999 EVALUATION!!! Mean reward: 9.5 Num episodes: 211 Best mean reward: 9.500000000000000 - Last mean reward per episode: 9.500000000000000 Training Time: 126.10655283927917 EVALUATION!!! 
Final evaluation for this run: Mean reward: 9.3 Num episodes: 215

```python
# replay buffer tests: train a DQN agent for each buffer size in buf_list,
# then reload the best checkpoint and record its evaluation reward.
# buf_list, env, ITERS, EVAL_STEPS, callback, evaluate, and solved are all
# defined in earlier cells.
for ii in buf_list:
    hidden_layer_size = 128
    policy_kwargs = dict(layers=[hidden_layer_size])
    buffer_size = ii

    print("---------------------------------------------")
    print("Buffer Size = " + str(buffer_size))
    print("---------------------------------------------")

    tensorboard_log = "./DQN_Cartpole_Param/buffer_size_" + str(buffer_size)
    log_dir = "./Results/"

    # identical hyperparameters for every run except buffer_size
    model = DQN(MlpPolicy, env, gamma=0.99, learning_rate=0.001, buffer_size=buffer_size,
                policy_kwargs=policy_kwargs, exploration_fraction=0.1,
                exploration_final_eps=0.02, exploration_initial_eps=1.0, train_freq=1,
                batch_size=32, double_q=True, learning_starts=1000,
                target_network_update_freq=500, prioritized_replay=False, verbose=1,
                tensorboard_log=tensorboard_log)

    # reset the bookkeeping used by the evaluation callback
    best_mean_reward, n_steps = -np.inf, 0
    evaluation_rewards = []

    start_time = time.time()
    model.learn(total_timesteps=ITERS, callback=callback)
    elapsed_time = time.time() - start_time
    print("Training Time: " + str(elapsed_time))

    # reload the best model saved during training and evaluate it
    model = DQN.load(log_dir + 'best_model', env=env)
    r = evaluate(model, num_steps=EVAL_STEPS, render=False)
    solved.append(r)
```
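The `evaluate` helper called at the end of each sweep iteration (and the `callback` passed to `model.learn`) are defined in earlier cells of the notebook. For readers landing in this section, the sketch below shows what such an evaluation helper can look like; the function name, signature, and reward bookkeeping are illustrative assumptions, not the notebook's actual implementation.

```python
# Hypothetical sketch of an evaluation helper (not the notebook's own code):
# run the trained policy for a fixed number of environment steps and report
# the mean reward per completed episode, which is the quantity the logs print.
def evaluate_sketch(model, env, num_steps=2000, render=False):
    obs = env.reset()
    episode_rewards = [0.0]
    for _ in range(num_steps):
        action, _states = model.predict(obs)       # action from the trained DQN
        obs, reward, done, _info = env.step(action)
        if render:
            env.render()
        episode_rewards[-1] += reward
        if done:                                   # episode over: start a new one
            obs = env.reset()
            episode_rewards.append(0.0)
    num_episodes = max(len(episode_rewards) - 1, 1)
    mean_reward = round(sum(episode_rewards[:-1]) / num_episodes, 1)
    print("Mean reward:", mean_reward, "Num episodes:", num_episodes)
    return mean_reward
```

Read against the logs above, a mean reward near 200 means the agent balances the pole for essentially the full CartPole episode, while values around 9-10 mean the pole falls almost immediately.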
[Training logs for the three buffer-size runs condensed below.]

---------------------------------------------
Buffer Size = 32
---------------------------------------------

With a replay buffer of only 32 transitions, training is very unstable: the evaluation mean reward swings between roughly 9 and 199.9 throughout the run (for example 199.9 at step 29999, but 9.3 at steps 33999 and 43999). Best mean reward: 199.9, first reached at step 29999. Training time: 127.3 s. Final evaluation: mean reward 199.9 over 10 episodes.

---------------------------------------------
Buffer Size = 5000
---------------------------------------------

The evaluation mean reward climbs quickly (12.0 at step 999, 111.1 at step 2999, 199.9 by step 4999) and then stays mostly in the 100-200 range. Best mean reward: 199.9, first reached at step 4999. Training time: 123.7 s. Final evaluation: mean reward 199.9 over 10 episodes.

---------------------------------------------
Buffer Size = 50000
---------------------------------------------

The evaluation mean reward rises from 11.4 at step 999 to 181.7 at step 5999 and then hovers mostly between about 87 and 167 for the rest of training. Best mean reward: 181.7, first reached at step 5999.
Mean reward: 124.9 Num episodes: 16 Best mean reward: 181.699999999999989 - Last mean reward per episode: 124.900000000000006 ----------------------------------------------------- Evaluating Model: 45999 EVALUATION!!! Mean reward: 133.3 Num episodes: 15 Best mean reward: 181.699999999999989 - Last mean reward per episode: 133.300000000000011 ----------------------------------------------------- Evaluating Model: 46999 EVALUATION!!! Mean reward: 133.3 Num episodes: 15 Best mean reward: 181.699999999999989 - Last mean reward per episode: 133.300000000000011 -------------------------------------- | % time spent exploring | 2 | | episodes | 500 | | mean 100 episode reward | 117 | | steps | 47626 | -------------------------------------- ----------------------------------------------------- Evaluating Model: 47999 EVALUATION!!! Mean reward: 111.1 Num episodes: 18 Best mean reward: 181.699999999999989 - Last mean reward per episode: 111.099999999999994 ----------------------------------------------------- Evaluating Model: 48999 EVALUATION!!! Mean reward: 111.1 Num episodes: 18 Best mean reward: 181.699999999999989 - Last mean reward per episode: 111.099999999999994 ----------------------------------------------------- Evaluating Model: 49999 EVALUATION!!! Mean reward: 133.3 Num episodes: 15 Best mean reward: 181.699999999999989 - Last mean reward per episode: 133.300000000000011 Training Time: 124.09976410865784 EVALUATION!!! Mean reward: 181.7 Num episodes: 11 ```python env.close() print("Final reward (Evaluation) for all hyperparameter combinations") print(solved) ``` Final reward (Evaluation) for all hyperparameter combinations [10.0, 181.7, 199.9, 181.7, 55.5, 199.9, 199.9, 199.9, 199.9, 199.9, 14.9, 199.9, 9.3, 199.9, 199.9, 181.7] ```python ```
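With the final evaluation rewards for every hyperparameter combination collected in `solved`, a natural next step is to pick the best-performing configuration. The short sketch below is illustrative only: it reuses the rewards printed above, but the mapping from a list index back to a concrete hyperparameter setting depends on how the search grid was built, which is not shown here (`hyperparam_grid` is a hypothetical name).

```python
import numpy as np

# Final evaluation rewards printed above, one entry per hyperparameter combination
solved = [10.0, 181.7, 199.9, 181.7, 55.5, 199.9, 199.9, 199.9,
          199.9, 199.9, 14.9, 199.9, 9.3, 199.9, 199.9, 181.7]

best_idx = int(np.argmax(solved))  # index of the best-scoring combination
print("Best combination index:", best_idx, "with mean reward:", solved[best_idx])
# print(hyperparam_grid[best_idx])  # hypothetical lookup of the actual settings
```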
c61a877b8816337dcea5b15ba638c0e7cfefacb3
349,110
ipynb
Jupyter Notebook
DQN.ipynb
sergioantelo/Reinforcement-Learning
3543fb7625cbf4383a21d0addfbf29581bd9646c
[ "MIT" ]
null
null
null
DQN.ipynb
sergioantelo/Reinforcement-Learning
3543fb7625cbf4383a21d0addfbf29581bd9646c
[ "MIT" ]
null
null
null
DQN.ipynb
sergioantelo/Reinforcement-Learning
3543fb7625cbf4383a21d0addfbf29581bd9646c
[ "MIT" ]
null
null
null
54.68515
29,168
0.481731
true
64,776
Qwen/Qwen-72B
1. YES 2. YES
0.737158
0.689306
0.508127
__label__eng_Latn
0.273345
0.018879
```python from sympy import * init_printing(use_latex='mathjax') ``` ```python def get_diff(expressions, symbols): rows = len(expressions) columns = len(symbols) assert rows == columns , "Number of expression doesnt match number of symbols" print("Expressions:") for expression in expressions: display(expression) results = [[0 for x in range(rows)] for y in range(columns)] for row, expression in enumerate(expressions): for column, symbol in enumerate(symbols): # print('Row %d, column %d, expression: %s, symbol: %s' % (row, column, expression, symbol)) df = diff(expression, symbol) # print("DF: %s" % df) results[row][column] = df return results ``` ```python x, y = symbols('x y') get_diff([x ** 2 - y**2, 2 * x * y], [x, y]) ``` Expressions: $$x^{2} - y^{2}$$ $$2 x y$$ $$\left [ \left [ 2 x, \quad - 2 y\right ], \quad \left [ 2 y, \quad 2 x\right ]\right ]$$ ```python x, y, z = symbols('x y z') get_diff([ 2 * x + 3 * y, cos(x) * sin(z), exp(x) * exp(y) * exp(z) ], [x, y , z]) ``` Expressions: $$2 x + 3 y$$ $$\sin{\left (z \right )} \cos{\left (x \right )}$$ $$e^{x} e^{y} e^{z}$$ $$\left [ \left [ 2, \quad 3, \quad 0\right ], \quad \left [ - \sin{\left (x \right )} \sin{\left (z \right )}, \quad 0, \quad \cos{\left (x \right )} \cos{\left (z \right )}\right ], \quad \left [ e^{x} e^{y} e^{z}, \quad e^{x} e^{y} e^{z}, \quad e^{x} e^{y} e^{z}\right ]\right ]$$ ```python x, y, a, b, c, d = symbols('x y a b c d') get_diff([ a * x + b * y, c * x + d * y ], [x, y]) ``` Expressions: $$a x + b y$$ $$c x + d y$$ $$\left [ \left [ a, \quad b\right ], \quad \left [ c, \quad d\right ]\right ]$$ ```python def jacobian_at(jacobian, point): rows = len(jacobian) columns = len(jacobian[0]) results = [[0 for x in range(rows)] for y in range(columns)] for rowIndex, row in enumerate(jacobian): for colIndex, cell in enumerate(row): results[rowIndex][colIndex] = cell.evalf(subs=point) return results ``` ```python x, y, z = symbols('x y z') J = get_diff([ 9 * x ** 2 * y ** 2 + z * exp(x), x * y + x ** 2 * y ** 3 + 2 * z, cos(x) * sin(z) * exp(y) ], [x, y, z]) jacobian_at(J, {x: 0, y:0, z:0}) ``` Expressions: $$9 x^{2} y^{2} + z e^{x}$$ $$x^{2} y^{3} + x y + 2 z$$ $$e^{y} \sin{\left (z \right )} \cos{\left (x \right )}$$ $$\left [ \left [ 0, \quad 0, \quad 1.0\right ], \quad \left [ 0, \quad 0, \quad 2.0\right ], \quad \left [ 0, \quad 0, \quad 1.0\right ]\right ]$$ ```python r, theta, phi = symbols('r theta phi') get_diff([ r * cos(theta) * sin(phi), r * sin(theta) * sin(phi), r * cos(phi) ], [r, theta, phi]) ``` Expressions: $$r \sin{\left (\phi \right )} \cos{\left (\theta \right )}$$ $$r \sin{\left (\phi \right )} \sin{\left (\theta \right )}$$ $$r \cos{\left (\phi \right )}$$ $$\left [ \left [ \sin{\left (\phi \right )} \cos{\left (\theta \right )}, \quad - r \sin{\left (\phi \right )} \sin{\left (\theta \right )}, \quad r \cos{\left (\phi \right )} \cos{\left (\theta \right )}\right ], \quad \left [ \sin{\left (\phi \right )} \sin{\left (\theta \right )}, \quad r \sin{\left (\phi \right )} \cos{\left (\theta \right )}, \quad r \sin{\left (\theta \right )} \cos{\left (\phi \right )}\right ], \quad \left [ \cos{\left (\phi \right )}, \quad 0, \quad - r \sin{\left (\phi \right )}\right ]\right ]$$ ```python ```
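Since `get_diff` is effectively building a Jacobian matrix, SymPy's built-in `Matrix.jacobian` method can serve as an independent sanity check. The snippet below is an illustrative addition (not part of the original computations) that reproduces the first example above.

```python
from sympy import symbols, Matrix

# Cross-check get_diff against SymPy's built-in Jacobian on the first example
x, y = symbols('x y')
F = Matrix([x ** 2 - y ** 2, 2 * x * y])
J_builtin = F.jacobian(Matrix([x, y]))  # expected: [[2*x, -2*y], [2*y, 2*x]]
J_builtin
```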
1864aaf237c6ea7f42ad859290d5de2ca4bb3e2d
9,915
ipynb
Jupyter Notebook
Certification 2/Week2.3 - Jacobian Matrix.ipynb
The-Brains/MathForMachineLearning
5cbd9006f166059efaa2f312b741e64ce584aa1f
[ "MIT" ]
6
2018-04-16T02:53:59.000Z
2021-05-16T06:51:57.000Z
Certification 2/Week2.3 - Jacobian Matrix.ipynb
The-Brains/MathForMachineLearning
5cbd9006f166059efaa2f312b741e64ce584aa1f
[ "MIT" ]
null
null
null
Certification 2/Week2.3 - Jacobian Matrix.ipynb
The-Brains/MathForMachineLearning
5cbd9006f166059efaa2f312b741e64ce584aa1f
[ "MIT" ]
4
2019-05-20T02:06:55.000Z
2020-05-18T06:21:41.000Z
23.384434
610
0.400101
true
1,274
Qwen/Qwen-72B
1. YES 2. YES
0.943348
0.888759
0.838408
__label__eng_Latn
0.497602
0.786237
```python # import numpy as np # # !/usr/bin/env python3 # # -*- coding: utf-8 -*- # """ # Created on 20181219 # @author: zhangji # Trajection of a ellipse, Jeffery equation. # """ # %pylab inline # pylab.rcParams['figure.figsize'] = (25, 11) # fontsize = 40 # import numpy as np # import scipy as sp # from scipy.optimize import leastsq, curve_fit # from scipy import interpolate # from scipy.interpolate import interp1d # from scipy.io import loadmat, savemat # # import scipy.misc # import matplotlib # from matplotlib import pyplot as plt # from matplotlib import animation, rc # import matplotlib.ticker as mtick # from mpl_toolkits.axes_grid1.inset_locator import inset_axes, zoomed_inset_axes # from mpl_toolkits.mplot3d import Axes3D, axes3d # from sympy import symbols, simplify, series, exp # from sympy.matrices import Matrix # from sympy.solvers import solve # from IPython.display import display, HTML # from tqdm import tqdm_notebook as tqdm # import pandas as pd # import re # from scanf import scanf # import os # import glob # from codeStore import support_fun as spf # from src.support_class import * # from src import stokes_flow as sf # rc('animation', html='html5') # PWD = os.getcwd() # font = {'size': 20} # matplotlib.rc('font', **font) # np.set_printoptions(linewidth=90, precision=5) from tqdm import tqdm_notebook import os import glob import natsort import numpy as np import scipy as sp from scipy.optimize import leastsq, curve_fit from scipy import interpolate, integrate from scipy import spatial # from scipy.interpolate import interp1d from scipy.io import loadmat, savemat # import scipy.misc import importlib from IPython.display import display, HTML import pandas as pd import pickle import matplotlib from matplotlib import pyplot as plt # from matplotlib import colors as mcolors import matplotlib.colors as colors from matplotlib import animation, rc import matplotlib.ticker as mtick from mpl_toolkits.axes_grid1.inset_locator import inset_axes, zoomed_inset_axes from mpl_toolkits.mplot3d import Axes3D, axes3d from mpl_toolkits.axes_grid1.axes_divider import make_axes_locatable from mpl_toolkits.mplot3d.art3d import Line3DCollection from matplotlib import cm from time import time from src.support_class import * from src import jeffery_model as jm from codeStore import support_fun as spf from codeStore import support_fun_table as spf_tb # %matplotlib notebook rc('animation', html='html5') fontsize = 40 PWD = os.getcwd() ``` /home/zhangji/stokes_flow_master/codeStore/support_fun_table.py:12: UserWarning: matplotlib.pyplot as already been imported, this call will have no effect. matplotlib.use('agg') ```python # calculate the phase map as function of theta and phi. # show theta, phi, and eta, here eta is the angle between the helix norm and the y axis. 
# calculate Table result importlib.reload(jm) for ini_psi in np.linspace(0, 2 * np.pi, 15, endpoint=0)[:]: max_t = 10 # n_theta = 48 # n_phi = 48 n_theta = 4 n_phi = 4 t0 = time() idx_list = [] Table_t_list = [] Table_theta_list = [] Table_phi_list = [] Table_psi_list = [] Table_eta_list = [] ini_theta_list = [] ini_phi_list = [] idx = 0 planeShearRate = np.array((1, 0, 0)) for ini_theta in tqdm_notebook(np.linspace(0, np.pi, n_theta), desc='$\\psi_{ini}$=%5.3f' % ini_psi): for ini_phi in np.linspace(0, 2 * np.pi, n_phi): tnorm = np.array((np.sin(ini_theta) * np.cos(ini_phi), np.sin(ini_theta) * np.sin(ini_phi), np.cos(ini_theta))) Table_t, Table_X, Table_P, Table_P2, Table_theta, Table_phi, Table_psi, Table_eta \ = do_calculate_ecoli_RK(norm, ini_psi, max_t, update_fun=integrate.RK45, rtol=1e-3, atol=1e-6) idx_list.append(idx) Table_t_list.append(Table_t) Table_theta_list.append(Table_theta) Table_phi_list.append(Table_phi) Table_psi_list.append(Table_psi) Table_eta_list.append(Table_eta) ini_theta_list.append(ini_theta) ini_phi_list.append(ini_phi) idx = idx + 1 data = pd.DataFrame({'ini_theta': np.hstack(ini_theta_list), 'ini_phi': np.hstack(ini_phi_list), 'idx': np.hstack(idx_list), 'last_theta': np.hstack([Table_theta[-1] for Table_theta in Table_theta_list]), 'last_phi': np.hstack([Table_phi[-1] for Table_phi in Table_phi_list]), 'last_psi': np.hstack([Table_psi[-1] for Table_psi in Table_psi_list]), 'last_eta': np.hstack([Table_eta[-1] for Table_eta in Table_eta_list]), }).pivot_table(index=['ini_theta', 'ini_phi']) idx = data.idx.unstack() last_theta = data.last_theta.unstack() last_phi = data.last_phi.unstack() last_psi = data.last_psi.unstack() last_eta = data.last_eta.unstack() t1 = time() print('calculate phase map: run %d cases using %fs' % ((n_theta * n_phi), (t1 - t0))) tpick = (idx, ini_psi, last_theta, last_phi, last_eta, last_psi, Table_t_list, Table_theta_list, Table_phi_list, Table_psi_list, Table_eta_list) with open('phase_map_ecoli_%5.3f.pickle' % ini_psi, 'wb') as handle: pickle.dump(tpick, handle, protocol=pickle.HIGHEST_PROTOCOL) print('save table_data to phase_map_ecoli_%5.3f.pickle' % ini_psi) ``` ```python def tplot_fun(ax0, file_handle, t1, vmin=0, vmax=np.pi): tx = t1.columns.values ty = t1.index.values plt.sca(ax0) im = ax0.pcolor(tx / np.pi, ty / np.pi, t1.values / np.pi, cmap=cm.RdBu, vmin=vmin / np.pi, vmax=vmax / np.pi) fig.colorbar(im, ax=ax0).ax.tick_params(labelsize=fontsize*0.8) ax0.set_xlabel('$\\phi / \pi$', size=fontsize) ax0.set_ylabel('$\\theta / \pi$', size=fontsize) ax0.set_title('%s' % file_handle, size=fontsize*0.8) plt.xticks(fontsize=fontsize*0.8) plt.yticks(fontsize=fontsize*0.8) return True with open('phase_map_ecoli_0.000.pickle', 'rb') as handle: tpick = pickle.load(handle) idx, ini_psi, last_theta, last_phi, last_eta, last_psi, \ Table_t_list, Table_theta_list, Table_phi_list, Table_psi_list, Table_eta_list = tpick fig, (ax0, ax1, ax2, ax3) = plt.subplots(nrows=1, ncols=4, figsize=(100, 20)) fig.patch.set_facecolor('white') tplot_fun(ax0, 'last_eta', last_eta, vmin=0, vmax=np.pi) tplot_fun(ax1, 'last_theta', last_theta, vmin=0, vmax=np.pi) tplot_fun(ax2, 'last_phi', last_phi, vmin=0, vmax=2 * np.pi) tplot_fun(ax3, 'last_psi', last_psi, vmin=0, vmax=2 * np.pi) pass ``` ```python display(idx[:0.3*np.pi].T[np.pi:1.7*np.pi].T) display(last_eta[:0.3*np.pi].T[np.pi:1.7*np.pi].T / np.pi) display(last_psi[:0.3*np.pi].T[np.pi:1.7*np.pi].T / np.pi) pd.set_option('display.max_rows', 500) pd.set_option('display.max_columns', 500) 
pd.set_option('display.width', 1000) display(last_eta / np.pi) ``` ```python ``` ```python show_idx = 74 tt = Table_t_list[show_idx] ttheta = Table_theta_list[show_idx] tphi = Table_phi_list[show_idx] tpsi = Table_psi_list[show_idx] teta = Table_eta_list[show_idx] fig, (ax0, ax1, ax2, ax3) = plt.subplots(nrows=4, ncols=1, figsize=(20, 20)) fig.patch.set_facecolor('white') ax0.plot(tt, ttheta, '-*') ax1.plot(tt, tphi, '-*') ax2.plot(tt, tpsi, '-*') ax3.plot(tt, teta, '-*') print(ttheta[0], ',', tphi[0], ',', tpsi[0]) ``` ```python ``` ```python idx = 2 t_theta, t_phi, t_psi = 0, 0, 0 t_name = 'idx%03d_th%5.3f_ph%5.3f_ps%5.3f.pickle' % (idx, t_theta, t_phi, t_psi) with open('../motion_ecoliB01_table/%s' % t_name, 'rb') as handle: tpick = pickle.load(handle) t_theta, t_phi, t_psi, max_t, update_fun, rtol, atol, eval_dt, \ Table_t, Table_X, Table_P, Table_P2, Table_theta, Table_phi, Table_psi, Table_eta = tpick save_every = np.ceil(1 / eval_dt) / 100 print('load table_data from %s' % t_name) spf_tb.show_table_result(Table_t, Table_theta, Table_phi, Table_psi, Table_eta, Table_X, save_every) ``` ```python t0 = 0 t1 = t0 + 1000 idx = (t0 < Table_t) & (Table_t < t1) fig = plt.figure(figsize=(10, 10)) ax0 = fig.add_subplot(111, polar=True) fig.patch.set_facecolor('white') norm=plt.Normalize(Table_t.min(), Table_t.max()) cmap=plt.get_cmap('jet') ax0.plot(Table_phi / np.pi, Table_theta / np.pi, ' ') lc = ax0.scatter(Table_phi[idx], Table_theta[idx], c=Table_t[idx], cmap=plt.get_cmap('jet'), s=fontsize*0.1) clb = fig.colorbar(lc, ax=ax0, orientation="vertical") clb.ax.tick_params(labelsize=fontsize*0.5) clb.ax.set_title('time', size=fontsize*0.5) # ax0.set_xlabel('$\\phi / \pi$', size=fontsize*0.7) # ax0.set_ylabel('$\\theta / \pi$', size=fontsize*0.7) ax0.set_ylim(0,np.pi) plt.sca(ax0) plt.xticks(fontsize=fontsize*0.5) plt.yticks(fontsize=fontsize*0.5) ``` ```python anim = spf_tb.make_table_video(Table_t, Table_X, Table_P, Table_P2, Table_theta, Table_phi, Table_psi, Table_eta, zm_fct=30, stp=1, interval=20) # anim Writer = animation.writers['ffmpeg'] writer = Writer(fps=15, metadata=dict(artist='Me'), bitrate=1800) # anim.save('tmp.mp4', writer=writer) ``` ```python ``` ```python # active ecoli petsc family method importlib.reload(spf_tb) t0 = time() t_theta, t_phi, t_psi = 0.410, 2.807, 0 max_t = 100 update_fun='5bs' rtol=1e-9 atol=1e-12 eval_dt = 0.0001 save_every = 1 tnorm = np.array((np.sin(t_theta) * np.cos(t_phi), np.sin(t_theta) * np.sin(t_phi), np.cos(t_theta))) Table_t, Table_dt, Table_X, Table_P, Table_P2, Table_theta, Table_phi, Table_psi, Table_eta \ = spf_tb.do_calculate_helix_Petsc4n(tnorm, t_psi, max_t, update_fun=update_fun, rtol=rtol, atol=atol, eval_dt=eval_dt, save_every=save_every) t1 = time() print('last norm: ', Table_theta[-1], ',', Table_phi[-1], ',', Table_psi[-1]) print('%s: run %d loops/times using %fs' % ('do_calculate_helix_Petsc4n', max_t, (t1 - t0))) print('%s_%s rt%.0e, at%.0e, dt%.0e %.1fs' % ('PETSC RK', update_fun, rtol, atol, eval_dt, (t1 - t0))) spf_tb.show_table_result(Table_t, Table_dt, Table_X, Table_P, Table_P2, Table_theta, Table_phi, Table_psi, Table_eta, save_every) ``` ```python ``` ```python ``` ```python def rotMatrix_DCM(x0, y0, z0, x, y, z): # Diebel, James. "Representing attitude: Euler angles, unit quaternions, and rotation vectors." # Matrix 58.15-16 (2006): 1-35. # eq. 17 # https://arxiv.org/pdf/1705.06997.pdf # appendix B # Graf, Basile. "Quaternions and dynamics." arXiv preprint arXiv:0811.2889 (2008). 
# # A rotation matrix may also be referred to as a direction # cosine matrix, because the elements of this matrix are the # cosines of the unsigned angles between the body-¯xed axes # and the world axes. Denoting the world axes by (x; y; z) # and the body-fixed axes by (x0; y0; z0), let \theta_{x';y} be, # for example, the unsigned angle between the x'-axis and the y-axis. # (x0, y0, z0)^T = dot(R, (x, y, z)^T ) R = np.array(((np.dot(x0, x), np.dot(x0, y), np.dot(x0, z)), (np.dot(y0, x), np.dot(y0, y), np.dot(y0, z)), (np.dot(z0, x), np.dot(z0, y), np.dot(z0, z)))) return R eglb = np.identity(3) tR0 = rot_vec2rot_mtx(np.random.sample(3)) e0 = np.dot(tR0, eglb) tR = rot_vec2rot_mtx(np.random.sample(3)) e = eglb.copy() R = rotMatrix_DCM(*e0, *e) print(e) print(e0) print(R) print(np.max(np.abs(np.dot(R, e) - e0))) ``` [[ 1. 0. 0.] [ 0. 1. 0.] [ 0. 0. 1.]] [[ 0.85685 -0.14847 0.49373] [ 0.27327 0.94284 -0.19073] [-0.43718 0.29835 0.84844]] [[ 0.85685 -0.14847 0.49373] [ 0.27327 0.94284 -0.19073] [-0.43718 0.29835 0.84844]] 0.0 ```python importlib.reload(jm) eglb = np.identity(3) e = eglb.copy() # tR0 = rot_vec2rot_mtx(np.random.sample(3)) tR0 = rot_vec2rot_mtx(np.ones(3)) e0 = np.dot(tR0, eglb) tR = rotMatrix_DCM(*e0, *e) q0 = jm.Quaternion() q0.set_wxyz(*quaternion.as_float_array(quaternion.from_rotation_matrix(tR))) # # make sure e0 and q0 is same frame # print(e0) # print(q0.get_R()) # print(np.linalg.norm(e0, axis=0)) # print(np.dot(e0[0], e0[1]), np.dot(e0[0], e0[2]), np.dot(e0[1], e0[2])) # print(np.linalg.norm(e0, axis=1)) # print(np.dot(e0.T[0], e0.T[1]), np.dot(e0.T[0], e0.T[2]), np.dot(e0.T[1], e0.T[2])) # rotate by omega and dt dt = 1 # omega = np.random.sample(3) omega = np.ones(3) rot_mtx = get_rot_matrix(norm=omega / np.linalg.norm(omega), theta=np.linalg.norm(omega) * dt) e1 = np.dot(rot_mtx, e0) dq = 0.5 * np.dot(omega, q0.get_G()) q1 = q0 + dq * dt q1.normalize() tqw, tqx, tqy, tqz = q0.q Qq = np.array(((tqw, -tqx, -tqy, -tqz), (tqx, tqw, tqz, -tqy), (tqy, -tqz, tqw, tqx), (tqz, tqy, -tqx, tqw))) re_omega = 2 * np.dot(Qq.T, dq)[1:4] # print(omega, re_omega) # check if sure e0 and q0 is same frame print(e1) print(q1.get_R()) # print(q1.q, np.linalg.norm(q1.q)) print(np.linalg.norm(e1 - q1.get_R()) / dt) ``` [[-0.29896 0.83247 0.46649] [ 0.46649 -0.29896 0.83247] [ 0.83247 0.46649 -0.29896]] [[-0.33323 0.67695 0.65628] [ 0.65628 -0.33323 0.67695] [ 0.67695 0.65628 -0.33323]] [-0.00895 0.57733 0.57733 0.57733] 1.0 0.4291089428
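The conversions above rely on `rot_vec2rot_mtx` and `get_rot_matrix` from the project's `src.support_class` module, whose implementations are not shown in this notebook. For reference, a rotation-vector-to-matrix conversion is commonly written with Rodrigues' formula; the standalone sketch below is an assumption about what such a helper might look like, not the project's actual code.

```python
import numpy as np

def rot_vec_to_matrix(rot_vec):
    """Rotation matrix from a rotation vector (axis * angle) via Rodrigues' formula."""
    theta = np.linalg.norm(rot_vec)
    if np.isclose(theta, 0.0):
        return np.eye(3)
    k = rot_vec / theta  # unit rotation axis
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])  # skew-symmetric cross-product matrix
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

R = rot_vec_to_matrix(np.ones(3))
# a proper rotation matrix is orthonormal with determinant +1
print(np.allclose(R @ R.T, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))
```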
eb18a82e8b89d3dfd6b187b2bd0a4df6f758323c
431,402
ipynb
Jupyter Notebook
head_Force/loop_table/phase_map_ecoli.ipynb
pcmagic/stokes_flow
464d512d3739eee77b33d1ebf2f27dae6cfa0423
[ "MIT" ]
1
2018-11-11T05:00:53.000Z
2018-11-11T05:00:53.000Z
head_Force/loop_table/phase_map_ecoli.ipynb
pcmagic/stokes_flow
464d512d3739eee77b33d1ebf2f27dae6cfa0423
[ "MIT" ]
null
null
null
head_Force/loop_table/phase_map_ecoli.ipynb
pcmagic/stokes_flow
464d512d3739eee77b33d1ebf2f27dae6cfa0423
[ "MIT" ]
null
null
null
715.426202
291,456
0.947478
true
4,395
Qwen/Qwen-72B
1. YES 2. YES
0.808067
0.651355
0.526339
__label__eng_Latn
0.219623
0.06119
# Inference with GPs The dataset needed for this worksheet [can be downloaded](https://northwestern.box.com/s/el0s1imhdxq5qwvzb4hgap90mxpjcfdq). Once you have downloaded [s9_gp_dat.tar.gz](https://northwestern.box.com/s/el0s1imhdxq5qwvzb4hgap90mxpjcfdq), and moved it to this folder, execute the following cell: ```python !tar -zxvf s9_gp_dat.tar.gz !mv *.txt data/ ``` Here are the functions we wrote in the previous tutorial to compute and draw from a GP: ```python import numpy as np from scipy.linalg import cho_factor def ExpSquaredKernel(t1, t2=None, A=1.0, l=1.0): """ Return the ``N x M`` exponential squared covariance matrix between time vectors `t1` and `t2`. The kernel has amplitude `A` and lengthscale `l`. """ if t2 is None: t2 = t1 T2, T1 = np.meshgrid(t2, t1) return A ** 2 * np.exp(-0.5 * (T1 - T2) ** 2 / l ** 2) def draw_from_gaussian(mu, S, ndraws=1, eps=1e-12): """ Generate samples from a multivariate gaussian specified by covariance ``S`` and mean ``mu``. (We derived these equations in Day 1, Notebook 01, Exercise 7.) """ npts = S.shape[0] L, _ = cho_factor(S + eps * np.eye(npts), lower=True) L = np.tril(L) u = np.random.randn(npts, ndraws) x = np.dot(L, u) + mu[:, None] return x.T def compute_gp(t_train, y_train, t_test, sigma=0, A=1.0, l=1.0): """ Compute the mean vector and covariance matrix of a GP at times `t_test` given training points `y_train(t_train)`. The training points have uncertainty `sigma` and the kernel is assumed to be an Exponential Squared Kernel with amplitude `A` and lengthscale `l`. """ # Compute the required matrices kernel = ExpSquaredKernel Stt = kernel(t_train, A=1.0, l=1.0) Stt += sigma ** 2 * np.eye(Stt.shape[0]) Spp = kernel(t_test, A=1.0, l=1.0) Spt = kernel(t_test, t_train, A=1.0, l=1.0) # Compute the mean and covariance of the GP mu = np.dot(Spt, np.linalg.solve(Stt, y_train)) S = Spp - np.dot(Spt, np.linalg.solve(Stt, Spt.T)) return mu, S ``` ## The Marginal Likelihood In the previous notebook, we learned how to construct and sample from a simple GP. This is useful for making predictions, i.e., interpolating or extrapolating based on the data you measured. But the true power of GPs comes from their application to *regression* and *inference*: given a dataset $D$ and a model $M(\theta)$, what are the values of the model parameters $\theta$ that are consistent with $D$? The parameters $\theta$ can be the hyperparameters of the GP (the amplitude and time scale), the parameters of some parametric model, or all of the above. A very common use of GPs is to model things you don't have an explicit physical model for, so quite often they are used to model "nuisances" in the dataset. But just because you don't care about these nuisances doesn't mean they don't affect your inference: in fact, unmodelled correlated noise can often lead to strong biases in the parameter values you infer. In this notebook, we'll learn how to compute likelihoods of Gaussian Processes so that we can *marginalize* over the nuisance parameters (given suitable priors) and obtain unbiased estimates for the physical parameters we care about. 
Given a set of measurements $y$ distributed according to $$ \begin{align} y \sim \mathcal{N}(\mathbf{\mu}(\theta), \mathbf{\Sigma}(\alpha)) \end{align} $$ where $\theta$ are the parameters of the mean model $\mu$ and $\alpha$ are the hyperparameters of the covariance model $\mathbf{\Sigma}$, the *marginal likelihood* of $y$ is $$ \begin{align} \ln P(y | \theta, \alpha) = -\frac{1}{2}(y-\mu)^\top \mathbf{\Sigma}^{-1} (y-\mu) - \frac{1}{2}\ln |\mathbf{\Sigma}| - \frac{N}{2} \ln 2\pi \end{align} $$ where $||$ denotes the determinant and $N$ is the number of measurements. The term *marginal* refers to the fact that this expression implicitly integrates over all possible values of the Gaussian Process; this is not the likelihood of the data given one particular draw from the GP, but given the ensemble of all possible draws from $\mathbf{\Sigma}$. <div style="background-color: #D6EAF8; border-left: 15px solid #2E86C1;"> <h1 style="line-height:2.5em; margin-left:1em;">Exercise 1</h1> </div> Define a function ``ln_gp_likelihood(t, y, sigma, A=1, l=1)`` that returns the log-likelihood defined above for a vector of measurements ``y`` at a set of times ``t`` with uncertainty ``sigma``. As before, ``A`` and ``l`` should get passed direcetly to the kernel function. Note that you're going to want to use [np.linalg.slogdet](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.linalg.slogdet.html) to compute the log-determinant of the covariance instead of ``np.log(np.linalg.det)``. (Why?) ```python def ln_gp_likelihood(t, y, sigma=0, A=1.0, l=1.0): """ """ # do stuff in here pass ``` <div style="background-color: #D6EAF8; border-left: 15px solid #2E86C1;"> <h1 style="line-height:2.5em; margin-left:1em;">Exercise 2</h1> </div> The following dataset was generated from a zero-mean Gaussian Process with a Squared Exponential Kernel of unity amplitude and unknown timescale. Compute the marginal log likelihood of the data over a range of reasonable values of $l$ and find the maximum. Plot the **likelihood** (not log likelihood) versus $l$; it should be pretty Gaussian. How well are you able to constrain the timescale of the GP? ```python import matplotlib.pyplot as plt t, y, sigma = np.loadtxt("data/sample_data.txt", unpack=True) plt.plot(t, y, "k.", alpha=0.5, ms=3) plt.xlabel("time") plt.ylabel("data"); ``` <div style="background-color: #D6EAF8; border-left: 15px solid #2E86C1;"> <h1 style="line-height:2.5em; margin-left:1em;">Exercise 3a</h1> </div> The timeseries below was generated by a linear function of time, $y(t)= mt + b$. In addition to observational uncertainty $\sigma$ (white noise), there is a fair bit of correlated (red) noise, which we will assume is well described by the squared exponential covariance with a certain (unknown) amplitude $A$ and timescale $l$. Your task is to estimate the values of $m$ and $b$, the slope and intercept of the line, respectively. In this part of the exercise, **assume there is no correlated noise.** Your model for the $n^\mathrm{th}$ datapoint is thus $$ \begin{align} y_n \sim \mathcal{N}(m t_n + b, \sigma_n\mathbf{I}) \end{align} $$ and the probability of the data given the model can be computed by calling your GP likelihood function: ```python def lnprob(params): m, b = params model = m * t + b return ln_gp_likelihood(t, y - model, sigma, A=0, l=1) ``` Note, importantly, that we are passing the **residual vector**, $y - (mt + b)$, to the GP, since above we coded up a zero-mean Gaussian process. 
We are therefore using the GP to model the **residuals** of the data after applying our physical model (the equation of the line). To estimate the values of $m$ and $b$ we could generate a fine grid in those two parameters and compute the likelihood at every point. But since we'll soon be fitting for four parameters (in the next part), we might as well upgrade our inference scheme and use the ``emcee`` package to do Markov Chain Monte Carlo (MCMC). If you haven't used ``emcee`` before, check out the first few tutorials on the [documentation page](https://emcee.readthedocs.io/en/latest/). The basic setup for the problem is this: ```python import emcee sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob) initial = [4.0, 15.0] p0 = initial + 1e-3 * np.random.randn(nwalkers, ndim) print("Running burn-in...") p0, _, _ = sampler.run_mcmc(p0, nburn) # nburn = 500 should do sampler.reset() print("Running production...") sampler.run_mcmc(p0, nsteps); # nsteps = 1000 should do ``` where ``nwalkers`` is the number of walkers (something like 20 or 30 is fine), ``ndim`` is the number of dimensions (2 in this case), and ``lnprob`` is the log-probability function for the data given the model. Finally, ``p0`` is a list of starting positions for each of the walkers. Above we picked some fiducial/eyeballed value for $m$ and $b$, then added a small random number to each to generate different initial positions for each walker. This will initialize all walkers in a ball centered on some point, and as the chain progresses they'll diffuse out and begin to explore the posterior. Once you have sampled the posterior, plot several draws from it on top of the data. You can access a random draw from the posterior by doing ```python m, b = sampler.flatchain[np.random.randint(len(sampler.flatchain))] ``` Also plot the **true** line that generated the dataset (given by the variables ``m_true`` and ``b_true`` below). Do they agree, or is there bias in your inferred values? Use the ``corner`` package to plot the joint posterior. How many standard deviations away from the truth are your inferred values? ```python t, y, sigma = np.loadtxt("data/sample_data_line.txt", unpack=True) m_true, b_true, A_true, l_true = np.loadtxt("data/sample_data_line_truths.txt", unpack=True) plt.errorbar(t, y, yerr=sigma, fmt="k.", label="observed") plt.plot(t, m_true * t + b_true, color="C0", label="truth") plt.legend(fontsize=12) plt.xlabel("time") plt.ylabel("data"); ``` <div style="background-color: #D6EAF8; border-left: 15px solid #2E86C1;"> <h1 style="line-height:2.5em; margin-left:1em;">Exercise 3b</h1> </div> This time, let's actually model the correlated noise. Re-define your ``lnprob`` function to accept four parameters (slope, intercept, amplitude, and timescale). If you didn't before, it's a good idea to enforce some priors to keep the parameters within reasonable (and physical) ranges. If any parameter falls outside this range, have ``lnprob`` return negative infinity (i.e., zero probability). You'll probably want to run your chains for a bit longer this time, too. As before, plot some posterior samples for the line, as well as the corner plot. How did you do this time? Is there any bias in your inferred values? How does the variance compare to the previous estimate? 
<div style="background-color: #D6EAF8; border-left: 15px solid #2E86C1;"> <h1 style="line-height:2.5em; margin-left:1em;">Exercise 3c</h1> </div> If you didn't do this already, re-plot the posterior samples on top of the data, but this time draw them from the GP, *conditioned on the data*. How good is the fit?
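For reference, here is one possible way the `ln_gp_likelihood` function from Exercise 1 could be filled in. This is only a sketch, not the official solution: it reuses the `ExpSquaredKernel` defined at the top of this notebook and assumes a zero-mean GP, so `y` should already have any mean model subtracted off.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def ln_gp_likelihood(t, y, sigma=0, A=1.0, l=1.0):
    """
    Log marginal likelihood of `y` measured at times `t` under a zero-mean GP
    with an Exponential Squared kernel (amplitude `A`, lengthscale `l`) plus
    white noise `sigma`.
    """
    N = len(t)
    K = ExpSquaredKernel(t, A=A, l=l) + sigma ** 2 * np.eye(N)
    # slogdet avoids the under/overflow that np.log(np.linalg.det(K)) suffers from
    _, logdet = np.linalg.slogdet(K)
    alpha = cho_solve(cho_factor(K), y)  # K^{-1} y via a Cholesky factorization
    return -0.5 * np.dot(y, alpha) - 0.5 * logdet - 0.5 * N * np.log(2 * np.pi)
```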
6e25ef17f9eabf44b62b49f052d0a0573b0621e7
200,753
ipynb
Jupyter Notebook
Session9/Day1/gps/02-Inference.ipynb
hsnee/LSSTC-DSFP-Sessions
5d90992179c80efbd63e9ecc95fe0fef7a0d83c1
[ "MIT" ]
1
2020-08-10T06:07:17.000Z
2020-08-10T06:07:17.000Z
Session9/Day1/gps/02-Inference.ipynb
hsnee/LSSTC-DSFP-Sessions
5d90992179c80efbd63e9ecc95fe0fef7a0d83c1
[ "MIT" ]
2
2021-09-28T22:44:49.000Z
2022-03-08T20:36:34.000Z
Session9/Day1/gps/02-Inference.ipynb
hsnee/LSSTC-DSFP-Sessions
5d90992179c80efbd63e9ecc95fe0fef7a0d83c1
[ "MIT" ]
1
2020-06-19T10:18:44.000Z
2020-06-19T10:18:44.000Z
608.342424
114,184
0.938536
true
2,911
Qwen/Qwen-72B
1. YES 2. YES
0.875787
0.885631
0.775625
__label__eng_Latn
0.990498
0.640368
# 20 - Plug-and-Play Estimators

So far, we've seen how to debias our data in the case where the treatment is not randomly assigned, which results in confounding bias. That helps us with the identification problem in causal inference. In other words, once the units are exchangeable, or \\( Y(0), Y(1) \perp T\\), it becomes possible to learn the treatment effect. But we are far from done.

Identification means that we can find the average treatment effect. In other words, we know how effective a treatment is on average. Of course this is useful, as it helps us to decide if we should roll out a treatment or not. But we want more than that. We want to know if there are subgroups of units that respond better or worse to the treatment. That should allow for a much better policy, one where we only treat the ones that will benefit from it.

## Problem Setup

Let's recall our setup of interest. Given the potential outcomes, we can define the individual treatment effect as the difference between the potential outcomes

$
\tau_i = Y_i(1) − Y_i(0),
$

or, in the continuous treatment case, \\(\tau_i = \partial Y(t)\\), where \\(t\\) is the treatment variable. Of course, we can never observe the individual treatment effect, because we only get to see one of the potential outcomes

$
Y^{obs}_i(t)=
\begin{cases}
Y_i(1), & \text{if } t=1\\
Y_i(0), & \text{if } t=0
\end{cases}
$

We can define the average treatment effect (ATE) as

$
\tau = E[Y_i(1) − Y_i(0)] = E[\tau_i]
$

and the conditional average treatment effect (CATE) as

$
\tau(x) = E[Y_i(1) − Y_i(0)|X] = E[\tau_i|X]
$

In Part I of this book, we've focused mostly on the ATE. Now, we are interested in the CATE. The CATE is useful for personalising a decision making process. For example, if you have a drug as the treatment \\(t\\), you want to know which types of patients are more responsive to the drug (higher CATE) and if there are some types of patient with a negative response (CATE < 0).

We've seen how to estimate the CATE using a linear regression with interactions between the treatment and the features

$
y_i = \beta_0 + \beta_1 t_i + \beta_2 X_i + \beta_3 t_i X_i + e_i.
$

If we estimate this model, we can get estimates for \\(\tau(x)\\)

$
\hat{\tau}(x) = \hat{\beta}_1 + \hat{\beta}_3 X_i
$

Still, linear models have some drawbacks, the main one being the linearity assumption on \\(X\\). Notice that you don't even care about \\(\beta_2\\) in this model. But if the features \\(X\\) don't have a linear relationship with the outcome, your estimates of the causal parameters \\(\beta_1\\) and \\(\beta_3\\) will be off.

It would be great if we could replace the linear model with a more flexible machine learning model. We could even plug the treatment in as a feature to an ML model, like boosted trees or a neural network,

$
y_i = M(X_i, T_i) + e_i
$

but from there, it is not clear how we can get treatment effect estimates, since this model will output \\(\hat{y}\\) predictions, not \\(\hat{\tau}(x)\\) predictions. Ideally, we would use a machine learning regression model that, instead of minimising the outcome MSE

$
E[(Y_i - \hat{Y}_i)^2]
$

would minimise the treatment effect MSE

$
E[(\tau(x)_i - \hat{\tau}(x)_i)^2] = E[(Y_i(1) - Y_i(0) - \hat{\tau}(x)_i)^2].
$

However, this criterion is what we call infeasible. Again, the problem here is that \\(\tau(x)_i\\) is not observable, so we can't optimize it directly. This puts us in a tough spot... Let's try to simplify it a bit and maybe we can think of something.
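Before moving on, here is a minimal sketch of the interacted-regression baseline described above, run on synthetic data (not one of the book's datasets). The `statsmodels` formula API used here is just one convenient way to fit it; the point is only to show how \\(\hat{\tau}(x) = \hat{\beta}_1 + \hat{\beta}_3 X_i\\) is read off the fitted coefficients.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic illustration only: the true CATE here is 2 + 1.5 * x by construction
np.random.seed(0)
n = 10_000
df = pd.DataFrame({"x": np.random.uniform(-1, 1, n),
                   "t": np.random.binomial(1, 0.5, n)})
df["y"] = 1 + 2 * df["t"] + 0.5 * df["x"] + 1.5 * df["t"] * df["x"] + np.random.normal(0, 1, n)

m = smf.ols("y ~ t + x + t:x", data=df).fit()
cate_hat = m.params["t"] + m.params["t:x"] * df["x"]  # tau_hat(x) = beta_1 + beta_3 * x
print(m.params[["t", "t:x"]])  # should recover roughly 2 and 1.5
```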
## Target Transformation

Suppose your treatment is binary. Let's say you are an investment firm testing the effectiveness of sending a financial education email. You hope the email will make people invest more. Also, let's say you did a randomized study where 50% of the customers got the email and the other 50% didn't.

Here is a crazy idea: let's transform the outcome variable by multiplying it with the treatment.

$
Y^*_i = 2 Y_i * T_i - 2 Y_i*(1-T_i)
$

So, if the unit was treated, you would take the outcome and multiply it by 2. If it wasn't treated, you would take the outcome and multiply it by -2. For example, if one of your customers invested BRL 2000,00 and got the email, the transformed target would be 4000. However, if he or she didn't get the email, it would be -4000.

This seems very odd, because you are saying that the effect of the email can be a negative number, but bear with me. If we do a little bit of math, we can see that, on average or in expectation, this transformed target will be the treatment effect. This is nothing short of amazing. What I'm saying is that by applying this somewhat wacky transformation, I get to estimate something that I can't even observe.

To understand that, we need a bit of math. Because of random assignment, we have that \\(T \perp Y(0), Y(1)\\), which is our old unconfoundedness friend. That implies that \\(E[T\,Y(t)]=E[T]E[Y(t)]\\), which is the definition of independence. Also, we know that

$
Y_i T_i = Y(1)_i T_i \text{ and } Y_i (1-T_i) = Y(0)_i (1-T_i)
$

because the treatment is what materializes one or the other potential outcome. With that in mind, let's take the expected value of \\(Y^*_i\\) and see what we end up with.

$
\begin{align}
E[Y^*_i|X_i=x] &= E[2 Y(1)_i * T_i - 2 Y(0)_i*(1-T_i)|X_i=x] \\
&= 2E[Y(1)_i * T_i | X_i=x] - 2E[Y(0)_i*(1-T_i)|X_i=x]\\
&= 2E[Y(1)_i| X_i=x] * E[ T_i | X_i=x] - 2E[Y(0)_i| X_i=x]*E[(1-T_i)|X_i=x] \\
&= 2E[Y(1)_i| X_i=x] * 0.5 - 2E[Y(0)_i| X_i=x]*0.5 \\
&= E[Y(1)_i| X_i=x] - E[Y(0)_i| X_i=x] \\
&= \tau(x)_i
\end{align}
$

So, this apparently crazy idea ended up being an unbiased estimate of the individual treatment effect \\(\tau(x)_i\\). Now, we can replace our infeasible optimization criterion with

$
E[(Y^*_i - \hat{\tau}(x)_i)^2]
$

In simpler terms, all we have to do is use any regression machine learning model to predict \\(Y^*_i\\), and this model will output treatment effect predictions.

Now that we've solved the simple case, what about the more complicated case, where treatment is not 50% 50%, or not even randomly assigned? As it turns out, the answer is a bit more complicated, but not much. First, if we don't have random assignment, we need at least conditional independence, \\(T \perp Y(0), Y(1) | X\\). That is, controlling for \\(X\\), \\(T\\) is as good as random. With that, we can generalize the transformed target to

$
Y^*_i = Y_i * \dfrac{T_i - e(X_i)}{e(X_i)(1-e(X_i))}
$

where \\(e(X_i)\\) is the propensity score. So, if the treatment is not 50% 50%, but randomized with a different probability \\(p\\), all you have to do is replace the propensity score in the above formula with \\(p\\). If the treatment is not random, then you have to use the propensity score, either stored or estimated.

If you take the expectation of this, you will see that it also matches the treatment effect. The proof is left as an exercise to the reader. Just kidding, here it is. It's a bit cumbersome, so feel free to skip it.
$ \begin{align} E[Y^*_i|X_i=x] &= E\big[Y_i * \dfrac{T_i - e(X_i)}{e(X_i)(1-e(X_i))}|X_i=x\big] \\ &= E\big[Y_i T_i * \dfrac{T_i - e(X_i)}{e(X_i)(1-e(X_i))} + Y_i (1-T_i) * \dfrac{T_i - e(X_i)}{e(X_i)(1-e(X_i))}|X_i=x\big]\\ &= E\big[Y(1)_i * \dfrac{T_i(1 - e(X_i))}{e(X_i)(1-e(X_i))} | X_i=x\big] - E\big[Y(0)_i * \dfrac{(1-T_i)e(X_i)}{e(X_i)(1-e(X_i))}|X_i=x\big]\\ &= \dfrac{1}{e(X_i)} E[Y(1)_i * T_i|X_i=x] - \dfrac{1}{1-e(X_i)} E[Y(0)_i * (1-T_i)| X_i=x]\\ &= \dfrac{1}{e(X_i)} E[Y(1)_i|X_i=x] * E[T_i|X_i=x] - \dfrac{1}{1-e(X_i)} E[Y(0)_i|X_i=x] * E[(1-T_i)| X_i=x]\\ &= E[Y(1)_i|X_i=x] - E[Y(0)_i|X_i=x]\\ &= \tau(x)_i \end{align} $ As always, I think this will become much more concrete with an example. Again, consider the investment emails we've sent trying to make people invest more. The outcome variable the binary (invested vs didn't invest) `converted`. ```python import pandas as pd import numpy as np from matplotlib import pyplot as plt import seaborn as sns from nb21 import cumulative_gain, elast ``` ```python email = pd.read_csv("./data/invest_email_rnd.csv") email.head() ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>age</th> <th>income</th> <th>insurance</th> <th>invested</th> <th>em1</th> <th>em2</th> <th>em3</th> <th>converted</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>44.1</td> <td>5483.80</td> <td>6155.29</td> <td>14294.81</td> <td>0</td> <td>1</td> <td>1</td> <td>0</td> </tr> <tr> <th>1</th> <td>39.8</td> <td>2737.92</td> <td>50069.40</td> <td>7468.15</td> <td>1</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <th>2</th> <td>49.0</td> <td>2712.51</td> <td>5707.08</td> <td>5095.65</td> <td>1</td> <td>0</td> <td>1</td> <td>1</td> </tr> <tr> <th>3</th> <td>39.7</td> <td>2326.37</td> <td>15657.97</td> <td>6345.20</td> <td>1</td> <td>1</td> <td>1</td> <td>0</td> </tr> <tr> <th>4</th> <td>35.3</td> <td>2787.26</td> <td>27074.44</td> <td>14114.86</td> <td>1</td> <td>1</td> <td>1</td> <td>0</td> </tr> </tbody> </table> </div> Our goal here is one of personalization. Let's focus on email-1. We wish to send it only to those customers who will respond better to it. In other words, we wish to estimate the conditional average treatment effect of email-1 $ E[Converted(1)_i - Converted(0)_i|X_i=x] = \tau(x)_i $ so that we can target those customers who will have the best response to the email (higher CATE) But first, let's break our dataset into a training and a validation set. We will estimate \\(\tau(x)_i\\) on one set and evaluate the estimates on the other. ```python from sklearn.model_selection import train_test_split np.random.seed(123) train, test = train_test_split(email, test_size=0.4) print(train.shape, test.shape) ``` (9000, 8) (6000, 8) Now, we will apply the target transformation we've just learned. Since the emails were randomly assigned (although not on a 50% 50% basis), we don't need to worry about the propensity score. Rather, it is constant and equal to the treatment probability. ```python y = "converted" T = "em1" X = ["age", "income", "insurance", "invested"] ps = train[T].mean() y_star_train = train[y] * (train[T] - ps)/(ps*(1-ps)) ``` With the transformed target, we can pick any ML regression algorithm to predict it. Lets use boosted trees here. 
```python from lightgbm import LGBMRegressor np.random.seed(123) cate_learner = LGBMRegressor(max_depth=3, min_child_samples=300, num_leaves=5) cate_learner.fit(train[X], y_star_train); ``` This model can now estimate \\(\tau(x)_i\\). In other words, what it outputs is \\(\hat{\tau}(x)_i\\). For example, if we make predictions on the test set, we will see that some units have higher CATE than others. For example, customer 6958 has a CATE of 0.1, meaning the probability he or she will buy our investment product is predicted to increase by 0.1 if we send the email to this customer. In contrast, for customer 3903, the probability of buying the product is predicted to increase just 0.04. ```python test_pred = test.assign(cate=cate_learner.predict(test[X])) test_pred.head() ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>age</th> <th>income</th> <th>insurance</th> <th>invested</th> <th>em1</th> <th>em2</th> <th>em3</th> <th>converted</th> <th>cate</th> </tr> </thead> <tbody> <tr> <th>6958</th> <td>40.9</td> <td>4486.14</td> <td>37320.33</td> <td>12559.25</td> <td>0</td> <td>0</td> <td>1</td> <td>0</td> <td>0.105665</td> </tr> <tr> <th>7534</th> <td>42.6</td> <td>6386.19</td> <td>13270.47</td> <td>29114.42</td> <td>0</td> <td>0</td> <td>1</td> <td>0</td> <td>0.121922</td> </tr> <tr> <th>2975</th> <td>47.6</td> <td>1900.26</td> <td>25588.72</td> <td>2420.39</td> <td>0</td> <td>0</td> <td>1</td> <td>0</td> <td>0.034161</td> </tr> <tr> <th>3903</th> <td>41.0</td> <td>5802.19</td> <td>57087.37</td> <td>20182.20</td> <td>1</td> <td>0</td> <td>1</td> <td>1</td> <td>0.046805</td> </tr> <tr> <th>8437</th> <td>49.1</td> <td>2202.96</td> <td>5050.81</td> <td>9245.88</td> <td>1</td> <td>0</td> <td>1</td> <td>0</td> <td>-0.009099</td> </tr> </tbody> </table> </div> To evaluate how good this model is, we can show the cumulative gain curves, for both training and testing sets. ```python gain_curve_test = cumulative_gain(test_pred, "cate", y="converted", t="em1") gain_curve_train = cumulative_gain(train.assign(cate=cate_learner.predict(train[X])), "cate", y="converted", t="em1") plt.plot(gain_curve_test, color="C0", label="Test") plt.plot(gain_curve_train, color="C1", label="Train") plt.plot([0, 100], [0, elast(test, "converted", "em1")], linestyle="--", color="black", label="Baseline") plt.legend(); ``` As we can see, this plug and play estimator is better than random on the test set. Still, it looks like it is overfitting a lot, since the performance on the training set is much better than that of the test set. That is actually one of the biggest downsides of this target transformation technique. With this target transformation, you do get a lot of simplicity, since you can just transform the target and use any ML estimator to predict heterogeneous treatment effects. The cost of it is that you get a lot of variance. That's because the transformed target is a very noisy estimate of the individual treatment effect and that variance gets transferred to your estimation. This is a huge problem if you don't have a lot of data, but it should be less of a problem in big data applications, where you are dealing with more than 1MM samples. ## The Continuous Treatment Case Another obvious downside of the target transformation method is that it only works for discrete or binary treatments. 
This is something you see a lot in the causal inference literature. Most of the research is done for the binary treatment case, but you don't find a lot about continuous treatments. That bothered me a lot, because in the industry, continuous treatments are everywhere, mostly in the form of prices you need to optimize. So, even though I couldn't find anything regarding target transformations for continuous treatment, I came up with something that works in practice. Just keep in mind that I don't have a super solid econometric research around it. To motivate it, let's go back to the ice cream sales example. There, we were tasked with the problem of estimating demand elasticity to price so that we can better set the ice cream prices to optimize our revenues. Recall that the event sample in the dataset is a day and we wish to know when people are less sensitive to price increases. Also, recall that prices are randomly assigned in this dataset, which means we don't need to worry about confounding bias. ```python prices_rnd = pd.read_csv("./data/ice_cream_sales_rnd.csv") prices_rnd.head() ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>temp</th> <th>weekday</th> <th>cost</th> <th>price</th> <th>sales</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>25.8</td> <td>1</td> <td>0.3</td> <td>7</td> <td>230</td> </tr> <tr> <th>1</th> <td>22.7</td> <td>3</td> <td>0.5</td> <td>4</td> <td>190</td> </tr> <tr> <th>2</th> <td>33.7</td> <td>7</td> <td>1.0</td> <td>5</td> <td>237</td> </tr> <tr> <th>3</th> <td>23.0</td> <td>4</td> <td>0.5</td> <td>5</td> <td>193</td> </tr> <tr> <th>4</th> <td>24.4</td> <td>1</td> <td>1.0</td> <td>3</td> <td>252</td> </tr> </tbody> </table> </div> As before, let's start by separating our data into training and a testing set. ```python np.random.seed(123) train, test = train_test_split(prices_rnd, test_size=0.3) train.shape, test.shape ``` ((3500, 5), (1500, 5)) Now is where we need a little bit of creativity. For the discrete case, the conditional average treatment effect is given by how much the outcome changes when we go from untreated to treated, conditioned on unit characteristics \\(X\\). $ \tau(x) = E[Y_i(1) − Y_i(0)|X] = E[\tau_i|X] $ In plain english, this is estimating the impact of the treatment on different unit profiles, where profiles are defined using the features \\(X\\). For the continuous case, we don't have that on-off switch. Units are not treated or untreated. Rather, they are all treated, but with different intensities. Therefore, we can't talk about the effect of giving the treatment. Rather, we need to speak in terms of increasing the treatment. In other words, we wish to know how the outcome would change if we increase the treatment by some amount. This is like estimating the partial derivative of the outcome function \\(Y\\) on the treatment \\(t\\). And because we wish to know that for each group (the CATE, not the ATE), we condition on the features \\(X\\) $ \tau(x) = E[\partial Y_i(t)|X] = E[\tau_i|X] $ How can we estimate that? First, let's consider the easy case, where the outcome is linear on the treatment. Suppose you have two types of days: hot days (yellow) and cold days (blue). On cold days people are more sensitive to price increases. Also, as price increases, demand falls linearly. 
In this case, the CATE will be the slope of each demand line. These slopes will tell us how much demand will fall if we increase price by any amount. If this relationship is indeed linear, we can estimate those elasticities with the coefficient of a simple linear regression estimate on hot days and on cold days separately. $$ \hat{\tau(x)} = Cov(Y_i, T_i)/Var(T_i) = \dfrac{\sum(Y_i- \bar{Y})(T_i - \bar{T})}{\sum (T_i - \bar{T})^2} $$ We can be inspired by this estimator and think about what it would be like for an individual unit. In other words, what if we have that same thing up there, defined for each day. In my head, it would be something like this: $ Y^*_i = (Y_i- \bar{Y})\dfrac{(T_i - \bar{T})}{\sigma^2_T} $ In plain English, we would transform the original target by subtracting the mean from it, then we would multiply it by the treatment, from which we've also subtracted the mean from. Finally, we would divide it by the treatment variance. Alas, we have a target transformation for the continuous case. The question now is: does it work? As a matter of fact it does and we can go over a similar proof for why it works, just like we did in the binary case. First, lets call $ V_i = \dfrac{(T_i - \bar{T})}{\sigma^2_T} $ notice that \\(E[V_i|X_i=x]=0\\) because under random assignment \\(E[T_i|X_i=x]=\bar{T}\\). In other words, for every region of X, \\(E[T_i]=\bar{T}\\). Also \\(E[T_i V_i | X_i=x]=1\\) because \\(E[T_i(T_i - \bar{T})|X_i=x] = E[(T_i - \bar{T})^2|X_i=x]\\), which is the treatment variance. Finally, under conditional independence (which we get for free in the random treatment assignment case), \\(E[T_i e_i | X_i=x] = E[T_i | X_i=x] E[e_i | X_i=x]\\). To show that this target transformation works, we need to remember that we are estimating the parameter for a local linear model $ Y_i = \alpha + \beta T_i + e_i | X_i=x $ In our example, those would be the linear models for the hot and cold days. Here, we are interested in the \\(\beta\\) parameter, which is our conditional elasticity or CATE. With all that, we can prove that $ \begin{align} E[Y^*_i|X_i=X] &= E[(Y_i-\bar{Y})V_i | X_i=x] \\ &= E[(\alpha + \beta T_i + e_i - \bar{Y})V_i | X_i=x] \\ &= \alpha E[V_i | X_i=x] + \beta E[T_i V_i | X_i=x] + E[e_i V_i | X_i=x] \\ &= \beta + E[e_i V_i | X_i=x] \\ &= \beta = \tau(x) \end{align} $ Bare in mind that this only works when the treatment is randomized. For non randomized treatment, we have to replace \\(\bar{T}\\) by \\(M(X_i)\\), where \\(M\\) is a model that estimates \\(E[T_i|X_i=x]\\). $ Y^*_i = (Y_i- \bar{Y})\dfrac{(T_i - M(T_i))}{(T_i - M(T_i))^2} $ This will make sure that the term \\(\alpha E[V_i | X_i=x]\\) in the third line vanishes to zero and that the term \\(E[T_i V_i | X_i=x]\\) goes to 1. Notice that you don't actually need \\(E[T_i V_i | X_i=x]\\) to go to 1 if you just want to order units in terms of treatment effect. In other words, if you just want to know in which days demand is more sensitive to price increases but you don't need to know by how much, it doesn't matter if the \\(\beta\\) estimates are scaled up or down. If that is the case, you can omit the denominator. $ Y^*_i = (Y_i- \bar{Y})(T_i - M(T_i)) $ If all that math seems tiresome, don't worry. The code is actually very simple. Once again, we transform our training target with the formulas seen above. Here, we have random treatment assignments, so we don't need to build a model that predicts prices. I'm also omitting the denominator, because here I only care about ordering the treatment effect. 
```python
y_star_cont = ((train["price"] - train["price"].mean())
               * (train["sales"] - train["sales"].mean()))
```

Then, just like before, we fit a regression ML model to predict that target.

```python
cate_learner = LGBMRegressor(max_depth=3, min_child_samples=300, num_leaves=5)
np.random.seed(123)
cate_learner.fit(train[["temp", "weekday", "cost"]], y_star_cont)

cate_test_transf_y = cate_learner.predict(test[["temp", "weekday", "cost"]])

test_pred = test.assign(cate=cate_test_transf_y)
test_pred.sample(5)
```

<div>
<style scoped>
    .dataframe tbody tr th:only-of-type {
        vertical-align: middle;
    }

    .dataframe tbody tr th {
        vertical-align: top;
    }

    .dataframe thead th {
        text-align: right;
    }
</style>
<table border="1" class="dataframe">
  <thead>
    <tr style="text-align: right;">
      <th></th>
      <th>temp</th>
      <th>weekday</th>
      <th>cost</th>
      <th>price</th>
      <th>sales</th>
      <th>cate</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <th>2815</th>
      <td>15.7</td>
      <td>4</td>
      <td>1.5</td>
      <td>3</td>
      <td>187</td>
      <td>-1395.956278</td>
    </tr>
    <tr>
      <th>257</th>
      <td>29.4</td>
      <td>3</td>
      <td>1.0</td>
      <td>3</td>
      <td>209</td>
      <td>-1607.400415</td>
    </tr>
    <tr>
      <th>2585</th>
      <td>24.6</td>
      <td>6</td>
      <td>1.0</td>
      <td>10</td>
      <td>197</td>
      <td>-1497.197402</td>
    </tr>
    <tr>
      <th>3260</th>
      <td>20.2</td>
      <td>1</td>
      <td>0.5</td>
      <td>4</td>
      <td>246</td>
      <td>-1629.798111</td>
    </tr>
    <tr>
      <th>1999</th>
      <td>10.0</td>
      <td>4</td>
      <td>0.5</td>
      <td>10</td>
      <td>139</td>
      <td>-1333.690544</td>
    </tr>
  </tbody>
</table>
</div>

This time, the CATE's interpretation is not intuitive. Since we've removed the denominator from the target transformation, the CATE we are seeing here is scaled by the treatment variance \\(Var(T)\\). However, this prediction should still order the treatment effect pretty well. To see that, we can use the cumulative gain curve, just like we did before.

```python
gain_curve_test = cumulative_gain(test.assign(cate=cate_test_transf_y),
                                  "cate", y="sales", t="price")

gain_curve_train = cumulative_gain(train.assign(cate=cate_learner.predict(train[["temp", "weekday", "cost"]])),
                                   "cate", y="sales", t="price")

plt.plot(gain_curve_test, label="Test")
plt.plot(gain_curve_train, label="Train")
plt.plot([0, 100], [0, elast(test, "sales", "price")], linestyle="--", color="black", label="Baseline")
plt.legend();
```

For this data, it looks like the model with the transformed target is way better than random. Not only that, train and test results are pretty close, so variance is not an issue here. But this is just a characteristic of this dataset. If you recall, this was not the case when we explored the binary treatment case. There, the model didn't perform so well.

### Non Linear Treatment Effects

Having talked about the continuous case, there is still an elephant in the room we need to address. We've assumed linearity of the treatment effect. However, that is very rarely a reasonable assumption. Usually, treatment effects saturate in one form or another. In our example, it's reasonable to think that demand will go down faster at the first units of price increase, but then it will fall more slowly.

The problem here is that **elasticity or treatment effect changes with the treatment itself**. In our example, the treatment effect is more intense at the beginning of the curve and smaller as prices get higher. Again, suppose you have two types of days: hot days (yellow) and cold days (blue) and we want to distinguish between the two with a causal model.
The thing is that causal models should predict elasticity, but in the nonlinear case, the elasticity for hot and cold days could be the same if we look at different price points in the curve (right image). There is no easy way out of this problem, and I confess I'm still investigating what works best. For now, what I do is try to think about the functional form of the treatment effect and somehow linearize it. For example, demand usually has the following functional form, where a higher \\(\alpha\\) means that demand falls faster with each price increase $ D_i = \dfrac{1}{P_i^{\alpha}} $ So, if I apply the log transformation to both the demand \\(Y\\) and prices \\(T\\), I should get something that is linear. $ \begin{align} log(D_i) &= log\bigg(\dfrac{1}{P_i^{\alpha}}\bigg) \\ &= log(1) - log(P_i^{\alpha}) \\ &= - \alpha \, log(P_i) \\ \end{align} $ Linearization is not so easy to do, as it involves some thinking. But you can also try stuff out and see what works best. Often, things like logs and square roots help. ## Key Ideas We are now moving in the direction of estimating conditional average treatment effects using machine learning models. The biggest challenge when doing so is adapting a predictive model to one that estimates causal effects. Another way of thinking about it is that predictive models focus on estimating the outcome Y as a function of features X and possibly treatment T, \\(Y = M(X, T) \\), while causal models need to estimate the partial derivative of this output function with respect to the treatment, \\( \partial Y / \partial T = \partial M(X, T) / \partial T \\). This is far from trivial, because while we do observe the outcome Y, we can't observe \\(\partial Y / \partial T\\), at least not on an individual level. As a consequence, we need to be creative when designing an objective function for our models. Here, we saw a very simple technique of target transformation. The idea is to combine the original target Y with the treatment T to form a transformed target which is, in expectation, equal to the CATE. With that new target, we can plug in any predictive ML model to estimate it, and the model's predictions will be CATE estimates. As a side note, besides target transformation, this method also goes by the name of **F-Learner**. With all that simplicity, there is also a price to pay. The transformed target is a very noisy estimate of the individual treatment effect, and that noise will be transferred to the model estimates in the form of variance. This makes target transformation better suited for big data applications, where variance is less of a problem due to sheer sample size. Another downside of the target transformation method is that it is only defined for binary or categorical treatments. We did our best to come up with a continuous version of the approach and even ended up with something that seemed to work, but up until now, there is no solid theoretical framework to back it up. Finally, we ended with a discussion of non-linear treatment effects and the challenges that come with them. Namely, when the treatment effect changes with the treatment itself, we might mistakenly think two units have the same treatment response curve because they show the same responsiveness to the treatment, when in fact they are just receiving different treatment amounts. ## References The things I've written here are mostly stuff from my head. I've learned them through experience. This means that they have **not** passed the academic scrutiny that good science often goes through.
Instead, notice how I'm talking about things that work in practice, but I don't spend too much time explaining why that is the case. It's a sort of science from the streets, if you will. However, I am putting this up for public scrutiny, so, by all means, if you find something preposterous, open an issue and I'll address it to the best of my ability. Most of this chapter draws from Susan Athey and Guido W. Imbens' paper, *Machine Learning Methods for Estimating Heterogeneous Causal Effects*. Some material about target transformation can also be found in Pierre Gutierrez and Jean-Yves Gérardy's paper, *Causal Inference and Uplift Modeling: A review of the literature*. Note that these papers only cover the binary treatment case. Another review of causal models for CATE estimation that references the F-Learner is *Meta-learners for Estimating Heterogeneous Treatment Effects using Machine Learning*, by Künzel et al., 2019. ## Contribute Causal Inference for the Brave and True is open-source material on causal inference, the statistics of science. It uses only free software and is written in Python. Its goal is to be accessible monetarily and intellectually. If you found this book valuable and you want to support it, please go to [Patreon](https://www.patreon.com/causal_inference_for_the_brave_and_true). If you are not ready to contribute financially, you can also help by fixing typos, suggesting edits or giving feedback on passages you didn't understand. Just go to the book's repository and [open an issue](https://github.com/matheusfacure/python-causality-handbook/issues). Finally, if you liked this content, please share it with others who might find it useful and give it a [star on GitHub](https://github.com/matheusfacure/python-causality-handbook/stargazers). ```python ```
2484c54d6da4e96ac0cb519496a061819aa3eaac
94,646
ipynb
Jupyter Notebook
causal-inference-for-the-brave-and-true/20-Plug-and-Play-Estimators.ipynb
gabrieltempass/python-causality-handbook
3f53bf0366241d853087884db9a15d10157107b7
[ "MIT" ]
1,126
2020-04-15T21:16:26.000Z
2022-03-31T05:39:45.000Z
causal-inference-for-the-brave-and-true/20-Plug-and-Play-Estimators.ipynb
gabrieltempass/python-causality-handbook
3f53bf0366241d853087884db9a15d10157107b7
[ "MIT" ]
137
2020-10-07T11:56:48.000Z
2022-03-30T11:54:38.000Z
causal-inference-for-the-brave-and-true/20-Plug-and-Play-Estimators.ipynb
gabrieltempass/python-causality-handbook
3f53bf0366241d853087884db9a15d10157107b7
[ "MIT" ]
187
2020-07-16T05:53:03.000Z
2022-03-28T14:52:43.000Z
92.608611
25,080
0.762705
true
9,167
Qwen/Qwen-72B
1. YES 2. YES
0.867036
0.795658
0.689864
__label__eng_Latn
0.995606
0.441117
--- Eduard Larrañaga ([email protected]) --- # Embedding Diagram for the Schwarzschild Spacetime ### Summary This notebook presents the embedding diagram for the Schwarzschild spacetime. --- We start from flat 3-dimensional space in cylindrical coordinates, \begin{equation} ds^2 = dz^2 + dr^2 + r^2 d\phi^2 \end{equation} and consider a (2-dimensional) surface given by the equation \begin{equation} z= z(r). \end{equation} We then have \begin{equation} dz= \frac{dz}{dr}dr \end{equation} so that \begin{equation} ds^2 = \left[ 1+ \left(\frac{dz}{dr}\right)^2\right]dr^2 + r^2 d\phi^2. \end{equation} Comparing this relation with the Schwarzschild line element at a fixed instant, $dt=0$, and for a fixed angle, $d\theta =0$, \begin{equation} ds^2 = \left( 1- \frac{2M}{r}\right)^{-1}dr^2 + r^2 d\phi^2, \end{equation} we can make the identification \begin{equation} 1+ \left(\frac{dz}{dr}\right)^2 = \left( 1- \frac{2M}{r}\right)^{-1}. \end{equation} From this relation we obtain \begin{align} \left(\frac{dz}{dr}\right)^2 =& \left( 1- \frac{2M}{r}\right)^{-1} - 1 \\ \left(\frac{dz}{dr}\right)^2 =& \frac{r}{r-2M} - 1\\ \left(\frac{dz}{dr}\right)^2 =& \frac{2M}{r-2M} \\ \frac{dz}{dr} =& \sqrt{\frac{2M}{r-2M}}\\ dz =& \sqrt{2M} \frac{dr}{\sqrt{r-2M}}. \end{align} Integrating this expression gives \begin{equation} z(r) = \sqrt{8M}\sqrt{r-2M} . \end{equation} --- ## Plot with `matplotlib.pyplot` ```python import matplotlib.pyplot as plt from matplotlib import cm import numpy as np import warnings warnings.simplefilter('ignore') # Mass of the central object M=5. # Surface definition x = np.arange(-100, 100, 0.25) y = np.arange(-100, 100, 0.25) x, y = np.meshgrid(x, y) r = np.sqrt(x**2 + y**2) z = np.sqrt(8*M)*np.sqrt(r - 2*M) # 3D-Plot environment fig, ax = plt.subplots(1, 2, subplot_kw={"projection": "3d"}, figsize=(15,5)) # Plot the wireframe ax[0].plot_wireframe(x, y, z, rstride=20, cstride=20) # Plot the surface. ax[1].plot_surface(x, y, z, rstride=20, cstride=20) plt.show() ``` ## Plot using `EinsteinPy` ```python from einsteinpy.hypersurface import SchwarzschildEmbedding from einsteinpy.plotting import HypersurfacePlotter from astropy import units as u ``` ```python surface_obj = SchwarzschildEmbedding(5. * u.M_sun) surface = HypersurfacePlotter(embedding=surface_obj, plot_type='surface') surface.plot() surface.show() ``` ```python ```
636ec7e5245bd05130019641fd0965712fcce53a
239,710
ipynb
Jupyter Notebook
03. Embedding Diagram Schwarzschild.ipynb
ashcat2005/SchwarzschildBH
4e9ae8afc06b09edc90b292b3cefd7ca79b65e20
[ "MIT" ]
null
null
null
03. Embedding Diagram Schwarzschild.ipynb
ashcat2005/SchwarzschildBH
4e9ae8afc06b09edc90b292b3cefd7ca79b65e20
[ "MIT" ]
null
null
null
03. Embedding Diagram Schwarzschild.ipynb
ashcat2005/SchwarzschildBH
4e9ae8afc06b09edc90b292b3cefd7ca79b65e20
[ "MIT" ]
null
null
null
841.087719
184,860
0.955146
true
894
Qwen/Qwen-72B
1. YES 2. YES
0.90599
0.766294
0.694254
__label__spa_Latn
0.350647
0.451317
--- ## 01. Data Analysis. Statistics Basics Eduard Larrañaga ([email protected]) --- ### About this notebook In this worksheet, we introduce some basic aspects and definitions of statistics for working with astrophysical data. --- ### Statistics with `numpy` Statistics are designed to summarize, reduce or describe data. A statistic is a function of the data alone! Consider a dataset $\{ x_1, x_2, x_3, ...\}$ Some important quantities defined to describe the dataset are **average** or **mean**, **median**, **maximum value**, **average of the squares**, etc. Now, we will explore some of these concepts using the `numpy` package and a set of data taken from the book *Computational Physics* by Mark Newman that can be downloaded from http://www-personal.umich.edu/~mejn/computational-physics/ The file `sunspots-since1749.txt` contains the observed number of sunspots on the Sun for each month since January 1749. In the file, the first column corresponds to the month and the second the sunspots number. ```python import matplotlib.pyplot as plt import numpy as np ``` ```python month, sunspots = np.loadtxt('sunspots-since1749.txt', unpack=True) ``` The number of samples in the dataset is ```python print(month.size, ' months') ``` 3143 months ```python years, months = divmod(month.size, 12) print(years, ' years and ', months, ' months') ``` 261 years and 11 months A scatter plot showing the behavior of the dataset, ```python fig, ax = plt.subplots(figsize=(7,5)) ax.scatter(month, sunspots, marker='.') ax.set_xlabel(r'months since january 1749') ax.set_ylabel(r'number of sunspots') plt.show() ``` and a plot showing the same dataset is ```python fig, ax = plt.subplots(figsize=(7,5)) ax.plot(month, sunspots) ax.set_xlabel(r'months since january 1749') ax.set_ylabel(r'number of sunspots') plt.show() ``` #### Maximum and Minimum ```python np.ndarray.max(sunspots) ``` 253.8 ```python np.ndarray.min(sunspots) ``` 0.0 --- #### Mean and Weighted Average The function `numpy.mean()` returns the arithmetic average of the array elements. https://numpy.org/doc/stable/reference/generated/numpy.mean.html?highlight=mean#numpy.mean \begin{equation} \text{mean} = \frac{\sum x_i}{N} \end{equation} The monthly mean number of spots in the time period of the data set is ```python mean_sunspots = np.mean(sunspots) mean_sunspots ``` 51.924498886414256 The function `numpy.average()` returns the weighted average of the array elements. https://numpy.org/doc/stable/reference/generated/numpy.average.html?highlight=average#numpy.average Including a weight function with values $w_i$, this function uses the formula \begin{equation} \text{average} = \frac{\sum x_i w_i}{\sum w_i} \end{equation} ```python w = sunspots/np.ndarray.max(sunspots) w ``` array([0.2285264 , 0.24665091, 0.27580772, ..., 0.09929078, 0.09259259, 0.08510638]) ```python np.average(sunspots, weights=w) ``` 89.74573872218345 If we do not include the weights or if all weights are equal, the average is equivalent to the mean ```python np.average(sunspots) ``` 51.924498886414256 #### Median and Mode The function `numpy.median()` returns the median of the array elements along a given axis. Given a vector V of length N, the median of V is the middle value of a sorted copy of V, when N is odd, and the average of the two middle values of the sorted copy of V when N is even. 
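A two-line toy example (unrelated to the sunspot data) makes the odd/even rule concrete:

```python
import numpy as np

print(np.median([5, 1, 9]))     # odd length: the middle of the sorted values -> 5.0
print(np.median([5, 1, 9, 3]))  # even length: the mean of the two middle values -> 4.0
```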
https://numpy.org/doc/stable/reference/generated/numpy.median.html ```python np.median(sunspots) ``` 41.5 The mode corresponds to the value occurring most frequently in the dataset, and it can be seen as the location of the peak of the histogram of the data. Although the `numpy` package does not have a mode function, we can use the function `scipy.stats.mode()` to calculate the modal value. ```python from scipy import stats stats.mode(sunspots) ``` ModeResult(mode=array([0.]), count=array([67])) This result indicates that the mode is the number $0.$ and that it appears 67 times in the sunspots dataset. A histogram may help to show the behavior of the data, ```python fig, ax = plt.subplots(figsize=(7,5)) ax.hist(sunspots, bins=25) ax.set_xlabel(r'number of sunspots') ax.set_ylabel(r'') plt.show() ``` ```python fig, ax = plt.subplots(figsize=(7,5)) ax.hist(sunspots, bins=400) ax.set_xlabel(r'number of sunspots') ax.set_ylabel(r'') plt.show() ``` ## Astrophysical Signals In astrophysics, it is common to detect signals immersed in noise. For example, in radioastronomy, the emission of sources such as radio galaxies, pulsars, and supernova remnants is received by radio telescopes on Earth. These telescopes detect emission at frequencies on the order of megahertz, which is similar to many radio stations, and therefore the signal is immersed in a lot of noise! The flux density of these signals is measured in *Janskys* (Jy), which is equivalent to \begin{equation} 1 \text{ Jansky} = 10^{-26} \frac{\text{Watts}}{\text{m}^2\text{ Hz}} \end{equation} Hence, flux density is a measure of the spectral power received by a telescope detector of unit projected area. Usually, astrophysical sources have flux densities much smaller than the noise around them. | Source | Flux Density (Jy) | |:--------|-------------------:| |Crab Pulsar at 1.4GHz|$\sim 0.01$| |Milky Way at 10 GHz|$\sim 2 \times 10^3$| |Sun at 10 GHz|$\sim 4 \times 10^6$| |Mobile Phone| $\sim 1.1 \times 10^8$| Consider a synthetic signal with the Gaussian form, ```python x = np.linspace(0, 100, 200) y = 0.5*np.exp(-(x-40)**2./25.) fig, ax = plt.subplots(figsize=(7,5)) ax.plot(x,y) ax.set_xlabel(r'$x$') ax.set_ylabel(r'Flux Density (Jy)') plt.show() ``` Now, we will add some random noise by defining two random arrays ```python noise1 = np.random.rand(200) noise2 = np.random.rand(200) fig, ax = plt.subplots(figsize=(7,5)) ax.plot(x,noise1, color='darkblue') ax.plot(x,noise2, color='cornflowerblue') ax.set_xlabel(r'$x$') ax.set_ylabel(r'noise') plt.show() ``` We add these random noise arrays to the Gaussian profile to obtain ```python rawsignal = y + (noise1 - noise2) fig, ax = plt.subplots(figsize=(7,5)) ax.plot(x,rawsignal, label='signal + noise', color='cornflowerblue') ax.plot(x,y, label='Original Gaussian signal', color='crimson') ax.set_xlabel(r'$x$') ax.set_ylabel(r'signal + noise') plt.legend() plt.show() ``` It is clear that the Gaussian profile is completely hidden in the noise. ### Application of the Statistical Concepts. Extracting a Signal from the Noise. Now we will use some statistical concepts, such as the mean and the median, to isolate the signal from the noise. First, let us create 9 such signal + noise synthetic profiles.
```python n = 9 # Number of profiles rawprofiles = np.zeros([n,200]) for i in range(n): rawprofiles[i] = y + ( np.random.rand(200) - np.random.rand(200) ) fig, ax = plt.subplots(3,3, figsize=(10,7)) ax[0,0].plot(x,rawprofiles[0]) ax[0,1].plot(x,rawprofiles[1]) ax[0,2].plot(x,rawprofiles[2]) ax[1,0].plot(x,rawprofiles[3]) ax[1,1].plot(x,rawprofiles[4]) ax[1,2].plot(x,rawprofiles[5]) ax[2,0].plot(x,rawprofiles[6]) ax[2,1].plot(x,rawprofiles[7]) ax[2,2].plot(x,rawprofiles[8]) ax[2,1].set_xlabel(r'$x$') ax[1,0].set_ylabel(r'$signal$') plt.show() ``` The question here is: how can we recover the original signal from these profiles? **Stacking** the profiles (signal+noise) provides a method to recover the signal. #### Stacking using the Mean Since the profiles contain random noise, when we average regions with only noise, the random values cancel out on average. On the other hand, when we average regions in which there is some signal, the signal contributions add together coherently, increasing the so-called **signal to noise** ratio. ```python recovered_signal = np.mean(rawprofiles, axis=0) fig, ax = plt.subplots(figsize=(7,5)) ax.plot(x,recovered_signal, label='Recovered signal', color='cornflowerblue') ax.plot(x,y, label='Original Gaussian signal', color='crimson') ax.set_xlabel(r'$x$') ax.set_ylabel(r'signal') plt.legend() plt.show() ``` It is clear that the recovered signal is not perfect, although the Gaussian profile seems to be present. Taking not 9 but 100 synthetic profiles, the stacking method gives a much better result for the recovered signal. ```python n = 100 rawprofiles = np.zeros([n,200]) for i in range(n): rawprofiles[i] = y + ( np.random.rand(200) - np.random.rand(200) ) recovered_signal = np.mean(rawprofiles, axis=0) fig, ax = plt.subplots(figsize=(7,5)) ax.plot(x,recovered_signal, label='Recovered signal', color='cornflowerblue') ax.plot(x,y, label='Original Gaussian signal', color='crimson') ax.set_xlabel(r'$x$') ax.set_ylabel(r'signal') plt.legend() plt.show() ``` Now, the original Gaussian signal is evident! #### Stacking using the Median Another way to calculate the stack is to use the median instead of the mean. When a distribution is symmetric, the mean and the median are equivalent. However, if the distribution is asymmetric, or when there are significant outliers, the median can be a much better indicator of the central value. Hence, although our random noise is expected to be a symmetric distribution around zero, there may exist outliers that affect the result of the mean. Therefore we will make a median stacking of 9 signal+noise profiles to compare the results. ```python n = 9 rawprofiles = np.zeros([n,200]) for i in range(n): rawprofiles[i] = y + ( np.random.rand(200) - np.random.rand(200) ) recovered_signal = np.median(rawprofiles, axis=0) fig, ax = plt.subplots(figsize=(7,5)) ax.plot(x,recovered_signal, label='Recovered signal (Median Stacking)', color='cornflowerblue') ax.plot(x,y, label='Original Gaussian signal', color='crimson') ax.set_xlabel(r'$x$') ax.set_ylabel(r'signal') plt.legend() plt.show() ``` Note that the result is almost the same as that obtained with the mean stacking (the reason may be that the noise distribution is symmetric w.r.t. zero).
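There is a simple statistical reason why stacking works: the signal adds coherently, while the zero-mean noise averages down, so its standard deviation shrinks roughly like $1/\sqrt{N}$ with the number of stacked profiles. The short simulation below (illustrative only, using the same kind of uniform-difference noise as above) shows that behavior:

```python
import numpy as np

np.random.seed(0)

def stacked_noise_std(n_profiles, n_trials=200):
    """Empirical standard deviation of the noise left after stacking n_profiles."""
    stds = []
    for _ in range(n_trials):
        noise = np.random.rand(n_profiles, 200) - np.random.rand(n_profiles, 200)
        stds.append(np.mean(noise, axis=0).std())
    return np.mean(stds)

for n in [1, 9, 100]:
    print(n, stacked_noise_std(n))  # residual noise shrinks roughly like 1/sqrt(n)
```

Going from 9 to 100 profiles should therefore reduce the residual noise by roughly a factor of three, which is what the next experiment shows.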
Now we will consider 100 profiles, ```python n = 100 rawprofiles = np.zeros([n,200]) for i in range(n): rawprofiles[i] = y + ( np.random.rand(200) - np.random.rand(200) ) recovered_signal = np.median(rawprofiles, axis=0) fig, ax = plt.subplots(figsize=(7,5)) ax.plot(x,recovered_signal, label='Recovered signal (Median Stacking)', color='cornflowerblue') ax.plot(x,y, label='Original Gaussian signal', color='crimson') ax.set_xlabel(r'$x$') ax.set_ylabel(r'signal') plt.legend() plt.show() ``` Once again, the result is not much different from that obtained using the mean stacking.
88e206df0760022f26af24d680f87e3313306807
622,163
ipynb
Jupyter Notebook
18. Statistics Basics/01.DataAnalysis-StatisticsBasics.ipynb
ashcat2005/AstrofisicaComputacional2022
67463ec4041eb08c0f326792fed0dcf9e970e9b7
[ "MIT" ]
3
2022-03-08T06:18:56.000Z
2022-03-10T04:55:53.000Z
18. Statistics Basics/01.DataAnalysis-StatisticsBasics.ipynb
ashcat2005/AstrofisicaComputacional2022
67463ec4041eb08c0f326792fed0dcf9e970e9b7
[ "MIT" ]
null
null
null
18. Statistics Basics/01.DataAnalysis-StatisticsBasics.ipynb
ashcat2005/AstrofisicaComputacional2022
67463ec4041eb08c0f326792fed0dcf9e970e9b7
[ "MIT" ]
4
2022-03-09T17:47:43.000Z
2022-03-21T02:29:36.000Z
451.824982
121,804
0.945845
true
2,845
Qwen/Qwen-72B
1. YES 2. YES
0.935347
0.882428
0.825376
__label__eng_Latn
0.974993
0.755958
# Expectation Maximization for latent variable models In all the notebooks we've seen so far, we have made the assumption that the observations correspond directly to realizations of a random variable. Take the case of linear regression: we are given observations of the random variable $t$ (plus some noise), which is the target value for a given value of the input $\mathbf{x}$. Under some criterion, we find the best parameters $\boldsymbol{\theta}$ of a model $y(\mathbf{x}, \boldsymbol{\theta})$ that is able to explain the observations and yield predictions for new inputs. In the more general case, we have a dataset of observations $\mathcal{D} = \lbrace \mathbf{x_1}, ..., \mathbf{x_N}\rbrace$. We hypothesize that each observation is drawn from a probability distribution $p(\mathbf{x_i}\vert\boldsymbol{\theta})$ with parameters $\boldsymbol{\theta}$. It is sometimes useful to think of this as a probabilistic graphical model, where nodes represent random variables and edges encode dependency relationships between them. In this case, the graph looks as follows: In this graph, we show that we have $N$ observations by enclosing the random variables within a *plate*. This also represents the fact that we assume the observations to be independent. For many situations this model works, that is, the model is able to explain the data and can be used to make predictions for new observations. In other cases, however, this model is not expressive enough. Imagine that we have a single dimensional variable $x$ of which we observe some samples. Our first hypothesis is that $p(x\vert\boldsymbol{\theta})$ is a normal distribution, so we proceed to find the mean and variance of this distribution using maximum likelihood estimation (MLE): ```python %matplotlib inline import numpy as np from scipy.stats import norm import matplotlib.pyplot as plt from data.synthetic import gaussian_mixture X = gaussian_mixture(200) # The MLE estimates are the sample mean and standard deviation mean = np.mean(X) std = np.std(X) # Plot fit on top of histogram fig, ax1 = plt.subplots() ax1.hist(X, alpha=0.4) ax1.set_ylabel('Counts') ax1.set_xlabel('x') ax2 = ax1.twinx() x = np.linspace(-4, 4, 100) ax2.plot(x, norm.pdf(x)) ax2.set_ylim([0, 0.5]) ax2.set_ylabel('Probability density'); ``` Clearly, once we have actually examined the data, we realize that a single normal distribution is not a good model. The data seems to come from a multimodal distribution with two components, which a single Gaussian is not able to capture. In this case we are better off by changing our model to a *mixture model*, a model that mixes two or more distributions. For the example above, it would seem that a mixture of two components, centered at -2 and 2 would be a better fit. Under this idea, our hypothesis is the next: there are $K$ components in the mixture model. We start by selecting a component $k$ with some probability $\pi_k$ and, given the selected component, we then draw a sample $x$ from a normal distribution $\mathcal{N}(x\vert\mu_k, \sigma_k^2)$. We can think of the component as a discrete random variable $\mathbf{z}$ that can take values from 1 up to $K$. Therefore, to each sample $x$ there is an associated value of $\mathbf{z}$. Since we do not observe $\mathbf{z}$, we call it a **latent variable**. 
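As a quick aside before formalizing this, the two-component hypothesis is easy to check with an off-the-shelf implementation. The snippet below is only an illustration: it draws its own synthetic data (the notebook's `data.synthetic` helper is not reproduced here) and uses scikit-learn's `GaussianMixture` rather than anything derived in this notebook.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Stand-in for the notebook's gaussian_mixture(200) helper
np.random.seed(0)
X = np.concatenate([np.random.normal(-2, 0.6, 100),
                    np.random.normal(2, 0.6, 100)]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
print(gmm.means_.ravel())   # two means, close to -2 and 2 (order may vary)
print(gmm.weights_)         # mixing proportions, close to 0.5 each
```

Under the hood, this is fitted with exactly the Expectation Maximization procedure derived in the rest of this notebook.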
If we collapse the parameters $\pi_k$, $\mu_k$ and $\sigma_k$ into a single parameter $\boldsymbol{\theta}$, the graphical model for the general case is now the following: Note how the model emphasizes the fact that each observation has an associated value of the latent variable. Also note that since $z$ is not observed, its node is not shaded. We can now proceed to find the parameters of our new model, using MLE (or maximum a posteriori, if we have priors on the parameters). However, as it usually is the case, we have traded tractability for expressiveness by introducing latent variables. If we attempt to maximize the log-likelihood, we find that we need to maximize $$ \begin{align} \sum_{n=1}^N\log \sum_{z}p(x_n\vert z_n, \boldsymbol{\theta})p(z_n\vert\boldsymbol{\theta}) \end{align} $$ which due to the summation inside the logarithm, does not result in a closed form solution. An alternative is to use the **Expectation Maximization** algorithm, an iterative procedure that can be used to find maximum likelihood estimates of the parameters, which we motivate next. Let $\mathbf{X}$ and $\mathbf{Z}$ denote the set of observed and latent variables, respectively, for which we have defined a joint parametric probability distribution $$ p(\mathbf{X}, \mathbf{Z}\vert\boldsymbol{\theta}) = p(\mathbf{X}\vert\mathbf{Z},\boldsymbol{\theta})p(\mathbf{Z}\vert\boldsymbol{\theta}) $$ It can be shown [1, 2] that for any distribution $q(\mathbf{Z})$ we can decompose the log-likelihood as $$ \log p(\mathbf{X}\vert\boldsymbol{\theta}) = \mathcal{L}(q, \boldsymbol{\theta}) + \text{KL}(q(\mathbf{Z})\Vert p(\mathbf{Z\vert\mathbf{X}, \boldsymbol{\theta}}))\tag{1} $$ where $\text{KL}$ is the Kullback-Leibler divergence, and $\mathcal{L}(q,\boldsymbol{\theta})$ is known as the Evidence Lower Bound (ELBO), because since the KL divergence is always non-negative, it is a lower bound for $p(\mathbf{X}\vert\boldsymbol{\theta})$. The ELBO is defined as $$ \mathcal{L}(q, \boldsymbol{\theta}) = \mathbb{E}_q[\log p(\mathbf{X},\mathbf{Z}\vert\boldsymbol{\theta})] - \mathbb{E}_q[\log q(\mathbf{Z})]\tag{2} $$ Note that these expectations are taken with respect to the distribution $q(\mathbf{Z})$. We can now use this decomposition in the EM algorithm to define two steps: - **E step:** Initialize the parameters with some value $\boldsymbol{\theta}^\prime$. In equation 1, close the gap between the lower bound and the likelihood by making the KL divergence equal to zero. We achieve this by setting $q(\mathbf{Z})$ equal to the posterior $p(\mathbf{Z}\vert\mathbf{X},\boldsymbol{\theta}^\prime)$, which usually involves using Bayes' theorem to calculate $$ p(\mathbf{Z}\vert\mathbf{X},\boldsymbol{\theta}^\prime) = \frac{p(\mathbf{X}\vert\mathbf{Z},\boldsymbol{\theta}^\prime)p(\mathbf{Z}\vert\boldsymbol{\theta}^\prime)}{p(\mathbf{X}\vert\boldsymbol{\theta}^\prime)} $$ - **M step:** now that the likelihood is equal to the lower bound, maximize the lower bound in equation 2 with respect to the parameters. We find $$ \boldsymbol{\theta}^{\text{new}} = \arg\max_{\boldsymbol{\theta}} \mathbb{E}_q[\log p(\mathbf{X},\mathbf{Z}\vert\boldsymbol{\theta})] $$ where we have dropped the second term in equation 2 as it does not depend on the parameters, and the expectation is calculated with respect to $q(\mathbf{Z}) = p(\mathbf{Z}\vert\mathbf{X},\boldsymbol{\theta}^\prime)$. In this step we calculate the derivatives with respect to the parameters and set them to zero to find the maximizing values. This process is repeated until convergence. 
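To make the E and M steps concrete before moving on to images, here is a minimal sketch of EM for a one-dimensional mixture of two Gaussians. This is a toy implementation written for this explanation (it is not the `BernoulliMixture` class used in the next section), and it skips practical details such as convergence checks and numerical safeguards.

```python
import numpy as np
from scipy.stats import norm

def em_gmm_1d(x, n_iter=50):
    # Crude initialization of mixing weights, means and standard deviations
    pi = np.array([0.5, 0.5])
    mu = np.array([x.min(), x.max()])
    sigma = np.array([x.std(), x.std()])

    for _ in range(n_iter):
        # E step: responsibilities, i.e. the posterior p(z | x, theta)
        dens = np.stack([p * norm.pdf(x, m, s) for p, m, s in zip(pi, mu, sigma)])
        resp = dens / dens.sum(axis=0)

        # M step: maximize the expected complete-data log-likelihood
        nk = resp.sum(axis=1)
        pi = nk / len(x)
        mu = (resp * x).sum(axis=1) / nk
        sigma = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk)

    return pi, mu, sigma

# Toy data: two components centered at -2 and 2
np.random.seed(0)
x = np.concatenate([np.random.normal(-2, 0.5, 100), np.random.normal(2, 0.5, 100)])
print(em_gmm_1d(x))
```

The same two-step structure carries over to other emission distributions; only the M-step updates change, which is exactly what happens in the Bernoulli mixture used next.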
## A practical example We will use this idea to fit a mixture model from a subset of the famous MNIST dataset. In this dataset each image is of size 28 by 28, containing a handwritten number between 0 and 9, although for simplicity we will take only the digits 4, 5, and 2. We will process the images so that they are binary, so that a pixel can be either 1 or 0. The images are flattened, so that a digit is represented by a vector of 28 $\times$ 28 = 784 values. ```python from sklearn.datasets import fetch_openml import matplotlib.pyplot as plt import numpy as np %matplotlib inline # Fetch data from OpenML.org X, y = fetch_openml('mnist_784', version=1, return_X_y=True) # Take three digits only sample_indices = [] for i, label in enumerate(y): if label in ['4', '5', '2']: sample_indices.append(i) X = X[sample_indices] y = y[sample_indices] # Binarize X_bin = np.zeros(X.shape) X_bin[X > np.max(X)/2] = 1 # Visualize some digits plt.figure(figsize=(14, 1)) for i in range(10): plt.subplot(1, 10, i+1) plt.imshow(X_bin[i].reshape((28, 28)), cmap='bone') plt.axis('off') ``` Our hypothesis is that each pixel $x_i$ in the image $\mathbf{x}$ is a Bernoulli random variable with probability $\mu_i$ of being 1, and so we define the vector $\boldsymbol{\mu}$ as containing the probabilities for each pixel. The probabilities can be different depending on whether the number is a 2, a 4 or a 5, so we will define 3 components for the mixture model. This means that there will be 3 mean vectors $\boldsymbol{\mu}_i$ and each component will have a probability $\pi_k$. These are the parameters that we will find using the EM algorithm, which I have implemented in the [`BernoulliMixture`](https://github.com/dfdazac/machine-learning-1/blob/master/mixture_models/bernoulli_mixture.py) class. ```python from mixture_models.bernoulli_mixture import BernoulliMixture model = BernoulliMixture(dimensions=784, n_components=3, verbose=True) model.fit(X_bin) ``` Iteration 9/100, convergence: 0.000088 Terminating on convergence. We can now observe the means and the mixing coefficients of the mixture model after convergence of the EM algorithm, which are stored in the `model.mu` attribute: ```python plt.figure(figsize=(10, 4)) for i, component_mean in enumerate(model.mu): plt.subplot(1, 3, i + 1) plt.imshow(component_mean.reshape((28, 28)), cmap='Blues') plt.title(r'$\pi_{:d}$ = {:.3f}'.format(i + 1, model.pi[i])) plt.axis('off') ``` We can see that the means correspond to the three digits in the dataset. In this particular case, given a value of 1 for the latent variable $\mathbf{z}$ (corresponding to digit 4), the observation (the image) is a sample from the distribution whose mean is given by the one to the left in the plots above. The mixing coefficients give us an idea of the proportion of instances of each digit in the dataset used to train the model. As we have seen, introducing latent variables has allowed us to specify complex probability distributions that are likely to provide more predictive power in some applications. Other examples include mixtures of Gaussians for clustering applications, where usually an algorithm like K-means fails at capturing different covariances for each cluster; and non-trivial probabilistic models of real life data, such as click models [2]. It is important to note that the EM algorithm is not the ultimate way to find parameters of models with latent variables. 
On one hand, the method is sensitive to the initialization of the parameters and might end up at a local maximum of the log-likelihood. On the other hand, we have assumed that we can calculate the posterior $p(\mathbf{Z}\vert\mathbf{X},\boldsymbol{\theta})$, which sometimes is not possible for more elaborate models. In such cases it is necessary to resort to approximate methods such as sampling and variational inference. This last area has led to many recent advances, including the Variational Autoencoder, which I would like to discuss in the near future. ### References [1] Murphy, K. P. (2012). Machine Learning: A Probabilistic Perspective. The MIT Press. [2] Dan Piponi, "Expectation-Maximization with Less Arbitrariness", blog post at [http://blog.sigfpe.com/2016/10/expectation-maximization-with-less.html](http://blog.sigfpe.com/2016/10/expectation-maximization-with-less.html). [2] Chuklin, Aleksandr, Ilya Markov, and Maarten de Rijke. "Click models for web search." Synthesis Lectures on Information Concepts, Retrieval, and Services 7, no. 3 (2015): 1-115.
e1eccf75e97a0836c349c587f2a4f1069711ad0f
45,084
ipynb
Jupyter Notebook
07-expectation-maximization.ipynb
dfdazac/machine-learning-1
0beb7c098aa8b16689075822f76bc1b4fd38dedf
[ "MIT" ]
2
2020-05-17T17:16:22.000Z
2021-11-23T03:59:09.000Z
07-expectation-maximization.ipynb
Jimmy-INL/machine-learning-1
0beb7c098aa8b16689075822f76bc1b4fd38dedf
[ "MIT" ]
1
2021-09-15T17:07:51.000Z
2021-09-17T12:37:23.000Z
07-expectation-maximization.ipynb
Jimmy-INL/machine-learning-1
0beb7c098aa8b16689075822f76bc1b4fd38dedf
[ "MIT" ]
5
2019-06-13T13:02:43.000Z
2021-09-15T15:31:54.000Z
156
17,436
0.860549
true
2,961
Qwen/Qwen-72B
1. YES 2. YES
0.875787
0.847968
0.742639
__label__eng_Latn
0.996156
0.563731
# 1. Perceptron **Single Layer Perceptron (one neuron)** A Perceptron is a simple model of a biological neuron that predicts the label of the input data using one of the following activation functions: 1. Binary Step Function: \begin{equation} f(x)=\begin{cases} 0, & \text{if $x<0$}.\\ 1, & \text{otherwise}. \end{cases} \end{equation} 2. Signum Function: \begin{equation} f(x)=\begin{cases} -1, & \text{if $x<0$}.\\ 0, & \text{if $x=0$}.\\ 1, & \text{if $x>0$}. \end{cases} \end{equation} 3. Linear Activation Function: $$f(x) = x$$ 4. Sigmoid / Logistic Activation Function: $$f(x) = \frac{1}{1+e^{-x}}$$ 5. Tanh Function (Hyperbolic Tangent): $$f(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$ 6. ReLU Function: $$f(x) = max(0, x)$$ 7. Exponential Linear Units (ELUs) Function: \begin{equation} f(x)=\begin{cases} x, & \text{if $x\geqslant0$}.\\ \alpha(e^{x} - 1), & \text{otherwise}. \end{cases} \end{equation} 8. Swish: $$f(x) = \frac{x}{1 + e^{-x}}$$ 9. Gaussian Error Linear Unit (GELU): $$f(x) = 0.5x(1 + tanh[\sqrt{2/\pi}(x + 0.044715x^3)])$$ ```python fn_list = ['step', 'signum', 'linear', 'relu', 'sigmoid', 'tanh', 'elu', 'gelu', 'swish'] ``` ## 1.1. How to use this ```python from Perceptron.perceptron import Perceptron from Perceptron.utils import prepare_data, save_plot, save_model # get the data, convert it into a DataFrame and then use the commands below X, y = prepare_data(df) model = Perceptron(eta = eta, epochs = epochs) model.fit(X, y, fn, alpha=None) # alpha ranges between 0 and 1 and is used only when the ELU activation function is applied; for other activation functions alpha remains None Total_Error = model.total_loss() save_model(model, filename = filename) save_plot(df, plotFilename, model) ``` ## 1.2. Reference [Python Package Publishing Docs](https://packaging.python.org/tutorials/packaging-projects/) [GitHub Actions CICD Docs](https://docs.github.com/en/actions/guides/building-and-testing-python#publishing-to-package-registries)
d51d582f32f5be82130aee0cd3701bda9a9cfeff
3,927
ipynb
Jupyter Notebook
Readme.ipynb
rohandhanraj/Perceptron
5eea4679b14be3beec4b9e9999e47fef62cf18d8
[ "MIT" ]
null
null
null
Readme.ipynb
rohandhanraj/Perceptron
5eea4679b14be3beec4b9e9999e47fef62cf18d8
[ "MIT" ]
null
null
null
Readme.ipynb
rohandhanraj/Perceptron
5eea4679b14be3beec4b9e9999e47fef62cf18d8
[ "MIT" ]
null
null
null
34.147826
181
0.558696
true
679
Qwen/Qwen-72B
1. YES 2. YES
0.944995
0.904651
0.85489
__label__eng_Latn
0.360695
0.824529
# Quadratic Equations Consider the following equation: \begin{equation}y = 2(x - 1)(x + 2)\end{equation} If you multiply out the factored ***x*** expressions, this equates to: \begin{equation}y = 2x^{2} + 2x - 4\end{equation} Note that the highest ordered term includes a squared variable (x<sup>2</sup>). Let's graph this equation for a range of ***x*** values: ```python import pandas as pd # Create a dataframe with an x column containing values to plot df = pd.DataFrame ({'x': range(-9, 9)}) # Add a y column by applying the quadratic equation to x df['y'] = 2*df['x']**2 + 2 *df['x'] - 4 # Plot the line %matplotlib inline from matplotlib import pyplot as plt plt.plot(df.x, df.y, color="grey") plt.xlabel('x') plt.ylabel('y') plt.grid() plt.axhline() plt.axvline() plt.show() ``` Note that the graph shows a *parabola*, which is an arc-shaped line that reflects the x and y values calculated for the equation. Now let's look at another equation that includes an ***x<sup>2</sup>*** term: \begin{equation}y = -2x^{2} + 6x + 7\end{equation} What does that look like as a graph?: ```python import pandas as pd # Create a dataframe with an x column containing values to plot df = pd.DataFrame ({'x': range(-8, 12)}) # Add a y column by applying the quadratic equation to x df['y'] = -2*df['x']**2 + 6*df['x'] + 7 # Plot the line %matplotlib inline from matplotlib import pyplot as plt plt.plot(df.x, df.y, color="grey") plt.xlabel('x') plt.ylabel('y') plt.grid() plt.axhline() plt.axvline() plt.show() ``` Again, the graph shows a parabola, but this time instead of being open at the top, the parabola is open at the bottom. Equations that assign a value to ***y*** based on an expression that includes a squared value for ***x*** create parabolas. If the relationship between ***y*** and ***x*** is such that ***y*** is a *positive* multiple of the ***x<sup>2</sup>*** term, the parabola will be open at the top; when ***y*** is a *negative* multiple of the ***x<sup>2</sup>*** term, then the parabola will be open at the bottom. These kinds of equations are known as *quadratic* equations, and they have some interesting characteristics. There are several ways quadratic equations can be written, but the *standard form* for quadratic equation is: \begin{equation}y = ax^{2} + bx + c\end{equation} Where ***a***, ***b***, and ***c*** are numeric coefficients or constants. Let's start by examining the parabolas generated by quadratic equations in more detail. ## Parabola Vertex and Line of Symmetry Parabolas are symmetrical, with x and y values converging exponentially towards the highest point (in the case of a downward opening parabola) or lowest point (in the case of an upward opening parabola). The point where the parabola meets the line of symmetry is known as the *vertex*. 
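If you are curious how the vertex can be found symbolically, the sympy library (assumed to be installed) can locate it by setting the derivative to zero; the notebook derives a simpler shortcut for this later on.

```python
from sympy import symbols, diff, solve

x = symbols('x')
y = 2*x**2 + 2*x - 4

vx = solve(diff(y, x), x)[0]   # x-coordinate of the vertex: -1/2
vy = y.subs(x, vx)             # y-coordinate of the vertex: -9/2
print(vx, vy)
```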
Run the following cell to see the line of symmetry and vertex for the two parabolas described previously (don't worry about the calculations used to find the line of symmetry and vertex - we'll explore that later): ```python %matplotlib inline def plot_parabola(a, b, c): import pandas as pd import numpy as np from matplotlib import pyplot as plt # get the x value for the line of symmetry vx = (-1*b)/(2*a) # get the y value when x is at the line of symmetry vy = a*vx**2 + b*vx + c # Create a dataframe with an x column containing values from x-10 to x+10 minx = int(vx - 10) maxx = int(vx + 11) df = pd.DataFrame ({'x': range(minx, maxx)}) # Add a y column by applying the quadratic equation to x df['y'] = a*df['x']**2 + b *df['x'] + c # get min and max y values miny = df.y.min() maxy = df.y.max() # Plot the line plt.plot(df.x, df.y, color="grey") plt.xlabel('x') plt.ylabel('y') plt.grid() plt.axhline() plt.axvline() # plot the line of symmetry sx = [vx, vx] sy = [miny, maxy] plt.plot(sx,sy, color='magenta') # Annotate the vertex plt.scatter(vx,vy, color="red") plt.annotate('vertex',(vx, vy), xytext=(vx - 1, (vy + 5)* np.sign(a))) plt.show() plot_parabola(2, 2, -4) plot_parabola(-2, 3, 5) ``` ## Parabola Intercepts Recall that linear equations create lines that intersect the **x** and **y** axis of a graph, and we call the points where these intersections occur *intercepts*. Now look at the graphs of the parabolas we've worked with so far. Note that these parabolas both have a y-intercept; a point where the line intersects the y axis of the graph (in other words, when x is 0). However, note that the parabolas have *two* x-intercepts; in other words there are two points at which the line crosses the x axis (and y is 0). Additionally, imagine a downward opening parabola with its vertex at -1, -1. This is perfectly possible, and the line would never have an x value greater than -1, so it would have *no* x-intercepts. Regardless of whether the parabola crosses the x axis or not, other than the vertex, for every ***y*** point in the parabola, there are *two* ***x*** points; one on the right (or positive) side of the axis of symmetry, and one of the left (or negative) side. The implications of this are what make quadratic equations so interesting. When we solve the equation for ***x***, there are *two* correct answers. Let's take a look at an example to demonstrate this. Let's return to the first of our quadratic equations, and we'll look at it in its *factored* form: \begin{equation}y = 2(x - 1)(x + 2)\end{equation} Now, let's solve this equation for a ***y*** value of 0. We can restate the equation like this: \begin{equation}2(x - 1)(x + 2) = 0\end{equation} The equation is the product of two expressions **2(x - 1)** and **(x + 2)**. In this case, we know that the product of these expressions is 0, so logically *one or both of the expressions must return 0*. Let's try the first one: \begin{equation}2(x - 1) = 0\end{equation} If we distrbute this, we get: \begin{equation}2x - 2 = 0\end{equation} This simplifies to: \begin{equation}2x = 2\end{equation} Which gives us a value for *x* of **1**. Now let's try the other expression: \begin{equation}x + 2 = 0\end{equation} This gives us a value for *x* of **-2**. So, when *y* is **0**, *x* is **-2** or **1**. 
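If you want to double-check this kind of factoring with code, sympy (assumed to be available) can solve the factored equation directly:

```python
from sympy import symbols, solve, Eq

x = symbols('x')

# Solve 2(x - 1)(x + 2) = 0; returns the two roots, -2 and 1
print(solve(Eq(2*(x - 1)*(x + 2), 0), x))
```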
Let's plot these points on our parabola: ```python import pandas as pd # Assign the calculated x values x1 = -2 x2 = 1 # Create a dataframe with an x column containing some values to plot df = pd.DataFrame ({'x': range(x1-5, x2+6)}) # Add a y column by applying the quadratic equation to x df['y'] = 2*(df['x'] - 1) * (df['x'] + 2) # Get x at the line of symmetry (halfway between x1 and x2) vx = (x1 + x2) / 2 # Get y when x is at the line of symmetry vy = 2*(vx -1)*(vx + 2) # get min and max y values miny = df.y.min() maxy = df.y.max() # Plot the line %matplotlib inline from matplotlib import pyplot as plt plt.plot(df.x, df.y, color="grey") plt.xlabel('x') plt.ylabel('y') plt.grid() plt.axhline() plt.axvline() # Plot calculated x values for y = 0 plt.scatter([x1,x2],[0,0], color="green") plt.annotate('x1',(x1, 0)) plt.annotate('x2',(x2, 0)) # plot the line of symmetry sx = [vx, vx] sy = [miny, maxy] plt.plot(sx,sy, color='magenta') # Annotate the vertex plt.scatter(vx,vy, color="red") plt.annotate('vertex',(vx, vy), xytext=(vx - 1, (vy - 5))) plt.show() ``` So from the plot, we can see that both of the values we calculated for ***x*** align with the parabola when ***y*** is 0. Additionally, because the parabola is symmetrical, we know that every pair of ***x*** values for each ***y*** value will be equidistant from the line of symmetry, so we can calculate the ***x*** value for the line of symmetry as the average of the ***x*** values for any value of ***y***. This in turn means that we know the ***x*** coordinate for the vertex (it's on the line of symmetry), and we can use the quadratic equation to calculate ***y*** for this point. ## Solving Quadratics Using the Square Root Method The technique we just looked at makes it easy to calculate the two possible values for ***x*** when ***y*** is 0 if the equation is presented as the product two expressions. If the equation is in standard form, and it can be factored, you could do the necessary manipulation to restate it as the product of two expressions. Otherwise, you can calculate the possible values for x by applying a different method that takes advantage of the relationship between squared values and the square root. Let's consider this equation: \begin{equation}y = 3x^{2} - 12\end{equation} Note that this is in the standard quadratic form, but there is no *b* term; in other words, there's no term that contains a coeffecient for ***x*** to the first power. This type of equation can be easily solved using the square root method. Let's restate it so we're solving for ***x*** when ***y*** is 0: \begin{equation}3x^{2} - 12 = 0\end{equation} The first thing we need to do is to isolate the ***x<sup>2</sup>*** term, so we'll remove the constant on the left by adding 12 to both sides: \begin{equation}3x^{2} = 12\end{equation} Then we'll divide both sides by 3 to isolate x<sup>2</sup>: \begin{equation}x^{2} = 4\end{equation} No we can isolate ***x*** by taking the square root of both sides. However, there's an additional consideration because this is a quadratic equation. The ***x*** variable can have two possibe values, so we must calculate the *principle* and *negative* square roots of the expression on the right: \begin{equation}x = \pm\sqrt{4}\end{equation} The principle square root of 4 is 2 (because 2<sup>2</sup> is 4), and the corresponding negative root is -2 (because -2<sup>2</sup> is also 4); so *x* is **2** or **-2**. 
Let's see this in Python, and use the results to calculate and plot the parabola with its line of symmetry and vertex: ```python import pandas as pd import math y = 0 x1 = int(- math.sqrt(y + 12 / 3)) x2 = int(math.sqrt(y + 12 / 3)) # Create a dataframe with an x column containing some values to plot df = pd.DataFrame ({'x': range(x1-10, x2+11)}) # Add a y column by applying the quadratic equation to x df['y'] = 3*df['x']**2 - 12 # Get x at the line of symmetry (halfway between x1 and x2) vx = (x1 + x2) / 2 # Get y when x is at the line of symmetry vy = 3*vx**2 - 12 # get min and max y values miny = df.y.min() maxy = df.y.max() # Plot the line %matplotlib inline from matplotlib import pyplot as plt plt.plot(df.x, df.y, color="grey") plt.xlabel('x') plt.ylabel('y') plt.grid() plt.axhline() plt.axvline() # Plot calculated x values for y = 0 plt.scatter([x1,x2],[0,0], color="green") plt.annotate('x1',(x1, 0)) plt.annotate('x2',(x2, 0)) # plot the line of symmetry sx = [vx, vx] sy = [miny, maxy] plt.plot(sx,sy, color='magenta') # Annotate the vertex plt.scatter(vx,vy, color="red") plt.annotate('vertex',(vx, vy), xytext=(vx - 1, (vy - 20))) plt.show() ``` ## Solving Quadratics Using the Completing the Square Method In quadratic equations where there is a *b* term; that is, a term containing **x** to the first power, it is impossible to directly calculate the square root. However, with some algebraic manipulation, you can take advantage of the ability to factor a polynomial expression in the form *a<sup>2</sup> + 2ab + b<sup>2</sup>* as a binomial *perfect square* expression in the form *(a + b)<sup>2</sup>*. At first this might seem like some sort of mathematical sleight of hand, but follow through the steps carefull and you'll see that there's nothing up my sleeve! The underlying basis of this approach is that a trinomial expression like this: \begin{equation}x^{2} + 24x + 12^{2}\end{equation} Can be factored to this: \begin{equation}(x + 12)^{2}\end{equation} OK, so how does this help us solve a quadratic equation? Well, let's look at an example: \begin{equation}y = x^{2} + 6x - 7\end{equation} Let's start as we've always done so far by restating the equation to solve ***x*** for a ***y*** value of 0: \begin{equation}x^{2} + 6x - 7 = 0\end{equation} Now we can move the constant term to the right by adding 7 to both sides: \begin{equation}x^{2} + 6x = 7\end{equation} OK, now let's look at the expression on the left: *x<sup>2</sup> + 6x*. We can't take the square root of this, but we can turn it into a trinomial that will factor into a perfect square by adding a squared constant. The question is, what should that constant be? Well, we know that we're looking for an expression like *x<sup>2</sup> + 2**c**x + **c**<sup>2</sup>*, so our constant **c** is half of the coefficient we currently have for ***x***. This is **6**, making our constant **3**, which when squared is **9** So we can create a trinomial expression that will easily factor to a perfect square by adding 9; giving us the expression *x<sup>2</sup> + 6x + 9*. However, we can't just add something to one side without also adding it to the other, so our equation becomes: \begin{equation}x^{2} + 6x + 9 = 16\end{equation} So, how does that help? Well, we can now factor the trinomial expression as a perfect square binomial expression: \begin{equation}(x + 3)^{2} = 16\end{equation} And now, we can use the square root method to find x + 3: \begin{equation}x + 3 =\pm\sqrt{16}\end{equation} So, x + 3 is **-4** or **4**. 
We isolate ***x*** by subtracting 3 from both sides, so ***x*** is **-7** or **1**: \begin{equation}x = -7, 1\end{equation} Let's see what the parabola for this equation looks like in Python: ```python import pandas as pd import math x1 = int(- math.sqrt(16) - 3) x2 = int(math.sqrt(16) - 3) # Create a dataframe with an x column containing some values to plot df = pd.DataFrame ({'x': range(x1-10, x2+11)}) # Add a y column by applying the quadratic equation to x df['y'] = ((df['x'] + 3)**2) - 16 # Get x at the line of symmetry (halfway between x1 and x2) vx = (x1 + x2) / 2 # Get y when x is at the line of symmetry vy = ((vx + 3)**2) - 16 # get min and max y values miny = df.y.min() maxy = df.y.max() # Plot the line %matplotlib inline from matplotlib import pyplot as plt plt.plot(df.x, df.y, color="grey") plt.xlabel('x') plt.ylabel('y') plt.grid() plt.axhline() plt.axvline() # Plot calculated x values for y = 0 plt.scatter([x1,x2],[0,0], color="green") plt.annotate('x1',(x1, 0)) plt.annotate('x2',(x2, 0)) # plot the line of symmetry sx = [vx, vx] sy = [miny, maxy] plt.plot(sx,sy, color='magenta') # Annotate the vertex plt.scatter(vx,vy, color="red") plt.annotate('vertex',(vx, vy), xytext=(vx - 1, (vy - 10))) plt.show() ``` ## Vertex Form Let's look at another example of a quadratic equation in standard form: \begin{equation}y = 2x^{2} - 16x + 2\end{equation} We can start to solve this by subtracting 2 from both sides to move the constant term from the right to the left: \begin{equation}y - 2 = 2x^{2} - 16x\end{equation} Now we can factor out the coefficient for x<sup>2</sup>, which is **2**. 2x<sup>2</sup> is 2 &bull; x<sup>2</sup>, and -16x is 2 &bull; 8x: \begin{equation}y - 2 = 2(x^{2} - 8x)\end{equation} Now we're ready to complete the square, so we add the square of half of the -8x coefficient on the right side to the parenthesis. Half of -8 is -4, and -4<sup>2</sup> is 16, so the right side of the equation becomes *2(x<sup>2</sup> - 8x + 16)*. Of course, we can't add something to one side of the equation without also adding it to the other side, and we've just added 2 &bull; 16 (which is 32) to the right, so we must also add that to the left. \begin{equation}y - 2 + 32 = 2(x^{2} - 8x + 16)\end{equation} Now we can simplify the left and factor out a perfect square binomial expression on the right: \begin{equation}y + 30 = 2(x - 4)^{2}\end{equation} We now have a squared term for ***x***, so we could use the square root method to solve the equation. However, we can also isolate ***y*** by subtracting 30 from both sides. So we end up restating the original equation as: \begin{equation}y = 2(x - 4)^{2} - 30\end{equation} Let's just quickly check our math with Python: ```python from random import randint x = randint(1,100) 2*x**2 - 16*x + 2 == 2*(x - 4)**2 - 30 ``` True So we've managed to take the expression ***2x<sup>2</sup> - 16x + 2*** and change it to ***2(x - 4)<sup>2</sup> - 30***. How does that help? Well, when a quadratic equation is stated this way, it's in *vertex form*, which is generically described as: \begin{equation}y = a(x - h)^{2} + k\end{equation} The neat thing about this form of the equation is that it tells us the coordinates of the vertex - it's at ***h,k***. So in this case, we know that the vertex of our equation is 4, -30. Moreover, we know that the line of symmetry is at ***x = 4***. We can then just use the equation to calculate two more points, and the three points will be enough for us to determine the shape of the parabola. 
We can simply choose any ***x*** value we like and substitute it into the equation to calculate the corresponding ***y*** value. For example, let's calculate ***y*** when x is **0**: \begin{equation}y = 2(0 - 4)^{2} - 30\end{equation} When we work through the equation, it gives us the answer **2**, so we know that the point 0, 2 is in our parabola. So, we know that the line of symmetry is at ***x = h*** (which is 4), and we now know that the ***y*** value when ***x*** is 0 (***h*** - ***h***) is 2. The ***y*** value at the same distance from the line of symmetry in the negative direction will be the same as the value in the positive direction, so when ***x*** is ***h*** + ***h***, the ***y*** value will also be 2. The following Python code encapulates all of this in a function that draws and annotates a parabola using only the ***a***, ***h***, and ***k*** values from a quadratic equation in vertex form: ```python def plot_parabola_from_vertex_form(a, h, k): import pandas as pd import math # Create a dataframe with an x column a range of x values to plot df = pd.DataFrame ({'x': range(h-10, h+11)}) # Add a y column by applying the quadratic equation to x df['y'] = (a*(df['x'] - h)**2) + k # get min and max y values miny = df.y.min() maxy = df.y.max() # calculate y when x is 0 (h+-h) y = a*(0 - h)**2 + k # Plot the line %matplotlib inline from matplotlib import pyplot as plt plt.plot(df.x, df.y, color="grey") plt.xlabel('x') plt.ylabel('y') plt.grid() plt.axhline() plt.axvline() # Plot calculated y values for x = 0 (h-h and h+h) plt.scatter([h-h, h+h],[y,y], color="green") plt.annotate(str(h-h) + ',' + str(y),(h-h, y)) plt.annotate(str(h+h) + ',' + str(y),(h+h, y)) # plot the line of symmetry (x = h) sx = [h, h] sy = [miny, maxy] plt.plot(sx,sy, color='magenta') # Annotate the vertex (h,k) plt.scatter(h,k, color="red") plt.annotate('v=' + str(h) + ',' + str(k),(h, k), xytext=(h - 1, (k - 10))) plt.show() # Call the function for the example discussed above plot_parabola_from_vertex_form(2, 4, -30) ``` It's important to note that the vertex form specifically requires a *subtraction* operation in the factored perfect square term. For example, consider the following equation in the standard form: \begin{equation}y = 3x^{2} + 6x + 2\end{equation} The steps to solve this are: 1. Move the constant to the left side: \begin{equation}y - 2 = 3x^{2} + 6x\end{equation} 2. Factor the ***x*** expressions on the right: \begin{equation}y - 2 = 3(x^{2} + 2x)\end{equation} 3. Add the square of half the x coefficient to the right, and the corresponding multiple on the left: \begin{equation}y - 2 + 3 = 3(x^{2} + 2x + 1)\end{equation} 4. Factor out a perfect square binomial: \begin{equation}y + 1 = 3(x + 1)^{2}\end{equation} 5. Move the constant back to the right side: \begin{equation}y = 3(x + 1)^{2} - 1\end{equation} To express this in vertex form, we need to convert the addition in the parenthesis to a subtraction: \begin{equation}y = 3(x - -1)^{2} - 1\end{equation} Now, we can use the a, h, and k values to define a parabola: ```python plot_parabola_from_vertex_form(3, -1, -1) ``` ## Shortcuts for Solving Quadratic Equations We've spent some time in this notebook discussing how to solve quadratic equations to determine the vertex of a parabola and the ***x*** values in relation to ***y***. 
It's important to understand the techniques we've used, which include:

- Factoring
- Taking the square root
- Completing the square
- Using the vertex form of the equation

The underlying algebra for all of these techniques is the same, and this consistent algebra results in some shortcuts that you can memorize to make it easier to solve quadratic equations without going through all of the steps:

### Calculating the Vertex from Standard Form
You've already seen that converting a quadratic equation to the vertex form makes it easy to identify the vertex coordinates, as they're encoded as ***h*** and ***k*** in the equation itself - like this:

\begin{equation}y = a(x - \textbf{h})^{2} + \textbf{k}\end{equation}

However, what if you have an equation in standard form?:

\begin{equation}y = ax^{2} + bx + c\end{equation}

There's a quick and easy technique you can apply to get the vertex coordinates.

1. To find ***h*** (which is the x-coordinate of the vertex), apply the following formula:

\begin{equation}h = \frac{-b}{2a}\end{equation}

2. After you've found ***h***, substitute it into the quadratic equation to solve for ***k***:

\begin{equation}\textbf{k} = a\textbf{h}^{2} + b\textbf{h} + c\end{equation}

For example, here's the quadratic equation in standard form that we previously converted to the vertex form:

\begin{equation}y = 2x^{2} - 16x + 2\end{equation}

To find ***h***, we perform the following calculation (note that ***b*** is -16, so -***b*** is 16):

\begin{equation}h = \frac{-b}{2a}\;\;\;\;=\;\;\;\;\frac{-(-16)}{2\cdot2}\;\;\;\;=\;\;\;\;\frac{16}{4}\;\;\;\;=\;\;\;\;4\end{equation}

Then we simply plug the value we've obtained for ***h*** into the quadratic equation in order to find ***k***:

\begin{equation}k = 2\cdot(4^{2}) - 16\cdot4 + 2\;\;\;\;=\;\;\;\;32 - 64 + 2\;\;\;\;=\;\;\;\;-30\end{equation}

Note that a vertex at (4, -30) is also what we previously calculated for the vertex form of the same equation:

\begin{equation}y = 2(x - 4)^{2} - 30\end{equation}

### The Quadratic Formula
Another useful formula to remember is the *quadratic formula*, which makes it easy to calculate values for ***x*** when ***y*** is **0**; or in other words:

\begin{equation}ax^{2} + bx + c = 0\end{equation}

Here's the formula:

\begin{equation}x = \frac{-b \pm \sqrt{b^{2} - 4ac}}{2a}\end{equation}

Let's apply that formula to our equation, which you may remember looks like this:

\begin{equation}y = 2x^{2} - 16x + 2\end{equation}

OK, let's plug the ***a***, ***b***, and ***c*** values from our equation into the quadratic formula:

\begin{equation}x = \frac{-(-16) \pm \sqrt{(-16)^{2} - 4\cdot2\cdot2}}{2\cdot2}\end{equation}

This simplifies to:

\begin{equation}x = \frac{16 \pm \sqrt{256 - 16}}{4}\end{equation}

This in turn (with the help of a calculator) simplifies to:

\begin{equation}x = \frac{16 \pm 15.491933384829668}{4}\end{equation}

So taking the + sign, the value for ***x*** is:

\begin{equation}x = \frac{16 + 15.491933384829668}{4}\;\;\;\;=7.872983346207417\end{equation}

And taking the - sign, the value for ***x*** is:

\begin{equation}x = \frac{16 - 15.491933384829668}{4}\;\;\;\;=0.12701665379258298\end{equation}

The following Python code uses the vertex formula and the quadratic formula to calculate the vertex and the two x values for y = 0, and then plots the resulting parabola:

```python
def plot_parabola_from_formula (a, b, c):
    import math

    # Get vertex
    print('CALCULATING THE VERTEX')
    print('vx = -b / 2a')
    nb = -b
    a2 = 2*a
    print('vx = ' + str(nb) + ' / ' + str(a2))
    vx = -b/(2*a)
    print('vx = ' + str(vx))

    print('\nvy = ax^2 + bx + c')
    print('vy =' + str(a) + '(' + str(vx) + '^2) + ' + str(b) + '(' + str(vx) + ') + ' + str(c))
    avx2 = a*vx**2
    bvx = b*vx
    print('vy =' + str(avx2) + ' + ' + str(bvx) + ' + ' + str(c))
    vy = avx2 + bvx + c
    print('vy = ' + str(vy))

    print ('\nv = ' + str(vx) + ',' + str(vy))

    # Get +x and -x (showing intermediate calculations)
    print('\nCALCULATING -x AND +x FOR y=0')
    print('x = -b +- sqrt(b^2 - 4ac) / 2a')
    b2 = b**2
    ac4 = 4*a*c
    print('x = ' + str(nb) + '+-sqrt(' + str(b2) + ' - ' + str(ac4) + ')/' + str(a2))
    sr = math.sqrt(b2 - ac4)
    print('x = ' + str(nb) + ' +- ' + str(sr) + ' / ' + str(a2))
    print('-x = ' + str(nb) + ' - ' + str(sr) + ' / ' + str(a2))
    print('+x = ' + str(nb) + ' + ' + str(sr) + ' / ' + str(a2))
    posx = (nb + sr) / a2
    negx = (nb - sr) / a2
    print('-x = ' + str(negx))
    print('+x = ' + str(posx))

    print('\nPLOTTING THE PARABOLA')
    import pandas as pd

    # Create a dataframe with an x column containing a range of x values to plot
    df = pd.DataFrame ({'x': range(round(vx)-10, round(vx)+11)})

    # Add a y column by applying the quadratic equation to x
    df['y'] = a*df['x']**2 + b*df['x'] + c

    # Get min and max y values
    miny = df.y.min()
    maxy = df.y.max()

    # Plot the line
    %matplotlib inline
    from matplotlib import pyplot as plt

    plt.plot(df.x, df.y, color="grey")
    plt.xlabel('x')
    plt.ylabel('y')
    plt.grid()
    plt.axhline()
    plt.axvline()

    # Plot calculated x values for y = 0
    plt.scatter([negx, posx],[0,0], color="green")
    plt.annotate('-x=' + str(negx) + ',' + str(0),(negx, 0), xytext=(negx - 3, 5))
    plt.annotate('+x=' + str(posx) + ',' + str(0),(posx, 0), xytext=(posx - 3, -10))

    # Plot the line of symmetry
    sx = [vx, vx]
    sy = [miny, maxy]
    plt.plot(sx,sy, color='magenta')

    # Annotate the vertex
    plt.scatter(vx,vy, color="red")
    plt.annotate('v=' + str(vx) + ',' + str(vy),(vx, vy), xytext=(vx - 1, vy - 10))

    plt.show()

plot_parabola_from_formula (2, -16, 2)
```
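As a quick sanity check of the arithmetic above (a minimal sketch of my own, using only the standard library), we can substitute the computed roots and vertex back into y = 2x² - 16x + 2 and confirm that the roots give y ≈ 0 and that the vertex x-coordinate is -b/(2a):

```python
import math

a, b, c = 2, -16, 2

# Roots from the quadratic formula
disc = math.sqrt(b**2 - 4*a*c)
x_plus = (-b + disc) / (2*a)
x_minus = (-b - disc) / (2*a)

# Substituting each root back in should give y very close to 0 (up to floating-point error)
for x in (x_plus, x_minus):
    y = a*x**2 + b*x + c
    print(x, '->', y)

# The vertex found earlier: h = -b/(2a) = 4, k = -30
h = -b / (2*a)
k = a*h**2 + b*h + c
print('vertex:', (h, k))
```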
dc5621f23c28153b93192305f3e19781fcc36bf2
214,773
ipynb
Jupyter Notebook
Basics Of Algebra by Hiren/01-07-Quadratic Equations.ipynb
serkin/Basic-Mathematics-for-Machine-Learning
ac0ae9fad82a9f0429c93e3da744af6e6d63e5ab
[ "Apache-2.0" ]
null
null
null
Basics Of Algebra by Hiren/01-07-Quadratic Equations.ipynb
serkin/Basic-Mathematics-for-Machine-Learning
ac0ae9fad82a9f0429c93e3da744af6e6d63e5ab
[ "Apache-2.0" ]
null
null
null
Basics Of Algebra by Hiren/01-07-Quadratic Equations.ipynb
serkin/Basic-Mathematics-for-Machine-Learning
ac0ae9fad82a9f0429c93e3da744af6e6d63e5ab
[ "Apache-2.0" ]
null
null
null
212.436202
24,240
0.89074
true
8,095
Qwen/Qwen-72B
1. YES 2. YES
0.935347
0.936285
0.875751
__label__eng_Latn
0.991599
0.872996
# 1D Reaction-Diffusion problem

## General formulation

This is a well-known problem for FEM with a Continuous Galerkin approximation: spurious oscillations arise when certain (not rare) conditions are met.

The general problem is: find $u \in \mathbb{C}^2(\overline{\Omega})$ in a closed domain $\overline{\Omega}\subset \mathbb{R}^n\,(n=1,2,3)$ such that:

\begin{equation}
\left\{
\begin{aligned}
- &k \Delta u + \sigma u = f(x), \quad \forall x \in \Omega \\
&\left. u \right|_{\partial \Omega} = g(x)
\end{aligned}
\right.
\end{equation}

where $\overline{\Omega} := \Omega \cup \partial \Omega$ denotes the closed domain, $\Omega$ is the open domain and $\partial \Omega$ is the domain's boundary. Note that $\Omega \cap \partial \Omega = \emptyset$.

The solution is a scalar field $u: \overline{\Omega} \to \mathbb{R}$. The "source term" is a scalar field of the same form, $f: \overline{\Omega} \to \mathbb{R}$, but this function is data given to the (direct) problem. $g(x)$ is a prescribed function on the boundary, which is also given data.

The parameter $k$ (here a scalar field like $u$) is sometimes called the "diffusivity coefficient" related to the quantity $u$, in a physical context (transport of a quantity; look up Reynolds' transport theorem for further clarification). In some contexts, the parameter $k$ must be a rank-2 tensor, as in anisotropic porous-media flows. Analogously, we have the parameter $\sigma: \overline{\Omega} \to \mathbb{R}$, which is denoted (in some contexts) as the "reaction coefficient", related to the rate at which $u$ changes due to generation/consumption. The parameters $k$ and $\sigma$ are given data to the (direct) problem too.

P.S.: I will not discuss ill-posed formulations, nor the other mathematical requirements needed to assert well-posedness, uniqueness and so on.

## 1D simplification

In some scenarios, the mathematical model can be simplified to the 1D case, which is written as follows: find $u \in \mathbb{C}^2$ such that:

\begin{equation}
\left\{
\begin{aligned}
- &k u''(x) + \sigma u(x) = f(x), \quad \forall x \in ((x_1, x_2) \subset \mathbb{R}) \\
&u(x_1) = u_1 \\
&u(x_2) = u_2
\end{aligned}
\right.
\end{equation}

where $u_1, u_2 \in \mathbb{R}$ are the prescribed boundary values, $u'(x) \equiv \dfrac{d u}{d x}$ and thus $u''(x) \equiv \dfrac{d^2 u}{d x^2}$. For the sake of simplicity, I will omit the argument of the function $u$. Problems of this form are known as two-point boundary value problems.

### Variational formulation and Galerkin approximation

One can find the derivation of the weak form elsewhere; I will just give its result below (a short sketch follows after the function spaces). Given the space of admissible solutions:

\begin{equation}
\mathcal{U} (\overline{\Omega}) := \left\{ u \in H^1 (\overline{\Omega}) \left| \,u(x_1) = u_1, u(x_2) = u_2 \right. \right\}
\end{equation}

and the space of suitable variations:

\begin{equation}
\mathcal{V} (\overline{\Omega}) := \left\{ v \in H^1 (\overline{\Omega}) \left| \,v(x_1) = v(x_2) = 0 \right. \right\}
\end{equation}

the discretization of the above spaces is performed by a Galerkin approximation with the Lagrange function space $\mathbb{P}$ of order $k$ (not to be confused with the diffusivity $k$) defined over $\overline{\Omega}$.
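For completeness, here is the short integration-by-parts argument that produces the weak form used below (nothing beyond the standard manipulation; $k$ and $\sigma$ are taken constant as in the rest of this notebook). Multiplying the strong equation by a test function $v \in \mathcal{V}$ and integrating over $\Omega = (x_1, x_2)$:

\begin{align}
\int_{x_1}^{x_2} \left(-k u'' + \sigma u\right) v \, dx &= \int_{x_1}^{x_2} f v \, dx \\
\int_{x_1}^{x_2} k u' v' \, dx - \left[\, k u' v \,\right]_{x_1}^{x_2} + \int_{x_1}^{x_2} \sigma u v \, dx &= \int_{x_1}^{x_2} f v \, dx
\end{align}

and since $v(x_1) = v(x_2) = 0$, the boundary term vanishes, leaving exactly the bilinear form $a(u,v)$ and the linear functional $F(v)$ defined below.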
To discretize, we define

\begin{equation}
\mathcal{S}_h^k(\overline{\Omega}) := \left\{ \varphi_h \in \mathcal{C}(\overline{\Omega}): \left.\varphi_h\right|_{\Omega^e} \in \mathbb{P}_k(\Omega^e), \forall \Omega^e \in \mathcal{T}_h \right\}
\end{equation}

the space of continuous, piecewise Lagrange polynomials subject to a domain partition $\mathcal{T}_h:=\cup \Omega^e \approx \overline{\Omega}$ with $\cap \Omega^e = \emptyset$ (non-overlapping elements). Thus

\begin{equation}
\mathcal{U}_h := \mathcal{U} \cap \mathcal{S}_h^k \quad \text{and} \quad \mathcal{V}_h := \mathcal{V} \cap \mathcal{S}_h^k
\end{equation}

are the discretized spaces. Now we can state our "discretized" variational formulation: find $u_h \in \mathcal{U}_h$ such that

\begin{equation}
a(u_h, v_h) = F(v_h), \quad \forall v_h \in \mathcal{V}_h
\end{equation}

where

\begin{align}
a(u, v) &:= \int_{\Omega} k\, u' v' dx + \int_{\Omega} \sigma u v dx\\
F(v) &:= \int_{\Omega} f v dx
\end{align}

and the domain is $\overline{\Omega} \equiv [x_1, x_2]$.

## A practical FEniCS example

Solving this in FEniCS is very straightforward compared to classical FEM codes: the framework provides a high-level interface which eases most of the pain. Just for learning purposes, let the given data be set as:

\begin{align*}
&f(x) \equiv f = 1 & \\
&u_1 = u_2 = 0 & \\
&x_1 = 0, &x_2 = 1 \\
&k = 10^{-8}, &\sigma = 1
\end{align*}

With the source term $f(x) = 1$, the exact solution is easily approximated:

\begin{equation}
u(x) \approx
\left\{
\begin{aligned}
&u(x) = 1, &\text{if } x \in (x_1, x_2) \\
&u(x) = 0, &\text{if } x = x_1 \text{ or } x = x_2
\end{aligned}
\right.
\end{equation}

for small values of $k$, say $k = 10^{-8}$.

So, how do we solve the problem with FEniCS? We will construct the procedure step by step.

* Importing all the libs we'll need.

```python
from fenics import *   # all the FEniCS namespace (not a recommended Python practice)
import matplotlib.pyplot as plt
from matplotlib import rc
import numpy as np
```

* Defining the domain and related mesh

```python
x_left = 0.0
x_right = 1.0
numel = 15
mesh = IntervalMesh(numel, x_left, x_right)   # IntervalMesh(num_of_elements, inf_interval, sup_interval)
mesh_ref = IntervalMesh(100, x_left, x_right)
```

* Setting the degree of the functions in the Continuous Galerkin method and the variation space. Additionally, we define here a space to be employed in the projection of the reference analytical solution

```python
p = 1
V = FunctionSpace(mesh, "CG", p)   # "CG" stands for Continuous Galerkin, p is the degree
Vref = FunctionSpace(mesh_ref, "CG", p)
```

* Defining a Python function which marks the boundaries

```python
def left(x, on_boundary):
    return x < 0+DOLFIN_EPS

def right(x, on_boundary):
    return x > 1-DOLFIN_EPS
```

* Setting the prescribed boundary values

```python
u1, u2 = 0.0, 0.0
g_left = Constant(u1)
g_right = Constant(u2)
```

* Now we impose the Dirichlet boundary conditions on the discrete space

```python
bc_left = DirichletBC(V, g_left, left)
bc_right = DirichletBC(V, g_right, right)
dirichlet_condition = [bc_left, bc_right]
```

* Here we define the source function over the domain

```python
f = Expression("1.0", domain=mesh, degree=p)
```

* Now comes the good part.
We set up the trial and test functions from the admissible space

```python
u_h = TrialFunction(V)
v_h = TestFunction(V)
```

* The parameters of the problem

```python
k = Constant(1e-8)
sigma = Constant(1.0)
```

* Then we write the bilinear form, very much like it is written in mathematical form

```python
a = k*inner(grad(v_h), grad(u_h))*dx + sigma*inner(u_h, v_h)*dx
```

* Also we define the associated linear form

```python
L = f*v_h*dx
```

* Now we declare the solution variable

```python
u_sol = Function(V)
```

* Thus we set the discretized variational problem to be solved

```python
problem = LinearVariationalProblem(a, L, u_sol, dirichlet_condition)
```

* And we go further and solve it with no difficulties!

```python
solver = LinearVariationalSolver(problem)
solver.solve()
```

* But we will need to check if the solution is "good enough", so we compare with the available exact solution

```python
sol_exact = Expression(
    'x[0]<=0+tol || x[0]>=1 - tol ? 0 : 1',
    degree=p+1,
    tol=DOLFIN_EPS
)
u_e = interpolate(sol_exact, Vref)
```

* Now, let's plot our results!

```python
plot(u_sol, marker='x', label='Approx')
plot(u_e, label='Exact')

# Setting the font
plt.rc('text',usetex=True)
plt.rc('font', size=14)

# Plotting
plt.xlim(x_left, x_right)  # x-axis limits
plt.ylim(np.min(u_sol.vector().get_local()), 1.02*np.max(u_sol.vector().get_local()))  # y-axis limits
plt.grid(True, linestyle='--')  # enable the plot grid
plt.xlabel(r'$x$')  # x-axis label
plt.ylabel(r'$u(x)$')  # y-axis label
plt.legend(loc='best',borderpad=0.5)  # enable the legend at the best detected location
plt.show()  # display the plot
```

## Something seems not good... (Batman?)

Terrible, isn't it? This is due to the high Damköhler number of the problem $(Da >> 1)$. The Damköhler number is conceptually defined as

\begin{equation}
Da := \frac{\text{reaction rate of a physical quantity}}{\text{diffusion rate of a physical quantity}}
\end{equation}

In the finite element context, we evaluate the Damköhler number element-wise, because this is how we (normally) stabilize the method and eliminate the spurious oscillations. The local (within each element, valid for 1D and 2D... I need to confirm for 3D) Damköhler number is denoted

\begin{equation}
Da_K \equiv \frac{\sigma h_K^2}{6k}
\end{equation}

where $h_K$ is the mesh parameter, related to some characteristic length of the element. For further details about what I stated above and about the method I will summarize below, check the paper "The Galerkin Gradient Least-Squares Method" (Franca and Do Carmo, 1989). Highly recommended reading for anyone who wants to dive into FEM stabilization methods.

So, where will all this stuff be applied?

## GGLS stabilization

As commented above, this method was proposed in "The Galerkin Gradient Least-Squares Method" (Franca and Do Carmo, 1989), aiming to solve singular diffusion problems in reaction-dominated cases ($Da >> 1$). The principle is similar to the GLS method, but control is gained in the $H^1$-seminorm, while classical Galerkin (and GLS) only control the $L^2$-norm.
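Before writing out the stabilized formulation, it is worth checking the regime we are actually in. A quick back-of-the-envelope computation (my own addition, assuming the uniform 15-element mesh defined earlier) shows that the local Damköhler number is enormous, which is exactly the situation GGLS targets:

```python
# Local Damköhler number Da_K = sigma * h_K^2 / (6 k) for the uniform mesh above
h_K = (x_right - x_left) / numel            # element size: 1/15
Da_K_local = 1.0 * h_K**2 / (6.0 * 1e-8)    # sigma = 1, k = 1e-8
print(Da_K_local)                           # roughly 7.4e4 >> 1, deep in the reaction-dominated regime
```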
With the modification, the variational problem is rewritten as follows:

\begin{equation}
B(u,v) = F(v)
\end{equation}

and

\begin{align}
&B(u,v) := a(u,v) + R(u,v) \\
&F(v) := L(v) + S(v)
\end{align}

with

\begin{align}
&R(u,v) := \sum_{K}\tau_K \left(\nabla\left(\sigma u -k \Delta u\right), \nabla\left(\sigma v -k \Delta v\right)\right)_K\\
&S(v) := \sum_{K}\tau_K \left(\nabla f, \nabla\left(\sigma v -k \Delta v\right) \right)_K
\end{align}

where

\begin{equation}
(u, v)_{K} := \int_{\Omega^K} u v dx
\end{equation}

denotes the $L^2$ inner product restricted to the element $K$. Note the presence of the stabilizing parameter $\tau_K$: it controls how much stabilization is added to each element's local contribution. This parameter is computed locally as

\begin{equation}
\tau_K := \frac{h_K^2}{6 \sigma} \widetilde{\xi}(Da_K)
\end{equation}

where $\widetilde{\xi}$ is approximated asymptotically by

\begin{equation}
\xi(Da_K) :=
\left\{
\begin{aligned}
&1, &Da_K \geq 8, \\
&0.064 Da_K + 0.49, &1 \leq Da_K < 8 \\
&0, &Da_K < 1
\end{aligned}
\right.
\end{equation}

See the aforementioned paper for further details.

### 1D GGLS

To simplify to 1D, just remember that the gradient reduces to $\nabla (\bullet) \equiv \dfrac{d (\bullet)}{d x}$, so $\nabla u = u'$. Thus,

\begin{align}
&R(u,v) := \sum_{K}\tau_K \left(\sigma u' -k u''', \sigma v' -k v''' \right)_K \\
&S(v) := \sum_{K}\tau_K \left(f', \sigma v' -k v''' \right)_K
\end{align}

## Revisiting the FEniCS implementation

Finally, we can stabilize the previous Continuous Galerkin FEM approach with GGLS.

* First, we need to define the stabilizing parameter and its related quantities

```python
h = CellDiameter(mesh)
Da_k = (sigma*pow(h,2))/(6.0*k)
eps = conditional(ge(Da_k,8),1,conditional(ge(Da_k,1),0.064*Da_k+0.49,0))
tau = (eps*(pow(h,2)))/(6.0*sigma)
```

* Now we add the stabilizing terms to the classical Galerkin formulation

```python
a += inner(grad(sigma*u_h - k*div(grad(u_h))), tau*grad(sigma*v_h - k*div(grad(v_h))))*dx
L += inner(grad(f), tau*grad(sigma*v_h - k*div(grad(v_h))))*dx
```

* Redefining the problem

```python
u_sol = Function(V)
problem = LinearVariationalProblem(a, L, u_sol, dirichlet_condition)
```

* Solving the stabilized problem

```python
solver = LinearVariationalSolver(problem)
solver.solve()
```

* Now we plot the results to check

```python
plot(u_sol, marker='x', label='Approx')
plot(u_e, label='Exact')

# Font settings for the plot
plt.rc('text',usetex=True)
plt.rc('font', size=14)

# Plotting
plt.xlim(x_left, x_right)  # x-axis limits
plt.ylim(np.min(u_sol.vector().get_local()), 1.05*np.max(u_e.vector().get_local()))  # y-axis limits
plt.grid(True, linestyle='--')  # enable the plot grid
plt.xlabel(r'$x$')  # x-axis label
plt.ylabel(r'$u(x)$')  # y-axis label
plt.legend(loc='best',borderpad=0.5)  # enable the legend at the best detected location
plt.show()  # display the plot
```

Now things look right.

## Another stabilization: Galerkin Least Squares (GLS)

An alternative stabilization is the very well-known Galerkin Least-Squares method, or GLS for short. This method was introduced by Hughes, Franca and Hulbert (see the paper "A new finite element formulation for computational fluid dynamics: VIII. The Galerkin/least-squares method for advective–diffusive equations" for further details). The formulation for the reaction-diffusion problem is based on a remark provided in the Franca and Do Carmo GGLS paper.
It is a very similar approach to the GGLS method in how it modifies the classical Galerkin formulation. Now, we have the following terms to add:

\begin{align}
&R_{GLS}(u,v) := \sum_{K}\tau_K \left(\sigma u -k \Delta u, \sigma v -k \Delta v \right)_K \\
&S_{GLS}(v) := \sum_{K}\tau_K \left(f, \sigma v -k \Delta v \right)_K
\end{align}

which trivially extends to 1D as

\begin{align}
&R_{GLS}(u,v) := \sum_{K}\tau_K \left(\sigma u -k u'', \sigma v -k v''\right)_K \\
&S_{GLS}(v) := \sum_{K}\tau_K \left(f, \sigma v -k v'' \right)_K
\end{align}

### 1D GLS implementation in FEniCS

For testing purposes, the stabilization parameter and its related quantities can be kept pretty much the same as in GGLS. Minor changes are necessary, but we will solve a separate variational problem just to compare the methods.

* The classical Continuous Galerkin terms

```python
a_GLS = k*inner(grad(v_h), grad(u_h))*dx + sigma*inner(u_h, v_h)*dx
L_GLS = f*v_h*dx
```

* Adding the GLS terms

```python
a_GLS += tau*inner((sigma*u_h - k*div(grad(u_h))), sigma*v_h - k*div(grad(v_h)))*dx
L_GLS += tau*inner(f, sigma*v_h - k*div(grad(v_h)))*dx
```

* Now we solve the problem with GLS

```python
u_sol_gls = Function(V)
problem = LinearVariationalProblem(a_GLS, L_GLS, u_sol_gls, dirichlet_condition)
solver = LinearVariationalSolver(problem)
solver.solve()
```

* Then we plot:

```python
plot(u_sol, marker='.', label='GGLS')
plot(u_sol_gls, marker='x', label='GLS')
plot(u_e, label='Exact')

# Font settings for the plot
plt.rc('text',usetex=True)
plt.rc('font', size=14)

# Plotting
plt.xlim(x_left, x_right)  # x-axis limits
plt.ylim(np.min(u_sol.vector().get_local()), 1.02*np.max(u_sol_gls.vector().get_local()))  # y-axis limits
plt.grid(True, linestyle='--')  # enable the plot grid
plt.xlabel(r'$x$')  # x-axis label
plt.ylabel(r'$u(x)$')  # y-axis label
plt.legend(loc='best',borderpad=0.5)  # enable the legend at the best detected location
plt.show()  # display the plot
```

The above result confirms what we expected: GLS cannot control the spurious oscillations when $Da>>1$ and $p = 1$, while GGLS succeeds by controlling the $H^1$-seminorm.
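To put a number on that visual comparison, a rough quantitative check can be made with FEniCS's `errornorm` (this is my own addition; keep in mind that the "exact" solution used here is itself only the small-$k$ asymptotic limit and is discontinuous at the boundary, so the values are indicative rather than rigorous):

```python
# Rough error measures against the asymptotic exact solution (GGLS solution is u_sol, GLS is u_sol_gls)
for name, uh in [('GGLS', u_sol), ('GLS', u_sol_gls)]:
    e_L2 = errornorm(sol_exact, uh, norm_type='L2')    # L2 norm of the error
    e_H10 = errornorm(sol_exact, uh, norm_type='H10')  # H1 seminorm of the error
    print(name, 'L2 error:', e_L2, '| H1-seminorm error:', e_H10)
```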
50da145191df8a717510383c038419b408dc7b27
98,112
ipynb
Jupyter Notebook
fem/fenics/reaction-diffusion1D.ipynb
volpatto/quick-ipython-notebooks
4c97f0c8fbf918a76f74d8198720562538eaf5a2
[ "MIT" ]
1
2020-06-09T16:49:11.000Z
2020-06-09T16:49:11.000Z
fem/fenics/reaction-diffusion1D.ipynb
volpatto/quick-ipython-notebooks
4c97f0c8fbf918a76f74d8198720562538eaf5a2
[ "MIT" ]
null
null
null
fem/fenics/reaction-diffusion1D.ipynb
volpatto/quick-ipython-notebooks
4c97f0c8fbf918a76f74d8198720562538eaf5a2
[ "MIT" ]
null
null
null
115.15493
27,176
0.862667
true
4,816
Qwen/Qwen-72B
1. YES 2. YES
0.859664
0.774583
0.665881
__label__eng_Latn
0.924724
0.385396
# Praca domowa - Dominik Stańczak ```python import sympy sympy.init_printing() t, lambda3a, lambdaa12, N4, N12, N16, dN4, dN12, dN16, dt = sympy.symbols('t, lambda_3a, lambda_a12, N4, N12, N16, dN4, dN12, dN16, dt', real=True) eqs = [ sympy.Eq(dN4/dt, -3*lambda3a * N4 **3 - lambdaa12 * N4 * N12), sympy.Eq(dN12/dt, lambda3a * N4 **3 - lambdaa12 * N4 * N12), sympy.Eq(dN16/dt, lambdaa12 * N4 * N12) ] eqs ``` ```python m, rho = sympy.symbols('m, rho', real=True) X4, X12, X16, dX4, dX12, dX16 = sympy.symbols('X4, X12, X16, dX4, dX12, dX16', real=True) Xeqs = [ sympy.Eq(X4, m/rho*4*N4), sympy.Eq(X12, m/rho*12*N12), sympy.Eq(X16, m/rho*16*N16), ] Xeqs ``` ```python subs = {X4: dX4, X12: dX12, X16: dX16, N4: dN4, N12: dN12, N16: dN16} dXeqs = [eq.subs(subs) for eq in Xeqs] dXeqs ``` ```python full_conservation = [sympy.Eq(X4 + X12 + X16, 1), sympy.Eq(dX4 + dX12 + dX16, 0)] full_conservation ``` ```python all_eqs = eqs + Xeqs + dXeqs + full_conservation all_eqs ``` ```python X_all_eqs = [eq.subs(sympy.solve(Xeqs, [N4, N12, N16])).subs(sympy.solve(dXeqs, [dN4, dN12, dN16])) for eq in eqs] + [full_conservation[1]] X_all_eqs ``` ```python solutions = sympy.solve(X_all_eqs, [dX4, dX12, dX16]) dX12dX4 = solutions[dX12]/solutions[dX4] dX12dX4 ``` ```python q = sympy.symbols('q', real=True) dX12dX4_final = dX12dX4.subs({lambdaa12*m: q * lambda3a * rho}).simplify() dX12dX4_final ``` ```python fX12 = sympy.Function('X12')(X4) diffeq = sympy.Eq(fX12.diff(X4), dX12dX4_final.subs(X12, fX12)) diffeq ``` ```python dX16dX4 = solutions[dX16]/solutions[dX4] dX16dX4 ``` ```python dX16dX4_final = dX16dX4.subs({lambdaa12*m: q * lambda3a * rho}).simplify() dX16dX4_final ``` ```python derivatives_func = sympy.lambdify((X4, X12, X16, q), [dX12dX4_final, dX16dX4_final]) derivatives_func(1, 0, 0, 1) ``` ```python def f(X, X4, q): return derivatives_func(X4, *X, q) f([0, 0], 1, 1) ``` ```python import numpy as np from scipy.integrate import odeint import matplotlib.pyplot as plt X4 = np.linspace(1, 0, 1000) q_list = np.logspace(-3, np.log10(2), 500) results = [] # fig, (ax1, ax2) = plt.subplots(2, sharex=True, figsize=(10, 8)) # ax1.set_xlim(0, 1) # ax2.set_xlim(0, 1) # ax1.set_ylim(0, 1) # ax2.set_ylim(0, 1) for q in q_list: X = odeint(f, [0, 0], X4, args=(q,)) X12, X16 = X.T # ax1.plot(X4, X12, label=f"q: {q:.1f}") # ax2.plot(X4, X16, label=f"q: {q:.1f}") # ax2.set_xlabel("X4") # ax1.set_ylabel("X12") # ax2.set_ylabel("X16") # plt.plot(X4, X16) # plt.legend() results.append(X[-1]) results = np.array(results) ``` ```python X12, X16 = results.T plt.figure(figsize=(10, 10)) plt.plot(q_list, X12, label="X12") plt.plot(q_list, X16, label="X16") plt.xlabel("q") plt.xscale("log") plt.ylabel("X") plt.legend(loc='best') plt.xlim(q_list.min(), q_list.max()); plt.grid() plt.savefig("Reacts.png") ```
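One quick consistency check on the integration above (my own addition, not part of the original homework): since $dX_4 + dX_{12} + dX_{16} = 0$ and each trajectory ends at $X_4 = 0$, the stored end-point values should satisfy $X_{12} + X_{16} \approx 1$ for every $q$, up to the ODE solver tolerance. This can be verified directly on `results`:

```python
# Conservation check: at the end of each trajectory X4 = 0, so X12 + X16 should be close to 1
totals = results.sum(axis=1)
print(totals.min(), totals.max())   # both values should be near 1
```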
b6276353fad9cee72fadd49d73eef54341751ff5
82,607
ipynb
Jupyter Notebook
NuclearReactsOnly.ipynb
StanczakDominik/FUW-AstroI-notatki
f99d639b3264a79ddcd291ca9c475718acb82d5d
[ "CC-BY-4.0" ]
null
null
null
NuclearReactsOnly.ipynb
StanczakDominik/FUW-AstroI-notatki
f99d639b3264a79ddcd291ca9c475718acb82d5d
[ "CC-BY-4.0" ]
null
null
null
NuclearReactsOnly.ipynb
StanczakDominik/FUW-AstroI-notatki
f99d639b3264a79ddcd291ca9c475718acb82d5d
[ "CC-BY-4.0" ]
null
null
null
169.276639
31,872
0.869612
true
1,254
Qwen/Qwen-72B
1. YES 2. YES
0.951142
0.888759
0.845336
__label__eng_Latn
0.096565
0.802332
<a href="https://colab.research.google.com/github/sergiorolnic/github-slideshow/blob/master/Untitled11.ipynb" target="_parent"></a> # Esempi ## prime due es ```python #Scrivere un programma Python per calcolare la distanza tra due punti import math p1=[4,0] p2=[6,6] distanza= math.sqrt((p1[0]-p2[0])**2+(p1[1]-p2[1])**2) Creare una lista con 100 elementi uguali a 0. lista=[0]*100 print(lista) ''' Esercizio 20 Data una lista annidata a=[[2,1,3],[4,5,6],[7,8,9]], scrivere un programma Python che la trasformi nella lista semplice ottenuta con i suoi valori, cioè a1=[2,1,3,4,5,6,7,8,9] ''' a=[[2,1,3],[4,5,6],[7,8,9]] a1=a[0]+a[1]+a[2] print(a1) z = np.arange(0,30.5,0.5) print("Numpy Array z ",z) t = np.linspace(1,2,100) ps= np.dot(a,b) print("Prodotto scalare tra a e b ",ps) D=np.diag(a) a=1 c=1 k=np.arange(1,9) b=10.0**k; Delta= b**2-4*a*c ''' La formula del calcolo della soluzione x1 può dare problemi numerici in base ai valori di b: quando b assume un valore molto elevato b**2-4ac è approssimabile a b**2, quindi nel calcolo di x1 si può veriificare il fenomento di cancellazione dovuto al calcolo -b+ sqrt(Delta) in quanto sqrt(Delta) in questo caso è approssimabile a b. Per la formula del calcolo della soluzione x2 questo problema non si verifica. ''' x1=(-b+np.sqrt(Delta))/(2*a) x2=(-b-np.sqrt(Delta))/(2*a) ''' Usiamo una formula alternativa, nota in algebra, per il calcolo della soluzione x1 a partire dalla soluzione x2 ''' x1new=c/(a*x2) ------ ``` # Malcondizionamento sistema al variare di x ```python import numpy as np import numpy.linalg as npl import scipy.linalg as spl import matplotlib.pyplot as plt A=np.array([[3,5],[3.01,5.01]]) b=np.array([10, 1]) x= spl.solve(A,b) #soluzione del sistema: x=[-2255; 1355] # perturbo il coeff. di x della seconda equazione: dA=np.array([[0, 0], [0.01, 0]]) err_dati= npl.norm(dA,np.inf)/npl.norm(A,np.inf) #errore percentuale=0.12% print("Errore relativo sui dati Sistema 1 pert. in percentuale", err_dati*100,"%") # soluzione del sistema perturbato: x1=spl.solve(A+dA,b) err_rel_sol=npl.norm(x-x1,np.inf)/npl.norm(x,np.inf) #%errore percentuale=71.43% print("Errore relativo sulla soluzione Sistema 1 pert. in percentuale", err_rel_sol*100,"%") # quindi ad una piccola perturbazione sui dati corrisponde una grossa # perturbazione sui risultati --> problema mal condizionato """ Esercizio 4 """ import numpy as np import numpy.linalg as npl import scipy.linalg as spl #Esempi di sistemi malcondizionati A=np.array([[6, 63, 662.2],[63, 662.2, 6967.8],[662.2, 6967.8, 73393.5664]]) b=np.array([1.1, 2.33, 1.7]) KA= npl.cond(A,np.inf) x=spl.solve(A,b) #perturbare la matrice A1=A.copy() A1[0,0]=A[0,0]+0.01 x_per=spl.solve(A1,b) #Errore relativo sui dati err_dati=npl.norm(A-A1,np.inf)/npl.norm(A,np.inf) print("Errore relativo sui dati in percentuale ", err_dati*100,"%") err_rel_sol=npl.norm(x_per-x,np.inf)/npl.norm(x,np.inf) print("Errore relativo sulla soluzione in percentuale ", err_rel_sol*100,"%") ``` Errore relativo sui dati Sistema 1 pert. in percentuale 0.12468827930174566 % Errore relativo sulla soluzione Sistema 1 pert. 
in percentuale 71.42857142857439 % # Zeri ## esami ```python """ Ese1 """ import numpy as np import matplotlib.pyplot as plt from funzioni_zeri import newton_m, stima_ordine import sympy as sym from sympy.utilities.lambdify import lambdify from scipy.optimize import fsolve x=sym.symbols('x') fx= x-1/3*sym.sqrt(30*x-25) dfx=sym.diff(fx,x,1) print(dfx) x0=4 f=lambdify(x,fx,np) alfa=fsolve(f,x0) print("La funzione ha uno zero in ", alfa) fp=lambdify(x,dfx,np) fp_alfa=fp(alfa) print("La derivata prima in ", alfa, 'vale ',fp_alfa) x=np.linspace(5/6,25/6,100) plt.plot(x,f(x)) plt.plot(x,[0]*100) plt.plot(alfa,0,'ro') plt.show() #In alfa=1.66666667 si annula sia la funzione che la sua derivata prima, #la funzione ha in x=1.66666667 uno xero con molteplicita=2 m=2 tolx=1e-12 tolf=1e-12 nmax=100 #Metodo iterativo che converge quadraticamente al alfa: metodo di Newton Modificato con m=2 x1,it,xk= newton_m(f,fp,x0,m,tolx,tolf,nmax) #Verifico l'ordine di convergenza ordine=stima_ordine(xk,it) plt.plot(range(it),np.abs(xk)) plt.show() #Il metodo non converge se scelgo come iterato iniziale x0=5/6, #perchè la derivata prima in 5/6 diverge va a -infinito ``` ```python # -*- coding: utf-8 -*- """ Created on Mon May 24 19:09:59 2021 @author: damia """ import numpy as np import matplotlib.pyplot as plt from funzioni_zeri import iterazione import sympy as sym from sympy.utilities.lambdify import lambdify n=35 u1=np.zeros((n,),dtype=float) u2=np.zeros((n,),dtype=float) u3=np.zeros((n,),dtype=float) for i in range(1,n+1): u1[i-1]=15*((3/5)**(i)+1)/(5*(3/5)**(i)+3) u2[0]=4 for i in range(1,n): u2[i]=8-15/u2[i-1] u3[0]=4 u3[1]=17/4 for i in range(3,n+1): u3[i-1]=108-815/u3[i-2]+1500/(u3[i-2]*u3[i-3]) plt.plot(range(n),u2) plt.title('Formula 2') plt.show() plt.plot(range(n),u3) plt.title('Formula 3') plt.show() err_rel2=np.abs(u2-u1)/np.abs(u1) err_rel3=np.abs(u3-u1)/np.abs(u1) plt.semilogy(range(n),err_rel2,range(n),err_rel3) plt.legend(['Errore relativo formula 2', 'Errore relativo formula 3']) plt.show() g1=lambda x: 8-15/x g2=lambda x: 108-815/x+ 1500/(x**2); x=sym.symbols('x') #Definisco funzione g1 g1x= 8-15/x dg1x=sym.diff(g1x,x,1) dg1=lambdify(x,dg1x,np) g1=lambdify(x,g1x,np) #Definisco funzione g2 g2x=108-815/x+1500/(x**2) dg2x=sym.diff(g2x,x,1) dg2=lambdify(x,dg2x,np) g2=lambdify(x,g2x,np) x=np.linspace(4,100 ,100) plt.plot(x,g1(x)) plt.plot(x,x) plt.legend(['g1(x)','y=x']) plt.show() #La g2 interseca la bisettrice in 2 punti, ha due punti fissi (x=5 ed 100) , ma la derivata prima della g2 # non soddisfa le ipotesi del teorema di convergenza locale in un intorno del primo punto fisso,5 plt.plot(x,g2(x)) plt.plot(x,x) plt.legend(['g2(x)','y=x']) plt.show() tolx=1e-5 nmax=100 x0=4 x1,xk,it=iterazione(g1,x0,tolx,nmax) x2,xk2,it2=iterazione(g2,x0,tolx,nmax) print("Punto fisso della funzione g1 --> ",x1) print("Punto fisso della funzione g1 --> ",x2) #Visualizziamo la derivata prima di g1 in un intorno di 5, sono soddisfatte le iptesi del teorema di convergenza #locale xx=np.linspace(2,6,100) plt.semilogy(xx,dg1(xx)) plt.plot([2,6],[1,1]) plt.plot([2,6],[-1,-1]) plt.legend(['derivata prima di g1 in un intorno di 5 ', 'y=1','y=-1']) plt.show() xx=np.linspace(2,6,100) plt.semilogy(xx,dg2(xx)) plt.plot([2,6],[1,1]) plt.plot([2,6],[-1,-1]) plt.legend(['derivata prima di g2 in un intorno di 5 ', 'y=1','y=-1']) plt.show() xx=np.linspace(95,105,100) plt.plot(xx,dg2(xx)) plt.plot([95,105],[1,1]) plt.plot([95,105],[-1,-1]) plt.legend(['Derivata prima di g2 in un intorno di 100','y=1','y=-1']) ``` ```python import 
numpy as np import sympy as sym import funzioni_zeri import matplotlib.pyplot as plt from sympy.utilities.lambdify import lambdify from scipy.optimize import fsolve tolx=1e-7 nmax=1000 f= lambda x: np.tan(3/2*x)-2*np.cos(x)-x*(7-x) #Utilizzo il metodo fsolve di scipy.optimize per calcolare lo zero alfa della funzione f, #prende in input l'iterato iniziale x0 x0=0.0 alfa=fsolve(f, x0) print("Lo zero della funzione e' ",alfa) #Disegno: l'asse x e la funzione f valutata in un intervallo opportuno xx=np.linspace(-1.0,1.0,100) plt.plot(xx,0*xx,xx,f(xx),alfa,0,'ro') plt.legend(['Funzione f', 'zero']) plt.show() ''' Definisco la funzione g in formato simbolico perchè poi utilizzo la funzione diff di sympy per calcolare l'espressione analitica della derivata prima' ''' x=sym.symbols('x') #Considero la funzione g indicata dalla traccia del compito gx=sym.tan(3/2*x)-2*sym.cos(x)-x*(6-x) #Disegno la funzione g(x) e la bisettrice y=x g=lambdify(x,gx,np) plt.plot(xx,xx,xx,g(xx)) plt.title('funzione g(x) e y=x') plt.show() #Calcolo la derivata prima di gx espressione simbolica tramite la funzione diff del modulo sym dgx=sym.diff(gx,x,1) dg=lambdify(x,dgx,np) #Disegno la funzione dg(x) #Posso giustifcare la convergenza del procedimento iterativo guardando la derivata prima di g(x) #in un intorno della soluzione: il metodo genera una successione di iterati convergenti alla radice alfa # ed appartenenti a questo intorno se |g'(x)|< 1 in un intorno della soluzione plt.plot(xx ,dg(xx )) plt.plot(alfa,0,'ro') #Disegno la retta y=1 plt.plot([-1,1],[1,1],'--') #Disegno la retta y=-1 plt.plot([-1,1],[-1,-1],'--') plt.title('funzione dg(x) proposta dalla traccia - Ipotesi per la convergenza non soddisfatte') plt.legend(['Grafico derivata prima di g1 (x)', 'Zero', 'Retta y=1', 'Retta y=-1']) plt.show() #Dal grafico vedo che per la funzione g proposta dalla traccia #non sono soddisfatte le ipotesi del teorema di convergenza locale #Ricavo la funzione gx per la quale ci sia convergenza gx1= (sym.tan(3.0/2.0*x)-2*sym.cos(x)+x**2)/7 #Disegno la funzione g(x) e la bisettrice y=x g1=lambdify(x,gx1,np) plt.plot(xx,xx,xx,g1(xx)) plt.title('funzione g1(x) ricavata e y=x') plt.show() #Calcolo la derivata prima di gx espressione simbolica tramite la funzione diff del modulo sym dgx1=sym.diff(gx1,x,1) dg1=lambdify(x,dgx1,np) #Disegno la funzione dg1(x) #Posso giustifcare la convergenza del procedimento iterativo guardando la derivata prima di g(x) #in un intorno della soluzione: il metodo genera una successione di iterati convergenti alla radice alfa # ed appartenenti a questo intorno se |g'(x)|< 1 in un intorno della soluzione plt.plot(xx ,dg1(xx )) plt.plot(alfa,0,'ro') #Disegno la retta y=1 plt.plot([-1,1],[1,1],'--') #Disegno la retta y=-1 plt.plot([-1,1],[-1,-1],'--') plt.title('funzione dg1(x) Ricavata - Ipotesi di convergenza soddisfatte') plt.legend(['Grafico derivata prima di g1 (x)', 'Zero', 'Retta y=1', 'Retta y=-1']) plt.show() ''' Dal grafico vedo che per la funzione g ceh ho ricavato soddisfatte le ipotesi del teorema di convergenza locale: esiste un intorno della soluzione per cui |g'(x)|<1 ''' #Utilizzo il metodo di iterazione funzionale per calcolare il punto fisso di g1 x1,it,xk=funzioni_zeri.iterazione(g1,x0,tolx,nmax) print('iterazioni= {:d}, soluzione={:e} \n\n'.format(it,x1)) #Calcolo l'ordine del metodo ordine_iter= funzioni_zeri.stima_ordine(xk,it) #Essendo il metodo con ordine di convergenza lineare, la costante asintotica di convergenza è data #da |g'(alfa)| dove alfa è la radice. 
print("Iterazione it={:d}, ordine di convergenza {:e}".format(it,ordine_iter )) plt.plot(range(it+1),xk) ``` ## Newton, Newton modificato(molteplicità) e stima ordine ```python import math import numpy as np import sympy as sym import funzioni_zeri import matplotlib.pyplot as plt from sympy.utilities.lambdify import lambdify tolx=1e-12; tolf=1e-12; x=sym.symbols('x') fx=fx=x**3+x**2-33*x+63 dfx=sym.diff(fx,x,1) #Trasformo in numeriche la funzione e la sua derivata f=lambdify(x,fx,np) df=lambdify(x,dfx,np) #Disegno nello stesso grafico l'asse x e la funzione f valutata in un intervallo opportuno, [-10,10] z=np.linspace(-10,10,100) plt.plot(z,0*z,z,f(z),'r-') nmax=500 x0=1 xNew,itNew,xkNew=funzioni_zeri.newton(f,df,x0,tolx,tolf,nmax) print('X0= {:e} , zero Newton= {:e} con {:d} iterazioni \n'.format(x0,xNew,itNew)) ordine_New =funzioni_zeri.stima_ordine(xkNew,itNew) print("Newton it={:d}, ordine di convergenza {:e}".format(itNew,ordine_New)) #Utilizzando il metodo di Newton modifica e ponendo m uguale alla molteplicità della radice # si ottiene un metodo con ordine di convergenza 2 m=2 xNew_m,itNew_m,xkNew_m=funzioni_zeri.newton_m(f,df,x0,m,tolx,tolf,nmax) print('X0= {:e} , zero Newton Mod= {:e} con {:d} iterazioni \n'.format(x0,xNew_m,itNew_m)) ordine_New_m =funzioni_zeri.stima_ordine(xkNew_m,itNew_m) print("Newton Mod it={:d}, ordine di convergenza {:e}".format(itNew_m,ordine_New_m)) ``` ## Costate asintotica ```python """ esercizio 6 """ import numpy as np import sympy as sym import funzioni_zeri import matplotlib.pyplot as plt from sympy.utilities.lambdify import lambdify tolx=1.e-7 nmax=1000 f= lambda x: x**3+4*x**2-10 #Disegno: l'asse x e la funzione f valutata in un intervallo opportuno xx=np.linspace(0.0,1.6,100) plt.plot(xx,0*xx,xx,f(xx)) plt.show() x0=1.5; x=sym.symbols('x') #Definisco le possibili espressioni della funzione f(x)=0 nella forma x=g(x) #Per come ricavare le diverse g(X) , analizzare file pdf su virtuale con spiegazione teorica dell'esercizio 6 #................................ #gx=sym.sqrt(10/(x+4)); #p=1, 0.127229401770925 gx=1/2*sym.sqrt(10-x**3) #p=1, C=0.511961226874885 #gx=(10+x)/(x**2+4*x+1); #p=1, C=0.983645643784931 #gx=sym.sqrt(10/x-4*x) # non converge #Disegno la funzione g(x) e la bisettrice y=x g=lambdify(x,gx,np) plt.plot(xx,xx,xx,g(xx)) plt.title('funzione g(x) e y=x') plt.show() #Calcolo la derivata prima di gx espressione simbolica tramite la funzione diff del modulo sym dgx=sym.diff(gx,x,1) dg=lambdify(x,dgx,np) #Disegno la funzione dg(x) #Posso giustifcare la convergenza del procedimento iterativo guardando la derivata prima di g(x) #in un intorno della soluzione: il metodo genera una successione di iterati convergenti alla radice alfa # ed appartenenti a questo intorno se |g'(x)|< 1 in un intorno della soluzione plt.plot(xx,dg(xx)) plt.title('funzione dg(x)') plt.show() x1,it,xk=funzioni_zeri.iterazione(g,x0,tolx,nmax) print('iterazioni= {:d}, soluzione={:e} \n\n'.format(it,x1)) #Calcolo l'ordine del metodo ordine_iter= funzioni_zeri.stima_ordine(xk,it) #Essendo il metodo con ordine di convergenza lineare, la costante asintotica di convergenza è data #da |g'(alfa)| dove alfa è la radice. 
C=abs(dg(x1)) print("Iterazione it={:d}, ordine di convergenza {:e}, Costante asintotica di convergenza {:e}".format(it,ordine_iter,C)) plt.plot(xx,xx,'k-',xx,g(xx)) plt.title("abs(g'(alfa))="+str(C)) Vx=[] Vy=[] for k in range(it): Vx.append(xk[k]) Vy.append(xk[k]) Vx.append(xk[k]) Vy.append(xk[k+1]) Vy[0]=0 plt.plot(Vx,Vy,'r',xk,[0]*(it+1),'or-') plt.show() # Si osserva che a parità di ordine di convergenza, più piccola è la costante asintotica di convergenza, #maggiore è la velocità del metodo. ``` ## Funzioni ```python import numpy as np import math ''' Il core Python non possiede la funzione sign. La funzione copysign(a,b) del modulo math restituisce un valore numerico che ha il valore assoluto di a e segno di b. Per avere il segno di un valore numerico b si può usare math.copysign(1,b) che resistuisce 1 se b>0, -1 se b<0, 0 se b è zero ''' def sign(x): return math.copysign(1, x) #Bisezione def bisez(fname,a,b,tol): eps=np.spacing(1) # np.spacing(x) Restituisce la distanza tra x e il numero adiacente più vicino. # np.spacing(1) restituisce quindi l' eps di macchina. fa=fname(a) fb=fname(b) if sign(fa)==sign(fb): print('intervallo non corretto --Metodo non applicabile') return [],0,[] else: maxit=int(math.ceil(math.log((b-a)/tol)/math.log(2))) print('n. di passi necessari=',maxit,'\n'); xk=[] it=0 #while it<maxit and abs(b-a)>=tol+eps*max(abs(a),abs(b)): while it<maxit and abs(b-a)>=tol: c=a+(b-a)*0.5 #formula stabile per il calcolo del punto medio dell'intervallo xk.append(c) it+=1 if c==a or c==b: break fxk=fname(c) if fxk==0: break elif sign(fxk)==sign(fa): a=c fa=fxk elif sign(fxk)==sign(fb): b=c fb=fxk x=c return x,it,xk def regula_falsi(fname,a,b,tol,nmax): #Regula Falsi eps=np.spacing(1) xk=[] fa=fname(a) fb=fname(b) if sign(fa)==sign(fb): print('intervallo non corretto --Metodo non applicabile') return [],0,[] else: it=0 fxk=fname(a) while it<nmax and abs(b-a)>=tol+eps*max(abs(a),abs(b)) and abs(fxk)>=tol : x1=a-fa*(b-a)/(fb-fa); xk.append(x1) it+=1 fxk=fname(x1); if fxk==0: break elif sign(fxk)==sign(fa): a=x1; fa=fxk; elif sign(fxk)==sign(fb): b=x1; fb=fxk; if it==nmax : print('Regula Falsi: Raggiunto numero max di iterazioni') return x1,it,xk def corde(fname,fpname,x0,tolx,tolf,nmax): #Corde xk=[] m=fpname(x0) #m= Coefficiente angolare della tangente in x0 fx0=fname(x0) d=fx0/m x1=x0-d fx1=fname(x1) xk.append(x1) it=1 while it<nmax and abs(fx1)>=tolf and abs(d)>=tolx*abs(x1) : x0=x1 fx0=fname(x0) d=fx0/m ''' #x1= ascissa del punto di intersezione tra la retta che passa per il punto (xi,f(xi)) e ha pendenza uguale a m e l'asse x ''' x1=x0-d fx1=fname(x1) it=it+1 xk.append(x1) if it==nmax: print('raggiunto massimo numero di iterazioni \n') return x1,it,xk #Secanti def secanti(fname,xm1,x0,tolx,tolf,nmax): xk=[] fxm1=fname(xm1); fx0=fname(x0); d=fx0*(x0-xm1)/(fx0-fxm1) x1=x0-d; xk.append(x1) fx1=fname(x1); it=1 while it<nmax and abs(fx1)>=tolf and abs(d)>=tolx*abs(x1): xm1=x0 x0=x1 fxm1=fname(xm1) fx0=fname(x0) d=fx0*(x0-xm1)/(fx0-fxm1) x1=x0-d fx1=fname(x1) xk.append(x1); it=it+1; if it==nmax: print('Secanti: raggiunto massimo numero di iterazioni \n') return x1,it,xk def newton(fname,fpname,x0,tolx,tolf,nmax): #Newton xk=[] fx0=fname(x0) dfx0=fpname(x0) if abs(dfx0)>np.spacing(1): d=fx0/dfx0 x1=x0-d fx1=fname(x1) xk.append(x1) it=0 else: print('Newton: Derivata nulla in x0 - EXIT \n') return [],0,[] it=1 while it<nmax and abs(fx1)>=tolf and abs(d)>=tolx*abs(x1): x0=x1 fx0=fname(x0) dfx0=fpname(x0) if abs(dfx0)>np.spacing(1): d=fx0/dfx0 x1=x0-d fx1=fname(x1) xk.append(x1) 
it=it+1 else: print('Newton: Derivata nulla in x0 - EXIT \n') return x1,it,xk if it==nmax: print('Newton: raggiunto massimo numero di iterazioni \n'); return x1,it,xk def stima_ordine(xk,iterazioni): p=[] for k in range(iterazioni-3): p.append(np.log(abs(xk[k+2]-xk[k+3])/abs(xk[k+1]-xk[k+2]))/np.log(abs(xk[k+1]-xk[k+2])/abs(xk[k]-xk[k+1]))); ordine=p[-1] return ordine #Newton Modificato def newton_m(fname,fpname,x0,m,tolx,tolf,nmax): eps=np.spacing(1) xk=[] #xk.append(x0) fx0=fname(x0) dfx0=fpname(x0) if abs(dfx0)>eps: d=fx0/dfx0 x1=x0-m*d fx1=fname(x1) xk.append(x1) it=0 else: print('Newton: Derivata nulla in x0 \n') return [],0,[] it=1 while it<nmax and abs(fx1)>=tolf and abs(d)>=tolx*abs(x1): x0=x1 fx0=fname(x0) dfx0=fpname(x0) if abs(dfx0)>eps: d=fx0/dfx0 x1=x0-m*d fx1=fname(x1) xk.append(x1) it=it+1 else: print('Newton Mod: Derivata nulla \n') return x1,it,xk if it==nmax: print('Newton Mod: raggiunto massimo numero di iterazioni \n'); return x1,it,xk def iterazione(gname,x0,tolx,nmax): xk=[] xk.append(x0) x1=gname(x0) d=x1-x0 xk.append(x1) it=1 while it<nmax and abs(d)>=tolx*abs(x1) : x0=x1 x1=gname(x0) d=x1-x0 it=it+1 xk.append(x1) if it==nmax: print('Raggiunto numero max di iterazioni \n') return x1, it,xk ``` # Sistemi Lineari e QR ## esame ```python # -*- coding: utf-8 -*- """ Created on Tue Dec 7 12:09:22 2021 @author: HWRUser """ import numpy as np import Sistemi_lineari as sl # CHOLESKY --> autovalori > 0 e simmetrica # LU --> determinante delle sottomatrici != 0 A = np.matrix('10 -4 4 0; -4 10 0 2; 4 0 10 2; 0 2 2 0') print(np.linalg.eigvals(A)) #A ha autovalore negativo, quindi non ammette cholesky B = np.matrix('5 -2 2 0; -2 5 0 1; 2 0 5 1; 0 1 1 5') print(np.linalg.eigvals(B)) #B ha autovalori positivi e simmetrica, ammette cholesky for i in range(1, 5): if np.linalg.det(A[0:i, 0:i]) == 0: print('Errore: determinante di A = 0') #A ha tutte le sottomatrici con determinante diverso da 0, ammette LU for i in range(1, 5): if np.linalg.det(B[0:i, 0:i]) == 0: print('Errore: determinante di B = 0') #B ha tutte le sottomatrici con determinante diverso da 0, ammette LU Ainv = np.linalg.inv(A) Binv = np.linalg.inv(B) P, L, U, flag = sl.LU_nopivot(A) print(P) print(L) print(U) detA = np.prod(np.diag(U)) detInvA = 1/detA P, L, U, flag = sl.LU_nopivot(B) print(P) print(L) print(U) detB = np.prod(np.diag(U)) detInvB = 1/detB # PA = LU --> det(P) * det(A) = det(L) * det(U) ``` ## Funzioni ```python import numpy as np def Lsolve(L,b): """ Risoluzione con procedura forward di Lx=b con L triangolare inferiore Input: L matrice triangolare inferiore b termine noto Output: x: soluzione del sistema lineare flag= 0, se sono soddisfatti i test di applicabilità 1, se non sono soddisfatti """ #test dimensione m,n=L.shape flag=0; if n != m: print('errore: matrice non quadrata') flag=1 x=[] return x, flag # Test singolarita' if np.all(np.diag(L)) != True: print('el. diag. 
nullo - matrice triangolare inferiore') x=[] flag=1 return x, flag # Preallocazione vettore soluzione x=np.zeros((n,1)) for i in range(n): s=np.dot(L[i,:i],x[:i]) #scalare=vettore riga * vettore colonna x[i]=(b[i]-s)/L[i,i] return x,flag def Usolve(U,b): """ Risoluzione con procedura backward di Rx=b con R triangolare superiore Input: U matrice triangolare superiore b termine noto Output: x: soluzione del sistema lineare flag= 0, se sono soddisfatti i test di applicabilità 1, se non sono soddisfatti """ #test dimensione m,n=U.shape flag=0; if n != m: print('errore: matrice non quadrata') flag=1 x=[] return x, flag # Test singolarita' if np.all(np.diag(U)) != True: print('el. diag. nullo - matrice triangolare superiore') x=[] flag=1 return x, flag # Preallocazione vettore soluzione x=np.zeros((n,1)) for i in range(n-1,-1,-1): s=np.dot(U[i,i+1:n],x[i+1:n]) #scalare=vettore riga * vettore colonna x[i]=(b[i]-s)/U[i,i] return x,flag def LUsolve(L,U,P,b): """ Risoluzione a partire da PA =LU assegnata """ Pb=np.dot(P,b) y,flag=Lsolve(L,Pb) if flag == 0: x,flag=Usolve(U,y) else: return [],flag return x,flag def LU_nopivot(A): """ % Fattorizzazione PA=LU senza pivot versione vettorizzata In output: L matrice triangolare inferiore U matrice triangolare superiore P matrice identità tali che LU=PA=A """ # Test dimensione m,n=A.shape flag=0; if n!=m: print("Matrice non quadrata") L,U,P,flag=[],[],[],1 return P,L,U,flag P=np.eye(n); U=A.copy(); # Fattorizzazione for k in range(n-1): #Test pivot if U[k,k]==0: print('elemento diagonale nullo') L,U,P,flag=[],[],[],1 return P,L,U,flag # Eliminazione gaussiana U[k+1:n,k]=U[k+1:n,k]/U[k,k] # Memorizza i moltiplicatori U[k+1:n,k+1:n]=U[k+1:n,k+1:n]-np.outer(U[k+1:n,k],U[k,k+1:n]) # Eliminazione gaussiana sulla matrice L=np.tril(U,-1)+np.eye(n) # Estrae i moltiplicatori U=np.triu(U) # Estrae la parte triangolare superiore + diagonale return P,L,U,flag def LU_nopivotv(A): """ % Fattorizzazione PA=LU senza pivot versione vettorizzata intermedia In output: L matrice triangolare inferiore U matrice triangolare superiore P matrice identità tali che LU=PA=A """ # Test dimensione m,n=A.shape flag=0; if n!=m: print("Matrice non quadrata") L,U,P,flag=[],[],[],1 return P,L,U,flag P=np.eye(n); U=A.copy(); # Fattorizzazione for k in range(n-1): #Test pivot if U[k,k]==0: print('elemento diagonale nullo') L,U,P,flag=[],[],[],1 return P,L,U,flag # Eliminazione gaussiana for i in range(k+1,n): U[i,k]=U[i,k]/U[k,k] # Memorizza i moltiplicatori U[i,k+1:n]=U[i,k+1:n]-U[i,k]*U[k,k+1:n] # Eliminazione gaussiana sulla matrice L=np.tril(U,-1)+np.eye(n) # Estrae i moltiplicatori U=np.triu(U) # Estrae la parte triangolare superiore + diagonale return P,L,U,flag def LU_nopivotb(A): """ % Fattorizzazione PA=LU senza pivot versione base In output: L matrice triangolare inferiore U matrice triangolare superiore P matrice identità tali che LU=PA=A """ # Test dimensione m,n=A.shape flag=0; if n!=m: print("Matrice non quadrata") L,U,P,flag=[],[],[],1 return P,L,U,flag P=np.eye(n); U=A.copy(); # Fattorizzazione for k in range(n-1): #Test pivot if U[k,k]==0: print('elemento diagonale nullo') L,U,P,flag=[],[],[],1 return P,L,U,flag # Eliminazione gaussiana for i in range(k+1,n): U[i,k]=U[i,k]/U[k,k] for j in range(k+1,n): # Memorizza i moltiplicatori U[i,j]=U[i,j]-U[i,k]*U[k,j] # Eliminazione gaussiana sulla matrice L=np.tril(U,-1)+np.eye(n) # Estrae i moltiplicatori U=np.triu(U) # Estrae la parte triangolare superiore + diagonale return P,L,U,flag def swapRows(A,k,p): A[[k,p],:] = A[[p,k],:] 
def LU_pivot(A): """ % Fattorizzazione PA=LU con pivot In output: L matrice triangolare inferiore U matrice triangolare superiore P matrice di permutazione tali che PA=LU """ # Test dimensione m,n=A.shape flag=0; if n!=m: print("Matrice non quadrata") L,U,P,flag=[],[],[],1 return P,L,U,flag P=np.eye(n); U=A.copy(); # Fattorizzazione for k in range(n-1): #Scambio di righe nella matrice U e corrispondente scambio nella matrice di permutazione per # tenere traccia degli scambi avvenuti #Fissata la colonna k-esima calcolo l'indice di riga p a cui appartiene l'elemento di modulo massimo a partire dalla riga k-esima p = np.argmax(abs(U[k:n,k])) + k if p != k: swapRows(P,k,p) swapRows(U,k,p) # Eliminazione gaussiana U[k+1:n,k]=U[k+1:n,k]/U[k,k] # Memorizza i moltiplicatori U[k+1:n,k+1:n]=U[k+1:n,k+1:n]-np.outer(U[k+1:n,k],U[k,k+1:n]) # Eliminazione gaussiana sulla matrice L=np.tril(U,-1)+np.eye(n) # Estrae i moltiplicatori U=np.triu(U) # Estrae la parte triangolare superiore + diagonale return P,L,U,flag def solve_nsis(A,B): # Test dimensione m,n=A.shape flag=0; if n!=m: print("Matrice non quadrata") return Y= np.zeros((n,n)) X= np.zeros((n,n)) P,L,U,flag= LU_nopivot(A) if flag==0: for i in range(n): y,flag=Lsolve(L,np.dot(P,B[:,i])) Y[:,i]=y.squeeze(1) x,flag= Usolve(U,Y[:,i]) X[:,i]=x.squeeze(1) else: print("Elemento diagonale nullo") X=[] return X ``` ## Es4 ```python import numpy as np import funzioni_Sistemi_lineari as fSl import scipy.linalg as spl import matplotlib.pyplot as plt xesatta=np.array([[2],[2]]) err_rel_nopivot=[] err_rel_pivot=[] for k in range(2,19,2): A= np.array([[10.0**(-k),1],[1,1]]) b=np.array([[2+10.0**(-k)],[4]]) P,L,U,flag, = fSl.LU_nopivot(A) if flag==0: x_nopivot,flag=fSl.LUsolve(L,U,P,b) else: print("Sistema non risolubile senza strategia pivotale") err_rel_nopivot.append(np.linalg.norm(x_nopivot-xesatta,1)/np.linalg.norm(xesatta,1)) P_pivot,Lpivot,Upivot,flagpivot, = fSl.LU_pivot(A) if flagpivot==0: x_pivot,flag=fSl.LUsolve(Lpivot,Upivot,P_pivot,b) else: print("Sistema non risolubile con strategia pivotale") err_rel_pivot.append(np.linalg.norm(x_pivot-xesatta,1)/np.linalg.norm(xesatta,1)) plt.semilogy(range(2,19,2),err_rel_nopivot,range(2,19,2),err_rel_pivot) plt.legend(['No pivot','Pivot']) plt.show() ``` ## esame qr ```python """ Created on Sat May 1 11:23:27 2021 @author: damia """ import numpy as np import matplotlib.pyplot as plt import scipy.linalg as spl #Funzioni necessarie def Usolve(U,b): """ Risoluzione con procedura backward di Rx=b con R triangolare superiore Input: U matrice triangolare superiore b termine noto Output: x: soluzione del sistema lineare flag= 0, se sono soddisfatti i test di applicabilità 1, se non sono soddisfatti """ #test dimensione m,n=U.shape flag=0; if n != m: print('errore: matrice non quadrata') flag=1 x=[] return x, flag # Test singolarita' if np.all(np.diag(U)) != True: print('el. diag. 
nullo - matrice triangolare superiore') x=[] flag=1 return x, flag # Preallocazione vettore soluzione x=np.zeros((n,1)) for i in range(n-1,-1,-1): s=np.dot(U[i,i+1:n],x[i+1:n]) #scalare=vettore riga * vettore colonna x[i]=(b[i]-s)/U[i,i] return x,flag def metodoQR(x,y,n): """ INPUT x vettore colonna con le ascisse dei punti y vettore colonna con le ordinate dei punti n grado del polinomio approssimante OUTPUT a vettore colonna contenente i coefficienti incogniti """ H=np.vander(x,n+1) Q,R=spl.qr(H) y1=np.dot(Q.T,y) a,flag=Usolve(R[0:n+1,:],y1[0:n+1]) return a #------------------------------------------------------ #Script #---------------------------------------------------- m=12 x=np.linspace(1900,2010,12) y=np.array([76.0,92.0,106.0,123.0,132.0,151.0,179.0,203.0,226.0,249.0,281.0,305.0]) xmin=np.min(x) xmax=np.max(x) xval=np.linspace(xmin,xmax,100) for n in range(1,4): a=metodoQR(x,y,n) residuo=np.linalg.norm(y-np.polyval(a,x))**2 print("Norma del residuo al quadrato",residuo) p=np.polyval(a,xval) plt.plot(xval,p) plt.legend(['n=1','n=2','n=3']) plt.plot(x,y,'o') ``` ## Metodo QR ```python import numpy as np import scipy.linalg as spl from funzioni_Sistemi_lineari import Usolve def metodoQR(x,y,n): """ INPUT x vettore colonna con le ascisse dei punti y vettore colonna con le ordinate dei punti n grado del polinomio approssimante OUTPUT a vettore colonna contenente i coefficienti incogniti """ H=np.vander(x,n+1) Q,R=spl.qr(H) y1=np.dot(Q.T,y) a,flag=Usolve(R[0:n+1,:],y1[0:n+1]) return a ``` ```python import numpy as np import matplotlib.pyplot as plt from funzioni_Approssimazione_MQ import metodoQR x = np.array([0.0004, 0.2507, 0.5008, 2.0007, 8.0013]) y = np.array([0.0007, 0.0162,0.0288, 0.0309, 0.0310]); #Calcolo della retta di regessione a=metodoQR(x,y,1) residuo=np.linalg.norm(y-np.polyval(a,x))**2 print("Norma al quadrato del residuo Retta di regressione",residuo) xmin=np.min(x) xmax=np.max(x) xval=np.linspace(xmin,xmax,100) p=np.polyval(a,xval) plt.plot(xval,p,'r-',x,y,'o') plt.legend(['Retta di regressione', 'Dati']) plt.show() #Calcolo della parabola di approssimazione nel senso dei minimi quuadrati a=metodoQR(x,y,2) residuo=np.linalg.norm(y-np.polyval(a,x))**2 print("Norma al quadrato del residuo Polinomio di approssimazione di grado 2",residuo) xmin=np.min(x) xmax=np.max(x) xval=np.linspace(xmin,xmax,100) p=np.polyval(a,xval) plt.plot(xval,p,'r-',x,y,'o') plt.legend(['Polinomio di approssimazione di grado 2', 'Dati']) plt.show() ``` # Interpolazione ## Funzione ```python import numpy as np def plagr(xnodi,k): """ Restituisce i coefficienti del k-esimo pol di Lagrange associato ai punti del vettore xnodi """ xzeri=np.zeros_like(xnodi) n=xnodi.size if k==0: xzeri=xnodi[1:n] else: xzeri=np.append(xnodi[0:k],xnodi[k+1:n]) num=np.poly(xzeri) den=np.polyval(num,xnodi[k]) p=num/den return p def InterpL(x, f, xx): """" %funzione che determina in un insieme di punti il valore del polinomio %interpolante ottenuto dalla formula di Lagrange. 
% DATI INPUT % x vettore con i nodi dell'interpolazione % f vettore con i valori dei nodi % xx vettore con i punti in cui si vuole calcolare il polinomio % DATI OUTPUT % y vettore contenente i valori assunti dal polinomio interpolante % """ n=x.size m=xx.size L=np.zeros((n,m)) for k in range(n): p=plagr(x,k) L[k,:]=np.polyval(p,xx) return np.dot(f,L) ``` ## Es ```python import numpy as np from funzioni_Interpolazione_Polinomiale import InterpL import matplotlib.pyplot as plt #nodi del problema di interpolazione T=np.array([-55, -45, -35, -25, -15, -5, 5, 15, 25, 35, 45, 55, 65]) L=np.array([3.7, 3.7,3.52,3.27, 3.2, 3.15, 3.15, 3.25, 3.47, 3.52, 3.65, 3.67, 3.52]) # punti di valutazione per l'interpolante xx=np.linspace(np.min(T),np.max(T),200); pol=InterpL(T,L,xx); pol42=InterpL(T,L,np.array([42])) pol_42=InterpL(T,L,np.array([-42])) plt.plot(xx,pol,'b--',T,L,'r*',42,pol42,'og',-42,pol_42,'og'); plt.legend(['interpolante di Lagrange','punti di interpolazione','stima 1', 'stima2']); plt.show() ``` # Integrazione ## esame ```python """ Created on Sat Jun 5 08:29:40 2021 @author: damia """ import numpy as np import matplotlib.pyplot as plt import math #funzioni per l'integrazione Simposon Composita e con ricerca automatica del numero di N #di sottointervalli def SimpComp(fname,a,b,n): h=(b-a)/(2*n) nodi=np.arange(a,b+h,h) f=fname(nodi) I=(f[0]+2*np.sum(f[2:2*n:2])+4*np.sum(f[1:2*n:2])+f[2*n])*h/3 return I def simptoll(fun,a,b,tol): Nmax=4096 err=1 N=1; IN=SimpComp(fun,a,b,N); while N<=Nmax and err>tol : N=2*N I2N=SimpComp(fun,a,b,N) err=abs(IN-I2N)/15 IN=I2N if N>Nmax: print('Raggiunto nmax di intervalli con simptoll') N=0 IN=[] return IN,N #Funzioni per l'interpolazione di Lagrange def plagr(xnodi,k): """ Restituisce i coefficienti del k-esimo pol di Lagrange associato ai punti del vettore xnodi """ xzeri=np.zeros_like(xnodi) n=xnodi.size if k==0: xzeri=xnodi[1:n] else: xzeri=np.append(xnodi[0:k],xnodi[k+1:n]) num=np.poly(xzeri) den=np.polyval(num,xnodi[k]) p=num/den return p def InterpL(x, f, xx): """" %funzione che determina in un insieme di punti il valore del polinomio %interpolante ottenuto dalla formula di Lagrange. 
% DATI INPUT % x vettore con i nodi dell'interpolazione % f vettore con i valori dei nodi % xx vettore con i punti in cui si vuole calcolare il polinomio % DATI OUTPUT % y vettore contenente i valori assunti dal polinomio interpolante % """ n=x.size m=xx.size L=np.zeros((n,m)) for k in range(n): p=plagr(x,k) L[k,:]=np.polyval(p,xx) return np.dot(f,L) # Script principale tol=1e-08 x=np.zeros((6,)) y=np.zeros((6,)) N=np.zeros((6,)) fig=1 #Funzione integranda f= lambda x: 2/math.pi*(5.5*(1-np.exp(-0.05*x))*np.sin(x**2)) for i in range(0,6): #Estremo destro dell'intervallo di integrazione x[i]=0.5+2*i #Costruisco punti nell'intervallo [0,x[i]] in cui valutare e poi disegnare la funzione integranda xi=np.linspace(0,x[i],100) plt.subplot(2,3,fig) plt.plot(xi,f(xi)) plt.legend([ 'x['+str(i)+']='+str(x[i])]) fig+=1 #Calcolo il valore dell'integrale i-esimo con a=0 e b= x[i]con la precisione richiesta y[i],N[i]=simptoll(f,0,x[i],tol) plt.show() xx=np.linspace(min(x),max(x),100) #Calcolo il polinomio che interpola le coppie (x,y) pol=InterpL(x, y, xx) plt.plot(xx,pol,x,y,'ro') plt.legend(['Polinomio interpolante','Nodi di interpolazione']) plt.show() print("Numero di sottointervalli per ciascuni il calcolo di ciascun integrale \n",N) ``` ```python import numpy as np import f_Interpol_Polinomial as fip import f_Integrazi_numerica as fin import matplotlib.pyplot as plt A = 1 B = 3 n = 3 f = lambda x: x-np.sqrt(x-1) punti = np.linspace(A, B, 50) nodi = np.linspace(A, B, n+1) pol = fip.InterpL(nodi, f(nodi), punti) print(pol) def fp(val): return fip.InterpL(nodi, f(nodi), val) plt.plot(punti, f(punti), 'r', punti, pol, 'b', nodi, f(nodi), 'go') plt.show() tol = 1e-5 I1 = 2.114381916835873 I2 = 2.168048769926493 I1a, N1 = fin.simptoll(f, A, B, tol) I2a, N2 = fin.simptoll(fp, A, B, tol) print(N1) print(N2) print(np.abs(I1a - I1)) print(np.abs(I2a - I2)) ``` ```python import numpy as np import f_Interpol_Polinomial as ip import matplotlib.pyplot as plt f = lambda x: 1/(1+900*x**2) a = -1 b = 1 punti = np.linspace(a, b, 50) for n in range(5, 35, 5): x = np.array(list()) for i in range(1, n+2): x = np.append(x, -1+(2*(i-1)/n)) pe = ip.InterpL(x, f(x), punti) re = np.abs(f(punti) - pe) plt.plot(punti, re) plt.show() for n in range(5, 35, 5): x = np.array(list()) for i in range(n+1, 0, -1): x = np.append(x, np.cos(((2*i-1)*np.pi)/(2*(n+1)))) pe = ip.InterpL(x, f(x), punti) re = np.abs(f(punti) - pe) plt.plot(punti, re) plt.show() ``` ```python import numpy as np import f_Interpol_Polinomial as fip import matplotlib.pyplot as plt f = lambda x: np.cos(np.pi * x) + np.sin(np.pi * x) x = np.array([1.0, 1.5, 1.75]) a = 0 b = 2 xx = np.linspace(a, b, 50) p = fip.InterpL(x, f(x), xx) plt.plot(xx, p, 'r', xx, f(xx), 'g', x, f(x), 'o') plt.legend(['Polinomio', 'Funzione', 'Nodi']) plt.show() xo = np.array([0.75]) r = np.abs(f(0.75) - fip.InterpL(x, f(x), xo)) print('Resto: ', r) x = np.array([0.75, 1.0, 1.5, 1.75]) p2 = fip.InterpL(x, f(x), xx) plt.plot(xx, p2, 'r', xx, f(xx), 'g', x, f(x), 'o') plt.legend(['Polinomio 2', 'Funzione', 'Nodi']) plt.show() ``` ## Funzioni ```python import numpy as np def TrapComp(fname,a,b,n): h=(b-a)/n nodi=np.arange(a,b+h,h) f=fname(nodi) I=(f[0]+2*np.sum(f[1:n])+f[n])*h/2 return I def SimpComp(fname,a,b,n): h=(b-a)/(2*n) nodi=np.arange(a,b+h,h) f=fname(nodi) I=(f[0]+2*np.sum(f[2:2*n:2])+4*np.sum(f[1:2*n:2])+f[2*n])*h/3 return I def traptoll(fun,a,b,tol): Nmax=2048 err=1 N=1; IN=TrapComp(fun,a,b,N); while N<=Nmax and err>tol : N=2*N I2N=TrapComp(fun,a,b,N) err=abs(IN-I2N)/3 IN=I2N if 
N>Nmax: print('Raggiunto nmax di intervalli con traptoll') N=0 IN=[] return IN,N def simptoll(fun,a,b,tol): Nmax=2048 err=1 N=1; IN=SimpComp(fun,a,b,N); while N<=Nmax and err>tol : N=2*N I2N=SimpComp(fun,a,b,N) err=abs(IN-I2N)/15 IN=I2N if N>Nmax: print('Raggiunto nmax di intervalli con traptoll') N=0 IN=[] return IN,N ``` ## Es1 ```python import sympy as sym import Funzioni_Integrazione as FI from sympy.utilities.lambdify import lambdify import numpy as np import matplotlib.pyplot as plt scelta=input("Scegli funzione ") x=sym.symbols('x') scelta_funzione = { '1': [x**10,0.0,1.0], '2': [sym.asin(x),0.0,1.0], '3': [sym.log(1+x), 0.0,1.0] } fx,a,b=scelta_funzione.get(scelta) Iesatto=float(sym.integrate(fx,(x,a,b))) f= lambdify(x,fx,np) N=[1, 2, 4, 8, 16, 32 ,64 ,128, 256] i=0 InT=[] InS=[] for n in N: InT.append(FI.TrapComp(f,a,b,n)) InS.append(FI.SimpComp(f,a,b,n)) ET=np.zeros((9,)) ES=np.zeros((9,)) ET=np.abs(np.array(InT)-Iesatto)/abs(Iesatto) ES=np.abs(np.array(InS)-Iesatto)/abs(Iesatto) plt.semilogy(N,ET,'ro-',N,ES,'b*-') plt.legend(['Errore Trapezi Composita', 'Errore Simpson Composita']) plt.show() ``` ## Es2 toll ```python import sympy as sym import Funzioni_Integrazione as FI from sympy.utilities.lambdify import lambdify import numpy as np import matplotlib.pyplot as plt scelta=input("Scegli funzione ") x=sym.symbols('x') scelta_funzione = { '1': [sym.log(x),1.0,2.0], '2': [sym.sqrt(x),0.0,1.0], '3': [sym.Abs(x), -1.0,1.0] } fx,a,b=scelta_funzione.get(scelta) Iesatto=float(sym.integrate(fx,(x,a,b))) f= lambdify(x,fx,np) tol=1e-6 IT,NT=FI.traptoll(f,a,b,tol) print("Il valore dell'integrale esatto e' ", Iesatto) if NT>0: print("Valore con Trapezi Composito Automatica ",IT," numero di suddivisoini ",NT) IS,NS=FI.simptoll(f,a,b,tol) if NS>0: print("Valore con Simpson Composito Automatica ",IS," numero di suddivisoini ",NS) ```
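The stopping rule in `simptoll` keeps doubling the number of subintervals until the Richardson-style estimate |I_N − I_2N|/15 falls below the tolerance. The cell below is a small standalone sketch (not part of the original exam code) that re-implements the composite Simpson rule and the same stopping rule on an integral with a known value, so the behaviour of the routines above can be sanity-checked; the helper names `simp_comp`/`simp_toll` are ours, not the module's.

```python
# Standalone sanity check (not from the original module): re-implement the
# composite Simpson rule and the |I_N - I_2N|/15 stopping rule used above,
# and test them on an integral with a known value.
import numpy as np

def simp_comp(f, a, b, n):
    # Composite Simpson rule on 2*n subintervals of [a, b].
    h = (b - a) / (2 * n)
    nodes = a + h * np.arange(2 * n + 1)   # index form avoids arange(a, b+h, h) endpoint issues
    y = f(nodes)
    return (y[0] + 2 * np.sum(y[2:2*n:2]) + 4 * np.sum(y[1:2*n:2]) + y[2*n]) * h / 3

def simp_toll(f, a, b, tol, nmax=4096):
    # Double n until the Richardson-style error estimate drops below tol.
    n, i_n, err = 1, simp_comp(f, a, b, 1), tol + 1
    while n <= nmax and err > tol:
        n *= 2
        i_2n = simp_comp(f, a, b, n)
        err = abs(i_n - i_2n) / 15
        i_n = i_2n
    return i_n, n

approx, n_used = simp_toll(np.sin, 0.0, np.pi, 1e-8)
print(approx, abs(approx - 2.0), n_used)   # exact value of the integral is 2
```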
c871f7a49375fc5e64f5733cf1b51721ef3e8f9b
73,010
ipynb
Jupyter Notebook
Untitled11.ipynb
sergiorolnic/github-slideshow
589404d7a9f217135d19ef661ce203851a02b6b5
[ "MIT" ]
null
null
null
Untitled11.ipynb
sergiorolnic/github-slideshow
589404d7a9f217135d19ef661ce203851a02b6b5
[ "MIT" ]
3
2020-10-10T21:39:27.000Z
2020-10-11T09:55:57.000Z
Untitled11.ipynb
sergiorolnic/github-slideshow
589404d7a9f217135d19ef661ce203851a02b6b5
[ "MIT" ]
null
null
null
33.521579
238
0.417148
true
14,686
Qwen/Qwen-72B
1. YES 2. YES
0.835484
0.885631
0.739931
__label__ita_Latn
0.695732
0.557438
## Betting Formula on Betcha

\begin{align}
\\
B(w_i,t_i) = \beta_{1}e^{-\beta_{2} t_i} \frac{w_i}{\sum^{n}_{k=1}w_k} \sum^{m}_{j=1}l_j
\\
\\
\end{align}

where

$w_i$ = money bet by the ith person on the winner side

$t_i$ = time when the ith person placed the bet ( $t_i \in [0,1] \ \forall i={1,2,...}$ )

$l_j$ = money bet by the jth person on the loser side

The coefficient $\beta_2$ is determined by the duration between the creation and the expiry date of the bet.


```python
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0,1)
plt.style.use("seaborn-whitegrid")
palette = plt.get_cmap("Spectral")

ax1 = plt.subplot(111)
ax1.set_xticks([])
for b2 in [0.1,0.5,1,2,5,10]:
    # decay factor e^{-beta2 t} for several beta2 values
    ax1.plot(t,np.exp(-b2*t),color=palette(0.5/b2+0.6),label = b2)
ax1.legend(title = "different "r"$\beta_2$"" values")

plt.yticks(np.arange(0, 1, 0.1))
plt.xticks(np.arange(0, 1, 0.1))
plt.ylabel(r"$e^{-\beta_{2} t}$")
plt.xlabel("t - time")
```

\begin{align}
\\
\sum^{n}_{i=1}B(w_i,t_i) = \sum^{m}_{j=1}l_j
\\
\end{align}

The sum of the money paid to the winners is equal to the Loser Pot. We can then use the bootstrapping method to calculate $\beta_{1}$ ( $\beta_{1}>1$ ), and hence deduce the money obtained by each individual on the winner side.

For example: Noel, Hailey, Yoon, Matthew, Natalie and some other participants are betting on the winner of the chainhack. Noel, Hailey and Yoon have bet on Betcha to win, and it turns out Betcha did win. So they are now rewarded by sharing the Loser Pot of £50. The judges have set $\beta_2 = 1$.

Let's say **Noel** bet on Betcha at t = 0 for £5, **Hailey** bet £5 at t = 0.1 and **Yoon** bet £10 at t = 0.7. Then we get

```python
50/(12.5+12.5*np.exp(-0.1)+25*np.exp(-0.7))
```

    1.3802584273885596

$\beta_1$ = 1.3802584273885596

```python
b1 = 50/(12.5+12.5*np.exp(-0.1)+25*np.exp(-0.7))
print(b1*12.5,b1*12.5*np.exp(-0.1),b1*25*np.exp(-0.7))
```

    17.253230342356996 15.611368395757978 17.135401261885026

**Noel** wins £17.253230342356996 (and she will get her £5 back), **Hailey** wins £15.611368395757978 and **Yoon** wins £17.135401261885026


```python

```
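To make the worked example reusable, the sketch below wraps the same computation in a small hypothetical helper, `settle_bets`, which is not part of the original notebook: it solves $\sum_i B(w_i,t_i) = \sum_j l_j$ for $\beta_1$ and returns each winner's payout. The stakes, times and $\beta_2 = 1$ used here are simply the values from the example above.

```python
# Hypothetical helper (not in the original notebook): given winner stakes w_i,
# bet times t_i, the decay rate beta2 and the loser pot, solve
# sum_i B(w_i, t_i) = loser_pot for beta1 and return the individual payouts.
import numpy as np

def settle_bets(w, t, beta2, loser_pot):
    w = np.asarray(w, dtype=float)
    t = np.asarray(t, dtype=float)
    discounted = w * np.exp(-beta2 * t)          # time-discounted stakes
    beta1 = w.sum() / discounted.sum()           # enforces sum_i B_i = loser pot
    payouts = beta1 * np.exp(-beta2 * t) * (w / w.sum()) * loser_pot
    return beta1, payouts

# Reproduce the worked example: stakes 5, 5, 10 at t = 0, 0.1, 0.7 with beta2 = 1.
beta1, payouts = settle_bets([5, 5, 10], [0.0, 0.1, 0.7], 1.0, 50.0)
print(beta1)          # ~1.3803
print(payouts)        # ~[17.25, 15.61, 17.14]
print(payouts.sum())  # equals the loser pot of 50
```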
13153cf2c601821e467052d7ead3dea631d94b1a
46,157
ipynb
Jupyter Notebook
betting formula.ipynb
Chainhack-Betcha/betcha
853281361310e07e2dcb4fbc5f5059b15a3e2ac5
[ "MIT" ]
1
2018-11-21T12:11:50.000Z
2018-11-21T12:11:50.000Z
betting formula.ipynb
Chainhack-Betcha/betcha
853281361310e07e2dcb4fbc5f5059b15a3e2ac5
[ "MIT" ]
null
null
null
betting formula.ipynb
Chainhack-Betcha/betcha
853281361310e07e2dcb4fbc5f5059b15a3e2ac5
[ "MIT" ]
null
null
null
220.84689
41,212
0.920055
true
773
Qwen/Qwen-72B
1. YES 2. YES
0.943348
0.810479
0.764563
__label__eng_Latn
0.933415
0.614669
# Métodos Iterativos para sistemas de ecuaciones lineales ```python import numpy as np from scipy.linalg import solve_triangular import matplotlib.pyplot as plt from time import time ``` ```python def jacobi(A, b, n_iter=50, x_0=None): """ Solve Ax=b using Jacobi method Parameters ----------- A : (n, n) array A matrix b : (n, ) array RHS vector n_iter : int Number of iterations x_0 : (n, ) array Initual guess Returns ------- X : (n_iter + 1, n) array Matrix with approximation at each iteration """ n = A.shape[0] # Matrix size X = np.zeros((n_iter + 1, n)) # Matrix with solution at each iteration # Initial guess if x_0 is not None: X[0] = x_0 D = np.diag(A) # Diagonal of A (only keep a vector with diagonal) # Inverse of D. Compute reciprocal of vector elements and then fill diagonal matrix D_inv = np.diag(1 / D) # This avoid inverse computation "O(n) instead O(n^3)" LU = (A - np.diag(D)) # A - D = L + U (here D is a matrix) - Rembember LU != LU of PA=LU or A=LU # Jacobi iteration for k in range(n_iter): X[k+1] = np.dot(D_inv, (b - np.dot(LU, X[k]))) return X ``` ```python def gaussSeidel(A, b, n_iter=50, x_0=None): """ Solve Ax=b using Gauss-Seidel method Parameters ----------- A : (n, n) array A matrix b : (n, ) array RHS vector n_iter : int Number of iterations x_0 : (n, ) array Initual guess Returns ------- X : (n_iter + 1, n) array Matrix with approximation at each iteration """ n = A.shape[0] # Matrix size X = np.zeros((n_iter + 1, n)) # Matrix with solution at each iteration # Initial guess if x_0 is not None: X[0] = x_0 LD = np.tril(A) # Get lower triangle with main diagonal (L + D) U = A - LD # Upper triangle # Gauss-Seidel iteration for k in range(n_iter): X[k+1] = solve_triangular(LD, b - np.dot(U, X[k]), lower=True) return X ``` ```python def SOR(A, b, w=1.05, n_iter=50, x_0=None): """ Solve Ax=b using SOR(w) method Parameters ----------- A : (n, n) array A matrix b : (n, ) array RHS vector w : float Omega parameter n_iter : int Number of iterations x_0 : (n, ) array Initual guess Returns ------- X : (n_iter + 1, n) array Matrix with approximation at each iteration """ n = A.shape[0] # Matrix size X = np.zeros((n_iter + 1, n)) # Matrix with solution at each iteration # Initial guess if x_0 is not None: X[0] = x_0 L = np.tril(A, k=-1) # Get lower triangle U = np.triu(A, k=1) # Get Upper triangle D = A - U - L # SOR for k in range(n_iter): X[k+1] = solve_triangular(w * L + D, w * b + np.dot((1 - w) * D - w * U, X[k]), lower=True) return X ``` ```python def error(X, x): """ Compute error of approximation at each iteration Parameters ---------- X : (m, n) array Matrix with approximation at each iteration x : (n, ) array Solution of system Returns ------- X_err : (m, ) array Error vector """ X_err = np.linalg.norm(X - x, axis=1, ord=np.inf) return X_err ``` # Ejemplo Apunte Resolver \begin{equation} \begin{split} u + 3v & = -1 \\ 5u + 4v & = 6 \end{split} \end{equation} Primero, lo expresamos de forma matricial: \begin{equation} \begin{bmatrix} 5 & 4 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} 6 \\ -1 \end{bmatrix}. \end{equation} Notar que se intercambiaron filas para asegurar que $A$ sea **estrictamente diagonal dominante**. Comprobemos que se llega a la solución (o cerca) en la iteración que se indica. 
```python A_ap = np.array([[5, 4], [1, 3]]) b_ap = np.array([6, -1]) ``` ## Solución Analítica ```python x_ap = np.linalg.solve(A_ap, b_ap) x_ap ``` array([ 2., -1.]) ## Jacobi ```python x_ap_j = jacobi(A_ap, b_ap, 50) x_ap_j[-1] ``` array([ 2., -1.]) ## Gauss-Seidel ```python x_ap_g = gaussSeidel(A_ap, b_ap, 17) x_ap_g[-1] ``` array([ 2., -1.]) ## SOR($\omega$) ```python x_ap_s = SOR(A_ap, b_ap, 1.09, 9) x_ap_s[-1] ``` array([ 2., -1.]) Efectivamente *SOR* obtiene la solución en menos iteraciones. # Otro ejemplo ```python A_2 = np.array([ [3, -1, 0, 0, 0, 0.5], [-1, 3, -1, 0, 0.5, 0], [0, -1, 3, -1, 0, 0], [0, 0, -1, 3, -1, 0], [0, 0.5, 0, -1, 3, -1], [0.5, 0, 0, 0, -1, 3] ]) b_2 = np.array([2.5, 1.5, 1., 1., 1.5, 2.5]) ``` ## Solución de referencia ```python x_2 = np.linalg.solve(A_2, b_2) x_2 ``` array([1., 1., 1., 1., 1., 1.]) ## Jacobi ```python X_2_jac = jacobi(A_2, b_2) X_2_jac[-1] ``` array([1., 1., 1., 1., 1., 1.]) ## Gauss-Seidel ```python X_2_gss = gaussSeidel(A_2, b_2) X_2_gss[-1] ``` array([1., 1., 1., 1., 1., 1.]) ## SOR($\omega$) ```python def error(X, x): return np.linalg.norm(X - x, axis=1, ord=np.inf) ``` Podemos buscar el parámetro $\omega$, analizando el error para un par de iteraciones ```python n_w = 20 sor_err_2 = np.zeros(n_w) w_s = np.linspace(1, 1.3, n_w) for i in range(n_w): X_sor_tmp = SOR(A_2, b_2, w_s[i], 5) err_tmp_2 = error(X_sor_tmp, x_2) sor_err_2[i] = err_tmp_2[-1] print("i: %d \t w: %f \t error: %f" % (i, w_s[i], sor_err_2[i])) ``` i: 0 w: 1.000000 error: 0.013793 i: 1 w: 1.015789 error: 0.012247 i: 2 w: 1.031579 error: 0.010732 i: 3 w: 1.047368 error: 0.009437 i: 4 w: 1.063158 error: 0.008973 i: 5 w: 1.078947 error: 0.008508 i: 6 w: 1.094737 error: 0.008036 i: 7 w: 1.110526 error: 0.007551 i: 8 w: 1.126316 error: 0.007048 i: 9 w: 1.142105 error: 0.006524 i: 10 w: 1.157895 error: 0.005972 i: 11 w: 1.173684 error: 0.005390 i: 12 w: 1.189474 error: 0.004774 i: 13 w: 1.205263 error: 0.006131 i: 14 w: 1.221053 error: 0.007827 i: 15 w: 1.236842 error: 0.009566 i: 16 w: 1.252632 error: 0.011351 i: 17 w: 1.268421 error: 0.013182 i: 18 w: 1.284211 error: 0.015062 i: 19 w: 1.300000 error: 0.016993 ```python plt.plot(w_s, sor_err_2, 'bd') plt.yscale('log') plt.grid(True) plt.show() ``` Mirando los valores del gráfico obtenemos que $\omega=1.189474$ ```python min_pos_2 = np.argmin(sor_err_2) X_2_sor = SOR(A_2, b_2, w_s[min_pos_2]) X_2_sor[-1] ``` array([1., 1., 1., 1., 1., 1.]) ## Convergencia de métodos ```python # Error e_jac = error(X_2_jac, x_2) e_gss = error(X_2_gss, x_2) e_sor = error(X_2_sor, x_2) ``` ```python n_jac = np.arange(e_jac.shape[-1]) n_gss = np.arange(e_gss.shape[-1]) n_sor = np.arange(e_sor.shape[-1]) plt.plot(n_jac, e_jac, 'ro', label="Jacobi") plt.plot(n_gss, e_gss, 'bo', label="Gauss-Seidel") plt.plot(n_sor, e_sor, 'go', label="SOR") plt.yscale('log') plt.grid(True) plt.legend() plt.show() ``` # Ejemplo aleatorio ```python def ddMatrix(n): """ Randomly generates an n x n strictly diagonally dominant matrix A. 
Parameters ---------- n : int Matrix size Returns ------- A : (n, n) array Strictly diagonally dominant matrix """ A = np.random.random((n,n)) deltas = 0.5 * np.random.random(n) row_sum = A.sum(axis=1) - np.diag(A) np.fill_diagonal(A, row_sum+deltas) return A ``` ```python n = 50 A_r = ddMatrix(n) b_r = np.random.rand(n) ``` ## Solución Analítica ```python x_r = np.linalg.solve(A_r, b_r) ``` ## Jacobi ```python X_r_jac = jacobi(A_r, b_r, n_iter=100) ``` ## Gauss-Seidel ```python X_r_gss = gaussSeidel(A_r, b_r, n_iter=50) ``` ## SOR($\omega$) Para buscar $\omega$ de *SOR*, se realiza un par de iteraciones y se analiza el error. Vamos a elegir el que tenga menor error... ```python n_w = 20 w_s = np.linspace(1, 1.3, n_w) sor_err_r = np.zeros(n_w) for i in range(n_w): X_sor_tmp = SOR(A_r, b_r, w_s[i], 5) sor_err_r[i] = np.linalg.norm(X_sor_tmp[-1] - x_r, np.inf) print("i: %d \t w: %f \t error: %f" % (i, w_s[i], sor_err_r[i])) ``` i: 0 w: 1.000000 error: 0.000014 i: 1 w: 1.015789 error: 0.000016 i: 2 w: 1.031579 error: 0.000019 i: 3 w: 1.047368 error: 0.000027 i: 4 w: 1.063158 error: 0.000038 i: 5 w: 1.078947 error: 0.000049 i: 6 w: 1.094737 error: 0.000063 i: 7 w: 1.110526 error: 0.000078 i: 8 w: 1.126316 error: 0.000101 i: 9 w: 1.142105 error: 0.000126 i: 10 w: 1.157895 error: 0.000155 i: 11 w: 1.173684 error: 0.000188 i: 12 w: 1.189474 error: 0.000226 i: 13 w: 1.205263 error: 0.000267 i: 14 w: 1.221053 error: 0.000314 i: 15 w: 1.236842 error: 0.000365 i: 16 w: 1.252632 error: 0.000423 i: 17 w: 1.268421 error: 0.000486 i: 18 w: 1.284211 error: 0.000555 i: 19 w: 1.300000 error: 0.000631 ```python min_pos_r = np.argmin(sor_err_r) X_r_sor = SOR(A_r, b_r, w_s[min_pos_r], n_iter=50) ``` ## Comparación de Error ```python e_r_jac = error(X_r_jac, x_r) e_r_gss = error(X_r_gss, x_r) e_r_sor = error(X_r_sor, x_r) ``` ```python n_r_jac = np.arange(e_r_jac.shape[-1]) n_r_gss = np.arange(e_r_gss.shape[-1]) n_r_sor = np.arange(e_r_sor.shape[-1]) plt.plot(n_r_jac, e_r_jac, 'ro', label="Jacobi") plt.plot(n_r_gss, e_r_gss, 'bo', label="Gauss-Seidel") plt.plot(n_r_sor, e_r_sor, 'go', label="SOR") plt.yscale('log') plt.grid(True) plt.legend() plt.show() ``` Para analizar la convergencia de los métodos vamos a utilizar el *radio espectral* $\rho(A)$. ```python def spectralRadius(A): """ Compute spectral radius of A Parameters ---------- A : (n, n) array A Matrix Returns ------- rho : float Spectral radius """ ev = np.linalg.eigvals(A) # Compute eigenvalues rho = np.max(np.abs(ev)) # Largest eigenvalue in magnitude return rho ``` ```python L = np.tril(A_r, k=-1) U = np.triu(A_r, k=1) D = A_r - L - U ``` ```python M_r_jac = np.dot(np.linalg.inv(D), L + U) M_r_gss = np.dot(np.linalg.inv(L + D), U) ``` ```python sr_jac = spectralRadius(M_r_jac) sr_gss = spectralRadius(M_r_gss) print(sr_jac, sr_gss) ``` 0.989112355268985 0.21169440485989235 En este caso vemos que *Gauss-Seidel* converge mucho más rápido que Jacobi. # Teorema de los círculos de Gershgorin ```python def diskGershgorin(A): """ Compute Gershgorin disks. Parameters ---------- A : (n, n) array Matrix Returns ------- disks : (n, 2) array Gershgorin disks. First column is the center = |a_{ii}| and second column is radius = \sum_{i\neq j} |a_{ij}|. """ n = A.shape[0] disks = np.zeros((n, 2)) # First column is center and second is radius for i in range(n): c = A[i, i] # Center R = np.sum(np.abs(A[i])) - np.abs(c) # Sum of absolute values of rows without diagonal disks[i, 0] = c disks[i, 1] = R return disks ``` ```python def circles(disks): """ Return circles. 
Parameters ---------- disks : (n, 2) array Gershgorin disks. Returns ------- C : (n, 100, 2) array Circles to plot. """ n = disks.shape[0] N = 100 theta = np.linspace(0, 2*np.pi, N) C = np.zeros((n, N, 2)) for i in range(n): C[i, :, 0] = disks[i, 0] + disks[i, 1] * np.cos(theta) C[i, :, 1] = disks[i, 1] * np.sin(theta) return C ``` ```python def plotCircles(A): """ Plot Gershgorin disks and eigenvalues of A. Parameters ---------- A : (n, n) array Matrix Returns ------- None """ disks = diskGershgorin(A) circs = circles(disks) ev = np.linalg.eigvals(A) for i in range(disks.shape[0]): plt.fill(circs[i, :, 0], circs[i, :, 1], alpha=.5) plt.plot(ev[i], 0, 'bo') plt.grid(True) plt.axis('equal') plt.show() ``` ```python A1 = np.array([ [8, 1, 0], [1, 4, 0.1], [0, 0.1, 1] ]) ``` ```python A2 = np.array([ [10, -1, 0, 1], [0.2, 8, 0.2, 0.2], [1, 1, 2, 1], [-1, -1, -1, -11] ]) ``` ```python plotCircles(A1) ``` ```python plotCircles(A2) ``` ## Comentario sobre convergencia Para un método iterativo de la forma $\mathbf{x}_{k+1}=M\mathbf{x}_k + \mathbf{\hat{b}}$, podemos usar el *radio espectral* $\rho(M)$ y así estudiar la convergencia del método. Esto implica resolver el problema \begin{equation} M\mathbf{v} = \lambda \mathbf{v}, \end{equation} para obtener los valores de $\lambda$, con una complejidad aproximada de $\sim O(n^3)$. Si utilizamos el *Teorema de los círculos de Gershgorin*, podríamos obtener una *cota* de los valores propios con aproximadamente $\sim O(n^2)$ operaciones. # Complejidad Para ver los tiempos de cómputo y comparar el número de iteraciones, utilizaremos la tercera forma de representar los métodos. ```python def jacobi2(A, b, n_iter=50, tol=1e-8, x_0=None): """ Solve Ax=b using Jacobi method Parameters ----------- A : (n, n) array A matrix b : (n, ) array RHS vector n_iter : int Number of iterations tol : float Tolerance x_0 : (n, ) array Initual guess Returns ------- X : (n_iter + 1, n) array Matrix with approximation at each iteration """ n = A.shape[0] # Matrix size X = np.zeros((n_iter + 1, n)) # Matrix with solution at each iteration # Initial guess if x_0 is not None: X[0] = x_0 D = np.diag(A) # Diagonal of A (only keep a vector with diagonal) D_inv = np.diag(1 / D) # Inverse of D r = b - np.dot(A, X[0]) # Residual vector # Jacobi iteration for k in range(n_iter): X[k+1] = X[k] + np.dot(D_inv, r) r = b - np.dot(A, X[k+1]) # Update residual if np.linalg.norm(r) < tol: # Stop criteria X = X[:k+2] break return X ``` ```python def gaussSeidel2(A, b, n_iter=50, tol=1e-8, x_0=None): """ Solve Ax=b using Gauss-Seidel method Parameters ----------- A : (n, n) array A matrix b : (n, ) array RHS vector n_iter : int Number of iterations tol : float Tolerance x_0 : (n, ) array Initual guess Returns ------- X : (n_iter + 1, n) array Matrix with approximation at each iteration """ n = A.shape[0] # Matrix size X = np.zeros((n_iter + 1, n)) # Matrix with solution at each iteration # Initial guess if x_0 is not None: X[0] = x_0 LD = np.tril(A) # Get lower triangle (L + D) # Get inverse in O(n^2) instead of np.linalg.inv(LD) O(n^3) LD_inv = solve_triangular(LD, np.eye(n), lower=True) r = b - np.dot(A, X[0]) # Residual # Gauss-Seidel iteration for k in range(n_iter): X[k+1] = X[k] + np.dot(LD_inv, r) r = b - np.dot(A, X[k+1]) # Residual update if np.linalg.norm(r) < tol: # Stop criteria X = X[:k+2] break return X ``` ```python def SOR2(A, b, w=1.05, n_iter=50, tol=1e-8, x_0=None): """ Solve Ax=b using SOR(w) method Parameters ----------- A : (n, n) array A matrix b : (n, ) array RHS vector 
w : w Omega parameter. n_iter : int Number of iterations tol : float Tolerance x_0 : (n, ) array Initual guess Returns ------- X : (n_iter + 1, n) array Matrix with approximation at each iteration """ n = A.shape[0] # Matrix size X = np.zeros((n_iter + 1, n)) # Matrix with solution at each iteration # Initial guess if x_0 is not None: X[0] = x_0 L = np.tril(A, k=-1) # Get lower triangle Dw = np.diag(np.diag(A) / w) # Get inverse in O(n^2) instead of np.linalg.inv(L+Dw) O(n^3) LDw_inv = solve_triangular(L+Dw, np.eye(n), lower=True) r = b - np.dot(A, X[0]) # Residual # SOR iteration for k in range(n_iter): X[k+1] = X[k] + np.dot(LDw_inv, r) r = b - np.dot(A, X[k+1]) # Residual update if np.linalg.norm(r) < tol: # Stop criteria X = X[:k+2] break return X ``` A continuación se resuelven sistemas parra distintos valores de $n$, y calculamos los tiempos. ```python Ne = 5 # Number of experiments N = 2 ** np.arange(7, 10) # N = [2^7, 2^{10}] Nn = N.shape[-1] # For times times_jac = np.zeros(Nn) times_gss = np.zeros(Nn) times_sor = np.zeros(Nn) ``` ```python for i in range(Nn): n = N[i] A = ddMatrix(n) b = np.random.random(n) # Time Jacobi start_time= time() for j in range(Ne): x = jacobi2(A, b) end_time = time() times_jac[i] = (end_time - start_time) / Ne # Time G-S start_time = time() for j in range(Ne): x = gaussSeidel2(A, b) end_time = time() times_gss[i] = (end_time - start_time) / Ne # Time SOR start_time = time() for j in range(Ne): x = SOR2(A, b) end_time = time() times_sor[i] = (end_time - start_time) / Ne ``` ```python plt.figure(figsize=(12, 6)) plt.plot(N, times_jac, 'rx', label="Jacobi") plt.plot(N, times_gss, 'bd', label="Gauss-Seidel") plt.plot(N, times_sor, 'go', label="SOR") # Deben adaptar el coeficiente que acompaña a N**k según los tiempos que obtengan en su computador plt.plot(N, 1e-7 * N ** 2, 'g--', label=r"$O(n^2)$") plt.plot(N, 1e-8 * N ** 3, 'r--', label=r"$O(n^3)$") plt.grid(True) plt.yscale('log') plt.xscale('log') plt.xlabel(r"$n$") plt.ylabel("Time [s]") plt.legend() plt.show() ``` Del gráfico podemos confirmar que la complejidad de estos métodos es $\sim I n^2$, donde $I$ es el número de iteraciones. El valor de $I$ puede ser diferente en cada método. # Referencias * Sauer, T. (2006). Numerical Analysis Pearson Addison Wesley. * https://github.com/tclaudioe/Scientific-Computing/tree/master/SC1/05_linear_systems_of_equations.ipynb ```python ```
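As a follow-up to the comment above on convergence, the sketch below (not part of the original notebook) applies the Gershgorin bound directly to the Jacobi iteration matrix $M = D^{-1}(L+U)$: its diagonal is zero, so the disks are centred at the origin and the largest radius bounds $\rho(M)$ at roughly $O(n^2)$ cost. For a strictly diagonally dominant matrix this bound is below 1, which is the classical convergence guarantee for Jacobi.

```python
# Sketch (not in the original notebook): bound rho(M) for the Jacobi iteration
# matrix M = D^{-1}(L+U) using Gershgorin disks (~O(n^2)), and compare with
# the exact spectral radius (~O(n^3)).
import numpy as np

def jacobi_gershgorin_bound(A):
    d = np.abs(np.diag(A))
    off = np.sum(np.abs(A), axis=1) - d   # row sums of |a_ij| without the diagonal
    return np.max(off / d)                # disks of M are centred at 0 with these radii

rng = np.random.default_rng(0)
n = 200
A = rng.random((n, n))
np.fill_diagonal(A, A.sum(axis=1))        # make A strictly diagonally dominant

M = np.diag(1.0 / np.diag(A)) @ (A - np.diag(np.diag(A)))
rho_exact = np.max(np.abs(np.linalg.eigvals(M)))
print(jacobi_gershgorin_bound(A), rho_exact)   # bound >= exact, and both < 1
```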
9893c80b9aef573015158f39520ccb379425a647
122,651
ipynb
Jupyter Notebook
material/04_sistemas_ecuaciones/metodos_iterativos.ipynb
Felipitoo/CC
2ce7bac8c02b5ef7089e752f2143e13a4b77afc2
[ "BSD-3-Clause" ]
null
null
null
material/04_sistemas_ecuaciones/metodos_iterativos.ipynb
Felipitoo/CC
2ce7bac8c02b5ef7089e752f2143e13a4b77afc2
[ "BSD-3-Clause" ]
null
null
null
material/04_sistemas_ecuaciones/metodos_iterativos.ipynb
Felipitoo/CC
2ce7bac8c02b5ef7089e752f2143e13a4b77afc2
[ "BSD-3-Clause" ]
null
null
null
89.200727
29,528
0.821176
true
6,610
Qwen/Qwen-72B
1. YES 2. YES
0.919643
0.917303
0.843591
__label__eng_Latn
0.205821
0.798277
# Sympy

```python
import math
math.sqrt(9)
```

    3.0

```python
math.sqrt(8)
```

    2.8284271247461903

```python
import sympy
result = sympy.sqrt(8) # result = 2*sqrt(2)
result
```

$\displaystyle 2 \sqrt{2}$

```python
result / 2
```

$\displaystyle \sqrt{2}$

# Symbols in sympy

```python
# Represent the expression x + 2y
# (this raises a NameError until x and y are defined as symbols, as below)
expression = x + 2*y
```

```python
from sympy import symbols

x, y = symbols('x y')
expression = x + 2*y
expression
```

$\displaystyle x + 2 y$

```python
expression + 1
```

$\displaystyle x + 2 y + 1$

```python
expression - x
```

$\displaystyle 2 y$

# Integrals

The main method of this module is `integrate()`

    integrate(f, x)          returns the indefinite integral
    integrate(f, (x, a, b))  returns the definite integral from `a` to `b`

```python
from sympy import *

x = Symbol('x')
expr = 2 * (x**3)
expr
```

$\displaystyle 2 x^{3}$

```python
integrate(expr, x)
```

$\displaystyle \frac{x^{4}}{2}$

```python
expr = x**2 * exp(x) * cos(x)
expr
```

$\displaystyle x^{2} e^{x} \cos{\left(x \right)}$

```python
integrate(expr, x)
```

$\displaystyle \frac{x^{2} e^{x} \sin{\left(x \right)}}{2} + \frac{x^{2} e^{x} \cos{\left(x \right)}}{2} - x e^{x} \sin{\left(x \right)} + \frac{e^{x} \sin{\left(x \right)}}{2} - \frac{e^{x} \cos{\left(x \right)}}{2}$

```python

```
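One natural follow-up, not shown in the notebook above, is the definite form `integrate(f, (x, a, b))` applied to the same integrand, together with `N()` to get a floating-point value from the symbolic result.

```python
# Follow-up sketch (not from the original notebook): a definite integral of the
# same integrand on [0, 1], plus a numeric evaluation of the symbolic result.
from sympy import Symbol, integrate, exp, cos, N

x = Symbol('x')
expr = x**2 * exp(x) * cos(x)

exact = integrate(expr, (x, 0, 1))   # definite integral from 0 to 1
print(exact)
print(N(exact))                      # floating-point value
```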
1b82882880df283cd24deaeba9da6b00c7e9f91e
6,767
ipynb
Jupyter Notebook
Videos/PrimerBimestre/video2.ipynb
renatojobal/python_sol
d8d0434755679d62caa34d0ea227aacc2aa1ad6d
[ "MIT" ]
null
null
null
Videos/PrimerBimestre/video2.ipynb
renatojobal/python_sol
d8d0434755679d62caa34d0ea227aacc2aa1ad6d
[ "MIT" ]
null
null
null
Videos/PrimerBimestre/video2.ipynb
renatojobal/python_sol
d8d0434755679d62caa34d0ea227aacc2aa1ad6d
[ "MIT" ]
2
2021-02-02T03:33:44.000Z
2021-02-02T03:34:20.000Z
20.885802
579
0.468007
true
494
Qwen/Qwen-72B
1. YES 2. YES
0.951142
0.849971
0.808443
__label__kor_Hang
0.172636
0.716618
# Customizing Potentials in JAX MD ``` #@title Imports & Utils !pip install --upgrade -q https://storage.googleapis.com/jax-releases/cuda$(echo $CUDA_VERSION | sed -e 's/\.//' -e 's/\..*//')/jaxlib-$(pip search jaxlib | grep -oP '[0-9\.]+' | head -n 1)-cp36-none-linux_x86_64.whl !pip install --upgrade -q jax !pip install -q git+https://www.github.com/conference-submitter/jax-md import numpy as onp import jax.numpy as np from jax import random from jax import jit, grad, vmap, value_and_grad from jax import lax from jax import ops from jax.config import config config.update("jax_enable_x64", True) from jax_md import space, smap, energy, minimize, quantity, simulate, partition from functools import partial import time f32 = np.float32 f64 = np.float64 import matplotlib import matplotlib.pyplot as plt plt.rcParams.update({'font.size': 16}) #import seaborn as sns #sns.set_style(style='white') def format_plot(x, y): plt.grid(True) plt.xlabel(x, fontsize=20) plt.ylabel(y, fontsize=20) def finalize_plot(shape=(1, 0.7)): plt.gcf().set_size_inches( shape[0] * 1.5 * plt.gcf().get_size_inches()[1], shape[1] * 1.5 * plt.gcf().get_size_inches()[1]) plt.tight_layout() def calculate_bond_data(displacement_or_metric, R, dr_cutoff, species=None): if( not(species is None)): assert(False) metric = space.map_product(space.canonicalize_displacement_or_metric(displacement)) dr = metric(R,R) dr_include = np.triu(np.where(dr<1, 1, 0)) - np.eye(R.shape[0],dtype=np.int32) index_list=np.dstack(np.meshgrid(np.arange(N), np.arange(N), indexing='ij')) i_s = np.where(dr_include==1, index_list[:,:,0], -1).flatten() j_s = np.where(dr_include==1, index_list[:,:,1], -1).flatten() ij_s = np.transpose(np.array([i_s,j_s])) bonds = ij_s[(ij_s!=np.array([-1,-1]))[:,1]] lengths = dr.flatten()[(ij_s!=np.array([-1,-1]))[:,1]] return bonds, lengths def plot_system(R,box_size,species=None,ms=20): R_plt = onp.array(R) if(species is None): plt.plot(R_plt[:, 0], R_plt[:, 1], 'o', markersize=ms) else: for ii in range(np.amax(species)+1): Rtemp = R_plt[species==ii] plt.plot(Rtemp[:, 0], Rtemp[:, 1], 'o', markersize=ms) plt.xlim([0, box_size]) plt.ylim([0, box_size]) plt.xticks([], []) plt.yticks([], []) finalize_plot((1,1)) key = random.PRNGKey(0) ``` ##Prerequisites This cookbook assumes a working knowledge of Python and Numpy. The concept of broadcasting is particularly important both in this cookbook and in JAX MD. We also assume a basic knowlege of [JAX](https://github.com/google/jax/), which JAX MD is built on top of. Here we briefly review a few JAX basics that are important for us: * ```jax.vmap``` allows for automatic vectorization of a function. What this means is that if you have a function that takes an input ```x``` and returns an output ```y```, i.e. ```y = f(x)```, then ```vmap``` will transform this function to act on an array of ```x```'s and return an array of ```y```'s, i.e. ```Y = vmap(f)(X)```, where ```X=np.array([x1,x2,...,xn])``` and ```Y=np.array([y1,y2,...,yn])```. * ```jax.grad``` employs automatic differentiation to transform a function into a new function that calculates its gradient, for example: ```dydx = grad(f)(x)```. * ```jax.lax.scan``` allows for efficient for-loops that can be compiled and differentiated over. See [here](https://jax.readthedocs.io/en/latest/_autosummary/jax.lax.scan.html#jax.lax.scan) for more details. 
* [Random numbers are different in JAX.](https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html#%F0%9F%94%AA-Random-Numbers) The details aren't necessary for this cookbook, but if things look a bit different, this is why. ##The basics of user-defined potentials ###Create a user defined potential function to use throughout this cookbook Here we create a custom potential that has a short-ranged, non-diverging repulsive interaction and a medium-ranged Morse-like attractive interaction. It takes the following form: \begin{equation} V(r) = \begin{cases} \frac{1}{2} k (r-r_0)^2 - D_0,& r < r_0\\ D_0\left( e^{-2\alpha (r-r_0)} -2 e^{-\alpha(r-r_0)}\right), & r \geq r_0 \end{cases} \end{equation} and has 4 parameters: $D_0$, $\alpha$, $r_0$, and $k$. ``` def harmonic_morse(dr, D0=5.0, alpha=5.0, r0=1.0, k=50.0, **kwargs): U = np.where(dr < r0, 0.5 * k * (dr - r0)**2 - D0, D0 * (np.exp(-2. * alpha * (dr - r0)) - 2. * np.exp(-alpha * (dr - r0))) ) return np.array(U, dtype=dr.dtype) ``` plot $V(r)$. ``` drs = np.arange(0,3,0.01) U = harmonic_morse(drs) plt.plot(drs,U) format_plot(r'$r$', r'$V(r)$') finalize_plot() ``` ###Calculate the energy of a system of interacting particles We now want to calculate the energy of a system of $N$ spheres in $d$ dimensions, where each particle interacts with every other particle via our user-defined function $V(r)$. The total energy is \begin{equation} E_\text{total} = \sum_{i<j}V(r_{ij}), \end{equation} where $r_{ij}$ is the distance between particles $i$ and $j$. Our first task is to set up the system by specifying the $N$, $d$, and the size of the simulation box. We then use JAX's internal random number generator to pick positions for each particle. ``` N = 50 dimension = 2 box_size = 6.8 key, split = random.split(key) R = random.uniform(split, (N,dimension), minval=0.0, maxval=box_size, dtype=f64) plot_system(R,box_size) ``` At this point, we could manually loop over all particle pairs and calculate the energy, keeping track of boundary conditions, etc. Fortunately, JAX MD has machinery to automate this. First, we must define two functions, ```displacement``` and ```shift```, which contain all the information of the simulation box, boundary conditions, and underlying metric. ```displacement``` is used to calculate the vector displacement between particles, and ```shift``` is used to move particles. For most cases, it is recommended to use JAX MD's built in functions, which can be called using: * ``` displacement, shift = space.free()``` * ``` displacement, shift = space.periodic(box_size)``` * ``` displacement, shift = space.periodic_general(T)``` For demonstration purposes, we will define these manually for a square periodic box, though without proper error handling, etc. The following should have the same functionality as ```displacement, shift = space.periodic(box_size)```. ``` def setup_periodic_box(box_size): def displacement_fn(Ra, Rb, **unused_kwargs): dR = Ra - Rb return np.mod(dR + box_size * f32(0.5), box_size) - f32(0.5) * box_size def shift_fn(R, dR, **unused_kwargs): return np.mod(R + dR, box_size) return displacement_fn, shift_fn displacement, shift = setup_periodic_box(box_size) ``` We now set up a function to calculate the total energy of the system. The JAX MD function ```smap.pair``` takes a given potential and promotes it to act on all particle pairs in a system. ```smap.pair``` does not actually return an energy, rather it returns a function that can be used to calculate the energy. 
For convenience and readability, we wrap ```smap.pair``` in a new function called ```harmonic_morse_pair```. For now, ignore the species keyword, we will return to this later. ``` def harmonic_morse_pair( displacement_or_metric, species=None, D0=5.0, alpha=10.0, r0=1.0, k=50.0): D0 = np.array(D0, dtype=f32) alpha = np.array(alpha, dtype=f32) r0 = np.array(r0, dtype=f32) k = np.array(k, dtype=f32) return smap.pair( harmonic_morse, space.canonicalize_displacement_or_metric(displacement_or_metric), species=species, D0=D0, alpha=alpha, r0=r0, k=k) ``` Our helper function can be used to construct a function to compute the energy of the entire system as follows. ``` # Create a function to calculate the total energy with specified parameters energy_fn = harmonic_morse_pair(displacement,D0=5.0,alpha=10.0,r0=1.0,k=500.0) # Use this to calculate the total energy print(energy_fn(R)) # Use grad to calculate the net force force = -grad(energy_fn)(R) print(force[:5]) ``` We are now in a position to use our energy function to manipulate the system. As an example, we perform energy minimization using JAX MD's implementation of the FIRE algorithm. We start by defining a function that takes an energy function, a set of initial positions, and a shift function and runs a specified number of steps of the minimization algorithm. The function returns the final set of positions and the maximum absolute value component of the force. We will use this function throughout this cookbook. ``` def run_minimization(energy_fn, R_init, shift, num_steps=5000): dt_start = 0.001 dt_max = 0.004 init,apply=minimize.fire_descent(jit(energy_fn),shift,dt_start=dt_start,dt_max=dt_max) apply = jit(apply) @jit def scan_fn(state, i): return apply(state), 0. state = init(R_init) state, _ = lax.scan(scan_fn,state,np.arange(num_steps)) return state.position, np.amax(np.abs(-grad(energy_fn)(state.position))) ``` Now run the minimization with our custom energy function. ``` Rfinal, max_force_component = run_minimization(energy_fn, R, shift) print('largest component of force after minimization = {}'.format(max_force_component)) plot_system( Rfinal, box_size ) ``` ###Create a truncated potential It is often desirable to have a potential that is strictly zero beyond a well-defined cutoff distance. In addition, MD simulations require the energy and force (i.e. first derivative) to be continuous. To easily modify an existing potential $V(r)$ to have this property, JAX MD follows the approach [taken by HOOMD Blue](https://hoomd-blue.readthedocs.io/en/stable/module-md-pair.html#hoomd.md.pair.pair). Consider the function \begin{equation} S(r) = \begin{cases} 1,& r<r_\mathrm{on} \\ \frac{(r_\mathrm{cut}^2-r^2)^2 (r_\mathrm{cut}^2 + 2r^2 - 3 r_\mathrm{on}^2)}{(r_\mathrm{cut}^2-r_\mathrm{on}^2)^3},& r_\mathrm{on} \leq r < r_\mathrm{cut}\\ 0,& r \geq r_\mathrm{cut} \end{cases} \end{equation} Here we plot both $S(r)$ and $\frac{dS(r)}{dr}$, both of which are smooth and strictly zero above $r_\mathrm{cut}$. 
``` dr = np.arange(0,3,0.01) S = energy.multiplicative_isotropic_cutoff(lambda dr: 1, r_onset=1.5, r_cutoff=2.0)(dr) ngradS = vmap(grad(energy.multiplicative_isotropic_cutoff(lambda dr: 1, r_onset=1.5, r_cutoff=2.0)))(dr) plt.plot(dr,S,label=r'$S(r)$') plt.plot(dr,ngradS,label=r'$\frac{dS(r)}{dr}$') plt.legend() format_plot(r'$r$','') finalize_plot() ``` We then use $S(r)$ to create a new function \begin{equation}\tilde V(r) = V(r) S(r), \end{equation} which is exactly $V(r)$ below $r_\mathrm{on}$, strictly zero above $r_\mathrm{cut}$ and is continuous in its first derivative. This is implemented in JAX MD through ```energy.multiplicative_isotropic_cutoff```, which takes in a potential function $V(r)$ (e.g. our ```harmonic_morse``` function) and returns a new function $\tilde V(r)$. ``` harmonic_morse_cutoff = energy.multiplicative_isotropic_cutoff( harmonic_morse, r_onset=1.5, r_cutoff=2.0) dr = np.arange(0,3,0.01) V = harmonic_morse(dr) V_cutoff = harmonic_morse_cutoff(dr) F = -vmap(grad(harmonic_morse))(dr) F_cutoff = -vmap(grad(harmonic_morse_cutoff))(dr) plt.plot(dr,V, label=r'$V(r)$') plt.plot(dr,V_cutoff, label=r'$\tilde V(r)$') plt.plot(dr,F, label=r'$-\frac{d}{dr} V(r)$') plt.plot(dr,F_cutoff, label=r'$-\frac{d}{dr} \tilde V(r)$') plt.legend() format_plot('$r$', '') plt.ylim(-13,5) finalize_plot() ``` As before, we can use ```smap.pair``` to promote this to act on an entire system. ``` def harmonic_morse_cutoff_pair( displacement_or_metric, D0=5.0, alpha=5.0, r0=1.0, k=50.0, r_onset=1.5, r_cutoff=2.0): D0 = np.array(D0, dtype=f32) alpha = np.array(alpha, dtype=f32) r0 = np.array(r0, dtype=f32) k = np.array(k, dtype=f32) return smap.pair( energy.multiplicative_isotropic_cutoff( harmonic_morse, r_onset=r_onset, r_cutoff=r_cutoff), space.canonicalize_displacement_or_metric(displacement_or_metric), D0=D0, alpha=alpha, r0=r0, k=k) ``` This is implemented as before ``` # Create a function to calculate the total energy energy_fn = harmonic_morse_cutoff_pair(displacement, D0=5.0, alpha=10.0, r0=1.0, k=500.0, r_onset=1.5, r_cutoff=2.0) # Use this to calculate the total energy print(energy_fn(R)) # Use grad to calculate the net force force = -grad(energy_fn)(R) print(force[:5]) # Minimize the energy using the FIRE algorithm Rfinal, max_force_component = run_minimization(energy_fn, R, shift) print('largest component of force after minimization = {}'.format(max_force_component)) plot_system( Rfinal, box_size ) ``` ##Specifying parameters ###Dynamic parameters In the above examples, the strategy is to create a function ```energy_fn``` that takes a set of positions and calculates the energy of the system with all the parameters (e.g. ```D0```, ```alpha```, etc.) baked in. However, JAX MD allows you to override these baked-in values dynamically, i.e. when ```energy_fn``` is called. For example, we can print out the minimized energy and force of the above system with the truncated potential: ``` print(energy_fn(Rfinal)) print(-grad(energy_fn)(Rfinal)[:5]) ``` This uses the baked-in values of the 4 parameters: ```D0=5.0,alpha=10.0,r0=1.0,k=500.0```. If, for example, we want to dynamically turn off the attractive part of the potential, we simply pass ```D0=0``` to ```energy_fn```: ``` print(energy_fn(Rfinal, D0=0)) ``` Since changing the potential moves the minimum, the force will not be zero: ``` print(-grad(energy_fn)(Rfinal, D0=0)[:5]) ``` This ability to dynamically pass parameters is very powerful. 
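Another use of the same mechanism, sketched below under the assumption that dynamically passed keyword parameters trace through JAX like any other input (as the `D0=0` calls above suggest), is differentiating the energy with respect to a potential parameter rather than with respect to positions.

```
# Sketch (assumption: dynamic keyword parameters are traceable like any other
# input): differentiate the total energy with respect to the well depth D0,
# evaluated at the baked-in value D0 = 5.0.
dE_dD0 = grad(lambda D0: energy_fn(Rfinal, D0=D0))(5.0)
print(dE_dD0)
```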
For example, if you want to shrink particles each step during a simulation, you can simply specify a different ```r0``` each step. This is demonstrated below, where we run a Brownian dynamics simulation at zero temperature with continuously decreasing ```r0```. The details of ```simulate.brownian``` are beyond the scope of this cookbook, but the idea is that we pass a new value of ```r0``` to the function ```apply``` each time it is called. The function ```apply``` takes a step of the simulation, and internally it passes any extra parameters like ```r0``` to ```energy_fn```. ``` def run_brownian(energy_fn, R_init, shift, key, num_steps): init, apply = simulate.brownian(energy_fn, shift, dt=0.00001, T_schedule=0.0, gamma=0.1) apply = jit(apply) # Define how r0 changes for each step r0_initial = 1.0 r0_final = .5 def Get_r0(t): return r0_final + (r0_initial-r0_final)*(num_steps-t)/num_steps @jit def scan_fn(state, t): # Dynamically pass r0 to apply, which passes it on to energy_fn return apply(state, t, r0=Get_r0(t)), 0 key, split = random.split(key) state = init(split, R_init) state, _ = lax.scan(scan_fn,state,np.arange(num_steps)) return state.position, np.amax(np.abs(-grad(energy_fn)(state.position))) ``` If we use the previous result as the starting point for the Brownian Dynamics simulation, we find exactly what we would expect, the system contracts into a finite cluster, held together by the attractive part of the potential. ``` key, split = random.split(key) Rfinal2, max_force_component = run_brownian(energy_fn, Rfinal, shift, split, num_steps=6000) plot_system( Rfinal2, box_size ) ``` ###Particle-specific parameters Our example potential has 4 parameters: ```D0```, ```alpha```, ```r0```, and ```k```. The usual way to pass these parameters is as a scalar (e.g. ```D0=5.0```), in which case that parameter is fixed for every particle pair. However, Python broadcasting allows for these parameters to be specified separately for every different particle pair by passing an $(N,N)$ array rather than a scalar. As an example, let's do this for the parameter ```r0```, which is an effective way of generating a system with continuous polydispersity in particle size. Note that the polydispersity disrupts the crystalline order after minimization. ``` # Draw the radii from a uniform distribution key, split = random.split(key) radii = random.uniform(split, (N,), minval=1.0, maxval=2.0, dtype=f64) # Rescale to match the initial volume fraction radii = np.array([radii * np.sqrt(N/(4.*np.dot(radii,radii)))]) # Turn this into a matrix of sums r0_matrix = radii+radii.transpose() # Create the energy function using r0_matrix energy_fn = harmonic_morse_pair(displacement, D0=5.0, alpha=10.0, r0=r0_matrix, k=500.0) # Minimize the energy using the FIRE algorithm Rfinal, max_force_component = run_minimization(energy_fn, R, shift) print('largest component of force after minimization = {}'.format(max_force_component)) plot_system( Rfinal, box_size ) ``` In addition to standard Python broadcasting, JAX MD allows for the special case of additive parameters. If a parameter is passed as a (N,) array ```p_vector```, JAX MD will convert this into a (N,N) array ```p_matrix``` where ```p_matrix[i,j] = 0.5 (p_vector[i] + p_vector[j])```. This is a JAX MD specific ability and not a feature of Python broadcasting. As it turns out, our above polydisperse example falls into this category. Therefore, we could achieve the same result by passing ```r0=2.0*radii```. 
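The cell below is a small plain-numpy sketch of that additive convention as described above (it mirrors the documented behaviour, not the JAX MD internals): a length-$N$ vector is expanded into the $(N,N)$ matrix of pairwise averages.

```
# Sketch of the additive (N,) -> (N, N) convention described above:
# p_matrix[i, j] = 0.5 * (p_vector[i] + p_vector[j]).
p_vector = np.array([1.0, 2.0, 4.0])
p_matrix = 0.5 * (p_vector[:, None] + p_vector[None, :])
print(p_matrix)
```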
``` # Create the energy function the radii array energy_fn = harmonic_morse_pair(displacement, D0=5.0, alpha=10.0, r0=2.*radii, k=500.0) # Minimize the energy using the FIRE algorithm Rfinal, max_force_component = run_minimization(energy_fn, R, shift) print('largest component of force after minimization = {}'.format(max_force_component)) plot_system( Rfinal, box_size ) ``` ### Species It is often important to specify parameters differently for different particle pairs, but doing so with full ($N$,$N$) matrices is both inefficient and obnoxious. JAX MD allows users to create species, i.e. $N_s$ groups of particles that are identical to each other, so that parameters can be passed as much smaller ($N_s$,$N_s$) matrices. First, create an array that specifies which particles belong in which species. We will divide our system into two species. ``` N_0 = N // 2 # Half the particles in species 0 N_1 = N - N_0 # The rest in species 1 species = np.array([0] * N_0 + [1] * N_1, dtype=np.int32) print(species) ``` Next, create the $(2,2)$ matrix of ```r0```'s, which are set so that the overall volume fraction matches our monodisperse case. ``` rsmall=0.41099747 # Match the total volume fraction rlarge=1.4*rsmall r0_species_matrix = np.array([[2*rsmall, rsmall+rlarge], [rsmall+rlarge, 2*rlarge]]) print(r0_species_matrix) ``` ``` energy_fn = harmonic_morse_pair(displacement, species=species, D0=5.0, alpha=10.0, r0=r0_species_matrix, k=500.0) Rfinal, max_force_component = run_minimization(energy_fn, R, shift) print('largest component of force after minimization = {}'.format(max_force_component)) plot_system( Rfinal, box_size, species=species ) ``` ###Dynamic Species Just like standard parameters, the species list can be passed dynamically as well. However, unlike standard parameters, you have to tell ```smap.pair``` that the species will be specified dynamically. To do this, set ```species=quantity.Dynamic``` when creating your energy function. The following sets up an energy function where the attractive part of the interaction only exists between members of the first species, but where the species will be defined dynamically. ``` D0_species_matrix = np.array([[ 5.0, 0.0], [0.0, 0.0]]) energy_fn = harmonic_morse_pair(displacement, species=quantity.Dynamic, D0=D0_species_matrix, alpha=10.0, r0=0.5, k=500.0) ``` Now we set up a finite temperature Brownian Dynamics simulation where, at every step, particles on the left half of the simulation box are assigned to species 0, while particles on the right half are assigned to species 1. ``` def run_brownian(energy_fn, R_init, shift, key, num_steps): init, apply = simulate.brownian(energy_fn, shift, dt=0.00001, T_schedule=1.0, gamma=0.1) # apply = jit(apply) # Define a function to recalculate the species each step def get_species(R): return np.where(R[:,0] < box_size / 2, 0, 1) #@jit def scan_fn(state, t): # Recalculate the species list species = get_species(state.position) # Dynamically pass species to apply, which passes it on to energy_fn return apply(state, species=species, species_count=2), 0 key, split = random.split(key) state = init(split, R_init) state, _ = lax.scan(scan_fn,state,np.arange(num_steps)) return state.position,np.amax(np.abs(-grad(energy_fn)(state.position, species=get_species(state.position), species_count=2))) ``` When we run this, we see that particles on the left side form clusters while particles on the right side do not. 
```
key, split = random.split(key)
Rfinal, max_force_component = run_brownian(energy_fn, R, shift, split, num_steps=10000)
plot_system( Rfinal, box_size )
```

##Efficiently calculating neighbors

The most computationally expensive part of most MD programs is calculating the force between all pairs of particles. Generically, this scales with $N^2$. However, for systems with isotropic pairwise interactions that are strictly zero beyond a cutoff, there are techniques to dramatically improve the efficiency. The two most common methods are cell lists and neighbor lists.

**Cell lists**

The technique here is to divide space into small cells that are just larger than the largest interaction range in the system. Thus, if particle $i$ is in cell $c_i$ and particle $j$ is in cell $c_j$, $i$ and $j$ can only interact if $c_i$ and $c_j$ are neighboring cells. Rather than searching all $N^2$ combinations of particle pairs for non-zero interactions, you only have to search the particles in the neighboring cells.

**Neighbor lists**

Here, for each particle $i$, we make a list of *potential* neighbors: particles $j$ that are within some threshold distance $r_\mathrm{threshold}$. If $r_\mathrm{threshold} = r_\mathrm{cutoff} + \Delta r_\mathrm{threshold}$ (where $r_\mathrm{cutoff}$ is the largest interaction range in the system and $\Delta r_\mathrm{threshold}$ is an appropriately chosen buffer size), then all interacting particles will appear in this list as long as no particle moves by more than $\Delta r_\mathrm{threshold}/2$. There is a tradeoff here: smaller $\Delta r_\mathrm{threshold}$ means fewer particles to search over each MD step but the list must be recalculated more often, while larger $\Delta r_\mathrm{threshold}$ means slower force calculations but less frequent neighbor list calculations.

In practice, the most efficient technique is often to use cell lists to calculate neighbor lists. In JAX MD, this occurs under the hood, and so only calls to neighbor-list functionality are necessary.

To implement neighbor lists, we need two functions: 1) a function to create and update the neighbor list, and 2) an energy function that uses a neighbor list rather than operating on all particle pairs. We create these functions with ```partition.neighbor_list``` and ```smap.pair_neighbor_list```, respectively. ```partition.neighbor_list``` takes basic box information as well as the maximum interaction range ```r_cutoff``` and the buffer size ```dr_threshold```.

```
def harmonic_morse_cutoff_neighbor_list(
    displacement_or_metric, box_size, species=None,
    D0=5.0, alpha=5.0, r0=1.0, k=50.0,
    r_onset=1.0, r_cutoff=1.5, dr_threshold=2.0, **kwargs):
  D0 = np.array(D0, dtype=np.float32)
  alpha = np.array(alpha, dtype=np.float32)
  r0 = np.array(r0, dtype=np.float32)
  k = np.array(k, dtype=np.float32)
  r_onset = np.array(r_onset, dtype=np.float32)
  r_cutoff = np.array(r_cutoff, np.float32)
  dr_threshold = np.float32(dr_threshold)

  neighbor_fn = partition.neighbor_list(
      displacement_or_metric, box_size, r_cutoff, dr_threshold)
  energy_fn = smap.pair_neighbor_list(
      energy.multiplicative_isotropic_cutoff(harmonic_morse, r_onset, r_cutoff),
      space.canonicalize_displacement_or_metric(displacement_or_metric),
      species=species, D0=D0, alpha=alpha, r0=r0, k=k)

  return neighbor_fn, energy_fn
```

To test this, we generate our new ```neighbor_fn``` and ```energy_fn```, as well as a comparison energy function using the default approach.
``` r_onset = 1.5 r_cutoff = 2.0 dr_threshold = 1.0 neighbor_fn, energy_fn = harmonic_morse_cutoff_neighbor_list( displacement, box_size, D0=5.0, alpha=10.0, r0=1.0, k=500.0, r_onset=r_onset, r_cutoff=r_cutoff, dr_threshold=dr_threshold) energy_fn_comparison = harmonic_morse_cutoff_pair( displacement, D0=5.0, alpha=10.0, r0=1.0, k=500.0, r_onset=r_onset, r_cutoff=r_cutoff) ``` Next, we use ```neighbor_fn``` and the current set of positions to populate the neighbor list. ``` nbrs = neighbor_fn(R) ``` To calculate the energy, we pass ```nbrs.idx``` to ```energy_fn```. The energy matches the comparison. ``` print(energy_fn(R,neighbor_idx=nbrs.idx)) print(energy_fn_comparison(R)) ``` Note that by default ```neighbor_fn``` uses a cell list internally to populate the neighbor list. This approach fails when the box size in any dimension is less than 3 times $r_\mathrm{threhsold} = r_\mathrm{cutoff} + \Delta r_\mathrm{threshold}$. In this case, ```neighbor_fn``` automatically turns off the use of cell lists, and instead searches over all particle pairs. This can also be done manually by passing ```disable_cell_list=True``` to ```partition.neighbor_list```. This can be useful for debugging or for small systems where the overhead of cell lists outweighs the benefit. ###Updating neighbor lists The function ```neighbor_fn``` has two different usages, depending on how it is called. When used as above, i.e. ```nbrs = neighbor_fn(R)```, a new neighbor list is generated from scratch. Internally, JAX MD uses the given positions ```R``` to estimate a maximum capacity, i.e. the maximum number of neighbors any particle will have at any point during the use of the neighbor list. This estimate can be adjusted by passing a value of ```capacity_multiplier``` to ```partition.neighbor_list```, which defaults to ```capacity_multiplier=1.25```. Since the maximum capacity is not known ahead of time, this construction of the neighbor list cannot be compiled. However, once a neighbor list is created in this way, repopulating the list with the same maximum capacity is a simpler operation that *can* be compiled. This is done by calling ```nbrs = neighbor_fn(R, nbrs)```. Internally, this checks if any particle has moved more than $\Delta r_\mathrm{threshold}/2$ and, if so, recomputes the neighbor list. If the new neighbor list exceeds the maximum capacity for any particle, the boolean variable ```nbrs.did_buffer_overflow``` is set to ```True```. These two uses together allow for safe and efficient neighbor list calculations. The example below demonstrates a typical simulation loop that uses neighbor lists. ``` def run_brownian_neighbor_list(energy_fn, neighbor_fn, R_init, shift, key, num_steps): nbrs = neighbor_fn(R_init) init, apply = simulate.brownian(energy_fn, shift, dt=0.00001, T_schedule=1.0, gamma=0.1) #apply = jit(apply) #@jit def body_fn(state, t): state, nbrs = state nbrs = neighbor_fn(state.position, nbrs) state = apply(state, neighbor_idx=nbrs.idx) return (state, nbrs), 0 key, split = random.split(key) state = init(split, R_init) step = 0 step_inc=100 while step < num_steps/step_inc: rtn_state, _ = lax.scan(body_fn,(state,nbrs), np.arange(step_inc)) new_state, nbrs = rtn_state # If the neighbor list overflowed, rebuild it and repeat part of # the simulation. if nbrs.did_buffer_overflow: nbrs = neighbor_fn(state.position) else: state = new_state step += 1 return state.position ``` To run this, we consider a much larger system than we have to this point. Warning: running this may take a few minutes. 
``` Nlarge = 100*N box_size_large = 10*box_size displacement_large, shift_large = setup_periodic_box(box_size_large) key, split1, split2 = random.split(key,3) Rlarge = random.uniform(split1, (Nlarge,dimension), minval=0.0, maxval=box_size_large, dtype=f64) dr_threshold = 1.5 neighbor_fn, energy_fn = harmonic_morse_cutoff_neighbor_list( displacement_large, box_size_large, D0=5.0, alpha=10.0, r0=1.0, k=500.0, r_onset=r_onset, r_cutoff=r_cutoff, dr_threshold=dr_threshold) energy_fn = jit(energy_fn) start_time = time.clock() Rfinal = run_brownian_neighbor_list(energy_fn, neighbor_fn, Rlarge, shift_large, split2, num_steps=4000) end_time = time.clock() print('run time = {}'.format(end_time-start_time)) plot_system( Rfinal, box_size_large, ms=2 ) ``` ##Bonds Bonds are a way of specifying potentials between specific pairs of particles that are "on" regardless of separation. For example, it is common to employ a two-sided spring potential between specific particle pairs, but JAX MD allows the user to specify arbitrary potentials with static or dynamic parameters. ### Create and implement a bond potential We start by creating a custom potential that corresponds to a bistable spring, taking the form \begin{equation} V(r) = a_4(r-r_0)^4 - a_2(r-r_0)^2. \end{equation} $V(r)$ has two minima, at $r = r_0 \pm \sqrt{\frac{a_2}{2a_4}}$. ``` def bistable_spring(dr, r0=1.0, a2=2, a4=5, **kwargs): return a4*(dr-r0)**4 - a2*(dr-r0)**2 ``` Plot $V(r)$ ``` drs = np.arange(0,2,0.01) U = bistable_spring(drs) plt.plot(drs,U) format_plot(r'$r$', r'$V(r)$') finalize_plot() ``` The next step is to promote this function to act on a set of bonds. This is done via ```smap.bond```, which takes our ```bistable_spring``` function, our displacement function, and a list of the bonds. It returns a function that calculates the energy for a given set of positions. ``` def bistable_spring_bond( displacement_or_metric, bond, bond_type=None, r0=1, a2=2, a4=5): """Convenience wrapper to compute energy of particles bonded by springs.""" r0 = np.array(r0, f32) a2 = np.array(a2, f32) a4 = np.array(a4, f32) return smap.bond( bistable_spring, space.canonicalize_displacement_or_metric(displacement_or_metric), bond, bond_type, r0=r0, a2=a2, a4=a4) ``` However, in order to implement this, we need a list of bonds. We will do this by taking a system minimized under our original ```harmonic_morse``` potential: ``` R_temp, max_force_component = run_minimization(harmonic_morse_pair(displacement,D0=5.0,alpha=10.0,r0=1.0,k=500.0), R, shift) print('largest component of force after minimization = {}'.format(max_force_component)) plot_system( R_temp, box_size ) ``` We now place a bond between all particle pairs that are separated by less than 1.3. ```calculate_bond_data``` returns a list of such bonds, as well as a list of the corresponding current length of each bond. ``` bonds, lengths = calculate_bond_data(displacement, R_temp, 1.3) print(bonds[:5]) # list of particle index pairs that form bonds print(lengths[:5]) # list of the current length of each bond ``` We use this length as the ```r0``` parameter, meaning that initially each bond is at the unstable local maximum $r=r_0$. ``` bond_energy_fn = bistable_spring_bond(displacement, bonds, r0=lengths) ``` We now use our new ```bond_energy_fn``` to minimize the energy of the system. The expectation is that nearby particles should either move closer together or further apart, and the choice of which to do should be made collectively due to the constraint of constant volume. 
This is exactly what we see.

```
Rfinal, max_force_component = run_minimization(bond_energy_fn, R_temp, shift)
print('largest component of force after minimization = {}'.format(max_force_component))
plot_system( Rfinal, box_size )
```

###Specifying bonds dynamically

As with species or parameters, bonds can be specified dynamically, i.e. when the energy function is called. Importantly, note that this does NOT override bonds that were specified statically in ```smap.bond```.

```
# Specifying the bonds dynamically ADDS additional bonds.
# Here, we dynamically pass the same bonds that were passed statically, which
# has the effect of doubling the energy
print(bond_energy_fn(R))
print(bond_energy_fn(R,bonds=bonds, r0=lengths))
```

We won't go through a further example as the implementation is exactly the same as specifying species or parameters dynamically, but the ability to employ bonds both statically and dynamically is a very powerful and general framework.

## Combining potentials

Most JAX MD functionality (e.g. simulations, energy minimizations) relies on a function that calculates energy for a set of positions. Importantly, while this cookbook focuses on simple and robust ways of defining such functions, JAX MD is not limited to these methods; users can implement energy functions however they like.

As an important example, here we consider the case where the energy includes both a pair potential and a bond potential. Specifically, we combine ```harmonic_morse_pair``` with ```bistable_spring_bond```.

```
# Note, the code in the "Bonds" section must be run prior to this.
energy_fn = harmonic_morse_pair(displacement,D0=0.,alpha=10.0,r0=1.0,k=1.0)
bond_energy_fn = bistable_spring_bond(displacement, bonds, r0=lengths)

def combined_energy_fn(R):
  return energy_fn(R) + bond_energy_fn(R)
```

Here, we have set $D_0=0$, so the pair potential is just a one-sided repulsive harmonic potential. For particles connected with a bond, this raises the energy of the "contracted" minimum relative to the "extended" minimum.

```
drs = np.arange(0,2,0.01)
U = harmonic_morse(drs,D0=0.,alpha=10.0,r0=1.0,k=1.0)+bistable_spring(drs)
plt.plot(drs,U)
format_plot(r'$r$', r'$V(r)$')
finalize_plot()
```

This new energy function can be passed to the minimization routine (or any other JAX MD simulation routine) in the usual way.

```
Rfinal, max_force_component = run_minimization(combined_energy_fn, R_temp, shift)
print('largest component of force after minimization = {}'.format(max_force_component))
plot_system( Rfinal, box_size )
```

##Specifying forces instead of energies

So far, we have defined functions that calculate the energy of the system, which we then pass to JAX MD. Internally, JAX MD uses automatic differentiation to convert these into functions that calculate forces, which are necessary to evolve a system under a given dynamics. However, JAX MD has the option to pass force functions directly, rather than energy functions. This creates additional flexibility because some forces cannot be represented as the gradient of a potential.

As a simple example, we create a custom force function that zeros out the force of some particles. During energy minimization, where there is no stochastic noise, this has the effect of fixing the position of these particles.

First, we break the system up into two species, as before.

```
N_0 = N // 2     # Half the particles in species 0
N_1 = N - N_0    # The rest in species 1
species = np.array([0]*N_0 + [1]*N_1, dtype=np.int32)
print(species)
```

Next, we create our custom force function.
Starting with our ```harmonic_morse``` pair potential, we obtain the force function explicitly (using the built-in automatic differentiation), and then multiply the force by the species id, which has the desired effect.

```
energy_fn = harmonic_morse_pair(displacement,D0=5.0,alpha=10.0,r0=1.0,k=500.0)
force_fn = quantity.force(energy_fn)

def custom_force_fn(R, **kwargs):
  return vmap(lambda a,b: a*b)(force_fn(R),species)
```

To run with our custom force function, we have to tell JAX MD that we are giving it a force rather than an energy. To do this, we define a new minimization function with a parameter ```quant``` that is passed to the JAX MD minimizer. If we set ```quant=quantity.Force```, this tells JAX MD that we are specifying a force function rather than an energy function.

```
def run_minimization_general(energy_or_force, R_init, shift, num_steps=5000, quant=quantity.Energy):
  dt_start = 0.001
  dt_max = 0.004
  init,apply=minimize.fire_descent(jit(energy_or_force),shift,quant=quant,dt_start=dt_start,dt_max=dt_max)
  apply = jit(apply)

  @jit
  def scan_fn(state, i):
    return apply(state), 0.

  state = init(R_init)
  state, _ = lax.scan(scan_fn,state,np.arange(num_steps))

  return state.position, np.amax(np.abs(quantity.canonicalize_force(energy_or_force,quant)(state.position)))
```

We run this as usual, passing ```quant=quantity.Force```.

```
key, split = random.split(key)
Rfinal, _ = run_minimization_general(custom_force_fn, R, shift, quant=quantity.Force)
plot_system( Rfinal, box_size, species )
```

After the above minimization, the blue particles have the same positions as they did initially:

```
plot_system( R, box_size, species )
```

Note, this method for fixing particles only works when there is no stochastic noise (as there is, e.g., in Langevin or Brownian dynamics) because such noise affects particles whether or not they have a net force. A safer way to fix particles is to create a custom ```shift``` function.

## Coupled ensembles

For a final example that demonstrates the flexibility within JAX MD, let's do something that is particularly difficult in most standard MD packages. We will create a "coupled ensemble" -- i.e. a set of two identical systems that are connected via an $Nd$-dimensional spring. An extension of this idea is used, for example, in the Doubly Nudged Elastic Band method for finding transition states.

If the "normal" energy of each system is
\begin{equation}
U(R) = \sum_{i,j} V( r_{ij} ),
\end{equation}
where $r_{ij}$ is the distance between the $i$th and $j$th particles in $R$ and $V(r)$ is a standard pair potential, and if the two sets of positions, $R_0$ and $R_1$, are coupled via the potential
\begin{equation}
U_\mathrm{spr}(R_0,R_1) = \frac 12 k_\mathrm{spr} \left| R_1 - R_0 \right|^2,
\end{equation}
then the total energy of the system is
\begin{equation}
U_\mathrm{total} = U(R_0) + U(R_1) + U_\mathrm{spr}(R_0,R_1).
\end{equation}

```
energy_fn = harmonic_morse_pair(displacement,D0=5.0,alpha=10.0,r0=0.5,k=500.0)

def spring_energy_fn(Rall, k_spr=50.0, **kwargs):
  metric = vmap(space.canonicalize_displacement_or_metric(displacement), (0, 0), 0)
  dr = metric(Rall[0],Rall[1])
  return 0.5*k_spr*np.sum((dr)**2)

def total_energy_fn(Rall, **kwargs):
  return np.sum(vmap(energy_fn)(Rall)) + spring_energy_fn(Rall)
```

We now have to define a new shift function that can handle arrays of shape $(2,N,d)$. In addition, we make two copies of our initial positions ```R```, one for each system.
```
def shift_all(Rall, dRall, **kwargs):
  return vmap(shift)(Rall, dRall)

Rall = np.array([R,R])
```

Now, all we have to do is pass our custom energy and shift functions, as well as the $(2,N,d)$ dimensional initial position, to JAX MD, and proceed as normal. As a demonstration, we define a simple and general Brownian Dynamics simulation function, similar to the simulation routines above except without the special cases (e.g. changing ```r0``` or species).

```
def run_brownian_simple(energy_or_force, R_init, shift, key, num_steps, quant=quantity.Energy):
  init, apply = simulate.brownian(energy_or_force, shift, quant=quant, dt=0.00001, T_schedule=1.0, gamma=0.1)
  apply = jit(apply)

  @jit
  def scan_fn(state, t):
    return apply(state), 0

  key, split = random.split(key)
  state = init(split, R_init)
  state, _ = lax.scan(scan_fn,state,np.arange(num_steps))

  return state.position
```

Note that nowhere in this function is there any indication that we are simulating an ensemble of systems. This comes entirely from the inputs: i.e. the energy function, the shift function, and the set of initial positions.

```
key, split = random.split(key)
Rall_final = run_brownian_simple(total_energy_fn, Rall, shift_all, split, num_steps=10000)
```

The output also has shape $(2,N,d)$. If we display the results, we see that the two systems are in similar, but not identical, positions, showing that we have succeeded in simulating a coupled ensemble.

```
for Ri in Rall_final:
  plot_system( Ri, box_size )
finalize_plot((0.5,0.5))
```
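As a quick check of that claim, here is a minimal sketch (assuming ```Rall_final```, ```displacement```, ```space```, ```vmap``` and ```np``` from the cells above are still in scope) that measures how far each particle in one copy ended up from its partner in the other copy:

```
# Per-particle distance between corresponding particles of the two coupled systems.
# Small but nonzero values indicate similar, yet not identical, configurations.
pair_metric = space.canonicalize_displacement_or_metric(displacement)
separations = vmap(pair_metric)(Rall_final[0], Rall_final[1])
print('mean separation = {}'.format(np.mean(separations)))
print('max separation = {}'.format(np.amax(separations)))
```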
abb30ce6c0b2ae20d4b4e165062137d974f003ab
73,146
ipynb
Jupyter Notebook
notebooks/custom_potentials.ipynb
conference-submitter/jax-md
f696191b6b0854c427dfa30ee4df0cf04aa709e5
[ "ECL-2.0", "Apache-2.0" ]
1
2020-07-24T19:49:05.000Z
2020-07-24T19:49:05.000Z
notebooks/custom_potentials.ipynb
conference-submitter/jax-md
f696191b6b0854c427dfa30ee4df0cf04aa709e5
[ "ECL-2.0", "Apache-2.0" ]
null
null
null
notebooks/custom_potentials.ipynb
conference-submitter/jax-md
f696191b6b0854c427dfa30ee4df0cf04aa709e5
[ "ECL-2.0", "Apache-2.0" ]
null
null
null
35.979341
811
0.534575
true
10,975
Qwen/Qwen-72B
1. YES 2. YES
0.808067
0.798187
0.644989
__label__eng_Latn
0.982903
0.336855
```python
from itertools import product

import numpy as np
```

# Introducing DenseNet

Ming Li

Data Scientist

Open Source Contributor (pandas, scikit-learn, etc).

<figure> <center> <figcaption><font size="-1">A single Dense Block of 5 layers. Huang et al 2016.</font></figcaption></center></figure>

* Novel connectivity pattern that increases the number of connections to quadratic in the number of layers: $\frac{L(L+1)}{2}$
* Feature reuse.
* Parameter and computation efficient.
* Outperforms current state-of-the-art results across various benchmarks.
* Easy and efficient (Pleiss et al 2017) implementation available.

## Convolution

In the continuous domain, convolution is defined as:

$$(f * g)(t) = \int_{0}^{t} f(\tau) g(t - \tau) d\tau$$

In discrete coordinate space $[h, w]$, this is equivalently defined as:

$$(f * g)[h, w] = \sum_{i}\sum_{j} f(i, j)\,g(h - i, w - j)$$

```python
f = np.array([[1, 1, 1, 0, 0],
              [0, 1, 1, 1, 0],
              [0, 0, 1, 1, 1],
              [0, 0, 1, 1, 0],
              [0, 1, 1, 0, 0]])

g = np.array([[1, 0, 1],
              [0, 1, 0],
              [1, 0, 1]])
```

<figure> <center> <figcaption><font size="-1">Convolution during Forward Propagation, <a href=http://ufldl.stanford.edu/wiki/index.php/Feature_extraction_using_convolution>source of image</a></font></figcaption></center> </figure>

```python
def element_conv():
    h, w = 3, 3
    element_conv = np.zeros_like(g, dtype=np.float32)
    for i in range(h):
        for j in range(w):
            kernel = f[i:h+i, j:w+j]
            element_conv[i, j] = np.sum(kernel * g)
    return element_conv

%timeit -n 1000 element_conv()
```

    92.2 µs ± 10.1 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

```python
element_conv()
```

    array([[4., 3., 4.],
           [2., 4., 3.],
           [2., 3., 4.]], dtype=float32)

Implementation as Matrix Multiplication:

```python
def matmul_conv():
    h, w = 3, 3
    col = np.zeros([9, 9])
    for i, j in product(range(h), range(w)):
        col[i*w+j] = f[i:h+i, j:w+j].flatten()
    return (g.flatten() @ col).reshape(g.shape)

%timeit -n 1000 matmul_conv()
```

    23.9 µs ± 6.66 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

```python
# check identity
assert np.allclose(element_conv(), matmul_conv())
```

## Dense Block

A Dense Block in DenseNet is a block of hidden layers where each subsequent layer reuses features from all preceding layers, through concatenation of feature maps along the depth. More concretely,

$$\begin{align*}
x_{1} &= f_{composite}(x_{0})\\
x_{2} &= f_{composite}([x_{0}, x_{1}])\\
\dots\\
x_{l} &= f_{composite}([x_{0}, x_{1}, x_{2}, \dots, x_{l-1}])
\end{align*}$$

- inspired by the "identity skip connection" in ResNet
- avoids information loss by using concatenation (not summation).
- bottleneck and compression techniques to control growth.

<figure> <center> <figcaption><font size="-1">An illustration of Dense Block of 2 layers</font></figcaption></center> </figure>

## Composite Function

$f_{composite}$ consists of 3 functions inspired by 'pre-activation' in ResNet: Batch Normalization, ReLU, Convolution. Concretely:

$$ f_{composite}(x) = Conv(ReLU(BN(x))) $$

<figure> <center> <figcaption><font size="-1">full pre-activation as in ResNet, note that "weight" indicates conv. He et al 2016</font></figcaption></center> </figure>

**Batch Normalizing Transform** is defined as:

$$\begin{align}
\hat{x_{i}} &= \frac{x_{i} - \mu_{B}}{\sqrt{\sigma_{B}^2 + \epsilon}}\\
BN(x_{i}; \gamma, \beta) &= \gamma \hat{x_{i}} + \beta
\end{align}$$

It has the benefit of reducing internal covariate shift (the moments of activations), accelerating training, and regularizing.
In addition, with DenseNet, it produces data augmentation every time a feature map is reused.

<figure> <center> <figcaption><font size="-1">Test Accuracy and Distribution of Logits over time. Ioffe and Szegedy 2015</font></figcaption></center> </figure>

**Rectified Linear Unit (ReLU)** is defined as:

$$f(x) = \max(0, x)$$

- faster evaluation than the sigmoid function $\sigma(z) = \frac{1}{1 + e^{-z}}$
- $\frac{\partial{f(x)}}{\partial{x}} \in \{0, 1\}$ is more favourable than $\frac{\partial{\sigma(x)}}{\partial{x}} \in [0., 0.25]$ in regards to reducing gradient vanishing.
- may cause 'dead neurons', unable to recover due to $\forall x \in (-\infty, 0), \ \frac{\partial{f(x)}}{\partial{x}} = 0$.

<figure> <center> <figcaption><font size="-1">Saturated layer slowing down learning. Glorot and Bengio et al 2010</font></figcaption></center> </figure>

## Dense Connectivity

- direct connections allow Jacobians to travel further with reduced gradient vanishing, so deeper networks are sensibly possible.
- fewer parameters required, less prone to overfitting.
- pseudo-augmentation due to feature reuse and Batch Normalization

<figure> <center> <figcaption><font size="-1">A DenseNet with multiple dense blocks.</font></figcaption></center> </figure>

a _very_ illustrative example:

$$\begin{align*}
\delta{w_1} &= \frac{\partial{E_{1}}}{\partial{w_1}} + \frac{\partial{E_{2}}}{\partial{w_1}} + \dots \\
\delta{w_1} &= \frac{\partial{E_{1}}}{\partial{f(x_1)}}\frac{\partial{f(x_1)}}{\partial{x_1}}\frac{\partial{x_1}}{\partial{w_1}} + \frac{\partial{E_{2}}}{\partial{f([x_1, x_2])}}\frac{\partial{f([x_1, x_2])}}{\partial{x_1}}\frac{\partial{x_1}}{\partial{w_1}} + \dots\\
\text{then}\\
\hat{w_1} &= w_1 - \lambda\delta{w_1}
\end{align*}$$

## Implementation

- Pleiss et al 2017 reduces memory consumption by exploiting cheap concatenation and BN operations.
- Two pre-allocated shared memory storages for intermediate outputs from concatenation and BN.

<figure> <center> <figcaption><font size="-1">efficient implementation. Pleiss et al 2017</font></figcaption></center> </figure>

## Not Mentioned

- bottleneck
- transition layers
- compression

## Reference

- Huang et al 2016. Densely Connected Convolutional Networks. accessible at: https://arxiv.org/abs/1608.06993
- Glorot and Bengio et al 2010. Understanding the difficulty of training deep feedforward neural networks. accessible at: http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf
- He et al 2016. Identity Mappings in Deep Residual Networks. accessible at: https://arxiv.org/abs/1603.05027
- Ioffe and Szegedy 2015. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. accessible at: https://arxiv.org/abs/1502.03167
- Pleiss et al 2017. Memory-Efficient Implementation of DenseNets. accessible at: https://arxiv.org/abs/1707.06990

## Reads

- Wan et al 2018. Reconciling Feature-Reuse and Overfitting in DenseNet with Specialized Dropout. accessible at: https://arxiv.org/abs/1810.00091
- Zhang et al 2015. Character-level Convolutional Networks for Text Classification. accessible at: https://arxiv.org/abs/1509.01626
- He et al 2015. Deep Residual Learning for Image Recognition. accessible at: https://arxiv.org/abs/1512.03385
- Krizhevsky et al 2012. Deep Convolutional Neural Networks. accessible at: https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf
- Simonyan and Zisserman 2014. Very Deep Convolutional Networks for Large-Scale Image Recognition.
  accessible at: https://arxiv.org/pdf/1409.1556.pdf
- Szegedy et al 2014. Going Deeper with Convolutions. accessible at: https://www.cs.unc.edu/~wliu/papers/GoogLeNet.pdf

## Slides

https://github.com/minggli/densenet
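As a small illustrative appendix (not from the slides; the array shapes and names below are made up for demonstration), the Batch Normalizing Transform and the channel-wise concatenation behind dense connectivity can be sketched in NumPy as follows:

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    # BN transform from Ioffe and Szegedy 2015, normalizing over the batch axis.
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x0 = rng.normal(size=(8, 4, 5, 5))              # (batch, channels, height, width)
x1 = np.maximum(batch_norm(x0), 0)              # BN -> ReLU, standing in for one composite layer (convolution omitted)
dense_input = np.concatenate([x0, x1], axis=1)  # feature reuse: channels grow, spatial size does not
print(dense_input.shape)                        # -> (8, 8, 5, 5)
```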
22f1158d9be9bc162219b60a91f09e35a27117ed
13,775
ipynb
Jupyter Notebook
LDSJC.ipynb
minggli/densenet
01420e090e72413be9345757f35371a9af813b41
[ "MIT" ]
1
2017-11-17T14:07:23.000Z
2017-11-17T14:07:23.000Z
LDSJC.ipynb
minggli/DenseNet
01420e090e72413be9345757f35371a9af813b41
[ "MIT" ]
3
2020-03-24T16:40:36.000Z
2021-03-25T22:37:42.000Z
LDSJC.ipynb
minggli/DenseNet
01420e090e72413be9345757f35371a9af813b41
[ "MIT" ]
null
null
null
28.227459
299
0.545408
true
2,262
Qwen/Qwen-72B
1. YES 2. YES
0.835484
0.879147
0.734513
__label__eng_Latn
0.656794
0.544851
Before you turn this problem in, make sure everything runs as expected. First, **restart the kernel** (in the menubar, select Kernel$\rightarrow$Restart) and then **run all cells** (in the menubar, select Cell$\rightarrow$Run All). Make sure you fill in any place that says `YOUR CODE HERE` or "YOUR ANSWER HERE", as well as your name and collaborators below: ```python NAME = "Prabal Chowdhury" COLLABORATORS = "" ``` --- ## CSE330 Lab: Richardson Extrapolation --- ## Instructions Today's assignment is to: 1. Implement Richardson Extrapolation method using Python ## Richardson Extrapolation: We used central difference method to calculate derivatives of functions last task. In this task we will use Richardson extrapolation to get a more accurate result. Let, $$ D_h = \frac{f(x_1+h) -f(x_1-h)}{2h}\tag{5.1}$$ General Taylor Series formula: $$ f(x) = f(x_1) + f'(x_1)(x - x_1) + \frac{f''(x_1)}{2}(x - x_1)^2+... $$ Using Taylor's theorem to expand we get, \begin{align} f(x_1+h) &= f(x_1) + f^{\prime}(x_1)h + \frac{f^{\prime \prime}(x_1)}{2}h^2 + \frac{f^{\prime \prime \prime}(x_1)}{3!}h^3 + \frac{f^{(4)}(x_1)}{4!}h^4 + \frac{f^{(5)}(x_1)}{5!}h^5 + O(h^6)\tag{5.2} \\ f(x_1-h) &= f(x_1) - f^{\prime}(x_1)h + \frac{f^{\prime \prime}(x_1)}{2}h^2 - \frac{f^{\prime \prime \prime}(x_1)}{3!}h^3 + \frac{f^{(4)}(x_1)}{4!}h^4 - \frac{f^{(5)}(x_1)}{5!}h^5 + O(h^6)\tag{5.3} \end{align} Subtracting $5.3$ from $5.2$ we get, $$ f(x_1+h) - f(x_1-h) = 2f^{\prime}(x_1)h + 2\frac{f^{\prime \prime \prime}(x_1)}{3!}h^3 + 2\frac{f^{(5)}(x_1)}{5!}h^5 + O(h^7)\tag{5.4}$$ So, \begin{align} D_h &= \frac{f(x_1+h) - f(x_1-h)}{2h} \\ &= \frac{1}{2h} \left( 2f^{\prime}(x_1)h + 2\frac{f^{\prime \prime \prime}(x_1)}{3!}h^3 + 2\frac{f^{(5)}(x_1)}{5!}h^5 + O(h^7) \right) \\ &= f^{\prime}(x_1) + \frac{f^{\prime \prime \prime}(x_1)}{6}h^2 + \frac{f^{(5)}(x_1)}{120}h^4 + O(h^6) \tag{5.5} \end{align} We get our derivative $f'(x)$ plus some error terms of order $>= 2$ Now, we want to bring our error order down to 4. If we use $h, \text{and} \frac{h}{2}$ as step size in $5.5$, we get, \begin{align} D_h &= f^{\prime}(x_1) + f^{\prime \prime \prime}(x_1)\frac{h^2}{6} + f^{(5)}(x_1) \frac{h^4}{120} + O(h^6) \tag{5.6} \\ D_{h/2} &= f^{\prime}(x_1) + f^{\prime \prime \prime}(x_1)\frac{h^2}{2^2 . 6} + f^{(5)}(x_1) \frac{h^4}{2^4 . 120} + O(h^6) \tag{5.7} \end{align} Multiplying $5.7$ by $4$ and subtracting from $5.6$ we get, \begin{align} D_h - 4D_{h/2} &= -3f^{\prime}(x) + f^{(5)}(x_1) \frac{h^4}{160} + O(h^6)\\ \Longrightarrow D^{(1)}_h = \frac{4D_{h/2} - D_h}{3} &= f^{\prime}(x) - f^{(5)}(x_1) \frac{h^4}{480} + O(h^6) \tag{5.8} \end{align} Let's calculate the derivative using $5.8$ ### 1. Let's import the necessary headers ```python import numpy as np import pandas as pd import matplotlib.pyplot as plt from numpy.polynomial import Polynomial ``` ### 2. Let's create a function named `dh(f, h, x)` function `dh(f, h, x)` takes three parameters as input: a function `f`, a value `h`, and a set of values `x`. It returns the derivatives of the function at each elements of array `x` using the Central Difference method. This calculates equation $(5.1)$. ```python def dh(f, h, x): ''' Input: f: np.polynomial.Polynonimial type data. h: floating point data. x: np.array type data. Output: return np.array type data of slope at each point x. ''' # -------------------------------------------- return (f(x+h) - f(x-h)) / (2*h) # -------------------------------------------- ``` ### 3. Let's create another funtion `dh1(f, h, x)`. 
`dh1(f, h, x)` takes the same type of values as `dh(f, h, x)` as input. It calculates the derivative using the previously defined `dh(f, h, x)` function together with equation $5.8$ and returns the values.

```python
def dh1(f, h, x):
    '''
    Input:
        f: np.polynomial.Polynomial type data.
        h: floating point data.
        x: np.array type data.
    Output:
        return np.array type data of slope at each point x.
    '''
    # --------------------------------------------
    # YOUR CODE HERE
    return (4 * dh(f, h/2, x) - dh(f, h, x)) / 3
    # --------------------------------------------
```

### 4. Now let's create the `error(f, hs, x_i)` function

The `error(f, hs, x_i)` function takes a function `f` as input. It also takes a list of different values of h as `hs` and a specific value as `x_i` as input. It calculates the derivatives at point `x_i` using both functions described in **B** and **C**, i.e. `dh` and `dh1`.

```python
def error(f, hs, x_i):
    # Using dh() (central difference) and dh1() (Richardson extrapolation),
    # we find the errors by taking their differences from the true derivative f'(x_i).
    '''
    Input:
        f  : np.polynomial.Polynomial type data.
        hs : np.array type data. list of h.
        x_i: floating point data. single value of x.
    Output:
        return two np.array type data of errors by two methods.
    '''
    f_prime = f.deriv(1)  # first-order derivative f'(x)
    Y_actual = f_prime(x_i)

    diff_error = []
    diff2_error = []
    for h in hs:  # h iterates through the values in hs
        # for each value of hs calculate the error using both methods
        # and append those values into the diff_error and diff2_error lists.
        # --------------------------------------------
        # YOUR CODE HERE
        e1 = Y_actual - dh(f, h, x_i)
        diff_error.append(e1)
        e2 = Y_actual - dh1(f, h, x_i)
        diff2_error.append(e2)
        # --------------------------------------------

    print(pd.DataFrame({"h": hs, "Diff": diff_error, "Diff2": diff2_error}))

    return diff_error, diff2_error
```

### 5. Finally let's run some tests

Function to draw the actual function:

```python
def draw_graph(f, ax, domain=[-10, 10], label=None):
    data = f.linspace(domain=domain)
    ax.plot(data[0], data[1], label=label)
```

### Draw the polynomial and its actual derivative function

```python
fig, ax = plt.subplots()
ax.axhline(y=0, color='k')

p = Polynomial([2.0, 1.0, -6.0, -2.0, 2.5, 1.0])
p_prime = p.deriv(1)

draw_graph(p, ax, [-2.4, 1.5], 'Function')
draw_graph(p_prime, ax, [-2.4, 1.5], 'Derivative')

ax.legend()
```

### Draw the actual derivative and Richardson derivative using `h=1` and `h=0.1` as step size.

```python
fig, ax = plt.subplots()
ax.axhline(y=0, color='k')

draw_graph(p_prime, ax, [-2.4, 1.5], 'actual')

h = 1
x = np.linspace(-2.4, 1.5, 50, endpoint=True)
y = dh1(p, h, x)
ax.plot(x, y, label='Richardson; h=1')

h = 0.1
x = np.linspace(-2.4, 1.5, 50, endpoint=True)
y = dh1(p, h, x)
ax.plot(x, y, label='Richardson; h=0.1')

ax.legend()
```

### Draw error-vs-h curve

```python
fig, ax = plt.subplots()
ax.axhline(y=0, color='k')

hs = np.array([1., 0.55, 0.3, .17, 0.1, 0.055, 0.03, 0.017, 0.01])
e1, e2 = error(p, hs, 2.0)

ax.plot(hs, e1, label='e1')
ax.plot(hs, e2, label='e2')

ax.legend()
```

```python

```
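As an optional sanity check (a small sketch assuming `p`, `dh` and `dh1` defined above are in scope), halving `h` should shrink the central-difference error by roughly a factor of $4$ (order $h^2$) and the Richardson error by roughly a factor of $16$ (order $h^4$):

```python
x_i = 2.0
exact = p.deriv(1)(x_i)
for h in [0.4, 0.2, 0.1]:
    err_dh = abs(exact - dh(p, h, x_i))    # central difference, O(h^2)
    err_dh1 = abs(exact - dh1(p, h, x_i))  # Richardson extrapolation, O(h^4)
    print(f"h = {h}: |error dh| = {err_dh:.3e}, |error dh1| = {err_dh1:.3e}")
```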
af44335ef720cdaa38d9e2f73e7a7bf93779d8df
69,253
ipynb
Jupyter Notebook
Richardson_Extrapolation.ipynb
PrabalChowdhury/CSE330-NUMERICAL-METHODS
aabfea01f4ceaecfbb50d771ee990777d6e1122c
[ "MIT" ]
null
null
null
Richardson_Extrapolation.ipynb
PrabalChowdhury/CSE330-NUMERICAL-METHODS
aabfea01f4ceaecfbb50d771ee990777d6e1122c
[ "MIT" ]
null
null
null
Richardson_Extrapolation.ipynb
PrabalChowdhury/CSE330-NUMERICAL-METHODS
aabfea01f4ceaecfbb50d771ee990777d6e1122c
[ "MIT" ]
null
null
null
103.827586
19,926
0.80372
true
2,492
Qwen/Qwen-72B
1. YES 2. YES
0.880797
0.826712
0.728165
__label__eng_Latn
0.785135
0.530104
# How does a car's suspension work?

<div> </div>

> A first approximation to the model of a car's suspension is to consider the *damped harmonic oscillator*.

Reference:
- https://es.wikipedia.org/wiki/Oscilador_arm%C3%B3nico#Oscilador_arm.C3.B3nico_amortiguado

A **model** that describes the behavior of the mechanical system above is

\begin{equation}
m\frac{d^2 x}{dt^2}=-c\frac{dx}{dt}-kx
\end{equation}

where $c$ is the damping constant and $k$ is the spring (elastic) constant.

<font color=red> Review the modeling </font>

Documentation of the packages we will use today:
- https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.odeint.html
- https://docs.scipy.org/doc/scipy/reference/index.html

___

In `python` there is a function called <font color = blue>_odeint_</font> from the <font color = blue>_integrate_</font> package of the <font color = blue>_scipy_</font> library, which integrates first-order vector systems of the form

\begin{equation}
\frac{d\boldsymbol{y}}{dt} = \boldsymbol{f}(\boldsymbol{y},t); \qquad \text{ with }\quad \boldsymbol{y}\in\mathbb{R}^n,\quad \boldsymbol{f}:\mathbb{R}^n\times\mathbb{R}_{+}\to\mathbb{R}^n
\end{equation}

with initial conditions $\boldsymbol{y}(0) = \boldsymbol{y}_{0}$. Note that <font color=red> $\boldsymbol{y}$ represents a vector of $n$ components</font>.

Now, if we look closely, the *damped harmonic oscillator* model we obtained is a second-order ordinary differential equation (ODE). No problem. We can convert it into a system of first-order equations as follows:

1. We select the vector $\boldsymbol{y}=\left[y_1\quad y_2\right]^T$, with $y_1=x$ and $y_2=\frac{dx}{dt}$.
2. We note that $\frac{dy_1}{dt}=\frac{dx}{dt}=y_2$ and $\frac{dy_2}{dt}=\frac{d^2x}{dt^2}=-\frac{c}{m}\frac{dx}{dt}-\frac{k}{m}x=-\frac{c}{m}y_2-\frac{k}{m}y_1$.
3. Then, the second-order model can be represented as the following first-order vector system:

\begin{equation}
\frac{d\boldsymbol{y}}{dt}=\left[\begin{array}{c}\frac{dy_1}{dt} \\ \frac{dy_2}{dt}\end{array}\right]=\left[\begin{array}{c}y_2 \\ -\frac{k}{m}y_1-\frac{c}{m}y_2\end{array}\right]=\left[\begin{array}{cc}0 & 1 \\-\frac{k}{m} & -\frac{c}{m}\end{array}\right]\boldsymbol{y}.
\end{equation}

```python
# First we import all the libraries, packages and/or functions we are going to use
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
```

```python
help(odeint)
```

Help on function odeint in module scipy.integrate.odepack: odeint(func, y0, t, args=(), Dfun=None, col_deriv=0, full_output=0, ml=None, mu=None, rtol=None, atol=None, tcrit=None, h0=0.0, hmax=0.0, hmin=0.0, ixpr=0, mxstep=0, mxhnil=0, mxordn=12, mxords=5, printmessg=0, tfirst=False) Integrate a system of ordinary differential equations. .. note:: For new code, use `scipy.integrate.solve_ivp` to solve a differential equation. Solve a system of ordinary differential equations using lsoda from the FORTRAN library odepack. Solves the initial value problem for stiff or non-stiff systems of first order ode-s:: dy/dt = func(y, t, ...) [or func(t, y, ...)] where y can be a vector. .. note:: By default, the required order of the first two arguments of `func` are in the opposite order of the arguments in the system definition function used by the `scipy.integrate.ode` class and the function `scipy.integrate.solve_ivp`. To use a function with the signature ``func(t, y, ...)``, the argument `tfirst` must be set to ``True``.
Parameters ---------- func : callable(y, t, ...) or callable(t, y, ...) Computes the derivative of y at t. If the signature is ``callable(t, y, ...)``, then the argument `tfirst` must be set ``True``. y0 : array Initial condition on y (can be a vector). t : array A sequence of time points for which to solve for y. The initial value point should be the first element of this sequence. This sequence must be monotonically increasing or monotonically decreasing; repeated values are allowed. args : tuple, optional Extra arguments to pass to function. Dfun : callable(y, t, ...) or callable(t, y, ...) Gradient (Jacobian) of `func`. If the signature is ``callable(t, y, ...)``, then the argument `tfirst` must be set ``True``. col_deriv : bool, optional True if `Dfun` defines derivatives down columns (faster), otherwise `Dfun` should define derivatives across rows. full_output : bool, optional True if to return a dictionary of optional outputs as the second output printmessg : bool, optional Whether to print the convergence message tfirst: bool, optional If True, the first two arguments of `func` (and `Dfun`, if given) must ``t, y`` instead of the default ``y, t``. .. versionadded:: 1.1.0 Returns ------- y : array, shape (len(t), len(y0)) Array containing the value of y for each desired time in t, with the initial value `y0` in the first row. infodict : dict, only returned if full_output == True Dictionary containing additional output information ======= ============================================================ key meaning ======= ============================================================ 'hu' vector of step sizes successfully used for each time step. 'tcur' vector with the value of t reached for each time step. (will always be at least as large as the input times). 'tolsf' vector of tolerance scale factors, greater than 1.0, computed when a request for too much accuracy was detected. 'tsw' value of t at the time of the last method switch (given for each time step) 'nst' cumulative number of time steps 'nfe' cumulative number of function evaluations for each time step 'nje' cumulative number of jacobian evaluations for each time step 'nqu' a vector of method orders for each successful step. 'imxer' index of the component of largest magnitude in the weighted local error vector (e / ewt) on an error return, -1 otherwise. 'lenrw' the length of the double work array required. 'leniw' the length of integer work array required. 'mused' a vector of method indicators for each successful time step: 1: adams (nonstiff), 2: bdf (stiff) ======= ============================================================ Other Parameters ---------------- ml, mu : int, optional If either of these are not None or non-negative, then the Jacobian is assumed to be banded. These give the number of lower and upper non-zero diagonals in this banded matrix. For the banded case, `Dfun` should return a matrix whose rows contain the non-zero bands (starting with the lowest diagonal). Thus, the return matrix `jac` from `Dfun` should have shape ``(ml + mu + 1, len(y0))`` when ``ml >=0`` or ``mu >=0``. The data in `jac` must be stored such that ``jac[i - j + mu, j]`` holds the derivative of the `i`th equation with respect to the `j`th state variable. If `col_deriv` is True, the transpose of this `jac` must be returned. rtol, atol : float, optional The input parameters `rtol` and `atol` determine the error control performed by the solver. 
The solver will control the vector, e, of estimated local errors in y, according to an inequality of the form ``max-norm of (e / ewt) <= 1``, where ewt is a vector of positive error weights computed as ``ewt = rtol * abs(y) + atol``. rtol and atol can be either vectors the same length as y or scalars. Defaults to 1.49012e-8. tcrit : ndarray, optional Vector of critical points (e.g. singularities) where integration care should be taken. h0 : float, (0: solver-determined), optional The step size to be attempted on the first step. hmax : float, (0: solver-determined), optional The maximum absolute step size allowed. hmin : float, (0: solver-determined), optional The minimum absolute step size allowed. ixpr : bool, optional Whether to generate extra printing at method switches. mxstep : int, (0: solver-determined), optional Maximum number of (internally defined) steps allowed for each integration point in t. mxhnil : int, (0: solver-determined), optional Maximum number of messages printed. mxordn : int, (0: solver-determined), optional Maximum order to be allowed for the non-stiff (Adams) method. mxords : int, (0: solver-determined), optional Maximum order to be allowed for the stiff (BDF) method. See Also -------- solve_ivp : Solve an initial value problem for a system of ODEs. ode : a more object-oriented integrator based on VODE. quad : for finding the area under a curve. Examples -------- The second order differential equation for the angle `theta` of a pendulum acted on by gravity with friction can be written:: theta''(t) + b*theta'(t) + c*sin(theta(t)) = 0 where `b` and `c` are positive constants, and a prime (') denotes a derivative. To solve this equation with `odeint`, we must first convert it to a system of first order equations. By defining the angular velocity ``omega(t) = theta'(t)``, we obtain the system:: theta'(t) = omega(t) omega'(t) = -b*omega(t) - c*sin(theta(t)) Let `y` be the vector [`theta`, `omega`]. We implement this system in python as: >>> def pend(y, t, b, c): ... theta, omega = y ... dydt = [omega, -b*omega - c*np.sin(theta)] ... return dydt ... We assume the constants are `b` = 0.25 and `c` = 5.0: >>> b = 0.25 >>> c = 5.0 For initial conditions, we assume the pendulum is nearly vertical with `theta(0)` = `pi` - 0.1, and is initially at rest, so `omega(0)` = 0. Then the vector of initial conditions is >>> y0 = [np.pi - 0.1, 0.0] We will generate a solution at 101 evenly spaced samples in the interval 0 <= `t` <= 10. So our array of times is: >>> t = np.linspace(0, 10, 101) Call `odeint` to generate the solution. To pass the parameters `b` and `c` to `pend`, we give them to `odeint` using the `args` argument. >>> from scipy.integrate import odeint >>> sol = odeint(pend, y0, t, args=(b, c)) The solution is an array with shape (101, 2). The first column is `theta(t)`, and the second is `omega(t)`. The following code plots both components. 
>>> import matplotlib.pyplot as plt >>> plt.plot(t, sol[:, 0], 'b', label='theta(t)') >>> plt.plot(t, sol[:, 1], 'g', label='omega(t)') >>> plt.legend(loc='best') >>> plt.xlabel('t') >>> plt.grid() >>> plt.show() ```python # Función f(y,t) que vamos a integrar def amortiguado(y, t, k, m, c): y1 = y[0] y2 = y[1] return np.array([y2, -(k / m) * y1 - (c / m) * y2]) ``` ```python # Definimos los parámetros k, m y c k, m, c = 3, 1, 0.5 # Condiciones iniciales y0 = np.array([1, 1]) # Especificamos los puntos de tiempo donde queremos la solución t = np.linspace(0, 30, 300) ``` ```python # Solución numérica y = odeint(func=amortiguado, y0=y0, t=t, args=(k, m, c)) ``` ¿Cómo entrega odeint las soluciones? ```python # Averiguar la forma de solución y.shape ``` (300, 2) ```python # Mostrar la solución y ``` array([[ 1.00000000e+00, 1.00000000e+00], [ 1.08255354e+00, 6.44400045e-01], [ 1.12926028e+00, 2.87265734e-01], [ 1.14050185e+00, -6.08345277e-02], [ 1.11768327e+00, -3.90114960e-01], [ 1.06314152e+00, -6.91821494e-01], [ 9.80031347e-01, -9.58438604e-01], [ 8.72192532e-01, -1.18384948e+00], [ 7.44003548e-01, -1.36344733e+00], [ 6.00226559e-01, -1.49419684e+00], [ 4.45848722e-01, -1.57464647e+00], [ 2.85924657e-01, -1.60489362e+00], [ 1.25424603e-01, -1.58650595e+00], [-3.09074594e-02, -1.52240341e+00], [-1.78682133e-01, -1.41670627e+00], [-3.13976227e-01, -1.27455532e+00], [-4.33425216e-01, -1.10191089e+00], [-5.34294753e-01, -9.05337619e-01], [-6.14530179e-01, -6.91782064e-01], [-6.72783643e-01, -4.68349828e-01], [-7.08419119e-01, -2.42089037e-01], [-7.21496257e-01, -1.97860630e-02], [-7.12734597e-01, 1.92221059e-01], [-6.83460139e-01, 3.88206930e-01], [-6.35536705e-01, 5.63197023e-01], [-5.71284864e-01, 7.13077112e-01], [-4.93391403e-01, 8.34672057e-01], [-4.04812458e-01, 9.25793147e-01], [-3.08673486e-01, 9.85254145e-01], [-2.08169142e-01, 1.01285709e+00], [-1.06466032e-01, 1.00934976e+00], [-6.61104632e-03, 9.76357430e-01], [ 8.85523259e-02, 9.16292147e-01], [ 1.76457556e-01, 8.32243416e-01], [ 2.54876953e-01, 7.27854260e-01], [ 3.21970545e-01, 6.07187080e-01], [ 3.76321163e-01, 4.74583783e-01], [ 4.16955391e-01, 3.34524538e-01], [ 4.43350454e-01, 1.91489371e-01], [ 4.55427534e-01, 4.98265382e-02], [ 4.53532357e-01, -8.63687837e-02], [ 4.38404262e-01, -2.13362439e-01], [ 4.11135226e-01, -3.27873347e-01], [ 3.73120515e-01, -4.27147733e-01], [ 3.26002856e-01, -5.09014244e-01], [ 2.71612061e-01, -5.71919246e-01], [ 2.11902105e-01, -6.14942256e-01], [ 1.48887646e-01, -6.37791984e-01], [ 8.45818567e-02, -6.40784133e-01], [ 2.09373508e-02, -6.24802410e-01], [-4.02082314e-02, -5.91244778e-01], [-9.71805876e-02, -5.41957213e-01], [-1.48509956e-01, -4.79157531e-01], [-1.92964291e-01, -4.05352053e-01], [-2.29573832e-01, -3.23247831e-01], [-2.57646763e-01, -2.35663301e-01], [-2.76775942e-01, -1.45440011e-01], [-2.86836931e-01, -5.53580104e-02], [-2.87977823e-01, 3.19428361e-02], [-2.80601553e-01, 1.14033541e-01], [-2.65341593e-01, 1.88757055e-01], [-2.43032054e-01, 2.54278333e-01], [-2.14673367e-01, 3.09122495e-01], [-1.81394746e-01, 3.52200544e-01], [-1.44414712e-01, 3.82822476e-01], [-1.05000916e-01, 4.00698037e-01], [-6.44304938e-02, 4.05925681e-01], [-2.39520746e-02, 3.98970667e-01], [ 1.52495098e-02, 3.80633436e-01], [ 5.20849165e-02, 3.52009701e-01], [ 8.55876270e-02, 3.14443793e-01], [ 1.14936335e-01, 2.69477011e-01], [ 1.39471973e-01, 2.18792684e-01], [ 1.58709138e-01, 1.64159774e-01], [ 1.72341858e-01, 1.07376701e-01], [ 1.80243801e-01, 5.02170689e-02], [ 1.82463197e-01, -5.62123894e-03], [ 
1.79212881e-01, -5.85622380e-02], [ 1.70855985e-01, -1.07192185e-01], [ 1.57887912e-01, -1.50293175e-01], [ 1.40915311e-01, -1.86869344e-01], [ 1.20632813e-01, -2.16165278e-01], [ 9.77983079e-02, -2.37676424e-01], [ 7.32075851e-02, -2.51151649e-01], [ 4.76690987e-02, -2.56588223e-01], [ 2.19795918e-02, -2.54219734e-01], [-3.09873955e-03, -2.44497674e-01], [-2.68589505e-02, -2.28067501e-01], [-4.86674160e-02, -2.05740192e-01], [-6.79788598e-02, -1.78460311e-01], [-8.43480530e-02, -1.47271718e-01], [-9.74380008e-02, -1.13282048e-01], [-1.07024548e-01, -7.76270369e-02], [-1.12997443e-01, -4.14357813e-02], [-1.15358006e-01, -5.79788809e-03], [-1.14213629e-01, 2.82666093e-02], [-1.09769431e-01, 5.98338166e-02], [-1.02317451e-01, 8.80984204e-02], [-9.22238117e-02, 1.12391550e-01], [-7.99143379e-02, 1.32193748e-01], [-6.58591253e-02, 1.47142925e-01], [-5.05565614e-02, 1.57037300e-01], [-3.45172979e-02, 1.61833489e-01], [-1.82486470e-02, 1.61640028e-01], [-2.23983493e-03, 1.56706740e-01], [ 1.30514820e-02, 1.47410493e-01], [ 2.72111227e-02, 1.34237906e-01], [ 3.98784229e-02, 1.17765682e-01], [ 5.07542116e-02, 9.86392579e-02], [ 5.96065967e-02, 7.75504737e-02], [ 6.62744819e-02, 5.52149751e-02], [ 7.06688393e-02, 3.23500227e-02], [ 7.27717970e-02, 9.65333856e-03], [ 7.26336834e-02, -1.22164388e-02], [ 7.03682113e-02, -3.26571938e-02], [ 6.61460356e-02, -5.11382596e-02], [ 6.01869530e-02, -6.72125140e-02], [ 5.27510391e-02, -8.05254290e-02], [ 4.41290354e-02, -9.08209578e-02], [ 3.46323078e-02, -9.79442377e-02], [ 2.45826886e-02, -1.01841190e-01], [ 1.43025071e-02, -1.02555178e-01], [ 4.10509572e-03, -1.00220965e-01], [-5.71397588e-03, -9.50562807e-02], [-1.48847166e-02, -8.73513683e-02], [-2.31693954e-02, -7.74569047e-02], [-3.03679467e-02, -6.57707453e-02], [-3.63220085e-02, -5.27239153e-02], [-4.09175312e-02, -3.87663328e-02], [-4.40859626e-02, -2.43526600e-02], [-4.58040350e-02, -9.92871470e-03], [-4.60922337e-02, 4.08119800e-03], [-4.50120538e-02, 1.72856986e-02], [-4.26621872e-02, 2.93362230e-02], [-3.91737998e-02, 3.99351709e-02], [-3.47050959e-02, 4.88421445e-02], [-2.94353420e-02, 5.58782266e-02], [-2.35585689e-02, 6.09282254e-02], [-1.72771482e-02, 6.39409421e-02], [-1.07954393e-02, 6.49275402e-02], [-4.31368716e-03, 6.39581692e-02], [ 1.97765943e-03, 6.11570151e-02], [ 7.90306528e-03, 5.66960110e-02], [ 1.33063359e-02, 5.07874454e-02], [ 1.80542610e-02, 4.36757518e-02], [ 2.20394035e-02, 3.56287536e-02], [ 2.51819948e-02, 2.69286518e-02], [ 2.74309256e-02, 1.78630283e-02], [ 2.87638445e-02, 8.71613240e-03], [ 2.91864079e-02, -2.39306389e-04], [ 2.87307434e-02, -8.74952811e-03], [ 2.74532105e-02, -1.65862707e-02], [ 2.54315592e-02, -2.35522285e-02], [ 2.27615988e-02, -2.94853359e-02], [ 1.95534995e-02, -3.42618035e-02], [ 1.59278522e-02, -3.77978836e-02], [ 1.20116157e-02, -4.00503712e-02], [ 7.93407486e-03, -4.10158893e-02], [ 3.82292821e-03, -4.07290402e-02], [-1.99386584e-04, -3.92595299e-02], [-4.01903279e-03, -3.67084026e-02], [-7.53369787e-03, -3.32035400e-02], [-1.06550374e-02, -2.88945878e-02], [-1.33105865e-02, -2.39474939e-02], [-1.54451110e-02, -1.85388367e-02], [-1.70213884e-02, -1.28501164e-02], [-1.80204191e-02, -7.06218181e-03], [-1.84410945e-02, -1.34994748e-03], [-1.82993527e-02, 4.12245437e-03], [-1.76268770e-02, 9.20596955e-03], [-1.64693946e-02, 1.37702627e-02], [-1.48846455e-02, 1.77066315e-02], [-1.29400987e-02, 2.09301420e-02], [-1.07104931e-02, 2.33809612e-02], [-8.27528669e-03, 2.50248869e-02], [-5.71609021e-03, 2.58530983e-02], [-3.11416444e-03, 2.58811736e-02], 
[-5.48049501e-04, 2.51474380e-02], [ 1.90860958e-03, 2.37107251e-02], [ 4.18898848e-03, 2.16476415e-02], [ 6.23470843e-03, 1.90494555e-02], [ 7.99714491e-03, 1.60186882e-02], [ 9.43837476e-03, 1.26655558e-02], [ 1.05317686e-02, 9.10435258e-03], [ 1.12622197e-02, 5.44989437e-03], [ 1.16260257e-02, 1.81412182e-03], [ 1.16304384e-02, -1.69705203e-03], [ 1.12929114e-02, -4.98657608e-03], [ 1.06400909e-02, -7.96862948e-03], [ 9.70658499e-03, -1.05706460e-02], [ 8.53355543e-03, -1.27347635e-02], [ 7.16718837e-03, -1.44188132e-02], [ 5.65709128e-03, -1.55968093e-02], [ 4.05466582e-03, -1.62589055e-02], [ 2.41151165e-03, -1.64108831e-02], [ 7.77900648e-04, -1.60732421e-02], [-7.98635319e-04, -1.52798610e-02], [-2.27456622e-03, -1.40763419e-02], [-3.61144889e-03, -1.25181215e-02], [-4.77680726e-03, -1.06683391e-02], [-5.74479375e-03, -8.59560656e-03], [-6.49662364e-03, -6.37173957e-03], [-7.02078378e-03, -4.06946867e-03], [-7.31301267e-03, -1.76027952e-03], [-7.37607266e-03, 4.87642282e-04], [-7.21932418e-03, 2.61124781e-03], [-6.85812738e-03, 4.55422110e-03], [-6.31309922e-03, 6.26831530e-03], [-5.60924821e-03, 7.71436370e-03], [-4.77502635e-03, 8.86299514e-03], [-3.84132387e-03, 9.69501328e-03], [-2.84044197e-03, 1.02014636e-02], [-1.80507533e-03, 1.03833963e-02], [-7.67331710e-04, 1.02513437e-02], [ 2.42181483e-04, 9.82454909e-03], [ 1.19518410e-03, 9.12997398e-03], [ 2.06644020e-03, 8.20112874e-03], [ 2.83435051e-03, 7.07676734e-03], [ 3.48140918e-03, 5.79949187e-03], [ 3.99451765e-03, 4.41431307e-03], [ 4.36515405e-03, 2.96720883e-03], [ 4.58940006e-03, 1.50372425e-03], [ 4.66782792e-03, 6.76642050e-05], [ 4.60526827e-03, -1.30011880e-03], [ 4.41045781e-03, -2.56277695e-03], [ 4.09559101e-03, -3.68835291e-03], [ 3.67579419e-03, -4.65048450e-03], [ 3.16853576e-03, -5.42889919e-03], [ 2.59299920e-03, -6.00971481e-03], [ 1.96943622e-03, -6.38553482e-03], [ 1.31851777e-03, -6.55534650e-03], [ 6.60707493e-04, -6.52424133e-03], [ 1.56706421e-05, -6.30296331e-03], [-5.98265584e-04, -5.90731860e-03], [-1.16458455e-03, -5.35746407e-03], [-1.66897588e-03, -4.67710064e-03], [-2.09965002e-03, -3.89260437e-03], [-2.44756023e-03, -3.03212050e-03], [-2.70653293e-03, -2.12464979e-03], [-2.87331116e-03, -1.19915451e-03], [-2.94750598e-03, -2.83709131e-04], [-2.93146652e-03, 5.95279962e-04], [-2.83007955e-03, 1.41377000e-03], [-2.65049987e-03, 2.15067188e-03], [-2.40183020e-03, 2.78832655e-03], [-2.09475750e-03, 3.31285466e-03], [-1.74116178e-03, 3.71437984e-03], [-1.35370872e-03, 3.98712298e-03], [-9.45439348e-04, 4.12937155e-03], [-5.29367996e-04, 4.14332975e-03], [-1.18101593e-04, 4.03486555e-03], [ 2.76511191e-04, 3.81315686e-03], [ 6.43690822e-04, 3.49025933e-03], [ 9.73991278e-04, 3.08061156e-03], [ 1.25951079e-03, 2.60049377e-03], [ 1.49405297e-03, 2.06746256e-03], [ 1.67322061e-03, 1.49976805e-03], [ 1.79445903e-03, 9.15786421e-04], [ 1.85704068e-03, 3.33473308e-04], [ 1.86199702e-03, -2.30143606e-04], [ 1.81200277e-03, -7.59419712e-04], [ 1.71121630e-03, -1.24048634e-03], [ 1.56508289e-03, -1.66156660e-03], [ 1.38011504e-03, -2.01322342e-03], [ 1.16364754e-03, -2.28852126e-03], [ 9.23581944e-04, -2.48310780e-03], [ 6.68126447e-04, -2.59521496e-03], [ 4.05540031e-04, -2.62558383e-03], [ 1.43887408e-04, -2.57732093e-03], [-1.09190231e-04, -2.45570220e-03], [-3.46677662e-04, -2.26787852e-03], [-5.62359431e-04, -2.02261209e-03], [-7.50965013e-04, -1.72991853e-03], [-9.08277890e-04, -1.40071545e-03], [-1.03120674e-03, -1.04646479e-03], [-1.11782219e-03, -6.78801085e-04], [-1.16735638e-03, -3.09181726e-04], 
[-1.18016783e-03, 5.14368857e-05], [-1.15767707e-03, 3.92899575e-04], [-1.10227236e-03, 7.06110899e-04], [-1.01719178e-03, 9.83250186e-04], [-9.06387198e-04, 1.21793999e-03], [-7.74373028e-04, 1.40536142e-03], [-6.26065857e-04, 1.54232050e-03], [-4.66620193e-04, 1.62726244e-03], [-3.01265866e-04, 1.66023638e-03], [-1.35150539e-04, 1.64281632e-03], [ 2.68080687e-05, 1.57797952e-03], [ 1.80054177e-04, 1.46995057e-03], [ 3.20511446e-04, 1.32401743e-03], [ 4.44678792e-04, 1.14632296e-03], [ 5.49705086e-04, 9.43642342e-04], [ 6.33441051e-04, 7.23153279e-04], [ 6.94468231e-04, 4.92205466e-04], [ 7.32104638e-04, 2.58096254e-04], [ 7.46388941e-04, 2.78588802e-05], [ 7.38044078e-04, -1.91931321e-04], [ 7.08422500e-04, -3.95327716e-04], [ 6.59435631e-04, -5.77156322e-04], [ 5.93470317e-04, -7.33129892e-04]]) - $y$ es una matriz de n filas y 2 columnas. - La primer columna de $y$ corresponde a $y_1$. - La segunda columna de $y$ corresponde a $y_2$. ¿Cómo extraemos los resultados $y_1$ y $y_2$ independientemente? ```python # Extraer y1 y y2 y1 = y[:, 0] y2 = y[:, 1] ``` ### Para hacer participativamente... - Graficar en una misma ventana $y_1$ vs. $t$ y $y_2$ vs. $t$... ¿qué pueden observar? ```python # Gráfica plt.figure(figsize=(8,6)) plt.plot(t, y1, 'b', lw=3, label='Posición [m]: $y_1(t)$') plt.plot(t, y2, 'r', lw=3, label='Velocidad [m/s]: $y_2(t)$') plt.xlabel("Tiempo [s] $t$") plt.legend(loc="best") plt.grid() plt.show() ``` - Graficar $y_2/\omega_0$ vs. $y_1$... ¿cómo se complementan estos gráficos? ¿conclusiones? ```python # Gráfica omega0 = (k/m)**0.5 plt.figure(figsize=(8,6)) plt.plot(y1, y2/omega0, 'b', lw=3) plt.xlabel("Posición $y_1$") plt.ylabel("Velocidad normalizada $y_2/\omega_0$") plt.grid() plt.show() ``` ## Dependiendo de los parámetros, 3 tipos de soluciones Teníamos \begin{equation} m\frac{d^2 x}{dt^2} + c\frac{dx}{dt} + kx = 0 \end{equation} si recordamos que $\omega_0 ^2 = \frac{k}{m}$ y definimos $\frac{c}{m}\equiv 2\Gamma$, tendremos \begin{equation} \frac{d^2 x}{dt^2} + 2\Gamma \frac{dx}{dt}+ \omega_0^2 x = 0 \end{equation} <font color=blue>El comportamiento viene determinado por las raices de la ecuación característica. Ver en el tablero...</font> ### Subamortiguado Si $\omega_0^2 > \Gamma^2$ se tiene movimiento oscilatorio *subamortiguado*. ```python omega0 = (k/m)**0.5 Gamma = c/(2*m) ``` ```python omega0**2, Gamma**2 ``` (2.9999999999999996, 0.0625) ```python omega0**2 > Gamma**2 ``` True Entonces, el primer caso que ya habíamos presentado corresponde a movimiento amortiguado. 
```python
# Plot, again
plt.figure(figsize=(8,6))
plt.plot(t, y1, 'b', lw=3, label='Posición [m]: $y_1(t)$')
plt.plot(t, y2, 'r', lw=3, label='Velocidad [m/s]: $y_2(t)$')
plt.xlabel("Tiempo [s] $t$")
plt.legend(loc="best")
plt.grid()
plt.show()
```

```python
# Function f(y,t) that we are going to integrate
def amortiguado(y, t, k, m, c):
    y1 = y[0]
    y2 = y[1]
    w0 = np.sqrt(k / m)
    f = np.sin(w0 * t)
    return np.array([y2, -(k / m) * y1 - (c / m) * y2 + f])
```

```python
# Define the parameters k, m and c
k, m, c = 3, 1, 0.5
# Initial conditions
y0 = np.array([1, 1])
# Specify the time points where we want the solution
t = np.linspace(0, 30, 300)
```

```python
# Numerical solution
y = odeint(func=amortiguado, y0=y0, t=t, args=(k, m, c))
```

```python
y1, y2 = y.T
```

```python
# Plot, again
plt.figure(figsize=(8,6))
plt.plot(t, y1, 'b', lw=3, label='Posición [m]: $y_1(t)$')
plt.plot(t, y2, 'r', lw=3, label='Velocidad [m/s]: $y_2(t)$')
plt.plot(t, 1 * np.sin((k / m)**0.5 * t))
plt.xlabel("Tiempo [s] $t$")
plt.legend(loc="best")
plt.grid()
plt.show()
```

```python

```

```python

```

```python

```

### Overdamped

If $\omega_0^2 < \Gamma^2$ the motion is *overdamped*.

```python
# New constants
k = .1    # Spring constant
m = 1.0   # Mass
c = 1     # Damping constant
```

Simulate and plot...

```python
omega0 = np.sqrt(k/m)
Gamma = c/(2*m)
```

```python
omega0**2, Gamma**2
```

    (0.1, 0.25)

```python
omega0**2<Gamma**2
```

    True

```python
# Simulate
y = odeint(amortiguado, y0, t, args=(k,m,c))
y1s = y[:,0]
y2s = y[:,1]
```

```python
# Plot
plt.figure(figsize=(8,6))
plt.plot(t, y1s, 'b', lw=3, label='Posición [m]: $y_1(t)$')
plt.plot(t, y2s, 'r', lw=3, label='Velocidad [m/s]: $y_2(t)$')
plt.xlabel("Tiempo [s] $t$")
plt.legend(loc="best")
plt.grid()
plt.show()
```

### Critical damping

If $\omega_0^2 = \Gamma^2$ the motion is *critically damped*.

```python
# New constants
k = .0625  # Spring constant
m = 1.0    # Mass
c = .5     # Damping constant
```

Simulate and plot...

```python
omega0 = np.sqrt(k/m)
Gamma = c/(2*m)
```

```python
omega0**2, Gamma**2
```

    (0.0625, 0.0625)

```python
omega0**2 == Gamma**2
```

    True

```python
# Simulate
y = odeint(amortiguado, y0, t, args=(k,m,c))
y1c = y[:,0]
y2c = y[:,1]
```

```python
# Plot
plt.figure(figsize=(8,6))
plt.plot(t, y1c, 'b', lw=3, label='Posición [m]: $y_1(t)$')
plt.plot(t, y2c, 'r', lw=3, label='Velocidad [m/s]: $y_2(t)$')
plt.xlabel("Tiempo [s] $t$")
plt.legend(loc="best")
plt.grid()
plt.show()
```

In summary, we then have:

```python
tt = t
fig, ((ax1, ax2, ax3), (ax4, ax5, ax6)) = plt.subplots(2, 3, sharex='col', sharey='row',figsize =(10,6))

ax1.plot(tt, y1, c = 'k')
ax1.set_title('Amortiguado', fontsize = 14)
ax1.set_ylabel('Posición', fontsize = 14)

ax2.plot(tt, y1s, c = 'b')
ax2.set_title('Sobreamortiguado', fontsize = 14)

ax3.plot(tt, y1c, c = 'r')
ax3.set_title('Crítico', fontsize = 16)

ax4.plot(tt, y2, c = 'k')
ax4.set_ylabel('Velocidad', fontsize = 14)
ax4.set_xlabel('tiempo', fontsize = 14)

ax5.plot(tt, y2s, c = 'b')
ax5.set_xlabel('tiempo', fontsize = 14)

ax6.plot(tt, y2c, c = 'r')
ax6.set_xlabel('tiempo', fontsize = 14)

plt.show()
```

> **Homework**. What does the phase space look like for the different cases, as well as for different initial conditions?
> In a figure like the one above, make phase-plane plots for the different motions and for four different sets of initial conditions
- y0 = [1, 1]
- y0 = [1, -1]
- y0 = [-1, 1]
- y0 = [-1, -1]

Do the above in a new Jupyter notebook named Tarea7_ApellidoNombre.ipynb and upload it to the designated space.

<footer id="attribution" style="float:right; color:#808080; background:#fff;"> Created with Jupyter by Lázaro Alonso. Modified by Esteban Jiménez Rodríguez. </footer>
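As a short illustrative sketch (not part of the original assignment; it only reuses the three parameter sets from the cells above), the three regimes can also be read off from the roots of the characteristic equation $r^2 + 2\Gamma r + \omega_0^2 = 0$:

```python
# Roots r = -Gamma +/- sqrt(Gamma^2 - omega0^2): complex roots -> underdamped,
# two distinct negative real roots -> overdamped, a repeated real root -> critically damped.
import numpy as np

cases = {'underdamped': (3, 1, 0.5),
         'overdamped': (0.1, 1.0, 1.0),
         'critically damped': (0.0625, 1.0, 0.5)}

for label, (k, m, c) in cases.items():
    omega0_sq, Gamma = k / m, c / (2 * m)
    roots = np.roots([1, 2 * Gamma, omega0_sq])
    print(f"{label:>18}: roots = {roots}")
```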
032fae45399664c42622c6e9e10207f0522ee3a3
315,341
ipynb
Jupyter Notebook
Modulo3/Clase16_OsciladorAmortiguado.ipynb
DiegoBAL23/simmatp2021
238e88b58cf0481de444ffd14a8b46dbdfae6066
[ "MIT" ]
null
null
null
Modulo3/Clase16_OsciladorAmortiguado.ipynb
DiegoBAL23/simmatp2021
238e88b58cf0481de444ffd14a8b46dbdfae6066
[ "MIT" ]
null
null
null
Modulo3/Clase16_OsciladorAmortiguado.ipynb
DiegoBAL23/simmatp2021
238e88b58cf0481de444ffd14a8b46dbdfae6066
[ "MIT" ]
null
null
null
229.00581
79,804
0.890208
true
12,182
Qwen/Qwen-72B
1. YES 2. YES
0.887205
0.757794
0.672319
__label__eng_Latn
0.137677
0.400353
# Homework 2

## BIOST 558 Spring 2020

### Juan Solorio

```python
# needed libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn import preprocessing
from sklearn import linear_model
import statsmodels.api as sm

%matplotlib inline
plt.style.use('ggplot')
```

### Exercise 1

#### Gradient Descent - Theory (part 1 answers)

We have the objective equation

\begin{equation}
F(\beta) = \frac{1}{n}\sum_{i=1}^n(y_i - x^T_i\beta)^2 + \lambda\|\beta\|^2_2
\end{equation}

or expanded:

\begin{equation}
F(\beta) = \frac{1}{n}\sum_{i=1}^n\left(y_i - \sum_{j=1}^d x_{ij}\beta_j\right)^2 + \lambda\sum_{j=1}^d\beta_j^2
\end{equation}

which we want to minimize with respect to $\beta$, meaning we want to take the derivative of $F(\beta)$ and set it equal to zero. For $n=1$ and $d=1$, the chain rule gives:

\begin{equation}
\frac{dF}{d\beta} = -\frac{2}{n}x(y - x\beta) + 2\lambda\beta
\end{equation}

In matrix terms (dropping the $\frac{1}{n}$ factor for convenience), $F(\beta)$ can be written as:

\begin{equation}
F(\beta) = (y - X\beta)^T(y-X\beta) + \lambda\beta^T\beta
\end{equation}
\begin{equation}
= y^Ty - \beta^TX^Ty-y^TX\beta+\beta^TX^TX\beta+\lambda\beta^T\beta
\end{equation}
\begin{equation}
= y^Ty - \beta^TX^Ty-y^TX\beta+\beta^TX^TX\beta+\beta^T\lambda I\beta
\end{equation}
\begin{equation}
= y^Ty - 2\beta^TX^Ty + \beta^T(X^TX + \lambda I)\beta
\end{equation}

Taking the derivative of the matrix form of $F(\beta)$ leads us to the following:

\begin{equation}
\frac{dF}{d\beta}= - 2X^Ty + 2(X^TX + \lambda I)\beta
\end{equation}

#### Gradient Descent - Algorithm Code

```python
# Load the data
hitters = pd.read_csv('https://raw.githubusercontent.com/selva86/datasets/master/Hitters.csv',
                      sep=',', header=0)
hitters = hitters.dropna()
```

```python
# Creating matrix X with the predictors
# and vector y with the response
X = hitters.drop('Salary', axis=1)
X = pd.get_dummies(X, drop_first=True)
y = hitters.Salary
```

```python
# Dividing the data into train and test sets.
# By default, it is a 75-25 split between train-test
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Standardizing the data
X_train = np.array((X_train-X_train.mean())/(X_train.max()-X_train.min()))
y_train = np.array((y_train-y_train.mean())/(y_train.max()-y_train.min()))
```

```python
def betas(X,y,lamda=0.05):
    '''
    Function to calculate betas from the matrix solution
    '''
    I = np.identity(len(X[0]))
    return np.dot(np.linalg.inv(np.dot(X.T,X) + lamda*I),np.dot(X.T,y))
```

```python
def Fx_ridge(X,y,beta, lamda):
    '''
    # RSS = (y-XB).T(y-XB)+lamda*B.T*B
    This is the objective function for Ridge regression
    '''
    n = len(y)
    lin = y - (X @ beta)
    reg = lamda*(beta.T @ beta)
    return (1/n *(lin.T @ lin) + reg)
```

```python
def computegrad(X, y, beta, lamda):
    '''
    Function for the gradient of the ridge objective function
    '''
    n = len(X[1])   # note: len(X[1]) is the number of columns (features) of X
    I = np.identity(len(X[1]))
    lin = (X.T @ y)/n
    reg = ((X.T @ X)/n + lamda*I) @ beta
    return 2*(reg-lin)
```

```python
def graddescent(X,y,t,beta,lamda,max_iter=1000):
    '''
    Gradient descent function that takes in X, y, t, beta, lamda, max_iter;
    returns lists of betas, gradient norms, and objective fx values
    '''
    grad = computegrad(X,y,beta,lamda)
    fx = Fx_ridge(X,y,beta,lamda)
    fx_vals = [fx]
    beta_vals = [beta]
    grad_vals = [np.linalg.norm(grad)]

    for i in range(max_iter):
        beta = beta - t*grad
        grad = computegrad(X,y,beta,lamda)
        fx = Fx_ridge(X,y,beta,lamda)
        beta_vals.append(beta)
        grad_vals.append(np.linalg.norm(grad))
        fx_vals.append(fx)

    return [beta_vals,grad_vals,fx_vals]
```

```python
def graddescent_eps(X,y,t,beta,lamda,eps,max_iter=1000):
    '''
    Gradient descent function that takes in X, y, t, beta, lamda, eps, max_iter;
    returns lists of betas, gradients, and objective fx values
    '''
    grad = computegrad(X,y,beta,lamda)
    fx = Fx_ridge(X,y,beta,lamda)
    fx_vals = [fx]
    beta_vals = [beta]
    grad_vals = [grad]
    iter_val = 0

    while np.linalg.norm(grad) > eps and iter_val < max_iter:
        beta = beta - t*grad
        grad = computegrad(X,y,beta,lamda)
        fx = Fx_ridge(X,y,beta,lamda)
        beta_vals.append(beta)
        grad_vals.append(grad)
        fx_vals.append(fx)
        iter_val += 1

    return [beta_vals,grad_vals,fx_vals]
```

##### Checking the Gradient Descent and Objective Ridge Fx for $\lambda = -5.00$

```python
n=len(X_train[1])
beta = np.zeros(n)
t=0.05
lamda=-5

grad_desc_n5 = graddescent(X_train, y_train,t,beta,lamda)

ts = np.array(range(len(grad_desc_n5[1])))
plt.scatter(x=ts,y=np.array(grad_desc_n5[2]))
plt.show()
```

From the figure we can observe that with $\lambda < 0$, the Ridge objective function diverges to $-\infty$, which was to be expected given that it would cause the gradient descent to become more of a gradient 'ascent'.
##### Checking the Gradient Descent and Objective Ridge Fx for $\lambda = 0.05$ ```python n=len(X_train[1]) beta = np.zeros(n) t=0.05 lamda=0.05 grad_desc_05 = graddescent(X_train, y_train,t,beta,lamda) ``` ```python df_fx = pd.DataFrame(grad_desc_05[2]) df_grad = pd.DataFrame(grad_desc_05[1]) df_beta = pd.DataFrame(grad_desc_05[0]) df_fx.plot(legend=False,figsize=(8,6),title='Ridge Objective Fx') plt.xlabel("t") plt.show() ``` ```python df_beta.plot(subplots=True,layout=(4,5),legend=False,figsize=(14,10), title=r"$\beta$ Individial Predictors' Gradient Descent") plt.subplots_adjust(hspace=0.35) plt.show() ``` As can be observed in the $"Ridge\ Objective\ Fx"$ chart and the $"\beta\ Individial\ Predictors'\ Gradient\ Descent"$ charts, for the $\lambda = 0.05$, we can see each of these values converging, in the case of the $\beta s$ converge to some number and for the $F(\beta)$ to about 0.032. #### Comparing sklearn.linear_model.Ridge $\beta$ to those values from the defined functions ```python # sklearn Ridge regression betas ridge = linear_model.Ridge(alpha=0.05) model = ridge.fit(X_train,y_train) model.coef_ ``` array([-0.49423255, 0.61521257, 0.14279993, -0.12541456, 0.0084655 , 0.31010389, -0.08027074, -0.36881245, 0.56366317, -0.05996115, 0.52130315, 0.05585199, -0.4589716 , 0.14773404, 0.06993996, -0.08699545, 0.01999316, -0.0504822 , -0.00686063]) ```python # betas calculated through my betas function B = betas(X_train,y_train) B ``` array([-0.49423255, 0.61521257, 0.14279993, -0.12541456, 0.0084655 , 0.31010389, -0.08027074, -0.36881245, 0.56366317, -0.05996115, 0.52130315, 0.05585199, -0.4589716 , 0.14773404, 0.06993996, -0.08699545, 0.01999316, -0.0504822 , -0.00686063]) ```python print("sklearn: %f \t Vectorize Ridge: %f \t Grad Desc: %f"%( Fx_ridge(X_train,y_train,model.coef_,lamda),Fx_ridge(X_train,y_train,B,lamda),np.array(grad_desc_05[2])[-1])) ``` sklearn: 0.107533 Vectorize Ridge: 0.107533 Grad Desc: 0.031999 The values calculated by the vectorize Ridge function are quite close to those from the sklearn Ridge function. However the values calculated by my gradient descent were lower than these two previous values. #### Adding a stopping criteria $\|\nabla F(\beta)\| <$ $\epsilon = 0.005$ ```python n=len(X_train[1]) beta = np.zeros(n) t=0.2 lamda=0.05 eps = 0.005 grad_desc_eps = graddescent_eps(X_train, y_train,t,beta,lamda,eps) df_fx = pd.DataFrame(grad_desc_eps[2]) df_grad = pd.DataFrame(grad_desc_eps[1]) df_fx.plot(legend=False,figsize=(8,6),title='Ridge Objective Fx') plt.xlabel("t") plt.show() ``` The new criteria allows for the gradient descent process to stop sooner as it converges without the need to continue iterating. ### Exercise 2 ```python # (a) Read in the dataset. The data can be downloaded from this url: http://www-bcf.usc. # edu/~gareth/ISL/Auto.csv When reading in the data use the option na values=’?’. # Then drop all NaN values using dropna(). # Load the data Auto = pd.read_csv("http://faculty.marshall.usc.edu/gareth-james/ISL/Auto.csv", sep=',', header=0, na_values='?') Auto = Auto.dropna() Auto['constant']=1 ``` ```python # (b) Use the OLS function from the statsmodels package to perform a simple linear regression with mpg as the response and weight as the predictor. Be sure to include an # intercept. Use the summary() attribute to print the results. 
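As a quick optional illustration (a sketch assuming `grad_desc_eps` from the cell above is in scope), we can count how many gradient steps the $\epsilon$-based stopping rule actually took before the `max_iter` cap:

```python
# grad_desc_eps[2] stores one objective value per iteration plus the initial value,
# so its length minus one is the number of gradient steps performed.
print(len(grad_desc_eps[2]) - 1)
```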
X = Auto.loc[:,['constant','weight']] Y = Auto.mpg lm_mpg_wgt = sm.OLS(Y,X) lm_mpg_wgt_model = lm_mpg_wgt.fit() print(lm_mpg_wgt_model.summary()) ``` OLS Regression Results ============================================================================== Dep. Variable: mpg R-squared: 0.693 Model: OLS Adj. R-squared: 0.692 Method: Least Squares F-statistic: 878.8 Date: Fri, 17 Apr 2020 Prob (F-statistic): 6.02e-102 Time: 20:56:37 Log-Likelihood: -1130.0 No. Observations: 392 AIC: 2264. Df Residuals: 390 BIC: 2272. Df Model: 1 Covariance Type: nonrobust ============================================================================== coef std err t P>|t| [0.025 0.975] ------------------------------------------------------------------------------ constant 46.2165 0.799 57.867 0.000 44.646 47.787 weight -0.0076 0.000 -29.645 0.000 -0.008 -0.007 ============================================================================== Omnibus: 41.682 Durbin-Watson: 0.808 Prob(Omnibus): 0.000 Jarque-Bera (JB): 60.039 Skew: 0.727 Prob(JB): 9.18e-14 Kurtosis: 4.251 Cond. No. 1.13e+04 ============================================================================== Warnings: [1] Standard Errors assume that the covariance matrix of the errors is correctly specified. [2] The condition number is large, 1.13e+04. This might indicate that there are strong multicollinearity or other numerical problems. - i) From the summary statistics we can observe there is a strong relation between Weight and mpg. - ii) From the summary we see that the p-value suggest a very strong statistical significance for the linear relation between MPG and Weight. With a correlation of -0.83 - iii) The relationship between MPG and Weight is negative. ```python fig, ax = plt.subplots() fig = sm.graphics.plot_fit(lm_mpg_wgt_model, 1, ax=ax) ax.set_xlabel("Weight") ax.set_ylabel("mpg") ax.set_title("Linear Regression") plt.show() ``` ```python # (d) Plot the residuals vs. fitted values. Comment on any problems you see with the fit. fig, ax = plt.subplots() lm_mpg_wgt_fitted = lm_mpg_wgt_model.fittedvalues lm_mpg_wgt_res = lm_mpg_wgt_model.resid fig = sns.residplot(lm_mpg_wgt_fitted,Auto.columns[0],data=Auto, lowess=True, scatter_kws={'alpha':0.5}, line_kws={'color': 'red', 'lw': 1, 'alpha': 0.8}) ax.set_xlabel("Fitted") ax.set_ylabel("Residuals") ax.set_title("Residuals vs Fitted") plt.show() ``` The red lines across the data shows a "bow-shape" or curve which indicates we are not capturing some of the non-linear aspects of the model. ### Exercise 3 ```python # (a) Produce a scatterplot matrix which includes all of the variables in the data set using # pandas.plotting.scatter matrix pd.plotting.scatter_matrix(Auto, alpha=0.5, figsize=(14,14)) plt.show() ``` ```python # (b) Compute the matrix of correlations between the variables using the corr() attribute # in Pandas. corrMatrix = Auto.corr() sns.heatmap(corrMatrix,annot=True) plt.show() ``` ```python # (c) Use the OLS function from the statsmodels package to perform a multiple linear # regression with mpg as the response and all other variables except name as the predictors. # Be sure to include an intercept. Xmlt = Auto.loc[:,['constant','cylinders','displacement','horsepower','weight','acceleration','year','origin']] lm_mpg_mlt = sm.OLS(Y,Xmlt) lm_mpg_mlt_model = lm_mpg_mlt.fit() print(lm_mpg_mlt_model.summary()) ``` OLS Regression Results ============================================================================== Dep. Variable: mpg R-squared: 0.821 Model: OLS Adj. 
R-squared: 0.818 Method: Least Squares F-statistic: 252.4 Date: Fri, 17 Apr 2020 Prob (F-statistic): 2.04e-139 Time: 20:57:02 Log-Likelihood: -1023.5 No. Observations: 392 AIC: 2063. Df Residuals: 384 BIC: 2095. Df Model: 7 Covariance Type: nonrobust ================================================================================ coef std err t P>|t| [0.025 0.975] -------------------------------------------------------------------------------- constant -17.2184 4.644 -3.707 0.000 -26.350 -8.087 cylinders -0.4934 0.323 -1.526 0.128 -1.129 0.142 displacement 0.0199 0.008 2.647 0.008 0.005 0.035 horsepower -0.0170 0.014 -1.230 0.220 -0.044 0.010 weight -0.0065 0.001 -9.929 0.000 -0.008 -0.005 acceleration 0.0806 0.099 0.815 0.415 -0.114 0.275 year 0.7508 0.051 14.729 0.000 0.651 0.851 origin 1.4261 0.278 5.127 0.000 0.879 1.973 ============================================================================== Omnibus: 31.906 Durbin-Watson: 1.309 Prob(Omnibus): 0.000 Jarque-Bera (JB): 53.100 Skew: 0.529 Prob(JB): 2.95e-12 Kurtosis: 4.460 Cond. No. 8.59e+04 ============================================================================== Warnings: [1] Standard Errors assume that the covariance matrix of the errors is correctly specified. [2] The condition number is large, 8.59e+04. This might indicate that there are strong multicollinearity or other numerical problems. ```python # (d) Plot the residuals vs. fitted values. Comment on any problems you see with the fit. fig, ax = plt.subplots() lm_mpg_mlt_fitted = lm_mpg_mlt_model.fittedvalues lm_mpg_mlt_res = lm_mpg_mlt_model.resid fig = sns.residplot(lm_mpg_mlt_fitted,Auto.columns[0],data=Auto, lowess=True, scatter_kws={'alpha':0.5}, line_kws={'color': 'red', 'lw': 1, 'alpha': 0.8}) ax.set_xlabel("Fitted") ax.set_ylabel("Residuals") ax.set_title("Residuals vs Fitted") plt.show() ``` Even from the multivariate model, the Residuals vs Fitted plot is still showing some aspects of the model that are not accounted for. This might mean we could be overlooking some sort of transformation to possibly allow for a better fit. ```python # (e) Statsmodels allows you to fit models using R-style formulas. See http://www.statsmodels. # org/dev/example_formulas.html. Use the * and : symbols to fit linear regression # models with interaction effects. Do any interactions appear to be statistically significant? import statsmodels.formula.api as smf Auto_x = Auto.drop(['name','constant'],axis=1) lm_mpg_mult = smf.ols(formula = 'mpg ~ cylinders * displacement * horsepower * weight * acceleration * year * origin', data=Auto_x) res_mult = lm_mpg_mult.fit() print(res_mult.summary()) ``` OLS Regression Results ============================================================================== Dep. Variable: mpg R-squared: 0.931 Model: OLS Adj. R-squared: 0.906 Method: Least Squares F-statistic: 38.51 Date: Fri, 17 Apr 2020 Prob (F-statistic): 1.49e-123 Time: 20:57:10 Log-Likelihood: -838.23 No. Observations: 392 AIC: 1880. Df Residuals: 290 BIC: 2286. 
Df Model: 101 Covariance Type: nonrobust ===================================================================================================================================== coef std err t P>|t| [0.025 0.975] ------------------------------------------------------------------------------------------------------------------------------------- Intercept -0.0172 0.395 -0.043 0.965 -0.795 0.761 cylinders -0.9849 3.107 -0.317 0.751 -7.100 5.130 displacement 15.6423 26.104 0.599 0.549 -35.735 67.019 cylinders:displacement 9.9087 7.405 1.338 0.182 -4.666 24.483 horsepower 3.4947 23.826 0.147 0.883 -43.399 50.388 cylinders:horsepower 0.3706 8.546 0.043 0.965 -16.450 17.191 displacement:horsepower -0.2288 0.317 -0.722 0.471 -0.852 0.395 cylinders:displacement:horsepower -0.1061 0.094 -1.128 0.260 -0.291 0.079 weight -0.5506 0.852 -0.647 0.518 -2.227 1.125 cylinders:weight -0.3717 0.284 -1.309 0.192 -0.931 0.187 displacement:weight -0.0100 0.022 -0.455 0.649 -0.053 0.033 cylinders:displacement:weight 0.0016 0.004 0.389 0.698 -0.007 0.010 horsepower:weight 0.0024 0.011 0.224 0.823 -0.019 0.024 cylinders:horsepower:weight 0.0040 0.004 1.084 0.279 -0.003 0.011 displacement:horsepower:weight -1.781e-05 0.000 -0.081 0.935 -0.000 0.000 cylinders:displacement:horsepower:weight 2.426e-05 4.09e-05 0.593 0.553 -5.62e-05 0.000 acceleration -24.8380 46.061 -0.539 0.590 -115.494 65.818 cylinders:acceleration 2.7590 24.784 0.111 0.911 -46.020 51.538 displacement:acceleration -1.0383 1.458 -0.712 0.477 -3.908 1.832 cylinders:displacement:acceleration -0.7078 0.511 -1.384 0.167 -1.714 0.299 horsepower:acceleration 0.0149 1.697 0.009 0.993 -3.326 3.355 cylinders:horsepower:acceleration -0.1210 0.636 -0.190 0.849 -1.373 1.131 displacement:horsepower:acceleration 0.0271 0.024 1.153 0.250 -0.019 0.073 cylinders:displacement:horsepower:acceleration 0.0049 0.006 0.767 0.444 -0.008 0.018 weight:acceleration 0.0331 0.062 0.534 0.594 -0.089 0.155 cylinders:weight:acceleration 0.0279 0.018 1.549 0.123 -0.008 0.063 displacement:weight:acceleration -0.0004 0.001 -0.275 0.784 -0.003 0.002 cylinders:displacement:weight:acceleration 0.0002 0.000 0.759 0.449 -0.000 0.001 horsepower:weight:acceleration 7.089e-05 0.001 0.083 0.934 -0.002 0.002 cylinders:horsepower:weight:acceleration -0.0003 0.000 -1.293 0.197 -0.001 0.000 displacement:horsepower:weight:acceleration 5.365e-06 1.68e-05 0.319 0.750 -2.77e-05 3.85e-05 cylinders:displacement:horsepower:weight:acceleration -2.933e-06 3.04e-06 -0.965 0.335 -8.91e-06 3.05e-06 year -7.7709 5.452 -1.425 0.155 -18.502 2.960 cylinders:year -2.2045 1.821 -1.210 0.227 -5.789 1.380 displacement:year -0.1631 0.344 -0.473 0.636 -0.841 0.515 cylinders:displacement:year -0.1110 0.097 -1.140 0.255 -0.303 0.081 horsepower:year 0.0346 0.322 0.107 0.915 -0.600 0.669 cylinders:horsepower:year 0.0331 0.121 0.275 0.784 -0.204 0.270 displacement:horsepower:year 0.0018 0.004 0.411 0.681 -0.007 0.010 cylinders:displacement:horsepower:year 0.0013 0.001 1.038 0.300 -0.001 0.004 weight:year 0.0100 0.015 0.674 0.501 -0.019 0.039 cylinders:weight:year 0.0061 0.004 1.670 0.096 -0.001 0.013 displacement:weight:year 7.838e-05 0.000 0.297 0.767 -0.000 0.001 cylinders:displacement:weight:year -2.061e-05 4.85e-05 -0.425 0.671 -0.000 7.49e-05 horsepower:weight:year -2.278e-05 0.000 -0.142 0.887 -0.000 0.000 cylinders:horsepower:weight:year -8.25e-05 4.81e-05 -1.715 0.087 -0.000 1.22e-05 displacement:horsepower:weight:year 7.245e-07 2.65e-06 0.274 0.784 -4.48e-06 5.93e-06 cylinders:displacement:horsepower:weight:year 
-2.609e-07 4.95e-07 -0.527 0.598 -1.23e-06 7.13e-07 acceleration:year 0.8048 0.649 1.240 0.216 -0.472 2.082 cylinders:acceleration:year 0.1015 0.330 0.308 0.759 -0.547 0.750 displacement:acceleration:year 0.0271 0.021 1.284 0.200 -0.014 0.069 cylinders:displacement:acceleration:year 0.0042 0.007 0.628 0.531 -0.009 0.017 horsepower:acceleration:year -0.0057 0.024 -0.236 0.813 -0.053 0.041 cylinders:horsepower:acceleration:year -0.0005 0.009 -0.055 0.956 -0.018 0.017 displacement:horsepower:acceleration:year -0.0004 0.000 -1.253 0.211 -0.001 0.000 cylinders:displacement:horsepower:acceleration:year -2.828e-05 8.87e-05 -0.319 0.750 -0.000 0.000 weight:acceleration:year -0.0008 0.001 -0.799 0.425 -0.003 0.001 cylinders:weight:acceleration:year -0.0004 0.000 -1.703 0.090 -0.001 6.22e-05 displacement:weight:acceleration:year 3.713e-06 1.6e-05 0.232 0.817 -2.78e-05 3.53e-05 cylinders:displacement:weight:acceleration:year -1.157e-06 2.54e-06 -0.456 0.649 -6.15e-06 3.84e-06 horsepower:weight:acceleration:year 9.533e-08 1.19e-05 0.008 0.994 -2.32e-05 2.34e-05 cylinders:horsepower:weight:acceleration:year 5.662e-06 3.23e-06 1.751 0.081 -7.03e-07 1.2e-05 displacement:horsepower:weight:acceleration:year -6.705e-08 2.04e-07 -0.328 0.743 -4.69e-07 3.35e-07 cylinders:displacement:horsepower:weight:acceleration:year 2.691e-08 3.69e-08 0.728 0.467 -4.58e-08 9.96e-08 origin -1.6028 3.091 -0.518 0.605 -7.687 4.481 cylinders:origin -0.7671 2.486 -0.309 0.758 -5.660 4.126 displacement:origin 10.2550 21.860 0.469 0.639 -32.769 53.279 cylinders:displacement:origin -11.6548 12.294 -0.948 0.344 -35.851 12.542 horsepower:origin 2.9976 21.387 0.140 0.889 -39.095 45.090 cylinders:horsepower:origin -1.6308 11.010 -0.148 0.882 -23.300 20.039 displacement:horsepower:origin -0.1666 0.266 -0.626 0.532 -0.690 0.357 cylinders:displacement:horsepower:origin 0.1478 0.152 0.975 0.330 -0.151 0.446 weight:origin -0.3470 0.728 -0.477 0.634 -1.779 1.085 cylinders:weight:origin 0.4401 0.414 1.063 0.289 -0.375 1.255 displacement:weight:origin 0.0092 0.020 0.461 0.645 -0.030 0.048 cylinders:displacement:weight:origin -0.0017 0.005 -0.366 0.714 -0.011 0.008 horsepower:weight:origin 0.0054 0.013 0.428 0.669 -0.019 0.030 cylinders:horsepower:weight:origin -0.0046 0.006 -0.830 0.407 -0.015 0.006 displacement:horsepower:weight:origin 6.378e-05 0.000 0.321 0.748 -0.000 0.000 cylinders:displacement:horsepower:weight:origin -2.918e-05 4.8e-05 -0.608 0.544 -0.000 6.53e-05 acceleration:origin -23.8165 44.972 -0.530 0.597 -112.329 64.696 cylinders:acceleration:origin 6.8451 20.638 0.332 0.740 -33.774 47.464 displacement:acceleration:origin -0.6671 1.193 -0.559 0.576 -3.015 1.680 cylinders:displacement:acceleration:origin 0.7792 0.746 1.044 0.297 -0.690 2.248 horsepower:acceleration:origin 0.0620 1.564 0.040 0.968 -3.016 3.140 cylinders:horsepower:acceleration:origin 0.0674 0.738 0.091 0.927 -1.384 1.519 displacement:horsepower:acceleration:origin 0.0011 0.017 0.065 0.948 -0.033 0.035 cylinders:displacement:horsepower:acceleration:origin -0.0075 0.009 -0.818 0.414 -0.026 0.011 weight:acceleration:origin 0.0212 0.055 0.385 0.701 -0.087 0.130 cylinders:weight:acceleration:origin -0.0292 0.028 -1.032 0.303 -0.085 0.026 displacement:weight:acceleration:origin 0.0005 0.001 0.395 0.693 -0.002 0.003 cylinders:displacement:weight:acceleration:origin -0.0002 0.000 -0.661 0.509 -0.001 0.000 horsepower:weight:acceleration:origin -0.0006 0.001 -0.636 0.525 -0.002 0.001 cylinders:horsepower:weight:acceleration:origin 0.0004 0.000 0.929 0.354 -0.000 0.001 
displacement:horsepower:weight:acceleration:origin -9.374e-06 1.57e-05 -0.596 0.552 -4.03e-05 2.16e-05 cylinders:displacement:horsepower:weight:acceleration:origin 3.341e-06 3.28e-06 1.019 0.309 -3.11e-06 9.79e-06 year:origin -6.1203 4.912 -1.246 0.214 -15.789 3.548 cylinders:year:origin 4.3981 2.511 1.752 0.081 -0.544 9.340 displacement:year:origin -0.1078 0.289 -0.373 0.709 -0.677 0.461 cylinders:displacement:year:origin 0.1233 0.162 0.761 0.447 -0.196 0.442 horsepower:year:origin 0.0188 0.289 0.065 0.948 -0.551 0.588 cylinders:horsepower:year:origin -0.0343 0.153 -0.224 0.823 -0.335 0.267 displacement:horsepower:year:origin 0.0028 0.004 0.792 0.429 -0.004 0.010 cylinders:displacement:horsepower:year:origin -0.0018 0.002 -0.879 0.380 -0.006 0.002 weight:year:origin 0.0076 0.011 0.699 0.485 -0.014 0.029 cylinders:weight:year:origin -0.0079 0.005 -1.559 0.120 -0.018 0.002 displacement:weight:year:origin -9.245e-05 0.000 -0.397 0.692 -0.001 0.000 cylinders:displacement:weight:year:origin 2.569e-05 5.68e-05 0.452 0.651 -8.61e-05 0.000 horsepower:weight:year:origin -0.0001 0.000 -0.995 0.321 -0.000 0.000 cylinders:horsepower:weight:year:origin 9.742e-05 6.83e-05 1.427 0.155 -3.69e-05 0.000 displacement:horsepower:weight:year:origin -1.063e-06 2.32e-06 -0.457 0.648 -5.64e-06 3.51e-06 cylinders:displacement:horsepower:weight:year:origin 2.911e-07 5.94e-07 0.490 0.625 -8.78e-07 1.46e-06 acceleration:year:origin 0.6899 0.627 1.101 0.272 -0.544 1.923 cylinders:acceleration:year:origin -0.3570 0.283 -1.263 0.208 -0.914 0.200 displacement:acceleration:year:origin -0.0086 0.018 -0.489 0.625 -0.043 0.026 cylinders:displacement:acceleration:year:origin -0.0045 0.010 -0.457 0.648 -0.024 0.015 horsepower:acceleration:year:origin -0.0030 0.020 -0.151 0.880 -0.043 0.037 cylinders:horsepower:acceleration:year:origin 0.0022 0.010 0.218 0.827 -0.018 0.022 displacement:horsepower:acceleration:year:origin 6.517e-05 0.000 0.272 0.786 -0.000 0.001 cylinders:displacement:horsepower:acceleration:year:origin 5.796e-05 0.000 0.464 0.643 -0.000 0.000 weight:acceleration:year:origin -0.0003 0.001 -0.400 0.689 -0.002 0.001 cylinders:weight:acceleration:year:origin 0.0005 0.000 1.353 0.177 -0.000 0.001 displacement:weight:acceleration:year:origin -3.585e-06 1.4e-05 -0.255 0.799 -3.12e-05 2.4e-05 cylinders:displacement:weight:acceleration:year:origin 9.195e-07 3e-06 0.306 0.760 -4.99e-06 6.83e-06 horsepower:weight:acceleration:year:origin 1.063e-05 1.09e-05 0.978 0.329 -1.08e-05 3.2e-05 cylinders:horsepower:weight:acceleration:year:origin -6.546e-06 4.75e-06 -1.377 0.170 -1.59e-05 2.81e-06 displacement:horsepower:weight:acceleration:year:origin 1.052e-07 1.88e-07 0.560 0.576 -2.65e-07 4.75e-07 cylinders:displacement:horsepower:weight:acceleration:year:origin -3.037e-08 4.04e-08 -0.751 0.453 -1.1e-07 4.92e-08 ============================================================================== Omnibus: 45.623 Durbin-Watson: 1.752 Prob(Omnibus): 0.000 Jarque-Bera (JB): 129.382 Skew: 0.532 Prob(JB): 8.04e-29 Kurtosis: 5.605 Cond. No. 3.52e+16 ============================================================================== Warnings: [1] Standard Errors assume that the covariance matrix of the errors is correctly specified. [2] The condition number is large, 3.52e+16. This might indicate that there are strong multicollinearity or other numerical problems. ```python # (f) Try a few different transformations of the variables, such as log(X), 1/X, # sqrt(X), X2. Comment on your findings. 
Auto_log = np.log(Auto_x.drop(['year','origin'],axis=1)) Auto_inv = 1/Auto_x.drop(['year','origin'],axis=1) Auto_sqrt = np.sqrt(Auto_x.drop(['year','origin'],axis=1)) ``` ```python lm_log = smf.ols(formula = 'mpg ~ cylinders * displacement * horsepower * weight * acceleration', data=Auto_log) lm_log_model = lm_log.fit() print(lm_log_model.summary()) fig, ax = plt.subplots() lm_log_fitted = lm_log_model.fittedvalues lm_log_res = lm_log_model.resid fig = sns.residplot(lm_log_fitted,Auto_log.columns[0], data=Auto_log, lowess=True, scatter_kws={'alpha':0.5}, line_kws={'color': 'red', 'lw': 1, 'alpha': 0.8}) ax.set_xlabel("Fitted") ax.set_ylabel("Residuals") ax.set_title("Residuals vs Fitted") plt.show() ``` After applying a log transformation to the data, the residuals are scattered much more evenly around the fitted values, indicating a noticeably better fit. The $p$-values also suggest that each predictor and the interaction terms are statistically significant in their relationship with mpg.
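For part (f), a more systematic way to compare the candidate transformations is to fit one simple regression per transformed predictor and compare the fits. The sketch below is an illustration added here, not part of the original solution: it uses `weight` as the example predictor (any column could be substituted) and assumes the `Auto_x` frame defined in the earlier cells.

```python
# Quick comparison of simple mpg ~ f(weight) fits under different transformations.
# Assumes Auto_x (predictors + mpg, without name/constant) from the cells above.
import numpy as np
import statsmodels.api as sm

transforms = {
    'weight':       Auto_x['weight'],
    'log(weight)':  np.log(Auto_x['weight']),
    '1/weight':     1.0 / Auto_x['weight'],
    'sqrt(weight)': np.sqrt(Auto_x['weight']),
    'weight^2':     Auto_x['weight'] ** 2,
}

for name, x in transforms.items():
    X_t = sm.add_constant(x)                       # include an intercept
    r2 = sm.OLS(Auto_x['mpg'], X_t).fit().rsquared
    print(f"{name:>13s}: R^2 = {r2:.3f}")
```

Comparing the R² values (and the corresponding residual plots) gives a rough, quantitative sense of which transformation straightens out the relationship the residual analysis above flagged as non-linear.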
fc3dedf13c500e8227c451e7432bc11a5fb7f2b9
737,365
ipynb
Jupyter Notebook
1-Python/.ipynb_checkpoints/Homework #2-checkpoint.ipynb
JUAN-SOLORIO/UW-MachineLearning
60cf1474bce45dd541d3fb60eb3b2a2eeaa9ca3c
[ "RSA-MD" ]
null
null
null
1-Python/.ipynb_checkpoints/Homework #2-checkpoint.ipynb
JUAN-SOLORIO/UW-MachineLearning
60cf1474bce45dd541d3fb60eb3b2a2eeaa9ca3c
[ "RSA-MD" ]
null
null
null
1-Python/.ipynb_checkpoints/Homework #2-checkpoint.ipynb
JUAN-SOLORIO/UW-MachineLearning
60cf1474bce45dd541d3fb60eb3b2a2eeaa9ca3c
[ "RSA-MD" ]
null
null
null
658.949955
337,052
0.925799
true
10,826
Qwen/Qwen-72B
1. YES 2. YES
0.901921
0.815232
0.735275
__label__eng_Latn
0.391354
0.546622
```python
import numpy as np
import matplotlib.pyplot as plt
```

### The Explicit (Forward) Euler Method

#### Taylor expansion of a function y

The Taylor series expansion of $y(t)$ centred at $t_0$ is given by

$$ y(t) = \sum_{n=0}^{\infty} \frac{y^{(n)}(t_0)}{n!}(t-t_0)^n $$

##### Expansion of y up to the first derivative

Let $h = t_n - t_{n-1}$

$$ y(t_{k+1}) = y(t_k) + y'(t_k)h + \mathcal{O}(h^2) $$

The explicit Euler method is a recursive method for solving ordinary differential equations; it consists of taking the Taylor approximation and ignoring the $\mathcal{O}(h^2)$ error, which gives us: <br>

$$ y_{n+1} \approx u_{n+1} = u_n + f(u_n,t_n) \cdot (t_{n+1} - t_n) $$

with $y_n = y(t_n)$ the analytic solution at the point $t_n$, $u_n$ the numerical approximation, and $f(a,b)$ the derivative of $a$ at $b$.

```python
## F: derivative of the function we want to find
## t0: initial time
## y0: initial point
## ts: time range
def f_euler(F, y0, ts):
    ys = [y0]
    t = ts[0]
    for tnext in ts[1:]:                 # step from each grid point to the next
        ynext = ys[-1] + F(ys[-1], t)*(tnext - t)
        ys.append(ynext)
        t = tnext
    return np.array(ys)                  # one value per point of ts
```

### The Second-Order Runge-Kutta Method

While Euler's method approximates the solution at a point by walking along the tangent at the current point, the second-order Runge-Kutta method approximates the same point by walking along the average of the tangent at the current point and the tangent at the future point.<br>
Let

$$ k_1 = f(u_n,t_n)\\ k_2 = f(u_n + hk_1,t_{n+1}) $$

Then $k_1$ is the derivative at the current point and $k_2$ is the derivative at the future point, the latter approximated by Euler's method.<br>
The Runge-Kutta step is then the average of these two:

$$ y_{n+1} \approx u_{n+1} = u_n + h \frac{k_1 + k_2}{2} $$

```python
def rk_2(F, y0, ts):
    ys = [y0]
    t = ts[0]
    h = ts[1] - ts[0]                    # uniform step size
    for tnext in ts[1:]:
        k1 = F(ys[-1], t)                # slope at the current point
        k2 = F(ys[-1] + h*k1, tnext)     # slope at the Euler-predicted next point
        ynext = ys[-1] + h * (k1 + k2) / 2.0
        ys.append(ynext)
        t = tnext
    return np.array(ys)
```

Testing on the ODE

$$ \begin{cases} y'(t) = - y(t) + 2\sin(t^2) \\ y(0) = 1.2\end{cases} $$

```python
def F(y,t):
    return -y + 2*np.sin(t**2)
```

```python
## Defining the domain
ts = np.linspace(-5,5,500)
y0 = 1.2

## Building the list for 2nd-order Runge-Kutta
ys = rk_2(F,y0,ts)
## Building the list for explicit Euler
ys2 = f_euler(F,y0,ts)
#ans = f(y,ts)

plt.plot(ts,ys,label='RK')
plt.plot(ts,ys2,label='Explicit')
plt.legend()
plt.show()
```

### Euler's Method Converges - A Bit of Analysis

### Definitions:

Let $\frac{\mathrm{d}y}{\mathrm{d}t} = f(y,t)$<br>
Let $t \in \mathbb{N}$ be the number of 'times' in the domain, $t^*$ the final time, and $\lfloor \cdot \rfloor$ the `floor` function, which returns the integer part.<br>
Let $h \in \mathbb{R}, h > 0$, be the 'size' of each partition, that is, $h = t_{n+1} - t_n$ <br>
We can then define $n$ such that $n$ takes values in the set $\{0, \dots , \lfloor \frac{t^*}{h} \rfloor\}$<br>
Let $\lVert \cdot \rVert$ be a norm defined on the space.

Let $y_n$ be the true (analytic) value of the function **$y$** at the point $t_n$, that is, $y_n = y(t_n)$<br>
Let $u_n$ be the approximate numerical value of $y$ at the point $t_n$ given by Euler's method, that is, $u_{n+1} = u_{n} + f(u_{n},t_{n})\cdot h$<br>

A function $f$ is said to be Lipschitz if it satisfies the Lipschitz condition: $\exists M \in \mathbb{R}; \lVert f(x_1) - f(x_2) \rVert \leq M\cdot\lVert x_1 - x_2\rVert$

A method is said to be convergent if:

$$ \lim_{h\to 0^+} \max_{n=0, \dots , \lfloor \frac{t^*}{h} \rfloor} \lVert u_n - y_n \rVert = 0 $$

That is, whenever the mesh is refined, the numerical solution at each point approaches the analytic solution at that point.
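To make the convergence definition concrete before proving it, here is a small empirical check (an addition, not part of the original notebook) on the test problem $y' = -y$, $y(0) = 1$, whose exact solution is $e^{-t}$. The stepping loops are written inline so the check is self-contained, and the sequence of step counts is an arbitrary choice; halving $h$ should roughly halve the Euler error and quarter the RK2 error.

```python
# Empirical order check on y' = -y, y(0) = 1 (exact solution: exp(-t)).
# The maximum error should scale roughly like O(h) for Euler and O(h^2) for RK2.
def check_orders(t_final=2.0, steps_list=(20, 40, 80, 160)):
    for n in steps_list:
        h = t_final / n
        t_grid = np.linspace(0.0, t_final, n + 1)
        u_e, u_rk = np.empty(n + 1), np.empty(n + 1)
        u_e[0] = u_rk[0] = 1.0
        for k in range(n):
            # explicit Euler step
            u_e[k + 1] = u_e[k] + h * (-u_e[k])
            # RK2 (Heun) step: average of the slope now and the Euler-predicted slope
            k1 = -u_rk[k]
            k2 = -(u_rk[k] + h * k1)
            u_rk[k + 1] = u_rk[k] + h * (k1 + k2) / 2.0
        exact = np.exp(-t_grid)
        print(f"h = {h:.4f}  Euler max error = {np.max(np.abs(u_e - exact)):.2e}"
              f"  RK2 max error = {np.max(np.abs(u_rk - exact)):.2e}")

check_orders()
```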
### Theorem: Euler's method converges

#### Proof

Take $f(y,t)$ analytic, i.e. representable by its Taylor series centred at a point $t_0$, and Lipschitz.<br>
$f(y,t)$ analytic implies that $y$ is analytic.<br>
Define $err_n = u_n - y_n$, our numerical error; we then want to prove

$$ \lim_{h\to 0^+} \max_{n=0, \dots , \lfloor \frac{t^*}{h} \rfloor} \lVert err_n \rVert = 0 $$

Expanding our solution $y$ of the differential equation by Taylor:

$$ y_{n+1} = y_n + hf(y_n,t_n)+\mathcal{O}(h^2) \tag{1} $$

Since $y$ is analytic, its derivative is continuous, so by the `Extreme Value Theorem`, on a neighbourhood of $t_n$ the term $\mathcal{O}(h^2)$ is bounded, for all $h>0$ and $n \leq \lfloor t^*/h \rfloor$, by some $M>0, M \in \mathbb{R}$, and by the Archimedean property of the real field $\exists c \in \mathbb{R}, c>0; c\cdot h^2 \geq M$, so we can bound $\mathcal{O}(h^2)$ by $ch^2, c>0$.<br>
Now compute $err_{n+1} = u_{n+1} - y_{n+1}$, using the Taylor expansion for $y_{n+1}$ and Euler's method for $u_{n+1}$:

$$ \begin{align} err_{n+1} &= u_{n+1} - y_{n+1}\\ &= u_n + h\,f(u_n,t_n) - y_n - h\,f(y_n,t_n) - \mathcal{O}(h^2)\\ &= \underbrace{u_n - y_n}_{err_n} + h\left(f(u_n,t_n) - f(y_n,t_n)\right) - \mathcal{O}(h^2)\\ &= err_n + h\left(f(u_n,t_n) - f(y_n,t_n)\right) - \mathcal{O}(h^2)\\ \end{align} $$

Here we can see that the error at the next step also depends on the error already committed.<br>
Since $\mathcal{O}(h^2)$ is bounded, with upper bound $ch^2$, the triangle inequality gives

$$ \lVert err_{n+1} \rVert \leq \lVert err_n\rVert + \lVert h\left(f(u_n,t_n) - f(y_n,t_n)\right)\rVert + \lVert ch^2 \rVert $$

And by the Lipschitz condition

$$ \lVert f(u_n,t_n) - f(y_n,t_n) \rVert \leq \lambda\lVert u_n - y_n \rVert = \lambda\lVert err_n \rVert, \lambda > 0 $$

So we have

$$ \lVert err_{n+1} \rVert \leq \lVert err_n\rVert + \lVert h\left(f(u_n,t_n) - f(y_n,t_n)\right)\rVert + \lVert ch^2 \rVert \leq \lVert err_n\rVert + \lambda h\lVert err_n \rVert + ch^2 $$

$\therefore$

$$ \lVert err_{n+1} \rVert \leq (1+h\lambda)\lVert err_n \rVert + ch^2 \tag{2} $$

---

We now claim:

$$ \lVert err_n \rVert \leq \frac{c}{\lambda}h[(1+h\lambda)^n - 1] $$

#### Proof of the claim: induction on n

For $n = 0$

$$ \lVert err_0 \rVert \leq \frac{c}{\lambda}h[(1+h\lambda)^0 - 1] = \frac{c}{\lambda}h[1 - 1] = 0\\ err_0 = u_0 - y_0 = 0, \text{ since this is the initial condition} $$

So the bound holds for $n=0$; take as induction hypothesis that it holds for $n=k$ and move to the inductive step, $n = k+1$. From equation (2) we have:

$$ \lVert err_{k+1}\rVert \leq (1+h\lambda)\lVert err_k \rVert + ch^2 $$

And by the induction hypothesis

$$ \lVert err_k \rVert\leq \frac{c}{\lambda}h[(1+h\lambda)^k - 1] $$

Hence

$$ \lVert err_{k+1} \rVert \leq (1+h\lambda)\frac{c}{\lambda}h[(1+h\lambda)^k - 1] + ch^2 $$

Expanding the right-hand side:

$$ \begin{align} (1+h\lambda)\frac{c}{\lambda}h[(1+h\lambda)^k - 1] + ch^2 &= \frac{c}{\lambda}h[(1+h\lambda)^{k+1} - (1+h\lambda)] +ch^2\\ &= \frac{c}{\lambda}h(1+h\lambda)^{k+1} - \frac{c}{\lambda}h(1+h\lambda) +ch^2\\ &= \frac{c}{\lambda}h(1+h\lambda)^{k+1} - \frac{c}{\lambda}h - \frac{c}{\lambda}h^2\lambda + ch^2\\ &= \frac{c}{\lambda}h(1+h\lambda)^{k+1} - \frac{c}{\lambda}h\\ &= \frac{c}{\lambda}h[(1+h\lambda)^{k+1} - 1] \end{align} $$

Therefore

$$ \lVert err_{k+1} \rVert \leq \frac{c}{\lambda}h[(1+h\lambda)^{k+1} - 1] $$

and the inductive step holds.
Hence, by the principle of finite induction,

$$ \lVert err_n \rVert \leq \frac{c}{\lambda}h[(1+h\lambda)^n - 1] \tag{3} $$

---

Since $h\lambda >0$, we have $(1+h\lambda) < e^{h\lambda}$ and therefore $(1+h\lambda)^n < e^{nh\lambda}$; $n$ attains its maximum at $n = \lfloor t^*/h \rfloor$, so:

$$(1+h\lambda)^n < e^{\lfloor t^*/h \rfloor h\lambda} \leq e^{t^*\lambda}$$

Substituting into inequality (3) for $err_n$, we get:

$$ \lVert err_n \rVert \leq \frac{c}{\lambda}h[e^{t^*\lambda} - 1] $$

Taking the limit $h\to 0$:

$$ \lim_{h\to 0}\lVert err_n \rVert \leq \lim_{h\to 0}\frac{c}{\lambda}h[e^{t^*\lambda} - 1] = 0\\ \therefore \lim_{h\to 0}\lVert err_n \rVert = 0 $$

Therefore Euler's method converges for every Lipschitz function. Q.E.D.

### Visualizing the theorem

We will plot the solution of the differential equation $y' = 2\sin(t^2) - y$ on progressively finer meshes and visualize the convergence of the method.<br>
We will also plot the evolution of the error between the finest-mesh solution and each of the coarser solutions.

```python
## Differential equation
def F(y,t):
    return -y + 2*np.sin(t**2)

# Creating the domains with several different h (a plain list, since the meshes have different lengths)
ts = [np.linspace(-10,10,i) for i in np.arange(50,300,63)]

# Initial condition
y0 = 1.2

# Preparing the lists for plotting
ys_e = [f_euler(F,y0,i) for i in ts]

# Curve styles
lstyle = ['--','-.',':','-']

# Plot of the solutions
plt.figure(figsize=(15,7))
for i in range(len(ts)):
    plt.plot(ts[i],ys_e[i], ls = lstyle[i], label='$h = '+ str("{0:.2f}".format(20.0/len(ts[i])) +'$'))
plt.title('Visualizing the convergence of the Euler method')
plt.xlabel('t')
plt.ylabel('y(t)')
plt.legend()
plt.show()

## Building the error arrays
hs = [0.4,0.18,0.11]
ans = [[],[],[]]
for i in range(len(ys_e[:-1])):
    n = int(np.floor(hs[i]/0.08))        # index ratio between this coarse mesh and the finest mesh
    for j in range(len(ys_e[i])):
        try:
            ans[i].append(ys_e[-1][n*j])
        except IndexError:
            ans[i].append(ys_e[-1][-1])
for i in range(len(ans)):
    ans[i] = np.array(ans[i])
err = [abs(j - i) for i,j in zip(ys_e,ans)]

plt.figure(figsize=(15,7))
for i in range(len(ts)-1):
    plt.plot(ts[i],err[i], ls = lstyle[i], label='$h = '+ str("{0:.2f}".format(20.0/len(ts[i])) +'$'))
plt.title('Error of the most refined solution relative to the other solutions')
plt.xlabel('t')
plt.ylabel('err(y)')
plt.legend()
plt.show()
```

### Convergence Graphs for Runge-Kutta

```python
# Preparing the lists for plotting
ys_rk = [rk_2(F,y0,i) for i in ts]

# Curve styles
lstyle = ['--','-.',':','-']

# Plot of the solutions
plt.figure(figsize=(15,7))
for i in range(len(ts)):
    plt.plot(ts[i],ys_rk[i], ls = lstyle[i], label='$h = '+ str("{0:.2f}".format(20.0/len(ts[i])) +'$'))
plt.title('Visualizing the convergence of Runge-Kutta')
plt.xlabel('t')
plt.ylabel('y(t)')
plt.legend()
plt.show()
```

```python
plt.plot(ts[-1],abs(ys_e[-1]-ys_rk[-1]))
```
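As an extra cross-check (not in the original notebook), the two hand-written integrators can be compared against a library solver. The sketch below assumes SciPy is available and reuses `F`, `f_euler` and `rk_2` from the cells above; note that `scipy.integrate.solve_ivp` expects the right-hand side as $f(t, y)$, the reverse of the $F(y, t)$ convention used here, and the tolerances and number of grid points are arbitrary choices.

```python
# Cross-check against SciPy's adaptive RK45 solver.
from scipy.integrate import solve_ivp

def F_ivp(t, y):
    # same ODE as F above, but with the (t, y) argument order solve_ivp expects
    return -y + 2*np.sin(t**2)

ts_ref = np.linspace(-10, 10, 239)
sol = solve_ivp(F_ivp, (ts_ref[0], ts_ref[-1]), [1.2], t_eval=ts_ref,
                rtol=1e-8, atol=1e-10)

plt.figure(figsize=(15, 7))
plt.plot(ts_ref, sol.y[0], 'k', label='solve_ivp (RK45)')
plt.plot(ts_ref, f_euler(F, 1.2, ts_ref), '--', label='Explicit Euler')
plt.plot(ts_ref, rk_2(F, 1.2, ts_ref), ':', label='RK2')
plt.xlabel('t')
plt.ylabel('y(t)')
plt.legend()
plt.show()
```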
5f548d76ab340cdd13807c44b4fececb209e2114
540,491
ipynb
Jupyter Notebook
analise-numerica-edo-2019-1/.ipynb_checkpoints/RK e Eulers-checkpoint.ipynb
mirandagil/university-courses
e70ce5262555e84cffb13e53e139e7eec21e8907
[ "MIT" ]
1
2019-12-23T16:39:01.000Z
2019-12-23T16:39:01.000Z
analise-numerica-edo-2019-1/.ipynb_checkpoints/RK e Eulers-checkpoint.ipynb
mirandagil/university-courses
e70ce5262555e84cffb13e53e139e7eec21e8907
[ "MIT" ]
null
null
null
analise-numerica-edo-2019-1/.ipynb_checkpoints/RK e Eulers-checkpoint.ipynb
mirandagil/university-courses
e70ce5262555e84cffb13e53e139e7eec21e8907
[ "MIT" ]
null
null
null
1,004.630112
165,604
0.950634
true
5,241
Qwen/Qwen-72B
1. YES 2. YES
0.810479
0.919643
0.745351
__label__por_Latn
0.893757
0.570032
# Tutorial on Hyperparameter Tuning in Neural Networks **Author:** Matthew Stewart <br> This notebook follows the same procedure as the Medium article **"Simple Guide to Hyperparameter Tuning in Neural Networks"**. [https://medium.com/@matthew_stewart/simple-guide-to-hyperparameter-tuning-in-neural-networks-3fe03dad8594]. In this notebook we will optimize and fine-tune a neural network to find the global minimum of a particularly troublesome function known as the Beale function. This is one of many test functions for optimization that are commonly used in academia (see https://en.wikipedia.org/wiki/Test_functions_for_optimization for more information). ```python ## Formatting import requests from IPython.core.display import HTML styles = requests.get("https://raw.githubusercontent.com/Harvard-IACS/2019-CS109B/master/content/styles/cs109.css").text HTML(styles) ``` ## Learning Goals In this notebook, we will explore ways to optimize the loss function of a multilayer perceptron (MLP) by tuning the model hyperparameters. We will also explore the use of cross-validation as a technique for checking potential values for these hyperparameters. By the end of this notebook, you should: - Be familiar with the use of `scipy`'s `optimize` module. - Be able to identify the hyperparameters that go into the training of an MLP. - Be familiar with the implementation in `keras` of various optimization techniques. - Know how to use callbacks. - Apply cross-validation to check multiple values of hyperparameters. ```python import matplotlib.pyplot as plt import numpy as np from scipy.optimize import minimize %matplotlib inline ``` ## Part 1: Beale's function ### First let's look at function optimization in `scipy.optimize`, using Beale's function as an example Optimizing a function $f: A\rightarrow \mathbb{R}$ from some set $A$ to the real numbers means finding an element $x_0 \in A$ such that $f(x_0)\leq f(x)$ for all $x \in A$ (finding the minimum) or such that $f(x_0)\geq f(x)$ for all $x \in A$ (finding the maximum). To illustrate the point we will use a function of two parameters, and our goal is to optimize over these 2 parameters. We can extend to higher dimensions by plotting pairs of parameters against each other. The Wikipedia article on Test functions for optimization has a few functions that are useful for evaluating optimization algorithms.
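Before running the minimizer it can help to look at the landscape we are about to optimize. The short sketch below is an addition to the notebook: it writes out Beale's formula inline (the `objective` function is only defined in the next cell), builds its own finer grid, and log-scales the contour levels purely for readability, since the function spans many orders of magnitude.

```python
# Visualize Beale's function; log1p scaling keeps the contour levels readable.
xg, yg = np.meshgrid(np.linspace(-4.5, 4.5, 400), np.linspace(-4.5, 4.5, 400))
zg = ((1.5 - xg + xg*yg)**2
      + (2.25 - xg + xg*yg**2)**2
      + (2.625 - xg + xg*yg**3)**2)

plt.figure(figsize=(8, 6))
plt.contourf(xg, yg, np.log1p(zg), levels=50, cmap='viridis')
plt.colorbar(label='log(1 + f(x, y))')
plt.plot(3.0, 0.5, 'r*', markersize=15, label='global minimum (3, 0.5)')
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.show()
```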
Here is Beale's function: $f(x,y) = (1.5-x+xy)^2+(2.25-x+xy^2)^2+(2.625-x+xy^3)^2$ We already know that this function has a minimum at [3.0, 0.5]. Let's see if `scipy` will find it. <pre>source: https://en.wikipedia.org/wiki/Test_functions_for_optimization</pre> ```python # define Beale's function which we want to minimize def objective(X): x = X[0]; y = X[1] return (1.5 - x + x*y)**2 + (2.25 - x + x*y**2)**2 + (2.625 - x + x*y**3)**2 ``` ```python # function boundaries xmin, xmax, xstep = -4.5, 4.5, .9 ymin, ymax, ystep = -4.5, 4.5, .9 ``` ```python # Let's create some points x1, y1 = np.meshgrid(np.arange(xmin, xmax + xstep, xstep), np.arange(ymin, ymax + ystep, ystep)) ``` Let's make an initial guess ```python # initial guess x0 = [4., 4.] f0 = objective(x0) print (f0) ``` 68891.203125 ```python bnds = ((xmin, xmax), (ymin, ymax)) minimum = minimize(objective, x0, bounds=bnds) ``` ```python print(minimum) ``` fun: 2.068025638865627e-12 hess_inv: <2x2 LbfgsInvHessProduct with dtype=float64> jac: array([-1.55969780e-06, 9.89837957e-06]) message: b'CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL' nfev: 60 nit: 14 status: 0 success: True x: array([3.00000257, 0.50000085]) ```python real_min = [3.0, 0.5] print (f'The answer, {minimum.x}, is very close to the optimum as we know it, which is {real_min}') print (f'The value of the objective for {real_min} is {objective(real_min)}') ``` The answer, [3.00000257 0.50000085], is very close to the optimum as we know it, which is [3.0, 0.5] The value of the objective for [3.0, 0.5] is 0.0 ## Part 2: Optimization in neural networks In general: **Learning Representation --> Objective function --> Optimization algorithm** A neural network can be defined as a framework that combines inputs and tries to guess the output. If we are lucky enough to have some results, called "the ground truth", against which to compare the outputs produced by the network, we can calculate the **error**. So the network guesses, calculates some error function, guesses again, trying to minimize this error, and guesses again, until the error does not go down any more. This is optimization. In neural networks the most commonly used optimization algorithms are flavors of **GD (gradient descent)**. The *objective function* used in gradient descent is the *loss function* which we want to minimize. ### A `keras` Refresher `Keras` is a Python library for deep learning that can run on top of either Theano or TensorFlow, two powerful Python libraries for fast numerical computing developed by the MILA lab at the Université de Montréal and by Google, respectively. Keras was developed to make building deep learning models as fast and easy as possible for research and practical applications. It runs on Python 2.7 or 3.5 and can seamlessly execute on GPUs and CPUs. Keras is built on the idea of a model. At its core we have a sequence of layers called the `Sequential` model which is a linear stack of layers. Keras also provides the `functional API`, a way to define complex models, such as multi-output models, directed acyclic graphs, or models with shared layers. We can summarize the construction of deep learning models in Keras using the Sequential model as follows: 1. **Define your model**: create a `Sequential` model and add layers. 2. **Compile your model**: specify loss function and optimizers and call the `.compile()` function. 3. **Fit your model**: train the model on data by calling the `.fit()` function. 4.
**Make predictions**: use the model to generate predictions on new data by calling functions such as `.evaluate()` or `.predict()`. ### Callbacks: taking a peek into our model while it's training You can look at what is happening in various stages of your model by using `callbacks`. A callback is a set of functions to be applied at given stages of the training procedure. You can use callbacks to get a view on internal states and statistics of the model during training. You can pass a list of callbacks (as the keyword argument callbacks) to the `.fit()` method of the Sequential or Model classes. The relevant methods of the callbacks will then be called at each stage of the training. - A callback function you are already familiar with is `keras.callbacks.History()`. This is automatically included in `.fit()`. - Another very useful one is `keras.callbacks.ModelCheckpoint` which saves the model with its weights at a certain point in the training. This can prove useful if your model is running for a long time and a system failure happens. Not all is lost then. It's a good practice to save the model weights only when an improvement is observed as measured by the `acc`, for example. - `keras.callbacks.EarlyStopping` stops the training when a monitored quantity has stopped improving. - `keras.callbacks.LearningRateScheduler` will change the learning rate during training. We will apply some callbacks later. For full documentation on `callbacks` see https://keras.io/callbacks/ ### What are the steps to optimizing our network? ```python import tensorflow as tf import keras from keras import layers from keras import models from keras import utils from keras.layers import Dense from keras.models import Sequential from keras.layers import Flatten from keras.layers import Dropout from keras.layers import Activation from keras.regularizers import l2 from keras.optimizers import SGD from keras.optimizers import RMSprop from keras import datasets from keras.callbacks import LearningRateScheduler from keras.callbacks import History from keras import losses from sklearn.utils import shuffle print(tf.VERSION) print(tf.keras.__version__) ``` 1.12.0 2.1.6-tf Using TensorFlow backend. ```python # fix random seed for reproducibility np.random.seed(5) ``` ### Step 1 - Deciding on the network topology (not really considered optimization but is obviously very important) We will use the MNIST dataset which consists of grayscale images of handwritten digits (0-9) whose dimension is 28x28 pixels. Each pixel is 8 bits so its value ranges from 0 to 255. ```python #mnist = tf.keras.datasets.mnist mnist = keras.datasets.mnist (x_train, y_train),(x_test, y_test) = mnist.load_data() x_train.shape, y_train.shape ``` ((60000, 28, 28), (60000,)) Each label is a number between 0 and 9 ```python print(y_train) ``` [5 0 4 ... 
5 6 8] Let's look at some 10 of the images ```python plt.figure(figsize=(10,10)) for i in range(10): plt.subplot(5,5,i+1) plt.xticks([]) plt.yticks([]) plt.grid(False) plt.imshow(x_train[i], cmap=plt.cm.binary) plt.xlabel(y_train[i]) ``` ```python x_train[45].shape x_train[45, 15:20, 15:20] ``` array([[ 11, 198, 231, 41, 0], [ 82, 252, 204, 0, 0], [253, 253, 141, 0, 0], [252, 220, 36, 0, 0], [252, 96, 0, 0, 0]], dtype=uint8) ```python print(f'We have {x_train.shape[0]} train samples') print(f'We have {x_test.shape[0]} test samples') ``` We have 60000 train samples We have 10000 test samples #### Preprocessing the data To run our NN we need to pre-process the data * First we need to make the 2D image arrays into 1D (flatten them). We can either perform this by using array reshaping with `numpy.reshape()` or the `keras`' method for this: a layer called `tf.keras.layers.Flatten` which transforms the format of the images from a 2d-array (of 28 by 28 pixels), to a 1D-array of 28 * 28 = 784 pixels. * Then we need to normalize the pixel values (give them values between 0 and 1) using the following transformation: \begin{align} x := \dfrac{x - x_{min}}{x_{max} - x_{min}} \textrm{} \end{align} In our case $x_{min} = 0$ and $x_{max} = 255$ so the formula becomes simply $x := {x}/255$ ```python # normalize the data x_train, x_test = x_train / 255.0, x_test / 255.0 ``` ```python # reshape the data into 1D vectors x_train = x_train.reshape(60000, 784) x_test = x_test.reshape(10000, 784) num_classes = 10 ``` ```python x_train.shape[1] ``` 784 Now let's prepare our class vector (y) to a binary class matrix, e.g. for use with categorical_crossentropy. ```python # Convert class vectors to binary class matrices y_train = keras.utils.to_categorical(y_train, num_classes) y_test = keras.utils.to_categorical(y_test, num_classes) ``` ```python y_train[0] ``` array([0., 0., 0., 0., 0., 1., 0., 0., 0., 0.], dtype=float32) Now we are ready to build the model! ### Step 2 - Adjusting the `learning rate` One of the most common optimization algorithm is Stochastic Gradient Descent (SGD). The hyperparameters that can be optimized in SGD are `learning rate`, `momentum`, `decay` and `nesterov`. `Learning rate` controls the weight at the end of each batch, and `momentum` controls how much to let the previous update influence the current weight update. `Decay` indicates the learning rate decay over each update, and `nesterov` takes the value True or False depending on if we want to apply Nesterov momentum. Typical values for those hyperparameters are lr=0.01, decay=1e-6, momentum=0.9, and nesterov=True. The learning rate hyperparameter goes into the `optimizer` function which we will see below. Keras has a default learning rate scheduler in the `SGD` optimizer that decreases the learning rate during the stochastic gradient descent optimization algorithm. The learning rate is decreased according to this formula: \begin{align} lr = lr * 1./(1. + decay * epoch) \textrm{} \end{align} <pre>source: http://cs231n.github.io/neural-networks-3</pre> Let's implement a learning rate adaptation schedule in `Keras`. We'll start with SGD and a learning rate value of 0.1. We will then train the model for 60 epochs and set the decay argument to 0.0016 (0.1/60). We also include a momentum value of 0.8 since that seems to work well when using an adaptive learning rate. 
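As a quick side calculation (an addition, not part of the original notebook), the closed form of this time-based schedule can be evaluated for the settings used below, just to see how gently the learning rate falls over the 60 epochs. Keras applies the decay per weight update rather than once per epoch, so the per-epoch curve here is only indicative; it reuses the matplotlib import from the top of the notebook, and the variable names are chosen so they do not clash with the training cells that follow.

```python
# Indicative view of the time-based decay lr_t = lr0 / (1 + decay * t),
# evaluated once per epoch for lr0 = 0.1 and decay = 0.1 / 60.
lr0, decay_rate_demo, n_epochs = 0.1, 0.1 / 60, 60
schedule = [lr0 / (1.0 + decay_rate_demo * epoch) for epoch in range(n_epochs)]

plt.figure(figsize=(8, 4))
plt.plot(range(n_epochs), schedule)
plt.xlabel('Epoch')
plt.ylabel('Learning rate (indicative)')
plt.title('Time-based decay evaluated per epoch')
plt.show()
```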
```python epochs=60 learning_rate = 0.1 decay_rate = learning_rate / epochs momentum = 0.8 sgd = SGD(lr=learning_rate, momentum=momentum, decay=decay_rate, nesterov=False) ``` ```python # build the model input_dim = x_train.shape[1] lr_model = Sequential() lr_model.add(Dense(64, activation=tf.nn.relu, kernel_initializer='uniform', input_dim = input_dim)) lr_model.add(Dropout(0.1)) lr_model.add(Dense(64, kernel_initializer='uniform', activation=tf.nn.relu)) lr_model.add(Dense(num_classes, kernel_initializer='uniform', activation=tf.nn.softmax)) # compile the model lr_model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['acc']) ``` ```python %%time # Fit the model batch_size = int(input_dim/100) lr_model_history = lr_model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_test, y_test)) ``` Train on 60000 samples, validate on 10000 samples Epoch 1/60 60000/60000 [==============================] - 9s 145us/step - loss: 0.3158 - acc: 0.9043 - val_loss: 0.1467 - val_acc: 0.9550 Epoch 2/60 60000/60000 [==============================] - 8s 136us/step - loss: 0.1478 - acc: 0.9555 - val_loss: 0.1194 - val_acc: 0.9617 Epoch 3/60 60000/60000 [==============================] - 8s 137us/step - loss: 0.1248 - acc: 0.9620 - val_loss: 0.1122 - val_acc: 0.9646 Epoch 4/60 60000/60000 [==============================] - 8s 136us/step - loss: 0.1167 - acc: 0.9638 - val_loss: 0.1075 - val_acc: 0.9681 Epoch 5/60 60000/60000 [==============================] - 8s 137us/step - loss: 0.1103 - acc: 0.9666 - val_loss: 0.1039 - val_acc: 0.9691 Epoch 6/60 60000/60000 [==============================] - 8s 137us/step - loss: 0.1051 - acc: 0.9677 - val_loss: 0.1015 - val_acc: 0.9694 Epoch 7/60 60000/60000 [==============================] - 8s 136us/step - loss: 0.1003 - acc: 0.9691 - val_loss: 0.1002 - val_acc: 0.9694 Epoch 8/60 60000/60000 [==============================] - 9s 144us/step - loss: 0.0961 - acc: 0.9707 - val_loss: 0.0998 - val_acc: 0.9694 Epoch 9/60 60000/60000 [==============================] - 9s 154us/step - loss: 0.0951 - acc: 0.9707 - val_loss: 0.0989 - val_acc: 0.9699 Epoch 10/60 60000/60000 [==============================] - 9s 150us/step - loss: 0.0919 - acc: 0.9721 - val_loss: 0.0978 - val_acc: 0.9696 Epoch 11/60 60000/60000 [==============================] - 8s 141us/step - loss: 0.0930 - acc: 0.9720 - val_loss: 0.0964 - val_acc: 0.9702 Epoch 12/60 60000/60000 [==============================] - 8s 141us/step - loss: 0.0899 - acc: 0.9728 - val_loss: 0.0965 - val_acc: 0.9703 Epoch 13/60 60000/60000 [==============================] - 8s 141us/step - loss: 0.0883 - acc: 0.9732 - val_loss: 0.0951 - val_acc: 0.9713 Epoch 14/60 60000/60000 [==============================] - 8s 141us/step - loss: 0.0871 - acc: 0.9733 - val_loss: 0.0958 - val_acc: 0.9705 Epoch 15/60 60000/60000 [==============================] - 8s 141us/step - loss: 0.0888 - acc: 0.9731 - val_loss: 0.0952 - val_acc: 0.9709 Epoch 16/60 60000/60000 [==============================] - 9s 145us/step - loss: 0.0857 - acc: 0.9743 - val_loss: 0.0950 - val_acc: 0.9713 Epoch 17/60 60000/60000 [==============================] - 9s 157us/step - loss: 0.0843 - acc: 0.9742 - val_loss: 0.0957 - val_acc: 0.9709 Epoch 18/60 60000/60000 [==============================] - 8s 142us/step - loss: 0.0842 - acc: 0.9749 - val_loss: 0.0942 - val_acc: 0.9719 Epoch 19/60 60000/60000 [==============================] - 9s 142us/step - loss: 0.0839 - acc: 0.9750 - val_loss: 0.0936 - val_acc: 0.9723 Epoch 20/60 
60000/60000 [==============================] - 9s 142us/step - loss: 0.0824 - acc: 0.9748 - val_loss: 0.0942 - val_acc: 0.9723 Epoch 21/60 60000/60000 [==============================] - 9s 143us/step - loss: 0.0824 - acc: 0.9749 - val_loss: 0.0940 - val_acc: 0.9725 Epoch 22/60 60000/60000 [==============================] - 9s 142us/step - loss: 0.0829 - acc: 0.9752 - val_loss: 0.0938 - val_acc: 0.9718 Epoch 23/60 60000/60000 [==============================] - 9s 143us/step - loss: 0.0795 - acc: 0.9763 - val_loss: 0.0939 - val_acc: 0.9718 Epoch 24/60 60000/60000 [==============================] - 9s 149us/step - loss: 0.0796 - acc: 0.9763 - val_loss: 0.0936 - val_acc: 0.9722 Epoch 25/60 60000/60000 [==============================] - 8s 140us/step - loss: 0.0783 - acc: 0.9759 - val_loss: 0.0935 - val_acc: 0.9724 Epoch 26/60 60000/60000 [==============================] - 8s 140us/step - loss: 0.0805 - acc: 0.9755 - val_loss: 0.0937 - val_acc: 0.9721 Epoch 27/60 60000/60000 [==============================] - 8s 140us/step - loss: 0.0795 - acc: 0.9759 - val_loss: 0.0930 - val_acc: 0.9721 Epoch 28/60 60000/60000 [==============================] - 9s 148us/step - loss: 0.0786 - acc: 0.9765 - val_loss: 0.0931 - val_acc: 0.9721 Epoch 29/60 60000/60000 [==============================] - 9s 148us/step - loss: 0.0780 - acc: 0.9764 - val_loss: 0.0926 - val_acc: 0.9726 Epoch 30/60 60000/60000 [==============================] - 9s 144us/step - loss: 0.0759 - acc: 0.9768 - val_loss: 0.0925 - val_acc: 0.9726 Epoch 31/60 60000/60000 [==============================] - 9s 147us/step - loss: 0.0780 - acc: 0.9768 - val_loss: 0.0931 - val_acc: 0.9727 Epoch 32/60 60000/60000 [==============================] - 9s 155us/step - loss: 0.0768 - acc: 0.9765 - val_loss: 0.0925 - val_acc: 0.9726 Epoch 33/60 60000/60000 [==============================] - 9s 144us/step - loss: 0.0762 - acc: 0.9770 - val_loss: 0.0927 - val_acc: 0.9723 Epoch 34/60 60000/60000 [==============================] - 9s 149us/step - loss: 0.0765 - acc: 0.9770 - val_loss: 0.0928 - val_acc: 0.9723 Epoch 35/60 60000/60000 [==============================] - 8s 140us/step - loss: 0.0751 - acc: 0.9778 - val_loss: 0.0928 - val_acc: 0.9721 Epoch 36/60 60000/60000 [==============================] - 8s 138us/step - loss: 0.0756 - acc: 0.9772 - val_loss: 0.0919 - val_acc: 0.9721 Epoch 37/60 60000/60000 [==============================] - 8s 140us/step - loss: 0.0760 - acc: 0.9769 - val_loss: 0.0923 - val_acc: 0.9723 Epoch 38/60 60000/60000 [==============================] - 8s 138us/step - loss: 0.0751 - acc: 0.9772 - val_loss: 0.0921 - val_acc: 0.9726 Epoch 39/60 60000/60000 [==============================] - 9s 148us/step - loss: 0.0756 - acc: 0.9774 - val_loss: 0.0924 - val_acc: 0.9728 Epoch 40/60 60000/60000 [==============================] - 8s 141us/step - loss: 0.0750 - acc: 0.9774 - val_loss: 0.0924 - val_acc: 0.9728 Epoch 41/60 60000/60000 [==============================] - 9s 142us/step - loss: 0.0760 - acc: 0.9774 - val_loss: 0.0926 - val_acc: 0.9724 Epoch 42/60 60000/60000 [==============================] - 8s 142us/step - loss: 0.0719 - acc: 0.9783 - val_loss: 0.0920 - val_acc: 0.9730 Epoch 43/60 60000/60000 [==============================] - 9s 143us/step - loss: 0.0730 - acc: 0.9779 - val_loss: 0.0919 - val_acc: 0.9726 Epoch 44/60 60000/60000 [==============================] - 8s 140us/step - loss: 0.0722 - acc: 0.9785 - val_loss: 0.0920 - val_acc: 0.9728 Epoch 45/60 60000/60000 [==============================] - 9s 142us/step - loss: 0.0746 - 
acc: 0.9774 - val_loss: 0.0923 - val_acc: 0.9730 Epoch 46/60 60000/60000 [==============================] - 9s 148us/step - loss: 0.0736 - acc: 0.9778 - val_loss: 0.0920 - val_acc: 0.9729 Epoch 47/60 60000/60000 [==============================] - 9s 156us/step - loss: 0.0739 - acc: 0.9777 - val_loss: 0.0920 - val_acc: 0.9725 Epoch 48/60 60000/60000 [==============================] - 9s 151us/step - loss: 0.0720 - acc: 0.9783 - val_loss: 0.0917 - val_acc: 0.9731 Epoch 49/60 60000/60000 [==============================] - 9s 146us/step - loss: 0.0735 - acc: 0.9780 - val_loss: 0.0917 - val_acc: 0.9729 Epoch 50/60 60000/60000 [==============================] - 9s 152us/step - loss: 0.0729 - acc: 0.9780 - val_loss: 0.0923 - val_acc: 0.9723 Epoch 51/60 60000/60000 [==============================] - 9s 151us/step - loss: 0.0716 - acc: 0.9777 - val_loss: 0.0919 - val_acc: 0.9727 Epoch 52/60 60000/60000 [==============================] - 9s 145us/step - loss: 0.0716 - acc: 0.9784 - val_loss: 0.0915 - val_acc: 0.9726 Epoch 53/60 60000/60000 [==============================] - 9s 149us/step - loss: 0.0715 - acc: 0.9782 - val_loss: 0.0912 - val_acc: 0.9722 Epoch 54/60 60000/60000 [==============================] - 9s 143us/step - loss: 0.0704 - acc: 0.9786 - val_loss: 0.0911 - val_acc: 0.9720 Epoch 55/60 60000/60000 [==============================] - 9s 142us/step - loss: 0.0721 - acc: 0.9782 - val_loss: 0.0917 - val_acc: 0.9727 Epoch 56/60 60000/60000 [==============================] - 9s 143us/step - loss: 0.0717 - acc: 0.9784 - val_loss: 0.0918 - val_acc: 0.9725 Epoch 57/60 60000/60000 [==============================] - 9s 151us/step - loss: 0.0717 - acc: 0.9783 - val_loss: 0.0918 - val_acc: 0.9726 Epoch 58/60 60000/60000 [==============================] - 9s 144us/step - loss: 0.0708 - acc: 0.9783 - val_loss: 0.0916 - val_acc: 0.9725 Epoch 59/60 60000/60000 [==============================] - 8s 137us/step - loss: 0.0703 - acc: 0.9782 - val_loss: 0.0916 - val_acc: 0.9731 Epoch 60/60 60000/60000 [==============================] - 8s 137us/step - loss: 0.0703 - acc: 0.9785 - val_loss: 0.0918 - val_acc: 0.9727 CPU times: user 15min 16s, sys: 3min 41s, total: 18min 57s Wall time: 8min 37s ```python fig, ax = plt.subplots(1, 1, figsize=(10,6)) ax.plot(np.sqrt(lr_model_history.history['loss']), 'r', label='train') ax.plot(np.sqrt(lr_model_history.history['val_loss']), 'b' ,label='val') ax.set_xlabel(r'Epoch', fontsize=20) ax.set_ylabel(r'Loss', fontsize=20) ax.legend() ax.tick_params(labelsize=20) ``` ```python fig, ax = plt.subplots(1, 1, figsize=(10,6)) ax.plot(np.sqrt(lr_model_history.history['acc']), 'r', label='train') ax.plot(np.sqrt(lr_model_history.history['val_acc']), 'b' ,label='val') ax.set_xlabel(r'Epoch', fontsize=20) ax.set_ylabel(r'Accuracy', fontsize=20) ax.legend() ax.tick_params(labelsize=20) ``` ### Apply a custon learning rate change using `LearningRateScheduler` Write a function that performs the exponential learning rate decay as indicated by the following formula: \begin{align} lr = lr0 * e^{(-kt)} \textrm{} \end{align} ```python # solution epochs = 60 learning_rate = 0.1 # initial learning rate decay_rate = 0.1 momentum = 0.8 # define the optimizer function sgd = SGD(lr=learning_rate, momentum=momentum, decay=decay_rate, nesterov=False) ``` ```python input_dim = x_train.shape[1] num_classes = 10 batch_size = 196 # build the model exponential_decay_model = Sequential() exponential_decay_model.add(Dense(64, activation=tf.nn.relu, kernel_initializer='uniform', input_dim = input_dim)) 
exponential_decay_model.add(Dropout(0.1)) exponential_decay_model.add(Dense(64, kernel_initializer='uniform', activation=tf.nn.relu)) exponential_decay_model.add(Dense(num_classes, kernel_initializer='uniform', activation=tf.nn.softmax)) # compile the model exponential_decay_model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['acc']) ``` ```python # define the learning rate change def exp_decay(epoch): lrate = learning_rate * np.exp(-decay_rate*epoch) return lrate ``` ```python # learning schedule callback loss_history = History() lr_rate = LearningRateScheduler(exp_decay) callbacks_list = [loss_history, lr_rate] # you invoke the LearningRateScheduler during the .fit() phase exponential_decay_model_history = exponential_decay_model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, callbacks=callbacks_list, verbose=1, validation_data=(x_test, y_test)) ``` Train on 60000 samples, validate on 10000 samples Epoch 1/60 60000/60000 [==============================] - 1s 16us/step - loss: 1.9924 - acc: 0.3865 - val_loss: 1.4953 - val_acc: 0.5841 Epoch 2/60 60000/60000 [==============================] - 1s 11us/step - loss: 1.2430 - acc: 0.6362 - val_loss: 1.0153 - val_acc: 0.7164 Epoch 3/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.9789 - acc: 0.7141 - val_loss: 0.8601 - val_acc: 0.7617 Epoch 4/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.8710 - acc: 0.7452 - val_loss: 0.7811 - val_acc: 0.7865 Epoch 5/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.8115 - acc: 0.7609 - val_loss: 0.7336 - val_acc: 0.7968 Epoch 6/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.7749 - acc: 0.7678 - val_loss: 0.7030 - val_acc: 0.8035 Epoch 7/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.7524 - acc: 0.7742 - val_loss: 0.6822 - val_acc: 0.8095 Epoch 8/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.7342 - acc: 0.7788 - val_loss: 0.6673 - val_acc: 0.8122 Epoch 9/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.7218 - acc: 0.7840 - val_loss: 0.6562 - val_acc: 0.8148 Epoch 10/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.7144 - acc: 0.7836 - val_loss: 0.6475 - val_acc: 0.8168 Epoch 11/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.7054 - acc: 0.7857 - val_loss: 0.6408 - val_acc: 0.8175 Epoch 12/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.7008 - acc: 0.7896 - val_loss: 0.6354 - val_acc: 0.8185 Epoch 13/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.6950 - acc: 0.7885 - val_loss: 0.6311 - val_acc: 0.8197 Epoch 14/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.6921 - acc: 0.7895 - val_loss: 0.6274 - val_acc: 0.8199 Epoch 15/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.6888 - acc: 0.7913 - val_loss: 0.6244 - val_acc: 0.8204 Epoch 16/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.6833 - acc: 0.7932 - val_loss: 0.6219 - val_acc: 0.8206 Epoch 17/60 60000/60000 [==============================] - 1s 12us/step - loss: 0.6831 - acc: 0.7942 - val_loss: 0.6199 - val_acc: 0.8208 Epoch 18/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.6822 - acc: 0.7937 - val_loss: 0.6182 - val_acc: 0.8212 Epoch 19/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.6790 - acc: 0.7955 - val_loss: 0.6167 - val_acc: 
0.8215 Epoch 20/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.6797 - acc: 0.7935 - val_loss: 0.6155 - val_acc: 0.8218 Epoch 21/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.6773 - acc: 0.7953 - val_loss: 0.6144 - val_acc: 0.8222 Epoch 22/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.6742 - acc: 0.7960 - val_loss: 0.6135 - val_acc: 0.8227 Epoch 23/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.6746 - acc: 0.7958 - val_loss: 0.6127 - val_acc: 0.8236 Epoch 24/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.6729 - acc: 0.7973 - val_loss: 0.6120 - val_acc: 0.8237 Epoch 25/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.6749 - acc: 0.7963 - val_loss: 0.6114 - val_acc: 0.8238 Epoch 26/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.6715 - acc: 0.7967 - val_loss: 0.6109 - val_acc: 0.8241 Epoch 27/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.6728 - acc: 0.7975 - val_loss: 0.6105 - val_acc: 0.8241 Epoch 28/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.6721 - acc: 0.7964 - val_loss: 0.6101 - val_acc: 0.8246 Epoch 29/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.6707 - acc: 0.7972 - val_loss: 0.6098 - val_acc: 0.8247 Epoch 30/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.6711 - acc: 0.7980 - val_loss: 0.6095 - val_acc: 0.8247 Epoch 31/60 60000/60000 [==============================] - 1s 12us/step - loss: 0.6721 - acc: 0.7960 - val_loss: 0.6092 - val_acc: 0.8248 Epoch 32/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.6708 - acc: 0.7981 - val_loss: 0.6090 - val_acc: 0.8249 Epoch 33/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.6720 - acc: 0.7968 - val_loss: 0.6088 - val_acc: 0.8249 Epoch 34/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.6700 - acc: 0.7973 - val_loss: 0.6086 - val_acc: 0.8250 Epoch 35/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.6714 - acc: 0.7979 - val_loss: 0.6084 - val_acc: 0.8250 Epoch 36/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.6687 - acc: 0.7980 - val_loss: 0.6083 - val_acc: 0.8250 Epoch 37/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.6700 - acc: 0.7963 - val_loss: 0.6082 - val_acc: 0.8250 Epoch 38/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.6702 - acc: 0.7964 - val_loss: 0.6081 - val_acc: 0.8252 Epoch 39/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.6711 - acc: 0.7963 - val_loss: 0.6080 - val_acc: 0.8250 Epoch 40/60 60000/60000 [==============================] - 1s 12us/step - loss: 0.6698 - acc: 0.7971 - val_loss: 0.6079 - val_acc: 0.8251 Epoch 41/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.6696 - acc: 0.7967 - val_loss: 0.6079 - val_acc: 0.8252 Epoch 42/60 60000/60000 [==============================] - 1s 12us/step - loss: 0.6699 - acc: 0.7989 - val_loss: 0.6078 - val_acc: 0.8252 Epoch 43/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.6701 - acc: 0.7952 - val_loss: 0.6077 - val_acc: 0.8252 Epoch 44/60 60000/60000 [==============================] - 1s 12us/step - loss: 0.6704 - acc: 0.7973 - val_loss: 0.6077 - val_acc: 0.8252 Epoch 45/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.6697 - acc: 
0.7971 - val_loss: 0.6076 - val_acc: 0.8252 Epoch 46/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.6718 - acc: 0.7969 - val_loss: 0.6076 - val_acc: 0.8252 Epoch 47/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.6708 - acc: 0.7965 - val_loss: 0.6076 - val_acc: 0.8252 Epoch 48/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.6690 - acc: 0.7968 - val_loss: 0.6075 - val_acc: 0.8252 Epoch 49/60 60000/60000 [==============================] - 1s 12us/step - loss: 0.6700 - acc: 0.7959 - val_loss: 0.6075 - val_acc: 0.8252 Epoch 50/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.6722 - acc: 0.7963 - val_loss: 0.6075 - val_acc: 0.8252 Epoch 51/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.6698 - acc: 0.7950 - val_loss: 0.6075 - val_acc: 0.8252 Epoch 52/60 60000/60000 [==============================] - 1s 12us/step - loss: 0.6689 - acc: 0.7978 - val_loss: 0.6075 - val_acc: 0.8252 Epoch 53/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.6696 - acc: 0.7973 - val_loss: 0.6074 - val_acc: 0.8252 Epoch 54/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.6678 - acc: 0.7975 - val_loss: 0.6074 - val_acc: 0.8252 Epoch 55/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.6694 - acc: 0.7969 - val_loss: 0.6074 - val_acc: 0.8252 Epoch 56/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.6688 - acc: 0.7981 - val_loss: 0.6074 - val_acc: 0.8252 Epoch 57/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.6674 - acc: 0.7988 - val_loss: 0.6074 - val_acc: 0.8252 Epoch 58/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.6685 - acc: 0.7970 - val_loss: 0.6074 - val_acc: 0.8252 Epoch 59/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.6702 - acc: 0.7953 - val_loss: 0.6074 - val_acc: 0.8252 Epoch 60/60 60000/60000 [==============================] - 1s 11us/step - loss: 0.6701 - acc: 0.7965 - val_loss: 0.6074 - val_acc: 0.8252 ```python # check on the variables that can show me the learning rate decay exponential_decay_model_history.history.keys() ``` dict_keys(['val_loss', 'val_acc', 'loss', 'acc', 'lr']) ```python fig, ax = plt.subplots(1, 1, figsize=(10,6)) ax.plot(exponential_decay_model_history.history['lr'] ,'r') #, label='learn rate') ax.set_xlabel(r'Epoch', fontsize=20) ax.set_ylabel(r'Learning Rate', fontsize=20) #ax.legend() ax.tick_params(labelsize=20) ``` ```python fig, ax = plt.subplots(1, 1, figsize=(10,6)) ax.plot(np.sqrt(exponential_decay_model_history.history['loss']), 'r', label='train') ax.plot(np.sqrt(exponential_decay_model_history.history['val_loss']), 'b' ,label='val') ax.set_xlabel(r'Epoch', fontsize=20) ax.set_ylabel(r'Loss', fontsize=20) ax.legend() ax.tick_params(labelsize=20) ``` ### Step 3 - Choosing an `optimizer` and a `loss function` When constructing a model and using it to make our predictions, for example to assign label scores to images ("cat", "plane", etc), we want to measure our success or failure by defining a "loss" function (or objective function). The goal of optimization is to efficiently calculate the parameters/weights that minimize this loss function. `keras` provides various types of [loss functions](https://github.com/keras-team/keras/blob/master/keras/losses.py). Sometimes the "loss" function measures the "distance". 
We can define this "distance" between two data points in various ways suitable to the problem or dataset. Distance - Euclidean - Manhattan - others such as Hamming which measures distances between strings, for example. The Hamming distance of "carolin" and "cathrin" is 3. Loss functions - MSE (for regression) - categorical cross-entropy (for classification) - binary cross entropy (for classification) ```python # build the model input_dim = x_train.shape[1] model = Sequential() model.add(Dense(64, activation=tf.nn.relu, kernel_initializer='uniform', input_dim = input_dim)) # fully-connected layer with 64 hidden units model.add(Dropout(0.1)) model.add(Dense(64, kernel_initializer='uniform', activation=tf.nn.relu)) model.add(Dense(num_classes, kernel_initializer='uniform', activation=tf.nn.softmax)) ``` ```python # defining the parameters for RMSprop (I used the keras defaults here) rms = RMSprop(lr=0.001, rho=0.9, epsilon=None, decay=0.0) model.compile(loss='categorical_crossentropy', optimizer=rms, metrics=['acc']) ``` ### Step 4 - Deciding on the `batch size` and `number of epochs` ```python %%time batch_size = input_dim epochs = 60 model_history = model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_test, y_test)) ``` Train on 60000 samples, validate on 10000 samples Epoch 1/60 60000/60000 [==============================] - 1s 14us/step - loss: 1.1320 - acc: 0.7067 - val_loss: 0.5628 - val_acc: 0.8237 Epoch 2/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.4831 - acc: 0.8570 - val_loss: 0.3674 - val_acc: 0.8934 Epoch 3/60 60000/60000 [==============================] - 1s 9us/step - loss: 0.3665 - acc: 0.8931 - val_loss: 0.3199 - val_acc: 0.9061 Epoch 4/60 60000/60000 [==============================] - 1s 9us/step - loss: 0.3100 - acc: 0.9092 - val_loss: 0.2664 - val_acc: 0.9233 Epoch 5/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.2699 - acc: 0.9206 - val_loss: 0.2295 - val_acc: 0.9326 Epoch 6/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.2391 - acc: 0.9305 - val_loss: 0.2104 - val_acc: 0.9362 Epoch 7/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.2115 - acc: 0.9383 - val_loss: 0.1864 - val_acc: 0.9459 Epoch 8/60 60000/60000 [==============================] - 1s 9us/step - loss: 0.1900 - acc: 0.9451 - val_loss: 0.1658 - val_acc: 0.9493 Epoch 9/60 60000/60000 [==============================] - 1s 9us/step - loss: 0.1714 - acc: 0.9492 - val_loss: 0.1497 - val_acc: 0.9538 Epoch 10/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.1565 - acc: 0.9539 - val_loss: 0.1404 - val_acc: 0.9591 Epoch 11/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.1443 - acc: 0.9569 - val_loss: 0.1305 - val_acc: 0.9616 Epoch 12/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.1334 - acc: 0.9596 - val_loss: 0.1224 - val_acc: 0.9628 Epoch 13/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.1257 - acc: 0.9627 - val_loss: 0.1133 - val_acc: 0.9660 Epoch 14/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.1169 - acc: 0.9652 - val_loss: 0.1116 - val_acc: 0.9674 Epoch 15/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.1091 - acc: 0.9675 - val_loss: 0.1104 - val_acc: 0.9670 Epoch 16/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.1051 - acc: 0.9689 - val_loss: 0.1030 - val_acc: 0.9692 Epoch 17/60 60000/60000 
[==============================] - 0s 8us/step - loss: 0.0978 - acc: 0.9697 - val_loss: 0.1044 - val_acc: 0.9686 Epoch 18/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.0929 - acc: 0.9718 - val_loss: 0.0996 - val_acc: 0.9689 Epoch 19/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.0882 - acc: 0.9738 - val_loss: 0.1035 - val_acc: 0.9695 Epoch 20/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.0850 - acc: 0.9737 - val_loss: 0.0941 - val_acc: 0.9717 Epoch 21/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.0803 - acc: 0.9751 - val_loss: 0.0953 - val_acc: 0.9715 Epoch 22/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.0793 - acc: 0.9762 - val_loss: 0.0898 - val_acc: 0.9729 Epoch 23/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.0747 - acc: 0.9775 - val_loss: 0.0901 - val_acc: 0.9732 Epoch 24/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.0718 - acc: 0.9778 - val_loss: 0.0948 - val_acc: 0.9720 Epoch 25/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.0697 - acc: 0.9781 - val_loss: 0.0908 - val_acc: 0.9727 Epoch 26/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.0668 - acc: 0.9794 - val_loss: 0.0917 - val_acc: 0.9726 Epoch 27/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.0648 - acc: 0.9800 - val_loss: 0.0895 - val_acc: 0.9737 Epoch 28/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.0637 - acc: 0.9798 - val_loss: 0.0868 - val_acc: 0.9728 Epoch 29/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.0598 - acc: 0.9813 - val_loss: 0.0883 - val_acc: 0.9736 Epoch 30/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.0570 - acc: 0.9820 - val_loss: 0.0869 - val_acc: 0.9741 Epoch 31/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.0555 - acc: 0.9825 - val_loss: 0.0896 - val_acc: 0.9732 Epoch 32/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.0554 - acc: 0.9827 - val_loss: 0.0843 - val_acc: 0.9743 Epoch 33/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.0512 - acc: 0.9836 - val_loss: 0.0843 - val_acc: 0.9746 Epoch 34/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.0509 - acc: 0.9835 - val_loss: 0.0868 - val_acc: 0.9753 Epoch 35/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.0491 - acc: 0.9842 - val_loss: 0.0841 - val_acc: 0.9755 Epoch 36/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.0471 - acc: 0.9848 - val_loss: 0.0887 - val_acc: 0.9728 Epoch 37/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.0466 - acc: 0.9850 - val_loss: 0.0876 - val_acc: 0.9756 Epoch 38/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.0456 - acc: 0.9856 - val_loss: 0.0833 - val_acc: 0.9769 Epoch 39/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.0431 - acc: 0.9866 - val_loss: 0.0869 - val_acc: 0.9759 Epoch 40/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.0413 - acc: 0.9869 - val_loss: 0.0926 - val_acc: 0.9743 Epoch 41/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.0401 - acc: 0.9872 - val_loss: 0.0851 - val_acc: 0.9756 Epoch 42/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.0401 - acc: 0.9876 - val_loss: 0.0856 - val_acc: 0.9764 Epoch 43/60 
60000/60000 [==============================] - 0s 8us/step - loss: 0.0392 - acc: 0.9870 - val_loss: 0.0861 - val_acc: 0.9771 Epoch 44/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.0396 - acc: 0.9870 - val_loss: 0.0918 - val_acc: 0.9756 Epoch 45/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.0371 - acc: 0.9883 - val_loss: 0.0866 - val_acc: 0.9766 Epoch 46/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.0374 - acc: 0.9883 - val_loss: 0.0888 - val_acc: 0.9748 Epoch 47/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.0376 - acc: 0.9878 - val_loss: 0.0850 - val_acc: 0.9761 Epoch 48/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.0351 - acc: 0.9890 - val_loss: 0.0848 - val_acc: 0.9777 Epoch 49/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.0361 - acc: 0.9884 - val_loss: 0.0850 - val_acc: 0.9771 Epoch 50/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.0341 - acc: 0.9887 - val_loss: 0.0889 - val_acc: 0.9769 Epoch 51/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.0322 - acc: 0.9897 - val_loss: 0.0882 - val_acc: 0.9771 Epoch 52/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.0322 - acc: 0.9895 - val_loss: 0.0892 - val_acc: 0.9762 Epoch 53/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.0313 - acc: 0.9892 - val_loss: 0.0916 - val_acc: 0.9771 Epoch 54/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.0300 - acc: 0.9897 - val_loss: 0.0913 - val_acc: 0.9772 Epoch 55/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.0306 - acc: 0.9902 - val_loss: 0.0904 - val_acc: 0.9763 Epoch 56/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.0307 - acc: 0.9900 - val_loss: 0.0910 - val_acc: 0.9777 Epoch 57/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.0299 - acc: 0.9901 - val_loss: 0.0918 - val_acc: 0.9763 Epoch 58/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.0299 - acc: 0.9906 - val_loss: 0.0914 - val_acc: 0.9778 Epoch 59/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.0284 - acc: 0.9909 - val_loss: 0.0907 - val_acc: 0.9769 Epoch 60/60 60000/60000 [==============================] - 0s 8us/step - loss: 0.0283 - acc: 0.9908 - val_loss: 0.0962 - val_acc: 0.9761 CPU times: user 1min 17s, sys: 7.4 s, total: 1min 24s Wall time: 29.3 s ```python score = model.evaluate(x_test, y_test, verbose=0) print('Test loss:', score[0]) print('Test accuracy:', score[1]) ``` Test loss: 0.09620895183624088 Test accuracy: 0.9761 ```python fig, ax = plt.subplots(1, 1, figsize=(10,6)) ax.plot(np.sqrt(model_history.history['acc']), 'r', label='train_acc') ax.plot(np.sqrt(model_history.history['val_acc']), 'b' ,label='val_acc') ax.set_xlabel(r'Epoch', fontsize=20) ax.set_ylabel(r'Accuracy', fontsize=20) ax.legend() ax.tick_params(labelsize=20) ``` ```python fig, ax = plt.subplots(1, 1, figsize=(10,6)) ax.plot(np.sqrt(model_history.history['loss']), 'r', label='train') ax.plot(np.sqrt(model_history.history['val_loss']), 'b' ,label='val') ax.set_xlabel(r'Epoch', fontsize=20) ax.set_ylabel(r'Loss', fontsize=20) ax.legend() ax.tick_params(labelsize=20) ``` ### Step 5 - Random restarts This method does not seem to have an implementation in `keras`. Develop your own function for this using `keras.callbacks.LearningRateScheduler`. 
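As a starting point, here is a minimal sketch (not code from the original notebook) of what such a function could look like: it reuses `LearningRateScheduler` so that the exponentially decayed learning rate is reset to its starting value every `restart_every` epochs, which mimics a restart of the schedule. `restart_every` is an illustrative name introduced here; `learning_rate` and `decay_rate` are assumed to be the values defined earlier in this notebook.

```python
# minimal sketch of a restart-style schedule built on LearningRateScheduler;
# `restart_every` is a name introduced for this example, while `learning_rate`
# and `decay_rate` are assumed to be the values defined earlier in the notebook
restart_every = 20  # reset the schedule every 20 epochs (illustrative choice)

def restart_decay(epoch):
    # position of this epoch within the current restart cycle
    cycle_epoch = epoch % restart_every
    # same exponential decay as exp_decay above, but restarted each cycle
    return learning_rate * np.exp(-decay_rate * cycle_epoch)

lr_restart = LearningRateScheduler(restart_decay)
# pass [lr_restart] (optionally together with History()) as the callbacks
# list to .fit(), just as with the exp_decay schedule above
```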
You can refer back to how we used it to set a custom learning rate. ### Tuning the Hyperparameters using Cross Validation Now instead of trying different values by hand, we will use GridSearchCV from Scikit-Learn to try out several values for our hyperparameters and compare the results. To do cross-validation with `keras` we will use the wrappers for the Scikit-Learn API. They provide a way to use Sequential Keras models (single-input only) as part of your Scikit-Learn workflow. There are two wrappers available: `keras.wrappers.scikit_learn.KerasClassifier(build_fn=None, **sk_params)`, which implements the Scikit-Learn classifier interface, `keras.wrappers.scikit_learn.KerasRegressor(build_fn=None, **sk_params)`, which implements the Scikit-Learn regressor interface. ```python import numpy from sklearn.model_selection import GridSearchCV from keras.wrappers.scikit_learn import KerasClassifier ``` #### Trying different weight initializations ```python # let's create a function that creates the model (required for KerasClassifier) # while accepting the hyperparameters we want to tune # we also pass some default values such as optimizer='rmsprop' def create_model(init_mode='uniform'): # define model model = Sequential() model.add(Dense(64, kernel_initializer=init_mode, activation=tf.nn.relu, input_dim=784)) model.add(Dropout(0.1)) model.add(Dense(64, kernel_initializer=init_mode, activation=tf.nn.relu)) model.add(Dense(10, kernel_initializer=init_mode, activation=tf.nn.softmax)) # compile model model.compile(loss='categorical_crossentropy', optimizer=RMSprop(), metrics=['accuracy']) return model ``` ```python %%time seed = 7 numpy.random.seed(seed) batch_size = 128 epochs = 10 model_CV = KerasClassifier(build_fn=create_model, epochs=epochs, batch_size=batch_size, verbose=1) # define the grid search parameters init_mode = ['uniform', 'lecun_uniform', 'normal', 'zero', 'glorot_normal', 'glorot_uniform', 'he_normal', 'he_uniform'] param_grid = dict(init_mode=init_mode) grid = GridSearchCV(estimator=model_CV, param_grid=param_grid, n_jobs=-1, cv=3) grid_result = grid.fit(x_train, y_train) ``` Epoch 1/10 60000/60000 [==============================] - 1s 21us/step - loss: 0.4118 - acc: 0.8824 Epoch 2/10 60000/60000 [==============================] - 1s 15us/step - loss: 0.1936 - acc: 0.9437 Epoch 3/10 60000/60000 [==============================] - 1s 14us/step - loss: 0.1482 - acc: 0.9553 Epoch 4/10 60000/60000 [==============================] - 1s 14us/step - loss: 0.1225 - acc: 0.9631 Epoch 5/10 60000/60000 [==============================] - 1s 14us/step - loss: 0.1064 - acc: 0.9676 Epoch 6/10 60000/60000 [==============================] - 1s 14us/step - loss: 0.0944 - acc: 0.9710 Epoch 7/10 60000/60000 [==============================] - 1s 14us/step - loss: 0.0876 - acc: 0.9732 Epoch 8/10 60000/60000 [==============================] - 1s 15us/step - loss: 0.0809 - acc: 0.9745 Epoch 9/10 60000/60000 [==============================] - 1s 14us/step - loss: 0.0741 - acc: 0.9775 Epoch 10/10 60000/60000 [==============================] - 1s 15us/step - loss: 0.0709 - acc: 0.9783 CPU times: user 21 s, sys: 3.56 s, total: 24.5 s Wall time: 1min 20s ```python # print results print(f'Best Accuracy for {grid_result.best_score_} using {grid_result.best_params_}') means = grid_result.cv_results_['mean_test_score'] stds = grid_result.cv_results_['std_test_score'] params = grid_result.cv_results_['params'] for mean, stdev, param in zip(means, stds, params): print(f' mean={mean:.4}, std={stdev:.4} using 
{param}')
```

Best Accuracy for 0.9689333333333333 using {'init_mode': 'lecun_uniform'}
mean=0.9647, std=0.001438 using {'init_mode': 'uniform'}
mean=0.9689, std=0.001044 using {'init_mode': 'lecun_uniform'}
mean=0.9651, std=0.001515 using {'init_mode': 'normal'}
mean=0.1124, std=0.002416 using {'init_mode': 'zero'}
mean=0.9657, std=0.0005104 using {'init_mode': 'glorot_normal'}
mean=0.9687, std=0.0008436 using {'init_mode': 'glorot_uniform'}
mean=0.9681, std=0.002145 using {'init_mode': 'he_normal'}
mean=0.9685, std=0.001952 using {'init_mode': 'he_uniform'}

### Save Your Neural Network Model to JSON

The Hierarchical Data Format (HDF5) is a data storage format for storing large arrays of data, including the values of the weights in a neural network. You can install the HDF5 Python module with:

    pip install h5py

Keras lets you describe and save any model using the JSON format.

```python
from keras.models import model_from_json

# serialize model to JSON
model_json = model.to_json()
with open("model.json", "w") as json_file:
    json_file.write(model_json)

# save weights to HDF5
model.save_weights("model.h5")
print("Model saved")

# when you want to retrieve the model: load json and create model
json_file = open('model.json', 'r')
saved_model = json_file.read()
# close the file as good practice
json_file.close()
loaded_model = model_from_json(saved_model)  # renamed so we don't shadow the imported function
# load weights into new model
loaded_model.load_weights("model.h5")
print("Model loaded")
```

Model saved
Model loaded

### Cross-validation with more than one hyperparameter

We can do cross-validation over more than one hyperparameter simultaneously, effectively trying out combinations of them.

**Note: Cross-validation in neural networks is computationally expensive**. Think before you experiment! Multiply the number of values you are trying for each hyperparameter to see how many combinations there are; each combination is then evaluated with cv-fold cross-validation (cv is a parameter we choose). In the grid below, 2 initializers × 2 batch sizes × 2 epoch settings give 8 combinations, and with cv=3 that means 24 training runs.

For example, we can choose to search for different values of:

- batch size,
- number of epochs and
- initialization mode.

The choices are specified in a dictionary and passed to GridSearchCV. We will perform a GridSearch for `batch size`, `number of epochs` and `initializer` combined.

```python
# repeat some of the initial values here so we make sure they were not changed
input_dim = x_train.shape[1]
num_classes = 10

# let's create a function that creates the model (required for KerasClassifier)
# while accepting the hyperparameters we want to tune
# we also pass some default values such as optimizer='rmsprop'
def create_model_2(optimizer='rmsprop', init='glorot_uniform'):
    model = Sequential()
    model.add(Dense(64, input_dim=input_dim, kernel_initializer=init, activation='relu'))
    model.add(Dropout(0.1))
    model.add(Dense(64, kernel_initializer=init, activation=tf.nn.relu))
    model.add(Dense(num_classes, kernel_initializer=init, activation=tf.nn.softmax))
    # compile model
    model.compile(loss='categorical_crossentropy',
                  optimizer=optimizer,
                  metrics=['accuracy'])
    return model
```

```python
%%time
# fix random seed for reproducibility (this might work or might not work
# depending on each library's implementation)
seed = 7
numpy.random.seed(seed)
# create the sklearn model for the network
model_init_batch_epoch_CV = KerasClassifier(build_fn=create_model_2, verbose=1)
# we choose the initializers that came at the top in our previous cross-validation!!
init_mode = ['glorot_uniform', 'uniform'] batches = [128, 512] epochs = [10, 20] # grid search for initializer, batch size and number of epochs param_grid = dict(epochs=epochs, batch_size=batches, init=init_mode) grid = GridSearchCV(estimator=model_init_batch_epoch_CV, param_grid=param_grid, cv=3) grid_result = grid.fit(x_train, y_train) ``` Epoch 1/10 40000/40000 [==============================] - 1s 21us/step - loss: 0.4801 - acc: 0.8601 Epoch 2/10 40000/40000 [==============================] - 1s 15us/step - loss: 0.2309 - acc: 0.9310 Epoch 3/10 40000/40000 [==============================] - 1s 14us/step - loss: 0.1744 - acc: 0.9479 Epoch 4/10 40000/40000 [==============================] - 1s 15us/step - loss: 0.1422 - acc: 0.9575 Epoch 5/10 40000/40000 [==============================] - 1s 14us/step - loss: 0.1214 - acc: 0.9625 Epoch 6/10 40000/40000 [==============================] - 1s 15us/step - loss: 0.1081 - acc: 0.9675 Epoch 7/10 40000/40000 [==============================] - 1s 15us/step - loss: 0.0974 - acc: 0.9693 Epoch 8/10 40000/40000 [==============================] - 1s 15us/step - loss: 0.0874 - acc: 0.9730 Epoch 9/10 40000/40000 [==============================] - 1s 14us/step - loss: 0.0800 - acc: 0.9750 Epoch 10/10 40000/40000 [==============================] - 1s 14us/step - loss: 0.0750 - acc: 0.9765 20000/20000 [==============================] - 0s 10us/step 40000/40000 [==============================] - 0s 6us/step Epoch 1/10 40000/40000 [==============================] - 1s 22us/step - loss: 0.4746 - acc: 0.8656 Epoch 2/10 40000/40000 [==============================] - 1s 14us/step - loss: 0.2264 - acc: 0.9336 Epoch 3/10 40000/40000 [==============================] - 1s 15us/step - loss: 0.1734 - acc: 0.9487 Epoch 4/10 40000/40000 [==============================] - 1s 14us/step - loss: 0.1436 - acc: 0.9568 Epoch 5/10 40000/40000 [==============================] - 1s 15us/step - loss: 0.1256 - acc: 0.9614 Epoch 6/10 40000/40000 [==============================] - 1s 15us/step - loss: 0.1104 - acc: 0.9660 Epoch 7/10 40000/40000 [==============================] - 1s 15us/step - loss: 0.0973 - acc: 0.9707 Epoch 8/10 40000/40000 [==============================] - 1s 15us/step - loss: 0.0870 - acc: 0.9733 Epoch 9/10 40000/40000 [==============================] - 1s 15us/step - loss: 0.0818 - acc: 0.9748 Epoch 10/10 40000/40000 [==============================] - 1s 14us/step - loss: 0.0730 - acc: 0.9770 20000/20000 [==============================] - 0s 10us/step 40000/40000 [==============================] - 0s 6us/step Epoch 1/10 40000/40000 [==============================] - 1s 23us/step - loss: 0.4639 - acc: 0.8671 Epoch 2/10 40000/40000 [==============================] - 1s 15us/step - loss: 0.2208 - acc: 0.9344 Epoch 3/10 40000/40000 [==============================] - 1s 15us/step - loss: 0.1693 - acc: 0.9491 Epoch 4/10 40000/40000 [==============================] - 1s 15us/step - loss: 0.1388 - acc: 0.9580 Epoch 5/10 40000/40000 [==============================] - 1s 15us/step - loss: 0.1206 - acc: 0.9634 Epoch 6/10 40000/40000 [==============================] - 1s 15us/step - loss: 0.1062 - acc: 0.9678 Epoch 7/10 40000/40000 [==============================] - 1s 15us/step - loss: 0.0956 - acc: 0.9711 Epoch 8/10 40000/40000 [==============================] - 1s 15us/step - loss: 0.0870 - acc: 0.9728 Epoch 9/10 40000/40000 [==============================] - 1s 15us/step - loss: 0.0794 - acc: 0.9750 Epoch 10/10 40000/40000 [==============================] - 1s 
14us/step - loss: 0.0748 - acc: 0.9774 20000/20000 [==============================] - 0s 11us/step 40000/40000 [==============================] - 0s 7us/step Epoch 1/10 40000/40000 [==============================] - 1s 23us/step - loss: 0.7144 - acc: 0.7894 Epoch 2/10 40000/40000 [==============================] - 1s 15us/step - loss: 0.3246 - acc: 0.9045 Epoch 3/10 40000/40000 [==============================] - 1s 15us/step - loss: 0.2482 - acc: 0.9268 Epoch 4/10 40000/40000 [==============================] - 1s 15us/step - loss: 0.2005 - acc: 0.9407 Epoch 5/10 40000/40000 [==============================] - 1s 15us/step - loss: 0.1673 - acc: 0.9485 Epoch 6/10 40000/40000 [==============================] - 1s 15us/step - loss: 0.1462 - acc: 0.9559 Epoch 7/10 40000/40000 [==============================] - 1s 15us/step - loss: 0.1305 - acc: 0.9604 Epoch 8/10 40000/40000 [==============================] - 1s 15us/step - loss: 0.1166 - acc: 0.9643 Epoch 9/10 40000/40000 [==============================] - 1s 15us/step - loss: 0.1079 - acc: 0.9675 Epoch 10/10 40000/40000 [==============================] - 1s 15us/step - loss: 0.0983 - acc: 0.9695 20000/20000 [==============================] - 0s 12us/step 40000/40000 [==============================] - 0s 6us/step Epoch 1/10 40000/40000 [==============================] - 1s 24us/step - loss: 0.6894 - acc: 0.7944 Epoch 2/10 40000/40000 [==============================] - 1s 15us/step - loss: 0.3171 - acc: 0.9061 Epoch 3/10 40000/40000 [==============================] - 1s 15us/step - loss: 0.2358 - acc: 0.9312 Epoch 4/10 40000/40000 [==============================] - 1s 15us/step - loss: 0.1911 - acc: 0.9422 Epoch 5/10 40000/40000 [==============================] - 1s 15us/step - loss: 0.1595 - acc: 0.9526 Epoch 6/10 40000/40000 [==============================] - 1s 15us/step - loss: 0.1401 - acc: 0.9579 Epoch 7/10 40000/40000 [==============================] - 1s 15us/step - loss: 0.1230 - acc: 0.9636 Epoch 8/10 40000/40000 [==============================] - 1s 15us/step - loss: 0.1096 - acc: 0.9672 Epoch 9/10 40000/40000 [==============================] - 1s 15us/step - loss: 0.1027 - acc: 0.9692 Epoch 10/10 40000/40000 [==============================] - 1s 15us/step - loss: 0.0936 - acc: 0.9718 20000/20000 [==============================] - 0s 12us/step 40000/40000 [==============================] - 0s 7us/step Epoch 1/10 40000/40000 [==============================] - 1s 24us/step - loss: 0.7028 - acc: 0.7976 Epoch 2/10 40000/40000 [==============================] - 1s 15us/step - loss: 0.3189 - acc: 0.9055 Epoch 3/10 40000/40000 [==============================] - 1s 15us/step - loss: 0.2390 - acc: 0.9307 Epoch 4/10 40000/40000 [==============================] - 1s 15us/step - loss: 0.1910 - acc: 0.9435 Epoch 5/10 40000/40000 [==============================] - 1s 15us/step - loss: 0.1595 - acc: 0.9528 Epoch 6/10 40000/40000 [==============================] - 1s 15us/step - loss: 0.1376 - acc: 0.9583 Epoch 7/10 40000/40000 [==============================] - 1s 15us/step - loss: 0.1213 - acc: 0.9629 Epoch 8/10 40000/40000 [==============================] - 1s 15us/step - loss: 0.1100 - acc: 0.9664 Epoch 9/10 40000/40000 [==============================] - 1s 15us/step - loss: 0.0987 - acc: 0.9697 Epoch 10/10 40000/40000 [==============================] - 1s 15us/step - loss: 0.0932 - acc: 0.9714 20000/20000 [==============================] - 0s 13us/step 40000/40000 [==============================] - 0s 7us/step Epoch 1/20 40000/40000 
[==============================] - 1s 25us/step - loss: 0.4938 - acc: 0.8589 Epoch 2/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.2286 - acc: 0.9324 Epoch 3/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.1751 - acc: 0.9470 Epoch 4/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.1458 - acc: 0.9553 Epoch 5/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.1237 - acc: 0.9620 Epoch 6/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.1106 - acc: 0.9669 Epoch 7/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0998 - acc: 0.9696 Epoch 8/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0898 - acc: 0.9729 Epoch 9/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0799 - acc: 0.9749 Epoch 10/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0746 - acc: 0.9769 Epoch 11/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0688 - acc: 0.9783 Epoch 12/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0655 - acc: 0.9792 Epoch 13/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0619 - acc: 0.9801 Epoch 14/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0574 - acc: 0.9814 Epoch 15/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0549 - acc: 0.9836 Epoch 16/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0512 - acc: 0.9837 Epoch 17/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0464 - acc: 0.9855 Epoch 18/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0466 - acc: 0.9850 Epoch 19/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0431 - acc: 0.9863 Epoch 20/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0430 - acc: 0.9861 20000/20000 [==============================] - 0s 13us/step 40000/40000 [==============================] - 0s 7us/step Epoch 1/20 40000/40000 [==============================] - 1s 25us/step - loss: 0.4956 - acc: 0.8584 Epoch 2/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.2286 - acc: 0.9321 Epoch 3/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.1746 - acc: 0.9474 Epoch 4/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.1438 - acc: 0.9574 Epoch 5/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.1232 - acc: 0.9634 Epoch 6/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.1104 - acc: 0.9665 Epoch 7/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0986 - acc: 0.9696 Epoch 8/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0886 - acc: 0.9723 Epoch 9/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0828 - acc: 0.9741 Epoch 10/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0763 - acc: 0.9768 Epoch 11/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0679 - acc: 0.9781 Epoch 12/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0656 - acc: 0.9788 Epoch 13/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0601 - acc: 0.9810 Epoch 14/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0571 - acc: 0.9817 Epoch 15/20 40000/40000 
[==============================] - 1s 15us/step - loss: 0.0517 - acc: 0.9832 Epoch 16/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0509 - acc: 0.9834 Epoch 17/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0478 - acc: 0.9850 Epoch 18/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0461 - acc: 0.9852 Epoch 19/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0442 - acc: 0.9854 Epoch 20/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0427 - acc: 0.9866 20000/20000 [==============================] - 0s 14us/step 40000/40000 [==============================] - 0s 7us/step Epoch 1/20 40000/40000 [==============================] - 1s 27us/step - loss: 0.4694 - acc: 0.8670 Epoch 2/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.2196 - acc: 0.9336 Epoch 3/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.1681 - acc: 0.9495 Epoch 4/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.1391 - acc: 0.9589 Epoch 5/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.1200 - acc: 0.9630 Epoch 6/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.1067 - acc: 0.9671 Epoch 7/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0968 - acc: 0.9713 Epoch 8/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0880 - acc: 0.9730 Epoch 9/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0815 - acc: 0.9754 Epoch 10/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0742 - acc: 0.9771 Epoch 11/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0711 - acc: 0.9778 Epoch 12/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0643 - acc: 0.9806 Epoch 13/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0613 - acc: 0.9815 Epoch 14/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0561 - acc: 0.9818 Epoch 15/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0534 - acc: 0.9832 Epoch 16/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0528 - acc: 0.9832 Epoch 17/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0479 - acc: 0.9848 Epoch 18/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0486 - acc: 0.9844 Epoch 19/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0438 - acc: 0.9861 Epoch 20/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0427 - acc: 0.9861 20000/20000 [==============================] - 0s 14us/step 40000/40000 [==============================] - 0s 7us/step Epoch 1/20 40000/40000 [==============================] - 1s 27us/step - loss: 0.6830 - acc: 0.8003 Epoch 2/20 40000/40000 [==============================] - 1s 16us/step - loss: 0.3234 - acc: 0.9044 Epoch 3/20 40000/40000 [==============================] - 1s 16us/step - loss: 0.2456 - acc: 0.9269 Epoch 4/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.1982 - acc: 0.9407 Epoch 5/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.1627 - acc: 0.9506 Epoch 6/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.1422 - acc: 0.9561 Epoch 7/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.1251 - acc: 0.9618 Epoch 8/20 40000/40000 
[==============================] - 1s 15us/step - loss: 0.1134 - acc: 0.9650 Epoch 9/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.1037 - acc: 0.9677 Epoch 10/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0957 - acc: 0.9705 Epoch 11/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0891 - acc: 0.9730 Epoch 12/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0832 - acc: 0.9745 Epoch 13/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0760 - acc: 0.9768 Epoch 14/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0727 - acc: 0.9778 Epoch 15/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0681 - acc: 0.9782 Epoch 16/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0628 - acc: 0.9803 Epoch 17/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0613 - acc: 0.9801 Epoch 18/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0572 - acc: 0.9824 Epoch 19/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0549 - acc: 0.9821 Epoch 20/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0521 - acc: 0.9834 20000/20000 [==============================] - 0s 15us/step 40000/40000 [==============================] - 0s 7us/step Epoch 1/20 40000/40000 [==============================] - 1s 28us/step - loss: 0.6742 - acc: 0.8034 Epoch 2/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.3142 - acc: 0.9081 Epoch 3/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.2365 - acc: 0.9306 Epoch 4/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.1873 - acc: 0.9453 Epoch 5/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.1563 - acc: 0.9531 Epoch 6/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.1373 - acc: 0.9580 Epoch 7/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.1212 - acc: 0.9640 Epoch 8/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.1084 - acc: 0.9665 Epoch 9/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.1014 - acc: 0.9695 Epoch 10/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0917 - acc: 0.9732 Epoch 11/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0838 - acc: 0.9739 Epoch 12/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0788 - acc: 0.9749 Epoch 13/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0720 - acc: 0.9779 Epoch 14/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0685 - acc: 0.9793 Epoch 15/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0666 - acc: 0.9797 Epoch 16/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0618 - acc: 0.9807 Epoch 17/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0578 - acc: 0.9819 Epoch 18/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0557 - acc: 0.9825 Epoch 19/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0535 - acc: 0.9831 Epoch 20/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0482 - acc: 0.9848 20000/20000 [==============================] - 0s 15us/step 40000/40000 [==============================] - 0s 7us/step Epoch 1/20 
40000/40000 [==============================] - 1s 29us/step - loss: 0.6865 - acc: 0.8001 Epoch 2/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.3089 - acc: 0.9106 Epoch 3/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.2292 - acc: 0.9304 Epoch 4/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.1845 - acc: 0.9435 Epoch 5/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.1562 - acc: 0.9527 Epoch 6/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.1371 - acc: 0.9584 Epoch 7/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.1206 - acc: 0.9629 Epoch 8/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.1098 - acc: 0.9667 Epoch 9/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0990 - acc: 0.9699 Epoch 10/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0912 - acc: 0.9718 Epoch 11/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0863 - acc: 0.9740 Epoch 12/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0792 - acc: 0.9757 Epoch 13/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0721 - acc: 0.9782 Epoch 14/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0696 - acc: 0.9782 Epoch 15/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0661 - acc: 0.9796 Epoch 16/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0621 - acc: 0.9810 Epoch 17/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0588 - acc: 0.9813 Epoch 18/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0541 - acc: 0.9831 Epoch 19/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0513 - acc: 0.9834 Epoch 20/20 40000/40000 [==============================] - 1s 15us/step - loss: 0.0514 - acc: 0.9835 20000/20000 [==============================] - 0s 16us/step 40000/40000 [==============================] - 0s 7us/step Epoch 1/10 40000/40000 [==============================] - 1s 22us/step - loss: 0.7656 - acc: 0.7926 Epoch 2/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.3345 - acc: 0.9021 Epoch 3/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.2633 - acc: 0.9225 Epoch 4/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.2207 - acc: 0.9357 Epoch 5/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.1877 - acc: 0.9450 Epoch 6/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.1684 - acc: 0.9500 Epoch 7/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.1517 - acc: 0.9552 Epoch 8/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.1358 - acc: 0.9587 Epoch 9/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.1236 - acc: 0.9627 Epoch 10/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.1153 - acc: 0.9661 20000/20000 [==============================] - 0s 14us/step 40000/40000 [==============================] - 0s 4us/step Epoch 1/10 40000/40000 [==============================] - 1s 22us/step - loss: 0.7640 - acc: 0.7905 Epoch 2/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.3207 - acc: 0.9055 Epoch 3/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.2455 - acc: 0.9276 Epoch 4/10 40000/40000 
[==============================] - 0s 8us/step - loss: 0.2050 - acc: 0.9392 Epoch 5/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.1755 - acc: 0.9484 Epoch 6/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.1573 - acc: 0.9524 Epoch 7/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.1385 - acc: 0.9594 Epoch 8/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.1247 - acc: 0.9634 Epoch 9/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.1181 - acc: 0.9644 Epoch 10/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.1074 - acc: 0.9679 20000/20000 [==============================] - 0s 15us/step 40000/40000 [==============================] - 0s 4us/step Epoch 1/10 40000/40000 [==============================] - 1s 23us/step - loss: 0.7858 - acc: 0.7867 Epoch 2/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.3358 - acc: 0.9017 Epoch 3/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.2603 - acc: 0.9250 Epoch 4/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.2160 - acc: 0.9367 Epoch 5/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.1879 - acc: 0.9435 Epoch 6/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.1663 - acc: 0.9513 Epoch 7/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.1493 - acc: 0.9552 Epoch 8/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.1363 - acc: 0.9591 Epoch 9/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.1262 - acc: 0.9609 Epoch 10/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.1147 - acc: 0.9653 20000/20000 [==============================] - 0s 16us/step 40000/40000 [==============================] - 0s 4us/step Epoch 1/10 40000/40000 [==============================] - 1s 23us/step - loss: 1.1193 - acc: 0.6932 Epoch 2/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.4829 - acc: 0.8573 Epoch 3/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.3785 - acc: 0.8885 Epoch 4/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.3208 - acc: 0.9062 Epoch 5/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.2811 - acc: 0.9167 Epoch 6/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.2472 - acc: 0.9263 Epoch 7/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.2200 - acc: 0.9362 Epoch 8/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.1963 - acc: 0.9422 Epoch 9/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.1772 - acc: 0.9475 Epoch 10/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.1611 - acc: 0.9519 20000/20000 [==============================] - 0s 16us/step 40000/40000 [==============================] - 0s 4us/step Epoch 1/10 40000/40000 [==============================] - 1s 24us/step - loss: 1.1189 - acc: 0.6828 Epoch 2/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.4867 - acc: 0.8556 Epoch 3/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.3713 - acc: 0.8919 Epoch 4/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.3125 - acc: 0.9090 Epoch 5/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.2748 - acc: 0.9214 Epoch 6/10 40000/40000 
[==============================] - 0s 8us/step - loss: 0.2443 - acc: 0.9285 Epoch 7/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.2141 - acc: 0.9383 Epoch 8/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.1919 - acc: 0.9445 Epoch 9/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.1767 - acc: 0.9485 Epoch 10/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.1617 - acc: 0.9528 20000/20000 [==============================] - 0s 17us/step 40000/40000 [==============================] - 0s 4us/step Epoch 1/10 40000/40000 [==============================] - 1s 25us/step - loss: 1.0997 - acc: 0.7128 Epoch 2/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.4652 - acc: 0.8638 Epoch 3/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.3716 - acc: 0.8919 Epoch 4/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.3165 - acc: 0.9072 Epoch 5/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.2735 - acc: 0.9199 Epoch 6/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.2381 - acc: 0.9299 Epoch 7/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.2094 - acc: 0.9380 Epoch 8/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.1863 - acc: 0.9442 Epoch 9/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.1693 - acc: 0.9491 Epoch 10/10 40000/40000 [==============================] - 0s 8us/step - loss: 0.1549 - acc: 0.9538 20000/20000 [==============================] - 0s 17us/step 40000/40000 [==============================] - 0s 4us/step Epoch 1/20 40000/40000 [==============================] - 1s 26us/step - loss: 0.7406 - acc: 0.7936 Epoch 2/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.3331 - acc: 0.9019 Epoch 3/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.2623 - acc: 0.9229 Epoch 4/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.2174 - acc: 0.9372 Epoch 5/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.1876 - acc: 0.9436 Epoch 6/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.1659 - acc: 0.9498 Epoch 7/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.1493 - acc: 0.9559 Epoch 8/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.1358 - acc: 0.9602 Epoch 9/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.1244 - acc: 0.9624 Epoch 10/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.1135 - acc: 0.9655 Epoch 11/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.1063 - acc: 0.9683 Epoch 12/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.0963 - acc: 0.9711 Epoch 13/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.0924 - acc: 0.9720 Epoch 14/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.0897 - acc: 0.9732 Epoch 15/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.0811 - acc: 0.9752 Epoch 16/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.0798 - acc: 0.9750 Epoch 17/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.0733 - acc: 0.9770 Epoch 18/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.0695 - acc: 0.9789 Epoch 19/20 40000/40000 [==============================] - 
0s 8us/step - loss: 0.0652 - acc: 0.9798 Epoch 20/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.0622 - acc: 0.9815 20000/20000 [==============================] - 0s 18us/step 40000/40000 [==============================] - 0s 4us/step Epoch 1/20 40000/40000 [==============================] - 1s 27us/step - loss: 0.7349 - acc: 0.8012 Epoch 2/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.3201 - acc: 0.9060 Epoch 3/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.2541 - acc: 0.9257 Epoch 4/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.2149 - acc: 0.9375 Epoch 5/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.1894 - acc: 0.9443 Epoch 6/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.1694 - acc: 0.9496 Epoch 7/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.1552 - acc: 0.9541 Epoch 8/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.1421 - acc: 0.9580 Epoch 9/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.1307 - acc: 0.9605 Epoch 10/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.1200 - acc: 0.9642 Epoch 11/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.1110 - acc: 0.9663 Epoch 12/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.1041 - acc: 0.9681 Epoch 13/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.0971 - acc: 0.9698 Epoch 14/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.0943 - acc: 0.9713 Epoch 15/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.0864 - acc: 0.9733 Epoch 16/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.0809 - acc: 0.9748 Epoch 17/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.0776 - acc: 0.9762 Epoch 18/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.0745 - acc: 0.9762 Epoch 19/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.0712 - acc: 0.9777 Epoch 20/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.0654 - acc: 0.9793 20000/20000 [==============================] - 0s 18us/step 40000/40000 [==============================] - 0s 4us/step Epoch 1/20 40000/40000 [==============================] - 1s 27us/step - loss: 0.7435 - acc: 0.8001 Epoch 2/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.3335 - acc: 0.9029 Epoch 3/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.2555 - acc: 0.9254 Epoch 4/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.2168 - acc: 0.9359 Epoch 5/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.1872 - acc: 0.9458 Epoch 6/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.1640 - acc: 0.9514 Epoch 7/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.1452 - acc: 0.9578 Epoch 8/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.1313 - acc: 0.9607 Epoch 9/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.1193 - acc: 0.9644 Epoch 10/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.1104 - acc: 0.9670 Epoch 11/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.1024 - acc: 0.9691 Epoch 12/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.0955 - 
acc: 0.9715 Epoch 13/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.0881 - acc: 0.9734 Epoch 14/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.0815 - acc: 0.9756 Epoch 15/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.0772 - acc: 0.9764 Epoch 16/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.0745 - acc: 0.9780 Epoch 17/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.0681 - acc: 0.9788 Epoch 18/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.0649 - acc: 0.9806 Epoch 19/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.0640 - acc: 0.9800 Epoch 20/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.0604 - acc: 0.9813 20000/20000 [==============================] - 0s 19us/step 40000/40000 [==============================] - 0s 4us/step Epoch 1/20 40000/40000 [==============================] - 1s 28us/step - loss: 1.1126 - acc: 0.6943 Epoch 2/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.4792 - acc: 0.8568 Epoch 3/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.3712 - acc: 0.8910 Epoch 4/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.3155 - acc: 0.9068 Epoch 5/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.2746 - acc: 0.9196 Epoch 6/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.2392 - acc: 0.9293 Epoch 7/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.2126 - acc: 0.9363 Epoch 8/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.1923 - acc: 0.9437 Epoch 9/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.1706 - acc: 0.9489 Epoch 10/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.1591 - acc: 0.9530 Epoch 11/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.1462 - acc: 0.9560 Epoch 12/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.1342 - acc: 0.9593 Epoch 13/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.1272 - acc: 0.9609 Epoch 14/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.1188 - acc: 0.9639 Epoch 15/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.1126 - acc: 0.9660 Epoch 16/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.1072 - acc: 0.9673 Epoch 17/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.1016 - acc: 0.9692 Epoch 18/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.0971 - acc: 0.9697 Epoch 19/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.0901 - acc: 0.9719 Epoch 20/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.0869 - acc: 0.9729 20000/20000 [==============================] - 0s 19us/step 40000/40000 [==============================] - 0s 4us/step Epoch 1/20 40000/40000 [==============================] - 1s 28us/step - loss: 1.0906 - acc: 0.7216 Epoch 2/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.4527 - acc: 0.8658 Epoch 3/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.3559 - acc: 0.8956 Epoch 4/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.3055 - acc: 0.9109 Epoch 5/20 40000/40000 [==============================] - 0s 8us/step - loss: 0.2642 - acc: 0.9219 Epoch 6/20 
    [Training log truncated: the remaining grid-search fits run for 20 epochs each on 40,000 training samples (final training accuracies of roughly 0.9760 and 0.9748), followed by a final 20-epoch fit on the full 60,000 samples reaching a training accuracy of roughly 0.9861.]

    CPU times: user 7min 56s, sys: 1min 6s, total: 9min 3s
    Wall time: 3min 43s


```python
# print results
print(f'Best Accuracy for {grid_result.best_score_:.4} using {grid_result.best_params_}')
means = grid_result.cv_results_['mean_test_score']
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
    print(f'mean={mean:.4}, std={stdev:.4} using {param}')
```

    Best Accuracy for 0.9712 using {'batch_size': 128, 'epochs': 20, 'init': 'glorot_uniform'}
    mean=0.9687, std=0.002174 using {'batch_size': 128, 'epochs': 10, 'init': 'glorot_uniform'}
    mean=0.966, std=0.000827 using {'batch_size': 128, 'epochs': 10, 'init': 'uniform'}
    mean=0.9712, std=0.0006276 using {'batch_size': 128, 'epochs': 20, 'init': 'glorot_uniform'}
    mean=0.97, std=0.001214 using {'batch_size': 128, 'epochs': 20, 'init': 'uniform'}
    mean=0.9594, std=0.001476 using {'batch_size': 512, 'epochs': 10, 'init': 'glorot_uniform'}
    mean=0.9516, std=0.003239 using {'batch_size': 512, 'epochs': 10, 'init': 'uniform'}
    mean=0.9684, std=0.003607 using {'batch_size': 512, 'epochs': 20, 'init': 'glorot_uniform'}
    mean=0.9633, std=0.0007962 using {'batch_size': 512, 'epochs': 20, 'init': 'uniform'}
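The grid-search results can also be inspected as a table. The following is a small sketch that assumes `grid_result` is a fitted scikit-learn `GridSearchCV` object (which the attributes `best_score_`, `best_params_`, and `cv_results_` used above suggest); `cv_results_` is a plain dictionary of arrays, so it converts directly into a `pandas` DataFrame.

```python
import pandas as pd

# one row per hyperparameter combination, assuming grid_result is a fitted GridSearchCV
cv_table = pd.DataFrame(grid_result.cv_results_)

# keep only the columns that matter for comparing configurations
cols = ["param_batch_size", "param_epochs", "param_init",
        "mean_test_score", "std_test_score", "rank_test_score"]
print(cv_table[cols].sort_values("mean_test_score", ascending=False).to_string(index=False))
```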
# Robinson Crusoe

With ``respy`` you are able to prototype a model similar to Keane and Wolpin (1997) in minutes. As economists love Robinsonades[<sup>1</sup>](#fn1), we will showcase the implementation of the Robinson Crusoe economy as a discrete choice dynamic programming model.

Throughout the notebook you will find indented text which tells parts of Robinson's story and motivates the model.

We will first set the scene with a broad introduction and then turn to the precise model specification. We continue by simulating the model and analyzing its comparative statics. We then extend the model and showcase the estimation of the model parameters.

Just to be clear, don't misinterpret the fact that we explain ``respy`` using such a simplistic model. ``respy`` is not a toy and can just as well solve state-of-the-art structural models. It is just easier to explain ``respy`` in a situation where we don't have to explain a complicated model at the same time.

```python
%matplotlib inline

import io
import matplotlib.pyplot as plt
import pandas as pd
import respy as rp
import yaml
import seaborn as sns
import numpy as np

from pathlib import Path
from time import time

plt.style.use("../_static/respy.mplstyle")
```

## Introduction

> After setting sail against his parents' wishes, being captured by pirates, escaping from them, building a plantation, and setting sail again to capture slaves in Africa, [Robinson Crusoe](https://en.wikipedia.org/wiki/Robinson_Crusoe) is stranded on a small island. He is alone with one dog, two cats, and only some supplies. He goes fishing to make ends meet and, if he is too tired, he will relax in his hammock. But he cannot relax too often, as storing food is not easy on a tropical island.

In the discrete choice dynamic programming model, Robinson chooses every period to either go fishing, $a = 0$, or spend the day in the hammock, $a = 1$, to maximize his expected sum of discounted lifetime utility. The utility of a choice, $U(s_t, a_t)$, depends on the state $s_t$, which contains information on the individual's characteristics, and the chosen alternative $a_t$. For working alternatives like fishing, utility consists of two components: a wage and a non-pecuniary component.

$$ U(s_t, a_t) = W(s_t, a_t) + N(s_t, a_t) $$

For non-working alternatives like the hammock, $W(s_t, a_t) = 0$. The wage is defined as

$$\begin{align}
W(s_t, a_t) &= r_a \exp\{x^w_{at} \beta^w_a + \epsilon_{at}\}\\
\ln(W(s_t, a_t)) &= \ln(r_a) + x^w_{at} \beta^w_a + \epsilon_{at}
\end{align}$$

where $r_a$ is normally a market rental price for the skill units generated in the exponential expression. Another interpretation is that $\ln(r_a)$ is simply the constant in the skill units. The skill units are generated by two components. $x^w_{at}$ and $\beta^w_a$ are the choice- and time-dependent covariates and parameters related to the wage, signaled by superscript $w$. The last term, $\epsilon_{at}$, is a random shock.

The non-pecuniary rewards for working alternatives are simply a vector dot product of covariates $x_t^w$ and parameters $\beta^w$. The superscript $w$ signals that the components belong to working alternatives.

$$ N^w(s_t, a_t) = x_t^w\beta^w $$

The non-pecuniary reward for non-working alternatives is very similar, except that the shocks enter the equation additively. Superscript $n$ stands for non-pecuniary.
$$ N^n(s_t, a_t) = x_t^n\beta^n + \epsilon_{at} $$ Along with the lower triangular elements of the shock variance-covariance matrix of $\epsilon_t$, the utility parameters $\beta_a^w$, $\beta_a^n$ and $r_a$ form the main parameters of the model. ## Specification How can we express the equations and parameters with ``respy``? The following cell contains the code to write a ``.csv`` file which is the cornerstone of a model as it contains all parameters and some other specifications. It is quickly written and easily loaded with ``pandas``. ```python %%writefile robinson_crusoe_basic.csv category, name, value delta, delta, 0.95 wage_fishing, exp_fishing, 0.1 nonpec_fishing, constant, -1 nonpec_hammock, constant, 2.5 nonpec_hammock, not_fishing_last_period, -1 shocks_sdcorr, sd_fishing, 1 shocks_sdcorr, sd_hammock, 1 shocks_sdcorr, corr_hammock_fishing, -0.2 lagged_choice_1_hammock, constant, 1 ``` Overwriting robinson_crusoe_basic.csv ```python params = pd.read_csv( "robinson_crusoe_basic.csv", index_col=["category", "name"] ) params ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th></th> <th>value</th> </tr> <tr> <th>category</th> <th>name</th> <th></th> </tr> </thead> <tbody> <tr> <th>delta</th> <th>delta</th> <td>0.95</td> </tr> <tr> <th>wage_fishing</th> <th>exp_fishing</th> <td>0.10</td> </tr> <tr> <th>nonpec_fishing</th> <th>constant</th> <td>-1.00</td> </tr> <tr> <th rowspan="2" valign="top">nonpec_hammock</th> <th>constant</th> <td>2.50</td> </tr> <tr> <th>not_fishing_last_period</th> <td>-1.00</td> </tr> <tr> <th rowspan="3" valign="top">shocks_sdcorr</th> <th>sd_fishing</th> <td>1.00</td> </tr> <tr> <th>sd_hammock</th> <td>1.00</td> </tr> <tr> <th>corr_hammock_fishing</th> <td>-0.20</td> </tr> <tr> <th>lagged_choice_1_hammock</th> <th>constant</th> <td>1.00</td> </tr> </tbody> </table> </div> The parameters :class:`pd.DataFrame` contains a two-level :class:`pd.MultiIndex` to group parameters in categories. ``name`` should be uniquely assigned in each category or otherwise only the sum of identically named parameters is identified. ``value`` contains the value of the parameter. Note that we named Robinson's alternatives ``"fishing"`` and ``"hammock"`` and we have to use the names consistently. As long as you stick to lowercase letters separated by underscores, you can choose any name you want. The parameter specification contains following entries: - The first entry contains the discount factor of individuals. - The second category ``"wage_fishing"`` contains the parameters of the log wage equation for fishing. The group contains only one name called ``"exp_fishing"`` where ``"exp_*"`` is an identifier in the model for experience accumulated in a certain alternative. ``respy`` requires that you respect those identifiers of which there are not many and reference your alternatives consistently with the same name. If you stick to lowercase letters possibly separated by underscores, you are fine. - The third and fourth categories concern the non-pecuniary reward of fishing and relaxing in the hammock. - ``"shocks_sdcorr"`` groups the lower triangular of the variance-covariance matrix of shocks. - ``"lagged_choice_1_hammock"`` governs the distribution of previous choices at the begin of the model horizon. 
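As a quick aside before turning to the remaining inputs: the entries of this DataFrame can be read or modified through the ``(category, name)`` MultiIndex. The snippet below is plain ``pandas`` and purely illustrative (the alternative value is made up); working on a copy leaves the specification used in the rest of the notebook untouched.

```python
# read a single parameter via the (category, name) MultiIndex
return_to_experience = params.loc[("wage_fishing", "exp_fishing"), "value"]
print(f"Return to fishing experience: {return_to_experience}")

# change a parameter on a copy, e.g. make the hammock even more attractive,
# without touching the params object used in the rest of the notebook
params_alt = params.copy()
params_alt.loc[("nonpec_hammock", "constant"), "value"] = 3.0
```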
``params`` is complemented with ``options`` which contains additional information. Here is a short description:

- ``"n_periods"`` defines the number of periods for which decision rules are computed.
- The seed options (``estimation_seed``, ``simulation_seed``, ``solution_seed``): seeds are used in every model component to ensure reproducibility.
- ``"estimation_draws"`` defines the number of draws used to simulate the choice probabilities with Monte Carlo integration.
- ``"estimation_tau"`` controls the smoothing to avoid zero-valued probabilities.
- ``"interpolation_points"`` controls how many states are used to approximate the value functions of other states in each period. ``-1`` turns the approximation off. The approximation is detailed in Keane and Wolpin (1994).
- ``"simulation_agents"`` defines how many agents are simulated.
- ``"solution_draws"`` defines the number of draws used to simulate the value functions.
- ``"covariates"`` is another dictionary where the key determines the covariate's name and the value is its definition. Here, we have to define what ``"constant"`` means. The covariate is created with :func:`pd.eval`.

```python
%%writefile robinson_crusoe_basic.yaml
n_periods: 10
estimation_draws: 200
estimation_seed: 500
estimation_tau: 0.001
interpolation_points: -1
simulation_agents: 1_000
simulation_seed: 132
solution_draws: 500
solution_seed: 456
covariates:
  constant: "1"
  not_fishing_last_period: "lagged_choice_1 != 'fishing'"
```

    Overwriting robinson_crusoe_basic.yaml

```python
options = yaml.safe_load(Path("robinson_crusoe_basic.yaml").read_text())
options
```

    {'n_periods': 10,
     'estimation_draws': 200,
     'estimation_seed': 500,
     'estimation_tau': 0.001,
     'interpolation_points': -1,
     'simulation_agents': 1000,
     'simulation_seed': 132,
     'solution_draws': 500,
     'solution_seed': 456,
     'covariates': {'constant': '1',
      'not_fishing_last_period': "lagged_choice_1 != 'fishing'"}}

## Simulation

We are now ready to simulate the model.
```python simulate = rp.get_simulate_func(params, options) df = simulate(params) ``` ```python df.head(15) ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th></th> <th>Experience_Fishing</th> <th>Lagged_Choice_1</th> <th>Shock_Reward_Fishing</th> <th>Meas_Error_Wage_Fishing</th> <th>Shock_Reward_Hammock</th> <th>Meas_Error_Wage_Hammock</th> <th>Choice</th> <th>Wage</th> <th>Discount_Rate</th> <th>Nonpecuniary_Reward_Fishing</th> <th>Wage_Fishing</th> <th>Flow_Utility_Fishing</th> <th>Value_Function_Fishing</th> <th>Continuation_Value_Fishing</th> <th>Nonpecuniary_Reward_Hammock</th> <th>Wage_Hammock</th> <th>Flow_Utility_Hammock</th> <th>Value_Function_Hammock</th> <th>Continuation_Value_Hammock</th> </tr> <tr> <th>Identifier</th> <th>Period</th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> </tr> </thead> <tbody> <tr> <th rowspan="10" valign="top">0</th> <th>0</th> <td>0</td> <td>hammock</td> <td>2.048628</td> <td>1</td> <td>0.866250</td> <td>1</td> <td>fishing</td> <td>2.048628</td> <td>0.95</td> <td>-1</td> <td>2.048628</td> <td>1.048628</td> <td>19.513202</td> <td>19.436393</td> <td>1.5</td> <td>NaN</td> <td>2.366250</td> <td>19.233744</td> <td>17.755256</td> </tr> <tr> <th>1</th> <td>1</td> <td>fishing</td> <td>0.147087</td> <td>1</td> <td>1.421523</td> <td>1</td> <td>hammock</td> <td>NaN</td> <td>0.95</td> <td>-1</td> <td>0.162556</td> <td>-0.837444</td> <td>16.864110</td> <td>18.633215</td> <td>2.5</td> <td>NaN</td> <td>3.921523</td> <td>20.032537</td> <td>16.958962</td> </tr> <tr> <th>2</th> <td>1</td> <td>hammock</td> <td>0.903027</td> <td>1</td> <td>-0.351595</td> <td>1</td> <td>fishing</td> <td>0.998000</td> <td>0.95</td> <td>-1</td> <td>0.998000</td> <td>-0.002000</td> <td>15.655857</td> <td>16.481955</td> <td>1.5</td> <td>NaN</td> <td>1.148405</td> <td>15.352842</td> <td>14.952039</td> </tr> <tr> <th>3</th> <td>2</td> <td>fishing</td> <td>0.339405</td> <td>1</td> <td>-0.930422</td> <td>1</td> <td>hammock</td> <td>NaN</td> <td>0.95</td> <td>-1</td> <td>0.414550</td> <td>-0.585450</td> <td>13.941609</td> <td>15.291641</td> <td>2.5</td> <td>NaN</td> <td>1.569578</td> <td>14.680286</td> <td>13.800745</td> </tr> <tr> <th>4</th> <td>2</td> <td>hammock</td> <td>2.822820</td> <td>1</td> <td>-0.420713</td> <td>1</td> <td>fishing</td> <td>3.447800</td> <td>0.95</td> <td>-1</td> <td>3.447800</td> <td>2.447800</td> <td>14.749573</td> <td>12.949235</td> <td>1.5</td> <td>NaN</td> <td>1.079287</td> <td>12.114100</td> <td>11.615592</td> </tr> <tr> <th>5</th> <td>3</td> <td>fishing</td> <td>2.015148</td> <td>1</td> <td>2.056790</td> <td>1</td> <td>hammock</td> <td>NaN</td> <td>0.95</td> <td>-1</td> <td>2.720165</td> <td>1.720165</td> <td>12.341990</td> <td>11.180868</td> <td>2.5</td> <td>NaN</td> <td>4.556790</td> <td>13.991270</td> <td>9.931031</td> </tr> <tr> <th>6</th> <td>3</td> <td>hammock</td> <td>5.802097</td> <td>1</td> <td>-0.090973</td> <td>1</td> <td>fishing</td> <td>7.832012</td> <td>0.95</td> <td>-1</td> <td>7.832012</td> <td>6.832012</td> <td>14.963155</td> <td>8.559098</td> <td>1.5</td> <td>NaN</td> <td>1.409027</td> <td>8.503160</td> <td>7.467509</td> </tr> <tr> <th>7</th> <td>4</td> <td>fishing</td> 
<td>0.429942</td> <td>1</td> <td>1.443708</td> <td>1</td> <td>hammock</td> <td>NaN</td> <td>0.95</td> <td>-1</td> <td>0.641398</td> <td>-0.358602</td> <td>5.609535</td> <td>6.282250</td> <td>2.5</td> <td>NaN</td> <td>3.943708</td> <td>9.014635</td> <td>5.337817</td> </tr> <tr> <th>8</th> <td>4</td> <td>hammock</td> <td>0.216153</td> <td>1</td> <td>-0.298857</td> <td>1</td> <td>hammock</td> <td>NaN</td> <td>0.95</td> <td>-1</td> <td>0.322463</td> <td>-0.677537</td> <td>2.612370</td> <td>3.463061</td> <td>1.5</td> <td>NaN</td> <td>1.201143</td> <td>3.663504</td> <td>2.591958</td> </tr> <tr> <th>9</th> <td>4</td> <td>hammock</td> <td>7.604617</td> <td>1</td> <td>-0.666748</td> <td>1</td> <td>fishing</td> <td>11.344756</td> <td>0.95</td> <td>-1</td> <td>11.344756</td> <td>10.344756</td> <td>10.344756</td> <td>0.000000</td> <td>1.5</td> <td>NaN</td> <td>0.833252</td> <td>0.833252</td> <td>0.000000</td> </tr> <tr> <th rowspan="5" valign="top">1</th> <th>0</th> <td>0</td> <td>hammock</td> <td>1.247475</td> <td>1</td> <td>-2.189594</td> <td>1</td> <td>fishing</td> <td>1.247475</td> <td>0.95</td> <td>-1</td> <td>1.247475</td> <td>0.247475</td> <td>18.712048</td> <td>19.436393</td> <td>1.5</td> <td>NaN</td> <td>-0.689594</td> <td>16.177899</td> <td>17.755256</td> </tr> <tr> <th>1</th> <td>1</td> <td>fishing</td> <td>0.802887</td> <td>1</td> <td>0.931514</td> <td>1</td> <td>hammock</td> <td>NaN</td> <td>0.95</td> <td>-1</td> <td>0.887327</td> <td>-0.112673</td> <td>17.588881</td> <td>18.633215</td> <td>2.5</td> <td>NaN</td> <td>3.431514</td> <td>19.542528</td> <td>16.958962</td> </tr> <tr> <th>2</th> <td>1</td> <td>hammock</td> <td>0.213228</td> <td>1</td> <td>1.803811</td> <td>1</td> <td>hammock</td> <td>NaN</td> <td>0.95</td> <td>-1</td> <td>0.235653</td> <td>-0.764347</td> <td>14.893510</td> <td>16.481955</td> <td>1.5</td> <td>NaN</td> <td>3.303811</td> <td>17.508248</td> <td>14.952039</td> </tr> <tr> <th>3</th> <td>1</td> <td>hammock</td> <td>1.824770</td> <td>1</td> <td>0.998793</td> <td>1</td> <td>hammock</td> <td>NaN</td> <td>0.95</td> <td>-1</td> <td>2.016682</td> <td>1.016682</td> <td>14.671037</td> <td>14.373005</td> <td>1.5</td> <td>NaN</td> <td>2.498793</td> <td>14.820062</td> <td>12.969757</td> </tr> <tr> <th>4</th> <td>1</td> <td>hammock</td> <td>0.602507</td> <td>1</td> <td>0.905523</td> <td>1</td> <td>hammock</td> <td>NaN</td> <td>0.95</td> <td>-1</td> <td>0.665873</td> <td>-0.334127</td> <td>11.258946</td> <td>12.203234</td> <td>1.5</td> <td>NaN</td> <td>2.405523</td> <td>12.794276</td> <td>10.935529</td> </tr> </tbody> </table> </div> We can inspect Robinson's decisions period by period. ```python fig, ax = plt.subplots() df.groupby("Period").Choice.value_counts(normalize=True).unstack().plot.bar( stacked=True, ax=ax ) plt.xticks(rotation="horizontal") plt.legend(loc="lower center", bbox_to_anchor=(0.5, -0.275), ncol=2) plt.show() plt.close() ``` We can also analyze the persistence in decisions. ```python data = pd.crosstab(df.Lagged_Choice_1, df.Choice, normalize=True) sns.heatmap(data, cmap="Blues", annot=True) ``` ## Analysis We now study how Robinson's behavior changes as we increase the returns to experience. We do so by plotting the average level of final experience in the sample under the different parameterizations. This analysis of the comparative statics of the model is straightforward to implement. In models of educational choice, this type of analysis is often applied to evaluate the effect of alternative tuition policies on average educational attainment. 
See Keane & Wolpin (1997, 2001), for example. The basic structure of the analysis remains the same.

```python
# Specification of grid for evaluation
num_points = 15
grid_start = 0.0
grid_stop = 0.3
grid_points = np.linspace(grid_start, grid_stop, num_points)

rslts = list()

for value in grid_points:
    params.loc["wage_fishing", "exp_fishing"] = value
    df = simulate(params)
    stat = df.groupby("Identifier")["Experience_Fishing"].max().mean()
    rslts.append(stat)
```

We collected all results in `rslts` and are ready to create a basic visualization.

```python
fig, ax = plt.subplots()

ax.plot(grid_points, rslts)

ax.set_ylim([0, 10])
ax.set_xlabel("Return to experience")
ax.set_ylabel("Average final level of experience")

plt.show()
plt.close()
```

In the absence of any returns to experience, Robinson still spends more than two periods fishing. This average level then increases with the return. Starting at around 0.2, Robinson spends all his time fishing.

## Extension

Let us make the model more interesting!

> At some point Crusoe notices that a group of cannibals occasionally visits the island to celebrate one of their dark rituals. One day, a prisoner escapes and becomes Crusoe's new friend Friday, whom he teaches English. In return, Friday can share his knowledge once to help Robinson improve his fishing skills, but only after Robinson has gone fishing at least once.

A common extension to structural models is to increase the choice set. Here, we want to add another choice called `"friday"` which affects the utility of fishing. The choice should be available only once, starting with the third period, and only after Robinson has been fishing before.

Note that we load the example models with the function `rp.get_example_model`. The command for the former model is `params, options, df = rp.get_example_model("robinson_crusoe_basic")`. You can use `with_data=False` to suppress the automatic simulation of a sample with this parameterization.

```python
params, options = rp.get_example_model(
    "robinson_crusoe_extended", with_data=False
)
```

First, take a look at the parameterization. There is a new positive parameter called `"contemplation_with_friday"` which enters the wage equation of fishing. The choice `"friday"` itself has a negative constant utility term which models the effort costs of learning and the food penalty. The variance-covariance matrix is also adjusted.
```python params ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th></th> <th>value</th> </tr> <tr> <th>category</th> <th>name</th> <th></th> </tr> </thead> <tbody> <tr> <th>delta</th> <th>delta</th> <td>0.95</td> </tr> <tr> <th rowspan="2" valign="top">wage_fishing</th> <th>exp_fishing</th> <td>0.10</td> </tr> <tr> <th>contemplation_with_friday</th> <td>0.40</td> </tr> <tr> <th>nonpec_fishing</th> <th>constant</th> <td>-1.00</td> </tr> <tr> <th rowspan="2" valign="top">nonpec_friday</th> <th>constant</th> <td>-1.00</td> </tr> <tr> <th>not_fishing_last_period</th> <td>-1.00</td> </tr> <tr> <th rowspan="2" valign="top">nonpec_hammock</th> <th>constant</th> <td>2.50</td> </tr> <tr> <th>not_fishing_last_period</th> <td>-1.00</td> </tr> <tr> <th rowspan="6" valign="top">shocks_sdcorr</th> <th>sd_fishing</th> <td>1.00</td> </tr> <tr> <th>sd_friday</th> <td>1.00</td> </tr> <tr> <th>sd_hammock</th> <td>1.00</td> </tr> <tr> <th>corr_friday_fishing</th> <td>0.00</td> </tr> <tr> <th>corr_hammock_fishing</th> <td>0.00</td> </tr> <tr> <th>corr_hammock_friday</th> <td>0.00</td> </tr> <tr> <th>lagged_choice_1_hammock</th> <th>constant</th> <td>1.00</td> </tr> </tbody> </table> </div> Turning to the `options`, we can see that the new covariate `"contemplation_with_friday"` is only affecting utility if Robinson is experienced in fishing and only for one interaction with friday. This naturally limits the interaction with Friday. The key `"inadmissible_states"` can be used to restrict the choice Friday to the third and following periods. The first key matches a choice. The value of the key can be a list of strings. If the string evaluates to `True`, a utility penalty ensures that individuals will never choose the corresponding states. There exist some states in the state space which will never be reached because choices are mutually exclusive or are affected by other restrictions. Filters under `"core_state_space_filters"` can be used to purge those states from the state space, reducing runtime and memory consumption. ```python options ``` {'n_periods': 10, 'estimation_draws': 200, 'estimation_seed': 500, 'estimation_tau': 0.001, 'interpolation_points': -1, 'simulation_agents': 1000, 'simulation_seed': 132, 'solution_draws': 500, 'solution_seed': 456, 'covariates': {'constant': '1', 'contemplation_with_friday': 'exp_friday == 1 and exp_fishing >= 1', 'not_fishing_last_period': "lagged_choice_1 != 'fishing'"}, 'inadmissible_states': {'friday': ['period < 2', 'exp_fishing == 0']}, 'core_state_space_filters': ["period > 0 and exp_fishing + exp_friday == period and lagged_choice_1 == 'hammock'", 'period <= 2 and exp_friday != 0', 'period >= 3 and period - exp_friday < 2', 'exp_friday > 0 and exp_fishing == 0', "exp_friday > 0 and exp_fishing == 1 and lagged_choice_1 == 'fishing'", "period - exp_friday == 2 and lagged_choice_1 != 'friday' and period > 2", "exp_{i} == 0 and lagged_choice_1 == '{i}'"]} Now, let us simulate a sample of the new model. 
```python simulate = rp.get_simulate_func(params, options) ``` ```python df = simulate(params) ``` ```python fig, ax = plt.subplots() df.groupby("Period").Choice.value_counts(normalize=True).unstack().plot.bar( stacked=True, ax=ax, color=["C0", "C2", "C1"], ) plt.xticks(rotation="horizontal") plt.legend(loc="lower center", bbox_to_anchor=(0.5, -0.275), ncol=3) plt.show() plt.close() ``` ## Estimation To estimate model parameters via maximum likelihood, ``respy`` relies on [``estimagic``](https://github.com/OpenSourceEconomics/estimagic), an open-source tool to estimate structural models and more. That way, ``respy`` only has to implement the likelihood function of your model, the optimization and standard error calculation is done by ``estimagic``. Unlike other optimization libraries, ``estimagic`` does not optimize over a simple vector of parameters, but instead stores parameters in a ``pd.DataFrame``, which makes it easier to parse them into the quantities we need, store lower and upper bounds together with parameters and express constraints on the parameters. For ``estimagic``, we need to pass constraints on the parameters in a list containing dictionaries. Each dictionary is a constraint. A constraint includes two components: First, we need to tell ``estimagic`` which parameters we want to constrain. This is achieved by specifying an index location which will be passed to `df.loc`. Then, define the type of the constraint. Here, we only impose the constraint that the shock parameters have to be valid variances and correlations. Optionally, we can add a column ``"group"`` which is identical to the category column. The ``estimagic`` dashboard will then contain one parameter convergence plot per group instead of plotting all parameters in the same figure. Since ``respy`` has quite many parameters, this will make the plots much more readable. ```python from estimagic.optimization.optimize import maximize ``` ```python params["group"] = params.index.get_level_values("category") crit_func = rp.get_crit_func(params, options, df) crit_func(params) constr = rp.get_parameter_constraints("robinson_crusoe") ``` ```python results, params = maximize( crit_func, params, "scipy_L-BFGS-B", algo_options={"maxfun": 1}, constraints=constr, dashboard=False, ) ``` If we hadn't limited the optimization to just one function evaluation, ``params`` would contain the estimated parameters and results would contain additional information on the optimization. If we had set ``dashboard=True``, the call to maximize would have opened a browser window with the beautiful ``estimagic`` dashboard. Try it out if you run this notebook locally. ## Footnotes <span id="fn1"><sup>1</sup> One of the earliest references of Robinsonades in economics can be found in Marx (1867). In the 37th footnote, he mentions that even Ricardo used the theme before him. </span> ## References > Keane, M. P. and Wolpin, K. I. (1997). [The Career Decisions of Young Men](https://doi.org/10.1086/262080). *Journal of Political Economy*, 105(3): 473-522. > Marx, K. (1867). Das Kapital, Bd. 1. *MEW*, Bd, 23, 405
$\newcommand{\xv}{\mathbf{x}} \newcommand{\wv}{\mathbf{w}} \newcommand{\vv}{\mathbf{v}} \newcommand{\yv}{\mathbf{y}} \newcommand{\zv}{\mathbf{z}} \newcommand{\av}{\mathbf{a}} \newcommand{\Chi}{\mathcal{X}} \newcommand{\R}{\rm I\!R} \newcommand{\sign}{\text{sign}} \newcommand{\Tm}{\mathbf{T}} \newcommand{\Xm}{\mathbf{X}} \newcommand{\Xlm}{\mathbf{X1}} \newcommand{\Wm}{\mathbf{W}} \newcommand{\Vm}{\mathbf{V}} \newcommand{\Ym}{\mathbf{Y}} \newcommand{\Zm}{\mathbf{Z}} \newcommand{\Gm}{\mathbf{G}} \newcommand{\Zlm}{\mathbf{Z1}} \newcommand{\I}{\mathbf{I}} \newcommand{\muv}{\boldsymbol\mu} \newcommand{\Sigmav}{\boldsymbol\Sigma} \newcommand{\Phiv}{\boldsymbol\Phi} $ # Nonlinear Logistic Regression Previously, we learned the linear logistic regression that uses the softmax layer for classification along with a linear model. $$ g_k(\xv) = P(T=k \mid \xv) = \frac{e^{\kappa_k}}{\sum_{c=1}^K e^{\kappa_c}} $$ By using this softmax function, we were able to generate probablistic outputs for all classes. To handle multi-label classes, we use the indicator target labels for training to update the weights for the linear model. Following the derivation, we have achieved the following update rule: $$ \wv_j \leftarrow \wv_j + \alpha \sum_{n=1}^{N} \Big( t_{n,j} - g_j(\xv_n)\Big) \xv_n. $$ To update the weights with batch samples, we can convert this update rule in matrix form as follows: $$ \wv \leftarrow \wv + \alpha \Xm^\top \Big( \Tm - g(\Xm)\Big). $$ Remember we start from the error function below for the derivation bvefore: $$ E(\wv) = - \ln P(\Tm \mid \wv) = - \sum_{n=1}^{N} \sum_{k=1}^{K} t_{n,k} \ln y_{n,k}. $$ # Nonlinear Extension with Neural Networks Now, we extend this to two layer neural networks. Similar to the derivation of neural network for regression, we can derive the gradient by switching the squuared error with the negative log likelihood function above. From the error function $E(\wv)$, we can derive the gradient to update the weights for each layer. $$ \begin{align} v_{dg} &\leftarrow v_{dg} - \alpha_h \frac{\partial{E(\Wm, \Vm)}} {\partial{v_{dg}}} \\ \\ w_{gk} &\leftarrow w_{gk} - \alpha_o \frac{\partial{E(\Wm, \Vm)}} {\partial{w_{gk}}}, \end{align} $$ where $\alpha_h$ and $\alpha_o$ are the learning rate for hidden and output layer respectively. Here, we denote the output of the neural network as $\kappa$. $$ \begin{align} \frac{\partial{E}}{\partial{w_{gk}}} &= -\frac{\partial{\Big( \sum_{n=1}^{N} \sum_{l=1}^{K} (t_{nl} \ln g_{nl}(\xv_n))} \Big)}{\partial{w_{gk}}} \\ \\ &= -\sum_{n=1}^{N} \sum_{l=1}^{K} t_{n,l} \frac{1}{g_{n,k}(\xv_n)} \frac{\partial g_{n,k}(\xv_n)}{\partial {w_{gk}}}\\ &= -\sum_{n=1}^{N} \sum_{l=1}^{K} t_{n,l} \frac{1}{g_{n,k}(\xv_n)} \frac{\partial g_{n,k}(\xv_n)}{\partial \kappa_{nk}} \frac{\partial \kappa_{nk} }{\partial {w_{gk}}}\\ &= -\sum_{n=1}^{N} \sum_{l=1}^{K} t_{n,l} \frac{1}{g_{n,k}(\xv_n)} g_{nk}(\xv_n) (I_{lk} - g_{nk}(\xv_n)) \frac{\partial \kappa_{nk} }{\partial {w_{gk}}}\\ &= -\sum_{n=1}^{N} \sum_{l=1}^{K} t_{n,l} (I_{lk} - g_{nk}(\xv_n)) \frac{\partial \sum_{g=0}^{G} z1_{ng} w_{gk} }{\partial {w_{gk}}}\\ &= -\sum_{n=1}^{N} \sum_{l=1}^{K} t_{n,l} (I_{lk} - g_{nk}(\xv_n)) z1_{ng}\\ &= -\sum_{n=1}^{N} \Big(\sum_{l=1}^{K} t_{n,l} (I_{lk} - g_{nk}(\xv_n)) \Big) z1_{ng}\\ &= -\sum_{n=1}^{N} \Big( \sum_{l=1}^{K} t_{n,l} I_{lk} - g_{nk}(\xv_n) \sum_{l=1}^{K} t_{n,l} \Big) z1_{ng}\\ &= -\sum_{n=1}^{N} \Big( t_{n,k} - g_{nk}(\xv_n) \Big) z1_{ng}. 
\end{align} $$ Coverting this gradient in matrix form and reflecting it on our weight update, $$ \Wm \leftarrow \Wm + \alpha_o \Zlm^\top \Big( \Tm - g(\Xm)\Big). $$ Now let us update the weight $\vv$ for the hidden layer. For the hidden layer, we repeat this: $$ \begin{align} \frac{\partial{E}}{\partial{v_{dg}}} &= \frac{\partial{\Big( \sum_{n=1}^{N} \sum_{k=1}^{K} (t_{nk} \ln g_{nk}(\xv_n))} \Big)}{\partial{v_{dg}}} \\ &= -\sum_{n=1}^{N} \sum_{k=1}^{K} t_{n,k} \frac{1}{g_{n,k}(\xv_n)} \frac{\partial g_{n,k}(\xv_n)}{\partial {v_{dg}}}\\ &= -\sum_{n=1}^{N} \sum_{k=1}^{K} t_{n,k} \frac{1}{g_{n,k}(\xv_n)} \frac{\partial g_{n,k}(\xv_n)} {\partial \kappa_{nk}} \frac{\partial \kappa_{nk}} {\partial {v_{dg}}}\\ &= -\sum_{n=1}^{N} \sum_{k=1}^{K} t_{n,k} \frac{1}{g_{n,k}(\xv_n)} \frac{\partial g_{n,k}(\xv_n)} {\partial \kappa_{nk}} \sum_{g=0}^G w_{gk} \frac{\partial{h(a_{ng})}}{\partial{a_{ng}}} \frac{\partial{a_{ng}}}{\partial{v_{dg}}}\\ &= -\sum_{n=1}^{N} \sum_{k=1}^{K} t_{n,k} (I_{kk} - g_{nk}(\xv_n)) \sum_{g=0}^G w_{gl} \frac{\partial{h(a_{ng})}}{\partial{a_{ng}}} \frac{\partial{a_{ng}}}{\partial{v_{dg}}}\\ &= -\sum_{n=1}^{N} \sum_{k=1}^{K} t_{n,k} (I_{kk} - g_{nk}(\xv_n)) \sum_{g=0}^G w_{gk} \frac{\partial{h(a_{ng})}}{\partial{a_{ng}}} x1_{nd}\\ &= -\sum_{n=1}^{N} \sum_{k=1}^{K} t_{n,k} (I_{kk} - g_{nk}(\xv_n)) \sum_{g=0}^G w_{gk} \frac{\partial{h(a_{ng})}}{\partial{a_{ng}}} x1_{nd}\\ &= -\sum_{n=1}^{N} \sum_{k=1}^{K} t_{n,k} (I_{kk} - g_{nk}(\xv_n)) \sum_{g=0}^G w_{gk} (1 - z_{ng}^2) x1_{nd}. \end{align} $$ Again, coverting in matrix form for the hidden weight update, $$ \Vm \leftarrow \Vm + \alpha_h \Xlm^\top \Big( (\Tm - g(\Xm)) \Wm^\top \odot (1 - \Zm^2) \Big). $$ Here, $\odot$ denotes the element-wise multiplication. ## Summary (Regression vs Classification) <table> <tr> <th></th> <th width=45%> Regression </th> <th width=45%> Classification </th> </tr> <tr> <td> Forward Pass </td> <td> $$ \begin{align} \Zm &= h(\Xlm \cdot \Vm) \\ \\ \Ym & = \Zlm \cdot \Wm \end{align} $$ </td> <td> $$ \begin{align} \Zm &= h(\Xlm \cdot \Vm) \\ \\ \Ym & = \Zlm \cdot \Wm \\ \Gm & = softmax(\Ym) \end{align} $$ </td> </tr> <tr> <td> Backward Pass </td> <td> $$ \begin{align} \Vm &\leftarrow \Vm + \alpha_h \frac{1}{N} \frac{1}{K} \Xlm^\top \Big( (\Tm - \Ym) \Wm^\top \odot (1 - \Zm^2) \Big) \\ \Wm &\leftarrow \Wm + \alpha_o \frac{1}{N} \frac{1}{K} \Zlm^\top \Big( \Tm - \Ym \Big) \end{align} $$ </td> <td> $$ \begin{align} \Vm &\leftarrow \Vm + \alpha_h \Xlm^\top \Big( (\Tm - \Gm) \Wm^\top \odot (1 - \Zm^2) \Big)\\ \Wm &\leftarrow \Wm + \alpha_o \Zlm^\top \Big( \Tm - \Gm\Big) \end{align} $$ </td> </tr> <tr> <td></td> <td></td> <td> Note: Here $\Tm$ is a matrix with indicator variable outputs, <br/> and $\Gm$ is the output matrix after the softmax layer.</td> </tr> </table> # Practice Now, inherit the previous NeuralNetwork class to implement neural network classification. ```python import numpy as np import matplotlib.pyplot as plt %matplotlib inline import nn ``` Let us repeat the previous classification example with nonlinear classification. 
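Before running the example, here is a minimal, self-contained sketch of the forward and backward passes summarized in the table above (tanh hidden layer, softmax output, indicator targets). It is only an illustration of the update rules, not the implementation inside the `nn` module used below; the `NeuralNetLogReg` class there is assumed to provide its own `train` and `use` methods, and the learning rates and iteration count here are arbitrary.

```python
import numpy as np

def softmax(Y):
    # subtract the row-wise maximum for numerical stability
    eY = np.exp(Y - Y.max(axis=1, keepdims=True))
    return eY / eY.sum(axis=1, keepdims=True)

def add_ones(A):
    # prepend a column of ones so the bias is part of the weight matrix
    return np.hstack((np.ones((A.shape[0], 1)), A))

def train_softmax_net(X, T, n_hidden=4, n_iter=2000, alpha_h=0.005, alpha_o=0.005, seed=0):
    """Two-layer classifier trained with the updates derived above:
    V <- V + a_h X1^T ((T - G) W^T * (1 - Z^2)),  W <- W + a_o Z1^T (T - G)."""
    rng = np.random.RandomState(seed)
    n_classes = int(np.max(T)) + 1
    Tind = np.eye(n_classes)[np.asarray(T, dtype=int).ravel()]  # indicator targets
    X1 = add_ones(X)
    V = rng.uniform(-0.1, 0.1, size=(X1.shape[1], n_hidden))
    W = rng.uniform(-0.1, 0.1, size=(n_hidden + 1, n_classes))
    for _ in range(n_iter):
        Z = np.tanh(X1 @ V)        # hidden layer outputs
        Z1 = add_ones(Z)
        G = softmax(Z1 @ W)        # class probabilities
        delta = Tind - G           # error at the softmax output
        grad_W = Z1.T @ delta
        # drop the bias row of W when back-propagating to the hidden units
        grad_V = X1.T @ ((delta @ W[1:, :].T) * (1 - Z ** 2))
        W = W + alpha_o * grad_W
        V = V + alpha_h * grad_V
    return V, W

def use_softmax_net(V, W, X):
    G = softmax(add_ones(np.tanh(add_ones(X) @ V)) @ W)
    return np.argmax(G, axis=1), G
```

Usage would look like `V, W = train_softmax_net(Xtrain, Ttrain)` followed by `classes, G = use_softmax_net(V, W, Xtest)`.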
```python # Data for testing N1 = 50 N2 = 50 N = N1 + N2 D = 2 K = 2 mu1 = [-1, -1] cov1 = np.eye(2) mu2 = [2,3] cov2 = np.eye(2) * 3 # # Train Data # C1 = np.random.multivariate_normal(mu1, cov1, N1) C2 = np.random.multivariate_normal(mu2, cov2, N2) plt.plot(C1[:, 0], C1[:, 1], 'or') plt.plot(C2[:, 0], C2[:, 1], 'xb') plt.xlim([-3, 6]) plt.ylim([-3, 7]) plt.title("training data set") Xtrain = np.vstack((C1, C2)) Ttrain = np.zeros((N, 1)) Ttrain[50:, :] = 1 # labels are zero or one means, stds = np.mean(Xtrain, 0), np.std(Xtrain, 0) # normalize inputs Xtrains = (Xtrain - means) / stds # # Test Data # Ct1 = np.random.multivariate_normal(mu1, cov1, 20) Ct2 = np.random.multivariate_normal(mu2, cov2, 20) Xtest = np.vstack((Ct1, Ct2)) Ttest = np.zeros((40, 1)) Ttest[20:, :] = 1 # normalize inputs Xtests = (Xtrain - means) / stds plt.figure() plt.plot(Ct1[:, 0], Ct1[:, 1], 'or') plt.plot(Ct2[:, 0], Ct2[:, 1], 'xb') plt.xlim([-3, 6]) plt.ylim([-3, 7]) plt.title("test data set") ``` ```python # Apply Nonlinear Logistic Regression from imp import reload reload(nn) #import warnings #warnings.filterwarnings('ignore') clsf = nn.NeuralNetLogReg([2, 4, 2]) clsf.train(Xtrain, Ttrain) classes, Y = clsf.use(Xtest) ``` ```python classes ``` array([ 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 1., 1., 1., 1., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 0.]) ```python Y ``` array([[ 1.00000000e+00, 3.62429239e-24], [ 1.00000000e+00, 4.49791656e-10], [ 9.99999998e-01, 2.08859210e-09], [ 1.00000000e+00, 3.62429232e-24], [ 2.99307336e-01, 7.00692664e-01], [ 1.00000000e+00, 3.62429229e-24], [ 1.00000000e+00, 3.62429243e-24], [ 1.00000000e+00, 3.62926450e-24], [ 1.00000000e+00, 6.36987730e-24], [ 1.00000000e+00, 3.62429240e-24], [ 5.99996498e-01, 4.00003502e-01], [ 1.00000000e+00, 3.62429229e-24], [ 1.00000000e+00, 4.48284191e-24], [ 1.00000000e+00, 3.64360621e-24], [ 1.00000000e+00, 3.95678934e-24], [ 1.00000000e+00, 3.67347448e-24], [ 1.00000000e+00, 3.62429229e-24], [ 1.00000000e+00, 3.62429233e-24], [ 1.00000000e+00, 5.59588219e-18], [ 1.00000000e+00, 3.62429229e-24], [ 1.71288265e-23, 1.00000000e+00], [ 2.19935378e-14, 1.00000000e+00], [ 1.71288265e-23, 1.00000000e+00], [ 1.71290892e-23, 1.00000000e+00], [ 1.70788912e-14, 1.00000000e+00], [ 4.48420430e-14, 1.00000000e+00], [ 6.00020101e-01, 3.99979899e-01], [ 1.71594780e-23, 1.00000000e+00], [ 1.75541816e-23, 1.00000000e+00], [ 6.04943745e-15, 1.00000000e+00], [ 4.48184489e-14, 1.00000000e+00], [ 4.48415004e-14, 1.00000000e+00], [ 4.37481476e-14, 1.00000000e+00], [ 1.71532100e-23, 1.00000000e+00], [ 1.41251835e-13, 1.00000000e+00], [ 4.28267187e-14, 1.00000000e+00], [ 4.48414747e-14, 1.00000000e+00], [ 1.71288265e-23, 1.00000000e+00], [ 1.71288265e-23, 1.00000000e+00], [ 5.00047240e-01, 4.99952760e-01]]) ```python # retrieve labels and plot plt.plot(Ttest) plt.plot(classes) print("Accuracy: ", 100 - np.mean(np.abs(Tl - Yl)) * 100, "%") ``` ```python # show me the boundary x = np.linspace(-3, 6, 1000) y = np.linspace(-3, 7, 1000) xs, ys = np.meshgrid(x, y) X = np.vstack((xs.flat, ys.flat)).T classes, _ = clsf.use(X) zs = classes.reshape(xs.shape) plt.figure(figsize=(6,6)) plt.contourf(xs, ys, zs.reshape(xs.shape)) plt.title("Decision Boundary") plt.plot(Ct1[:, 0], Ct1[:, 1], 'or') plt.plot(Ct2[:, 0], Ct2[:, 1], 'xb') ``` ```python from sklearn.datasets import make_circles X, T = make_circles(n_samples=800, noise=0.07, factor=0.4) plt.figure(figsize=(10, 8)) plt.scatter(X[:, 0], X[:, 1], marker='o', c=T) 
plt.title("Circles") ``` ```python clsf = nn.NeuralNetLogReg([2, 1, 2]) clsf.train(X, T) # checking the training error only classes, Y = clsf.use(X) ``` ```python # retrieve labels and plot plt.plot(T) plt.plot(classes) print("Accuracy: ", 100 - np.mean(np.abs(T - classes)) * 100, "%") ``` ```python # show me the boundary x = np.linspace(-1.5, 1.5, 1000) y = np.linspace(-1.5, 1.5, 1000) xs, ys = np.meshgrid(x, y) Xt = np.vstack((xs.flat, ys.flat)).T classes, _ = clsf.use(Xt) zs = classes.reshape(xs.shape) plt.figure(figsize=(6,6)) plt.contourf(xs, ys, zs.reshape(xs.shape), alpha=0.3) plt.title("Decision Boundary") plt.scatter(X[:, 0], X[:, 1], marker='o', c=T+3) ``` ```python from sklearn.datasets import load_iris data = load_iris() ``` ```python data.keys() ``` dict_keys(['data', 'target', 'target_names', 'DESCR', 'feature_names']) ```python data.target ``` array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]) ```python data.data.shape ``` (150, 4) ```python clsf = nn.NeuralNetLogReg([4, 1, 3]) clsf.train(data.data, data.target) # checking the training error only classes, Y = clsf.use(data.data) ``` ```python # retrieve labels and plot plt.plot(data.target) plt.plot(classes) print("Accuracy: ", 100 - np.mean(np.abs(data.target - classes)) * 100, "%") ``` ```python Y ``` array([[ 9.99991385e-01, 8.61541898e-06, 2.16892851e-27], [ 9.99978241e-01, 2.17593859e-05, 1.73875252e-26], [ 9.99986146e-01, 1.38544096e-05, 6.30606216e-27], [ 9.99971064e-01, 2.89362302e-05, 3.29887992e-26], [ 9.99992127e-01, 7.87333680e-06, 1.77158220e-27], [ 9.99982124e-01, 1.78759977e-05, 1.11795632e-26], [ 9.99978766e-01, 2.12335059e-05, 1.64576183e-26], [ 9.99986703e-01, 1.32968101e-05, 5.75011385e-27], [ 9.99962956e-01, 3.70441567e-05, 5.74627718e-26], [ 9.99985191e-01, 1.48093898e-05, 7.32482714e-27], [ 9.99993245e-01, 6.75457274e-06, 1.25551382e-27], [ 9.99980827e-01, 1.91733484e-05, 1.30853666e-26], [ 9.99985342e-01, 1.46575442e-05, 7.15717099e-27], [ 9.99989996e-01, 1.00044561e-05, 3.03453396e-27], [ 9.99997699e-01, 2.30076932e-06, 1.11687549e-28], [ 9.99995299e-01, 4.70117860e-06, 5.56182688e-28], [ 9.99992709e-01, 7.29105849e-06, 1.49071353e-27], [ 9.99986934e-01, 1.30657238e-05, 5.52802932e-27], [ 9.99988758e-01, 1.12416319e-05, 3.94324546e-27], [ 9.99989750e-01, 1.02495403e-05, 3.20410301e-27], [ 9.99983378e-01, 1.66221586e-05, 9.49442080e-27], [ 9.99981331e-01, 1.86685265e-05, 1.23239976e-26], [ 9.99995566e-01, 4.43440123e-06, 4.87770831e-28], [ 9.99875830e-01, 1.24170250e-04, 8.70150917e-25], [ 9.99957625e-01, 4.23750801e-05, 7.77272389e-26], [ 9.99965660e-01, 3.43395579e-05, 4.84634245e-26], [ 9.99954736e-01, 4.52641786e-05, 9.01421936e-26], [ 9.99989901e-01, 1.00986885e-05, 3.09912634e-27], [ 9.99990556e-01, 9.44389535e-06, 2.66581715e-27], [ 9.99970726e-01, 2.92737336e-05, 3.38595511e-26], [ 9.99967160e-01, 3.28396227e-05, 4.38365163e-26], [ 9.99974144e-01, 2.58558431e-05, 2.56177660e-26], [ 9.99996831e-01, 3.16910051e-06, 2.29314277e-28], [ 9.99997130e-01, 2.86951379e-06, 1.83458591e-28], [ 9.99985191e-01, 1.48093898e-05, 7.32482714e-27], [ 9.99990733e-01, 9.26748328e-06, 2.55523967e-27], [ 
9.99994322e-01, 5.67817825e-06, 8.50057818e-28], [ 9.99985191e-01, 1.48093898e-05, 7.32482714e-27], [ 9.99976392e-01, 2.36084880e-05, 2.08842366e-26], [ 9.99987487e-01, 1.25126585e-05, 5.01613497e-27], [ 9.99988945e-01, 1.10546501e-05, 3.79741708e-27], [ 9.99841005e-01, 1.58994538e-04, 1.51643682e-24], [ 9.99983278e-01, 1.67216522e-05, 9.62257539e-27], [ 9.99885143e-01, 1.14856806e-04, 7.30324880e-25], [ 9.99955163e-01, 4.48367166e-05, 8.82408733e-26], [ 9.99961576e-01, 3.84240227e-05, 6.23837010e-26], [ 9.99991524e-01, 8.47589867e-06, 2.09081186e-27], [ 9.99981236e-01, 1.87640738e-05, 1.24661598e-26], [ 9.99992874e-01, 7.12563295e-06, 1.41579845e-27], [ 9.99987612e-01, 1.23883729e-05, 4.90488913e-27], [ 9.92514386e-09, 9.99995068e-01, 4.92238946e-06], [ 2.81161744e-09, 9.99976281e-01, 2.37163997e-05], [ 1.58608201e-10, 9.99147203e-01, 8.52797328e-04], [ 3.20936206e-09, 9.99979887e-01, 2.01102108e-05], [ 1.41769171e-10, 9.99019410e-01, 9.80590261e-04], [ 1.34843027e-09, 9.99940726e-01, 5.92727292e-05], [ 1.48275804e-10, 9.99072649e-01, 9.27351189e-04], [ 7.01402577e-05, 9.99929860e-01, 7.82466725e-11], [ 7.98274948e-09, 9.99993534e-01, 6.45785196e-06], [ 8.25410487e-09, 9.99993797e-01, 6.19426970e-06], [ 1.63145958e-06, 9.99998360e-01, 8.50748157e-09], [ 3.62438865e-09, 9.99982715e-01, 1.72812922e-05], [ 7.10123000e-07, 9.99999266e-01, 2.39963651e-08], [ 2.19376410e-10, 9.99430473e-01, 5.69526372e-04], [ 4.40842296e-06, 9.99995589e-01, 2.46382864e-09], [ 3.67207674e-08, 9.99999000e-01, 9.63527356e-07], [ 1.42809879e-10, 9.99028291e-01, 9.71709222e-04], [ 3.67635175e-06, 9.99996321e-01, 3.08979613e-09], [ 5.15195942e-12, 9.45936370e-01, 5.40636300e-02], [ 8.00753494e-07, 9.99999179e-01, 2.06592004e-08], [ 4.41072836e-13, 5.93580187e-01, 4.06419813e-01], [ 2.48716706e-07, 9.99999663e-01, 8.87474025e-08], [ 1.11553496e-12, 7.70387256e-01, 2.29612744e-01], [ 3.01218163e-09, 9.99978233e-01, 2.17643013e-05], [ 6.74092428e-08, 9.99999481e-01, 4.51843906e-07], [ 1.60820690e-08, 9.99997287e-01, 2.69695532e-06], [ 2.50279695e-10, 9.99516668e-01, 4.83331455e-04], [ 8.67286890e-13, 7.25443946e-01, 2.74556054e-01], [ 1.94136395e-10, 9.99336870e-01, 6.63129541e-04], [ 2.35046257e-04, 9.99764954e-01, 1.73213370e-11], [ 8.31184015e-07, 9.99999149e-01, 1.97205855e-08], [ 9.51617694e-06, 9.99990483e-01, 9.44062340e-10], [ 8.23075824e-07, 9.99999157e-01, 1.99630652e-08], [ 1.45909043e-14, 1.25902978e-01, 8.74097022e-01], [ 9.15653663e-11, 9.98311607e-01, 1.68839247e-03], [ 7.43966704e-10, 9.99875614e-01, 1.24385605e-04], [ 5.52689872e-10, 9.99819854e-01, 1.80145500e-04], [ 6.36834105e-10, 9.99849017e-01, 1.50982632e-04], [ 1.11186266e-07, 9.99999647e-01, 2.42131810e-07], [ 1.05800317e-08, 9.99995444e-01, 4.54549912e-06], [ 3.09433480e-09, 9.99978951e-01, 2.10463547e-05], [ 9.25529647e-10, 9.99905251e-01, 9.47481251e-05], [ 1.92789944e-07, 9.99999685e-01, 1.21916057e-07], [ 4.94115602e-05, 9.99950588e-01, 1.21101648e-10], [ 7.81447411e-09, 9.99993361e-01, 6.63166922e-06], [ 2.97695285e-07, 9.99999631e-01, 7.09305506e-08], [ 3.24226072e-08, 9.99998842e-01, 1.12528516e-06], [ 4.27392158e-08, 9.99999160e-01, 7.97427967e-07], [ 4.59452664e-04, 9.99540547e-01, 7.50720002e-12], [ 4.22858293e-08, 9.99999150e-01, 8.08100849e-07], [ 2.70847094e-26, 4.13427816e-08, 9.99999959e-01], [ 1.03345356e-18, 6.65624607e-04, 9.99334375e-01], [ 2.37218813e-22, 6.36828511e-06, 9.99993632e-01], [ 5.66073971e-19, 4.76658147e-04, 9.99523342e-01], [ 1.31560301e-23, 1.27957104e-06, 9.99998720e-01], [ 4.25521605e-25, 1.90616835e-07, 
9.99999809e-01], [ 1.11980477e-14, 1.09604382e-01, 8.90395618e-01], [ 1.50074234e-21, 1.77246114e-05, 9.99982275e-01], [ 3.45504953e-21, 2.81532268e-05, 9.99971847e-01], [ 7.65161563e-25, 2.63976626e-07, 9.99999736e-01], [ 1.89107898e-16, 1.19244800e-02, 9.88075520e-01], [ 5.65090581e-19, 4.76198585e-04, 9.99523801e-01], [ 1.44992009e-20, 6.23962870e-05, 9.99937604e-01], [ 2.91038759e-20, 9.18475187e-05, 9.99908152e-01], [ 1.32492366e-23, 1.28459344e-06, 9.99998715e-01], [ 2.20083178e-21, 2.19201699e-05, 9.99978080e-01], [ 1.52744732e-17, 2.96366398e-03, 9.97036336e-01], [ 9.46967965e-24, 1.06618240e-06, 9.99998934e-01], [ 1.59595014e-28, 2.39415393e-09, 9.99999998e-01], [ 5.84505693e-15, 7.76200612e-02, 9.22379939e-01], [ 7.99106985e-23, 3.48186224e-06, 9.99996518e-01], [ 1.56590712e-18, 8.38185840e-04, 9.99161814e-01], [ 3.34349936e-25, 1.66744938e-07, 9.99999833e-01], [ 3.15474552e-15, 5.57056298e-02, 9.44294370e-01], [ 1.12856928e-20, 5.42974161e-05, 9.99945703e-01], [ 1.09282260e-18, 6.86572337e-04, 9.99313428e-01], [ 2.96124617e-14, 1.81129636e-01, 8.18870364e-01], [ 3.55492752e-14, 1.98548812e-01, 8.01451188e-01], [ 1.78754800e-22, 5.44290923e-06, 9.99994557e-01], [ 9.00937884e-16, 2.81480525e-02, 9.71851948e-01], [ 1.01341406e-21, 1.42546802e-05, 9.99985745e-01], [ 8.87413796e-20, 1.70497390e-04, 9.99829503e-01], [ 1.95908602e-23, 1.59595718e-06, 9.99998404e-01], [ 1.20569901e-12, 7.83523300e-01, 2.16476700e-01], [ 1.03591996e-15, 3.03841422e-02, 9.69615858e-01], [ 2.01777153e-24, 4.52107113e-07, 9.99999548e-01], [ 2.27027959e-23, 1.73200324e-06, 9.99998268e-01], [ 2.94263762e-17, 4.26181204e-03, 9.95738188e-01], [ 1.03943413e-13, 3.32051564e-01, 6.67948436e-01], [ 2.16147268e-19, 2.79403213e-04, 9.99720597e-01], [ 7.19304204e-24, 9.15302655e-07, 9.99999085e-01], [ 7.42616434e-20, 1.54451957e-04, 9.99845548e-01], [ 1.03345356e-18, 6.65624607e-04, 9.99334375e-01], [ 6.00800204e-24, 8.28288824e-07, 9.99999172e-01], [ 1.43258291e-24, 3.73852067e-07, 9.99999626e-01], [ 3.73190193e-21, 2.93834989e-05, 9.99970617e-01], [ 4.11023391e-18, 1.43146740e-03, 9.98568533e-01], [ 4.83922300e-18, 1.56711967e-03, 9.98432880e-01], [ 1.83042506e-21, 1.97894410e-05, 9.99980211e-01], [ 6.62613124e-16, 2.37833230e-02, 9.76216677e-01]]) ```python x = np.linspace(3.5, 8, 100) y = np.linspace(1.5, 5, 100) xs, ys = np.meshgrid(x, y) Xt = np.vstack((xs.flat, ys.flat)).T Xt = np.hstack((Xt, np.random.rand(*Xt.shape) * 0.001)) # fill random noise for other columns classes, Y = clsf.use(Xt) for k in range(3): zs = Y[:, k].reshape(xs.shape) plt.figure(figsize=(6,6)) plt.imshow(zs, origin='lower', extent=(3,9,1,5)) #plt.contourf(xs, ys, zs.reshape(xs.shape), alpha=0.3) plt.title("class: " + data.target_names[k]) plt.scatter(data.data[data.target==k, 0], data.data[data.target==k, 1], marker='o') ```
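The random-noise fill above is one way to supply values for the two petal features that are not part of the grid. A common alternative, sketched below under the assumption that the `data` and `clsf` objects from the previous cells are still in scope, is to hold those features fixed at their sample means so that the maps show class probabilities for an "average" petal.

```python
x = np.linspace(3.5, 8, 100)
y = np.linspace(1.5, 5, 100)
xs, ys = np.meshgrid(x, y)
grid2d = np.vstack((xs.flat, ys.flat)).T

# hold petal length and petal width at their sample means instead of random noise
petal_means = data.data[:, 2:].mean(axis=0)
Xt = np.hstack((grid2d, np.tile(petal_means, (grid2d.shape[0], 1))))

classes, Y = clsf.use(Xt)
```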
# Parametric Model-Based Regression

    Notebook version: 1.3 (Sep 20, 2019)

    Author: Jesús Cid-Sueiro ([email protected])
            Jerónimo Arenas García ([email protected])

    Changes: v.1.0 - First version, expanding some cells from the Bayesian Regression notebook
             v.1.1 - Python 3 version.
             v.1.2 - Revised presentation.
             v.1.3 - Updated index notation

    Pending changes: * Include regression on the stock data

```python
# Import some libraries that will be necessary for working with data and displaying plots

# To visualize plots in the notebook
%matplotlib inline

import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import scipy.io       # To read matlab files
import pylab
```

## A quick note on the mathematical notation

In this notebook we will make extensive use of probability distributions. In general, we will use capital letters ${\bf X}$, $S$, $E$, ..., to denote random variables, and lower-case letters ${\bf x}$, $s$, $\epsilon$, ..., to denote the values they can take.

In general, we will use the letter $p$ for probability density functions (pdf). When necessary, we will use capital subindices to make the random variable explicit. For instance, $p_{{\bf X}, S}({\bf x}, s)$ would be the joint pdf of random variables ${\bf X}$ and $S$ at values ${\bf x}$ and $s$, respectively.

However, to avoid a notation overload, we will omit subindices when they are clear from the context. For instance, we will use $p({\bf x}, s)$ instead of $p_{{\bf X}, S}({\bf x}, s)$.

## 1. Model-based parametric regression

### 1.1. The regression problem

Given an observation vector ${\bf x}$, the goal of the regression problem is to find a function $f({\bf x})$ providing *good* predictions about some unknown variable $s$. To do so, we assume that a set of *labelled* training examples, $\{{\bf x}_k, s_k\}_{k=0}^{K-1}$, is available.

The predictor function should make good predictions for new observations ${\bf x}$ not used during training. In practice, this is tested using a second set (the *test set*) of labelled samples.

### 1.2. The underlying model assumption

Many regression algorithms are grounded on the idea that all samples from the training set have been generated independently by some common stochastic process.

If $p({\bf x}, s)$ were known, we could apply estimation theory to estimate $s$ for a given ${\bf x}$ using $p$. For instance, we could apply any of the following classical estimates:

* Maximum A Posteriori (MAP): $$\hat{s}_{\text{MAP}} = \arg\max_s p(s| {\bf x})$$
* Minimum Mean Square Error (MSE): $$\hat{s}_{\text{MSE}} = \mathbb{E}\{S |{\bf x}\} = \int s \, p(s| {\bf x}) \, ds $$

Note that, since these estimators depend on $p(s |{\bf x})$, knowing the posterior distribution of the target variable is enough, and we do not need to know the joint distribution $p({\bf x}, s)$.

More importantly, note that **if we knew the underlying model, we would not need the data** in ${\cal D}$ to make predictions on new data.

#### Exercise 1:

Assume the target variable $s$ is a scaled noisy version of the input variable $x$:

$$ s = 2 x + \epsilon $$

where $\epsilon$ is a Gaussian noise variable with zero mean and unit variance, which does not depend on $x$.

1. Compute the target model $p(s| x)$
2. Compute prediction $\hat{s}_\text{MAP}$ for an arbitrary input $x$
3. Compute prediction $\hat{s}_\text{MSE}$ for an arbitrary input $x$
4. Compute prediction $\hat{s}_\text{MSE}$ for input $x=4$

#### Solution:

[comment]: # (<SOL>)
[comment]: # (</SOL>)

### 1.3. Model-based regression
In practice, the underlying model is usually unknown.

Model-based regression methods exploit the idea of using the training data to estimate the posterior distribution $p(s|{\bf x})$ and then apply estimation theory to make predictions.

### 1.4. Parametric model-based regression

In some cases, we may have partial knowledge about the underlying model. In this notebook we will assume that $p$ belongs to a parametric family of distributions $p(s|{\bf x},{\bf w})$, where ${\bf w}$ is some unknown parameter.

#### Exercise 2:

Assume the target variable $s$ is a scaled noisy version of the input variable $x$:

$$ s = w x + \epsilon $$

where $\epsilon$ is a Gaussian noise variable with zero mean and unit variance, which does not depend on $x$. Assume that $w$ is known.

1. Compute the target model $p(s| x, w)$
2. Compute prediction $\hat{s}_\text{MAP}$ for an arbitrary input $x$
3. Compute prediction $\hat{s}_\text{MSE}$ for an arbitrary input $x$

#### Solution:

[comment]: # (<SOL>)
[comment]: # (</SOL>)

We will use the training data to estimate ${\bf w}$.

The estimation of ${\bf w}$ from a given dataset $\mathcal{D}$ is the goal of the following sections.

## 2. Maximum Likelihood parameter estimation.

The ML (Maximum Likelihood) principle is well known in statistics and can be stated as follows: take the value of the parameter to be estimated (in our case, ${\bf w}$) that best explains the given observations (in our case, the training dataset $\mathcal{D}$). Mathematically, this can be expressed as follows:

$$
\hat{\bf w}_{\text{ML}} = \arg \max_{\bf w} p(\mathcal{D}|{\bf w})
$$

#### Exercise 3:

All samples in dataset ${\cal D} = \{(x_k, s_k), k=0,\ldots,K-1 \}$ are given by

$$ s_k = w \cdot x_k + \epsilon_k $$

where $\epsilon_k$ are i.i.d. (independent and identically distributed) Gaussian noise random variables with zero mean and unit variance, which do not depend on $x_k$.

Compute the ML estimate, $\hat{w}_{\text{ML}}$, of $w$.

#### Solution:

[comment]: # (<SOL>)
[comment]: # (</SOL>)

* **4.2.** Compute the ML estimate

```python
# wML = <FILL IN>

print("The ML estimate is {}".format(wML))
```

* **4.3.** Plot the likelihood as a function of parameter $w$ along the interval $-0.5\le w \le 2$, verifying that the ML estimate takes the maximum value.

```python
sigma_eps = 1
K = len(s)
wGrid = np.arange(-0.5, 2, 0.01)

p = []
for w in wGrid:
    d = s - X*w
    # p.append(<FILL IN>)

# Compute the likelihood for the ML parameter wML
# d = <FILL IN>
# pML = [<FILL IN>]

# Plot the likelihood function and the optimal value
plt.figure()
plt.plot(wGrid, p)
plt.stem([wML], pML)
plt.xlabel('$w$')
plt.ylabel('Likelihood function')
plt.show()
```

* **4.4.** Plot the prediction function on top of the data scatter plot

```python
xgrid = np.arange(0, 1.2, 0.01)
# sML = <FILL IN>

plt.figure()
plt.scatter(X, s)
# plt.plot(<FILL IN>)
plt.xlabel('x')
plt.ylabel('s')
plt.axis('tight')
plt.show()
```

### 2.1. Model assumptions

In order to solve Exercise 4 we have taken advantage of the statistical independence of the noise components. Some independence assumptions are required in general to compute the ML estimate in other scenarios.

In order to estimate ${\bf w}$ from the training data in a mathematically rigorous and compact form, let us group the target variables into a vector

$$
{\bf s} = \left(s_0, \dots, s_{K-1}\right)^\top
$$

and the input vectors into a matrix

$$
{\bf X} = \left({\bf x}_0, \dots, {\bf x}_{K-1}\right)^\top
$$

We will make the following assumptions:

* A1.
All samples in ${\cal D}$ have been generated by the same distribution, $p({\bf x}, s \mid {\bf w})$ * A2. Input variables ${\bf x}$ do not depend on ${\bf w}$. This implies that $$ p({\bf X} \mid {\bf w}) = p({\bf X}) $$ * A3. Targets $s_{0},\ldots, s_{K-1}$ are statistically independent, given ${\bf w}$ and the inputs ${\bf x}_0,\ldots, {\bf x}_{K-1}$, that is: $$ p({\bf s} \mid {\bf X}, {\bf w}) = \prod_{k=0}^{K-1} p(s_k \mid {\bf x}_k, {\bf w}) $$ Since ${\cal D} = ({\bf X}, {\bf s})$, we can write $$p(\mathcal{D}|{\bf w}) = p({\bf s}, {\bf X}|{\bf w}) = p({\bf s} | {\bf X}, {\bf w}) p({\bf X}|{\bf w}) $$ Using assumption A2, $$ p(\mathcal{D}|{\bf w}) = p({\bf s} | {\bf X}, {\bf w}) p({\bf X}) $$ and, finally, using assumption A3, we can express the estimation problem as the computation of \begin{align} \hat{\bf w}_{\text{ML}} &= \arg \max_{\bf w} p({\bf s}|{\bf X},{\bf w}) \\ \qquad \quad &= \arg \max_{\bf w} \prod_{k=0}^{K-1} p(s_k \mid {\bf x}_k, {\bf w}) \\ \qquad \quad &= \arg \max_{\bf w} \sum_{k=0}^{K-1}\log p(s_k \mid {\bf x}_k, {\bf w}) \end{align} Any of the last three terms can be used to optimize ${\bf w}$. The sum in the last term is usually called the **log-likelihood** function, $L({\bf w})$, whereas the product in the previous line is simply referred to as the **likelihood** function. ### 2.2. Summary. Let's summarize what we need to do in order to design a regression algorithm based on ML estimation: 1. Assume a parametric data model $p(s| {\bf x},{\bf w})$ 2. Using the data model and the i.i.d. assumption, compute $p({\bf s}| {\bf X},{\bf w})$. 3. Find an expression for ${\bf w}_{\text{ML}}$ 4. Assuming ${\bf w} = {\bf w}_{\text{ML}}$, compute the MAP or the minimum MSE estimate of $s$ given ${\bf x}$. ## 3. ML estimation for a Gaussian model. ### 3.1. Step 1: The Gaussian generative model Let us assume that the target variables $s_k$ in dataset $\mathcal{D}$ are given by $$ s_k = {\bf w}^\top {\bf z}_k + \varepsilon_k $$ where ${\bf z}_k$ is the result of some transformation of the inputs, ${\bf z}_k = T({\bf x}_k)$, and $\varepsilon_k$ are i.i.d. instances of a Gaussian random variable with mean zero and variance $\sigma_\varepsilon^2$, i.e., $$ p_E(\varepsilon) = \frac{1}{\sqrt{2\pi}\sigma_\varepsilon} \exp\left(-\frac{\varepsilon^2}{2\sigma_\varepsilon^2}\right) $$ Assuming that the noise variables are independent of ${\bf x}$ and ${\bf w}$, then, for a given ${\bf x}$ and ${\bf w}$, the target variable is Gaussian with mean ${\bf w}^\top {\bf z}$ and variance $\sigma_\varepsilon^2$: $$ p(s|{\bf x}, {\bf w}) = p_E(s-{\bf w}^\top{\bf z}) = \frac{1}{\sqrt{2\pi}\sigma_\varepsilon} \exp\left(-\frac{(s-{\bf w}^\top{\bf z})^2}{2\sigma_\varepsilon^2}\right) $$ ### 3.2. Step 2: Likelihood function Now we need to compute the likelihood function $p({\bf s} | {\bf X}, {\bf w})$. If the samples are i.i.d.
we can write $$ p({\bf s}| {\bf X}, {\bf w}) = \prod_{k=0}^{K-1} p(s_k| {\bf x}_k, {\bf w}) = \prod_{k=0}^{K-1} \frac{1}{\sqrt{2\pi}\sigma_\varepsilon} \exp\left(-\frac{\left(s_k-{\bf w}^\top{\bf z}_k\right)^2}{2\sigma_\varepsilon^2}\right) \\ = \left(\frac{1}{\sqrt{2\pi}\sigma_\varepsilon}\right)^K \exp\left(-\sum_{k=0}^{K-1} \frac{\left(s_k-{\bf w}^\top{\bf z}_k\right)^2}{2\sigma_\varepsilon^2}\right) \\ $$ Finally, grouping variables ${\bf z}_k$ in $${\bf Z} = \left({\bf z}_0, \dots, {\bf z}_{K-1}\right)^\top$$ we get $$ p({\bf s}| {\bf X}, {\bf w}) = \left(\frac{1}{\sqrt{2\pi}\sigma_\varepsilon}\right)^K \exp\left(-\frac{1}{2\sigma_\varepsilon^2}\|{\bf s}-{\bf Z}{\bf w}\|^2\right) $$ ### 3.3. Step 3: ML estimation. The **maximum likelihood** solution is then given by: $$ {\bf w}_\text{ML} = \arg \max_{\bf w} p({\bf s}|{\bf X},{\bf w}) = \arg \min_{\bf w} \|{\bf s} - {\bf Z}{\bf w}\|^2 $$ Note that $\|{\bf s} - {\bf Z}{\bf w}\|^2$ is the sum of the squared prediction errors (Sum of Squared Errors, SSE) for all samples in the dataset. This is also called the **Least Squares** (LS) solution. The LS solution can be easily computed by differentiation, $$ \nabla_{\bf w} \|{\bf s} - {\bf Z}{\bf w}\|^2\Bigg|_{{\bf w} = {\bf w}_\text{ML}} = - 2 {\bf Z}^\top{\bf s} + 2 {\bf Z}^\top{\bf Z} {\bf w}_{\text{ML}} = {\bf 0} $$ and it is equal to $$ {\bf w}_\text{ML} = ({\bf Z}^\top{\bf Z})^{-1}{\bf Z}^\top{\bf s} $$ ### 3.4. Step 4: Prediction function. The last step consists of computing an estimate of $s$ by assuming that the true value of the weight parameters is ${\bf w}_\text{ML}$. In particular, the minimum MSE estimate is $$ \hat{s}_\text{MSE} = \mathbb{E}\{s|{\bf x},{\bf w}_\text{ML}\} $$ Knowing that, given ${\bf x}$ and ${\bf w}$, $s$ is normally distributed with mean ${\bf w}^\top {\bf z}$, we can write $$ \hat{s}_\text{MSE} = {\bf w}_\text{ML}^\top {\bf z} $$ #### Exercise 5: Assume that the targets in the one-dimensional dataset given by ```python X = np.array([0.15, 0.41, 0.53, 0.80, 0.89, 0.92, 0.95]) s = np.array([0.09, 0.16, 0.63, 0.44, 0.55, 0.82, 0.95]) ``` have been generated by the polynomial Gaussian model $$ s = w_0 + w_1 x + w_2 x^2 + \epsilon $$ (i.e., with ${\bf z} = T(x) = (1, x, x^2)^\intercal$) with noise variance ```python sigma_eps = 0.3 ``` * **5.1.** Compute the ML estimate. ```python # Compute the extended input matrix Z nx = len(X) # Z = <FILL IN> # Compute the ML estimate using linalg.lstsq from Numpy. # wML = <FILL IN> print(wML) ``` * **5.2.** Compute the value of the log-likelihood function for ${\bf w}={\bf w}_\text{ML}$. ```python K = len(s) # Compute the likelihood for the ML parameter wML # d = <FILL IN> # LwML = [<FILL IN>] print(LwML) ``` * **5.3.** Plot the prediction function over the data scatter plot ```python xgrid = np.arange(0, 1.2, 0.01) nx = len(xgrid) # Compute the input matrix for the grid data in x # Z = <FILL IN> # sML = <FILL IN> plt.figure() plt.scatter(X, s) # plt.plot(<FILL IN>) plt.xlabel('x') plt.ylabel('s') plt.axis('tight') plt.show() ``` #### Exercise 6: Assume the dataset $\mathcal{D} = \{(x_k, s_k), k=0,\ldots, K-1\}$ contains i.i.d. samples from a distribution with posterior density given by $$ p(s \mid x, w) = w x \exp(- w x s), \qquad s\ge0, \,\, x\ge 0, \,\, w\ge 0 $$ * **6.1.** Determine an expression for the likelihood function **Solution**: <SOL> </SOL> * **6.2.** Draw the likelihood function for the dataset in **Exercise 4** in the range $0\le w\le 6$.
```python K = len(s) wGrid = np.arange(0, 6, 0.01) p = [] Px = np.prod(X) xs = np.dot(X,s) for w in wGrid: # p.append(<FILL IN>) plt.figure() # plt.plot(<FILL IN>) plt.xlabel('$w$') plt.ylabel('Likelihood function') plt.show() ``` * **6.3.** Determine the maximum likelihood coefficient, $w_\text{ML}$. (*Hint: you can maximize the log-likelihood function instead of the likelihood function in order to simplify the differentiation*) **Solution**: <SOL> </SOL> * **6.4.** Compute $w_\text{ML}$ for the dataset in **Exercise 4** ```python # wML = <FILL IN> print(wML) ``` * **6.5.** Assuming $w = w_\text{ML}$, compute the prediction function based on the estimate $\hat{s}_\text{MSE}$ **Solution**: <SOL> </SOL> * **6.6.** Plot the prediction function obtained in part 6.5, and compare it with the linear predictor in exercise 4 ```python xgrid = np.arange(0.1, 1.2, 0.01) # sML = <FILL IN> plt.figure() plt.scatter(X, s) # plt.plot(<FILL IN>) plt.xlabel('x') plt.ylabel('s') plt.axis('tight') plt.show() ``` Subjectively, we can see that the predictor computed in exercise 6 does not fit the given data very well. This could be a false perception. If the data have been truly generated by the parametric model assumed in exercise 6 (i.e., $p(s \mid x, w) = w x \exp(- w x s)$), the apparent misbehavior of the estimator could be caused by the natural randomness of the data, and a greater amount of data would show a better fit. Alternatively, it may be the case that the model assumed in exercise 6 is incorrect. Again, more data would be useful to assess that. This shows that the choice of the data model is important. In many applications, no parametric data model is available, and the data scientist must make a choice based on the nature of the data or any previous knowledge about its statistical behavior. If no previous information is available, the data scientist can try different models and compare them using validation data and some cross-validation technique.
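As a self-contained illustration of the closed-form least-squares estimate ${\bf w}_\text{ML} = ({\bf Z}^\top{\bf Z})^{-1}{\bf Z}^\top{\bf s}$ derived in Section 3.3, here is a minimal sketch on synthetic data (this cell is an illustrative addition, not part of the original exercises, and it does not use the exercise dataset):

```python
import numpy as np

# Synthetic data from a known quadratic model s = w0 + w1*x + w2*x^2 + eps
rng = np.random.default_rng(0)
w_true = np.array([0.1, 0.5, 0.3])
x = rng.uniform(0, 1, size=50)
Z = np.column_stack([np.ones_like(x), x, x**2])     # z_k = T(x_k) = (1, x, x^2)
s = Z @ w_true + 0.1 * rng.standard_normal(len(x))  # additive Gaussian noise

# Closed-form ML / least-squares estimate: w_ML = (Z^T Z)^{-1} Z^T s
w_ml = np.linalg.inv(Z.T @ Z) @ Z.T @ s

# Numerically preferable equivalent using a least-squares solver
w_ml_lstsq, *_ = np.linalg.lstsq(Z, s, rcond=None)

print(w_ml)        # both estimates should be close to w_true
print(w_ml_lstsq)
```

In practice the `lstsq` route is preferred over explicitly inverting ${\bf Z}^\top{\bf Z}$, since it behaves better when the extended input matrix is ill-conditioned.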
a846075dc36821ac5d8e2b4d8dfaf68e53586a4c
27,681
ipynb
Jupyter Notebook
R4.ML_Regression/Regression_ML_student.ipynb
ML4DS/ML4all
7336489dcb87d2412ad62b5b972d69c98c361752
[ "MIT" ]
27
2016-11-30T17:34:00.000Z
2022-03-23T23:11:48.000Z
R4.ML_Regression/Regression_ML_student.ipynb
ML4DS/ML4all
7336489dcb87d2412ad62b5b972d69c98c361752
[ "MIT" ]
5
2019-08-12T18:28:49.000Z
2019-11-26T11:01:39.000Z
R4.ML_Regression/Regression_ML_student.ipynb
ML4DS/ML4all
7336489dcb87d2412ad62b5b972d69c98c361752
[ "MIT" ]
14
2016-11-30T17:34:18.000Z
2021-09-15T09:53:32.000Z
28.102538
439
0.515408
true
4,993
Qwen/Qwen-72B
1. YES 2. YES
0.689306
0.771843
0.532036
__label__eng_Latn
0.966537
0.074427
```python from IPython.display import display, Markdown, Latex import linear_systems as ls import numpy as np ``` ```python ``` Using KVL in the 3 meshes we have the relations: $$ \begin{align} -10+10I_1+20+10(I_1-I_2)+20I_1 = 0 \\ -20 +10(I_2-I_1) + 30I_2+10(I_2-I_3) = 0 \\ -10 -20I_3-10(I_3-I_2)= 0 \end{align} $$ In matrix form (dividing each equation by 10 and moving the constants to the right-hand side) we have the following relation: $$ \begin{pmatrix} 4 & -1 & 0 \\ -1 & 5 & -1 \\ 0 & 1 & -3 \\ \end{pmatrix} \begin{pmatrix} I_1 \\ I_2\\ I_3 \\ \end{pmatrix} = \begin{pmatrix} -1 \\ 2\\ 1 \\ \end{pmatrix} $$ with augmented matrix $$ \left[\begin{array}{ccc|c} 4 & -1 & 0 &-1 \\ -1 & 5 & -1 &2\\ 0 & 1 & -3& 1 \\ \end{array}\right] $$ Solving with the Gauss-Seidel iterative method (and checking the answer with NumPy's direct solver) we have, ```python A = np.array( [[4,-1,0], [-1,5,-1], [0,1,-3]] , dtype=float ) b = np.array([-1,2,1]) print(ls.gauss_seidel(A,b,tol=1e-13)[1]) np.linalg.solve(A,b) ``` 2.4452662117369073e-14 array([-0.16981132, 0.32075472, -0.22641509]) ```python ```
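The `linear_systems` module itself is not shown in this notebook. Below is a minimal sketch of what a Gauss-Seidel solver with the assumed `gauss_seidel(A, b, tol)` signature (returning the iterate and the size of the last update, in that order) could look like; the actual implementation in `linear_systems.py` may differ.

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-13, max_iter=10_000):
    """Iteratively solve A x = b with the Gauss-Seidel method.

    Returns (x, err), where err is the infinity norm of the last update.
    This is only a sketch: the real `linear_systems.gauss_seidel` may use
    a different stopping rule or return its results in another order.
    """
    n = len(b)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    err = np.inf
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Use the newest values for components < i, old values for > i
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - sigma) / A[i, i]
        err = np.max(np.abs(x - x_old))
        if err < tol:
            break
    return x, err

A = np.array([[4, -1, 0], [-1, 5, -1], [0, 1, -3]], dtype=float)
b = np.array([-1, 2, 1], dtype=float)
x, err = gauss_seidel(A, b, tol=1e-13)
print(x, err)  # x should match np.linalg.solve(A, b)
```

Because the coefficient matrix here is strictly diagonally dominant, the Gauss-Seidel iteration is guaranteed to converge for this system.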
c45769084df50f0ff835d3503f4e732ce9c2cf92
2,793
ipynb
Jupyter Notebook
linear_system/gauss_seidel.ipynb
plancky/mathematical_physics_II
c912dca1a58c218ddb06dc6cbca021b03a703540
[ "CC0-1.0" ]
null
null
null
linear_system/gauss_seidel.ipynb
plancky/mathematical_physics_II
c912dca1a58c218ddb06dc6cbca021b03a703540
[ "CC0-1.0" ]
null
null
null
linear_system/gauss_seidel.ipynb
plancky/mathematical_physics_II
c912dca1a58c218ddb06dc6cbca021b03a703540
[ "CC0-1.0" ]
null
null
null
20.23913
77
0.457214
true
414
Qwen/Qwen-72B
1. YES 2. YES
0.872347
0.855851
0.7466
__label__eng_Latn
0.267706
0.572933
# The Kalman Filter This filter is the optimal state estimator for linear models with Gaussian noise. The Kalman filter can be used as a solution to the online SLAM problem, defined as follows: **Given:** - The robot's controls $u_{1:T} = \{u_1,u_2,u_3,\ldots,u_T\}$ - Observations $z_{1:T} = \{z_1,z_2,z_3,\ldots,z_T\}$ **Wanted:** - Map of the environment $m$ - Path of the robot $x_{0:T} = \{x_0,x_1,x_2,\ldots,x_T\}$ We will make a probabilistic estimation of the robot's path and the map according to the motion and observation models. **Motion Model** The motion model describes the relative motion of the robot. Some examples of motion models are the odometry-based model (wheeled robots) or the velocity-based model (flying robots). In the linear Kalman filter, the motion model is represented by matrices and thus is a linear function. \begin{equation} x_t = A_tx_{t-1} + B_tu_t + \epsilon_t \end{equation} - $A_t (n\times n)$ maps the state at $t$, given the previous state at $t-1$. - $B_t (n\times l)$ describes how the state changes from $t-1$ to $t$, given the control command. **Observation Model** The observation or sensor model relates measurements with the robot's pose: what I am going to observe given that I know the pose or the map. The Beam-Endpoint model is characterized by a Gaussian blur around the obstacles. The Ray-cast model exhibits an exponential decay along the beam and can handle dynamic obstacles. The linear mapping between the state and the observation space is: \begin{equation} z_t = C_tx_{t} + \delta_t \end{equation} - $C_t (k\times n)$ maps state $x_t$ to an observation $z_t$. The Kalman filter is a recursive algorithm; with this notation, each iteration first predicts \begin{equation} \bar{\mu}_t = A_t\mu_{t-1} + B_tu_t, \qquad \overline{\textstyle\sum}_t = A_t{\textstyle\sum}_{t-1}A_t^\top + R_t \end{equation} and then corrects with the incoming observation \begin{equation} K_t = \overline{\textstyle\sum}_t C_t^\top \left(C_t\overline{\textstyle\sum}_t C_t^\top + Q_t\right)^{-1}, \qquad \mu_t = \bar{\mu}_t + K_t\left(z_t - C_t\bar{\mu}_t\right), \qquad {\textstyle\sum}_t = \left(I - K_tC_t\right)\overline{\textstyle\sum}_t \end{equation} where $Q_t$ and $R_t$ describe the measurement and motion noise, $\mathbf{\mu}_{t}$ is the mean, $\textstyle\sum_t$ is the covariance (uncertainty), and $K_t$ is the Kalman gain. The Kalman gain expresses how much we trust the observation with respect to the prediction. We introduce to the algorithm our current estimate of where we have been, in terms of a mean estimate and a covariance matrix, as well as the new control command and the observations. We want to update the mean and the covariance matrix so we transition from $t-1$ to $t$. We are computing a weighted sum, with the Kalman gain weighting the prediction and the correction. # The Extended Kalman Filter However, in most real scenarios we do not have linear functions to describe the movements or the sensor model. Non-linear functions lead to non-Gaussian distributions and we cannot use the Kalman filter anymore. The Extended Kalman Filter resolves this by local linearization; however, we are still assuming Gaussian noise and Gaussian uncertainties. The Extended Kalman Filter algorithm has the same predict/correct structure, but the linear prediction $A_t\mu_{t-1} + B_tu_t$ is replaced by a non-linear motion function $g(u_t,\mu_{t-1})$, the linear observation $C_tx_t$ is replaced by a non-linear measurement function $h(\bar{\mu}_t)$, and their Jacobians $G_t$ and $H_t$ take the place of $A_t$ and $C_t$ in the covariance and gain computations. These Jacobians are the terms in which the EKF performs the local linearization. Hereafter, I will introduce how these terms are computed. ## Using the EKF to solve the SLAM problem First we need to define our state vector, and the assumptions for the example that we are using in this notebook: - Assumption: known correspondences. That is, when I get an observation, I know which landmark it is in my map.
- State space (for 2D plane) contains the robot pose (3 dimensions) and the landmark locations (2 dimensions each), and it's defined as: \begin{equation} x_t = ( \underbrace{x,y,\theta}_{\text{robot's pose}},\underbrace{m_{1,x},m_{1,y}}_{\text{landmark 1}},...,\underbrace{m_{n,x},m_{n,y}}_{\text{landmark n}})^T \end{equation} - State representation (very compactly, with $x_R\rightarrow x)$: \begin{equation} \underbrace{\begin{pmatrix}x\\m\end{pmatrix}}_{\mu}\underbrace{\begin{pmatrix}\textstyle\sum_{xx} \textstyle\sum_{xm}\\\textstyle\sum_{mx} \textstyle\sum_{mm}\end{pmatrix}}_{\textstyle\sum} \end{equation} With $\sum_{xx}$ representing the uncertainty around the pose, $\sum_{mm}$ the uncertainty about the landmark location, and $\sum_{xm}$ the link between the landmark locations and the position of the robot within the platform. # Let's code! ```python import pandas as pd import matplotlib.pyplot as plt from matplotlib.patches import Ellipse from celluloid import Camera from IPython.display import HTML import numpy as np import os import math import seaborn as sns %matplotlib inline ``` ### Auxiliary functions ```python def normalize_angle(phi): # Normalize phi to be between -pi and pi while(phi>np.pi): phi -= 2*np.pi; while(phi<-np.pi): phi += 2*np.pi phiNorm = phi return phiNorm ``` ```python def normalize_all_bearings(z): # Go over the observations vector and normalize the bearings # The expected format of z is [range; bearing; range; bearing; ...] for i in range (1,z.shape[0],2): z[i] = normalize_angle(z[i]) zNorm = z return zNorm ``` ```python def plot_state(mu,sigma,landmarks,observedLandmarks,fig,ax): # Visualizes the state of the EKF SLAM algorithm. # # The resulting plot displays the following information: # - map ground truth (black +'s) # - current robot pose estimate (red) # - current landmark pose estimates (blue) # - visualization of the observations made at this time step (line between robot and landmark) # using seaborn, set background grid to gray sns.set_style("dark") #fig,ax = plt.subplots() ax.set_xticks([x for x in range(-2,12)],minor=True ) ax.set_yticks([y for y in range(-2,12)],minor=True) # Plot grid on minor axes in gray (width = 1) plt.grid(which='minor',ls='-',lw=1, color='white') # Plot grid on major axes in larger width plt.grid(which='major',ls='-',lw=2, color='white') # Draw the robot ax.text(mu[0], mu[1], 'o', ha='center', va='center', color='black', fontsize=20) e = plot_conf_ellipse(mu[0:3],sigma[0:3,0:3], 0.6, 'red') ax.add_patch(e) # Draw the ground truth of the landmarks for i,l in enumerate(landmarks): ax.text(l[0], l[1], 'x', ha='center', va='center', color='black', fontsize=20) if (observedLandmarks[0][i] == 1.0): # plot landmark ellipse e = plot_conf_ellipse(mu[2*i+3:2*i+5],sigma[2*i+3:2*i+5,2*i+3:2*i+5], 0.6, 'blue') ax.add_patch(e) return fig ``` ```python from scipy.stats.distributions import chi2 def plot_conf_ellipse(x,C,alpha,color): # Calculate unscaled half axes sxx = C[0,0] syy = C[1,1] sxy = C[0,1] # Remove imaginary parts in case of neg. 
definite C a = np.sqrt(0.5*(sxx+syy+np.sqrt((sxx-syy)**2+4*sxy**2))).real # always greater b = np.sqrt(0.5*(sxx+syy-np.sqrt((sxx-syy)**2+4*sxy**2))).real # always smaller # Scaling in order to reflect specified probability a = a*np.sqrt(chi2.ppf(alpha, df=2)) b = b*np.sqrt(chi2.ppf(alpha, df=2)) # Calculate inclination (numerically stable) if math.isclose(sxx, syy, rel_tol=0.1): # this function launches a warning angle = 0.5*np.arctan(2*sxy/(sxx-syy)) elif (sxy==0): angle = 0 elif (sxy>0): angle = np.pi/4 elif (sxy<0): angle = -np.pi/4 return Ellipse((x[0],x[1]), a, b, angle, edgecolor = color, facecolor = color, alpha = 0.5) ``` ## Prediction step: defining $g(u_t,\mu_{t-1})$ In this step we only update the pose of the robot $x,y,\theta$ and its covariance $\sum_{xx}$ according to the motion model $g$. Note that since we are just considering the robot's movement, without taking into account the sensor's measurements yet, the landmarks locations are not updated in this step. ### The motion model Here the motion model considered is the Odometry Model: - The robot moves from $[\overline{x},\overline{y},\overline{\theta}]$ to $[\overline{x}',\overline{y}',\overline{\theta}']$ - We have odometry information $u = [\delta_{rot1},\delta_{rot2},\delta_{trans}]$ \begin{equation} \delta_{trans} = \sqrt{(\overline{x}'-\overline{x})^2+(\overline{y}'-\overline{y})^2} \end{equation} \begin{equation} \delta_{rot1} = atan2(\overline{y}'-\overline{y},\overline{x}'-\overline{x})-\overline{\theta} \end{equation} \begin{equation} \delta_{rot2} = \overline{\theta}'-\overline{\theta}-\delta_{rot1} \end{equation} Thus, our odometry motion model (without the noise model) that we will use to update the robot pose in the state vector $\mu$ is: \begin{equation} x' = x + \delta_{trans}\cos{(\theta+\delta_{rot1})} \end{equation} \begin{equation} y' = y + \delta_{trans}\sin{(\theta+\delta_{rot1})} \end{equation} \begin{equation} \theta' = \theta + \delta_{rot1} + \delta_{rot2} \end{equation} Next, we update the elements in the covariance matrix associated to the robot pose by performing a local linearization of the function. This is achieved with the partial derivatives of the previous functions, that is, with the Jacobian matrix $G$. \begin{equation} G_t^x = \frac{\delta g(u_t,\mu')}{\delta (x,y,\theta)} \end{equation} Remember that we're only updating the robot values in the state space. That is, our Jacobian will have the structure: \begin{equation} G_t = \begin{bmatrix} \frac{\delta g(u_t,\mu')}{\delta (x,y,\theta)} & \textbf{0}\\ \textbf{0} & \textbf{I} \end{bmatrix} \end{equation} ```python def prediction_step( mu, sigma, u): ''' Updates the belief concerning the robot pose according to the motion model, args: - mu :: 2N+3 x 1 vector representing the state mean. - sigma :: 2N+3 x 2N+3 covariance matrix. - u: odometry reading (r1, t, r2). 
''' m,n = sigma.shape # Compute new mu based on the noise-free (odometry-based) motion model mu[0,0] = mu[0,0] + u['t'] * np.cos(mu[2,0] + u['r1']) # x mu[1,0] = mu[1,0] + u['t'] * np.sin(mu[2,0] + u['r1']) # y mu[2,0] = mu[2,0] + u['r1'] + u['r2'] # theta mu[2,0] = normalize_angle(mu[2,0]) # Compute the 3x3 Jacobian Gx of the motion model Gx = np.identity(3) Gx[0,2] = - u['t'] * np.sin(mu[2,0] + u['r1']) Gx[1,2] = u['t'] * np.cos(mu[2,0] + u['r1']) # Construct the full Jacobian G G = np.concatenate((Gx,np.zeros((3,n-3))),axis = 1) aux = np.concatenate((np.zeros((n-3,3)), np.identity(n-3)),axis = 1) G = np.concatenate((G,aux), axis = 0) # Motion noise R motionNoise = 0.1 R3 = np.identity(3)*motionNoise R3[2,2] = motionNoise/10 R = np.zeros((sigma.shape)) R[0:3, 0:3] = R3 # Compute predicted sigma sigma = np.matmul(G,np.matmul(sigma,G.T)) +R # i.e. sigma = G*sigma*G.T + R return mu,sigma ``` # Correction step Once the pose has been updated, it's time for the correction. - Predicted measurement $h(x)$: we need to predict what the robot sees. For that, we take the current position of the robot, and the position of the landmarks in the map, and with that we compute our predicted measurement. - Obtained measurement $z$: we take the real observations of the landmarks according to the sensor. - Data association: we compute the discrepancy between $h(x)$ and $z$. - Update step: finally, all the elements in $\mu$ and $\sum$ are updated. ```python def correction_step(mu,sigma,z,observedLandmarks): ''' Updates the belief, i.e., mu and sigma after observing landmarks, according to the sensor model. The employed sensor model measures the range and bearing of a landmark. mu: 2N+3 x 1 vector representing the state mean. The first 3 components of mu correspond to the current estimate of the robot pose [x; y; theta] The current pose estimate of the landmark with id = j is: [mu(2*j+2); mu(2*j+3)] sigma: 2N+3 x 2N+3 is the covariance matrix z: struct array containing the landmark observations. Each observation z(i) has an id z(i).id, a range z(i).range, and a bearing z(i).bearing The vector observedLandmarks indicates which landmarks have been observed at some point by the robot. observedLandmarks(j) is false if the landmark with id = j has never been observed before. ''' # Number of measurements in this time step m = z.shape[0] # Number of dimensions to mu dim = mu.shape[0] # Z: vectorized form of all measurements made in this time step: [range_1; bearing_1; range_2; bearing_2; ...; range_m; bearing_m] # ExpectedZ: vectorized form of all expected measurements in the same form.
# They are initialized here and should be filled out in the for loop below Z = np.zeros([m*2, 1],float) expectedZ = np.zeros([m*2, 1],float) # Iterate over the measurements and compute the H matrix # (stacked Jacobian blocks of the measurement function) # H will be 2m x 2N+3 H = [] j = 0 for i,row in z.iterrows(): # Get the id of the landmark corresponding to the i-th observation landmarkId = int(row['r1']) # r1 == ID here #landmarkId = landmarkId -1 # adapt the 1-9 range to 0-8 range of the array # If the landmark is obeserved for the first time: if (observedLandmarks[0][landmarkId-1] == 0): # Initialize its pose in mu based on the measurement and the current robot pose: a = float(row['t']*np.cos(row['r2']+mu[2])) b = float(row['t']*np.sin(row['r2']+mu[2])) mu[2*landmarkId+1 : 2*landmarkId+3] = mu[0:2] + np.array([[a], [b]]) # Indicate in the observedLandmarks vector that this landmark has been observed observedLandmarks[0][landmarkId-1] = 1 # Add the landmark measurement to the Z vector Z[2*j] = row['t'] Z[2*j+1] = row['r2'] # Use the current estimate of the landmark pose # to compute the corresponding expected measurement in expectedZ: delta = mu[2 * landmarkId + 1 : 2 * landmarkId + 2+1] - mu[0:2] q = np.matmul(delta.T,delta) expectedZ[2*j] = math.sqrt(q) expectedZ[2*j+1] = normalize_angle(np.arctan2(delta[1],delta[0]) - mu[2]) delta0 = float(delta[0]) delta1 = float(delta[1]) # Compute the Jacobian Hi of the measurement function h for this observation Hi = 1/q * np.array([[float(-math.sqrt(q)*delta0),-math.sqrt(q)*delta1, 0, math.sqrt(q)*delta0, math.sqrt(q)*delta1], [ delta1, -delta0, float(-q), -delta1, delta0] ]) # Map Jacobian Hi to high dimensional space by a mapping matrix Fxj Fxj = np.zeros([5,dim]) Fxj[0:3,0:3] = np.identity(3) Fxj[3,2*landmarkId+1] = 1 Fxj[4,2*landmarkId+2] = 1 Hi = np.matmul(Hi, Fxj) # Augment H with the new Hi H.append(Hi) j+=1 # Construct the sensor noise matrix Q Q = 0.01*np.identity(2*m) # Compute the Kalman gain # K = sigma * H.T * inv(H * sigma * H.T + Q) Hnp = np.asarray(H).reshape((2*m,dim)) sigmaxHt = np.matmul(sigma,Hnp.T) inverse = np.linalg.inv(np.matmul(Hnp,sigmaxHt)+Q) K = np.matmul(sigmaxHt,inverse) # Compute the difference between the expected and recorded measurements. # Remember to normalize the bearings after subtracting! diffZ = normalize_all_bearings(Z-expectedZ) # Finish the correction step by computing the new mu and sigma. # Normalize theta in the robot pose. mu = mu + np.matmul(K, diffZ) # sigma = (eye(dim) - K * H) * sigma sigma = np.matmul((np.identity(dim) - np.matmul(K, Hnp)), sigma) return mu, sigma, observedLandmarks ``` ## Initialization ### Data preprocessing First, we need to read the world data and the sensor measurements. The world data contains the landmark positions. The sensor data includes the odometry measurements and the range-bearing sensor measurements. 
```python def read_world(filename,path): landmarks = pd.read_csv(path+filename,delimiter = ' ',header=None, names = ['x','y']) return (np.asarray(landmarks)) def read_data(filename,path): data = pd.read_csv(path + filename,delimiter = ' ',header=None, names = ['sensor','r1','t','r2']) # or id, range and bearing for sensor return (data) world_ldmrks = read_world('/data/ekf_world.dat',os.path.abspath(os.getcwd())) data = read_data('/data/ekf_sensor_data.dat',os.path.abspath(os.getcwd())) ``` We read the sensor data file and assign a time step for each ```python data = read_data('/data/ekf_sensor_data.dat',os.path.abspath(os.getcwd())) indexodometry = data[ (data['sensor'] == 'ODOMETRY')].index timestepindex = [] timestep = 0 for i in range (0,data.shape[0]): if(timestep+1 < indexodometry.shape[0]): if (i < indexodometry[timestep+1]) : timestepindex.append(timestep) else: timestep +=1 timestepindex.append(timestep) else: timestepindex.append(timestep) data.insert(0, "timestep", timestepindex, True) data ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>timestep</th> <th>sensor</th> <th>r1</th> <th>t</th> <th>r2</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>0</td> <td>ODOMETRY</td> <td>0.100692</td> <td>0.100073</td> <td>0.000171</td> </tr> <tr> <th>1</th> <td>0</td> <td>SENSOR</td> <td>1.000000</td> <td>1.896454</td> <td>0.374032</td> </tr> <tr> <th>2</th> <td>0</td> <td>SENSOR</td> <td>2.000000</td> <td>3.853678</td> <td>1.519510</td> </tr> <tr> <th>3</th> <td>1</td> <td>ODOMETRY</td> <td>0.099366</td> <td>0.099968</td> <td>-0.000241</td> </tr> <tr> <th>4</th> <td>1</td> <td>SENSOR</td> <td>1.000000</td> <td>1.839227</td> <td>0.248026</td> </tr> <tr> <th>...</th> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> </tr> <tr> <th>1538</th> <td>330</td> <td>SENSOR</td> <td>5.000000</td> <td>4.951031</td> <td>-1.512977</td> </tr> <tr> <th>1539</th> <td>330</td> <td>SENSOR</td> <td>6.000000</td> <td>4.917189</td> <td>-0.862938</td> </tr> <tr> <th>1540</th> <td>330</td> <td>SENSOR</td> <td>7.000000</td> <td>-0.035066</td> <td>0.975887</td> </tr> <tr> <th>1541</th> <td>330</td> <td>SENSOR</td> <td>8.000000</td> <td>1.900675</td> <td>3.144946</td> </tr> <tr> <th>1542</th> <td>330</td> <td>SENSOR</td> <td>9.000000</td> <td>4.171552</td> <td>0.072790</td> </tr> </tbody> </table> <p>1543 rows × 5 columns</p> </div> Once the data is stored in a pandas dataframe, we can for example read the odometry measurements for the time step 0 as: ```python data.loc[(data['timestep'] == 0) & (data['sensor'] == 'ODOMETRY')] ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>timestep</th> <th>sensor</th> <th>r1</th> <th>t</th> <th>r2</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>0</td> <td>ODOMETRY</td> <td>0.100692</td> <td>0.100073</td> <td>0.000171</td> </tr> </tbody> </table> </div> Or the range bearing measurement for the time step 330 as: ```python data.loc[(data['timestep'] == 330) & (data['sensor'] == 'SENSOR')] ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: 
top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>timestep</th> <th>sensor</th> <th>r1</th> <th>t</th> <th>r2</th> </tr> </thead> <tbody> <tr> <th>1536</th> <td>330</td> <td>SENSOR</td> <td>3.0</td> <td>3.510814</td> <td>0.993067</td> </tr> <tr> <th>1537</th> <td>330</td> <td>SENSOR</td> <td>4.0</td> <td>4.925102</td> <td>-2.218075</td> </tr> <tr> <th>1538</th> <td>330</td> <td>SENSOR</td> <td>5.0</td> <td>4.951031</td> <td>-1.512977</td> </tr> <tr> <th>1539</th> <td>330</td> <td>SENSOR</td> <td>6.0</td> <td>4.917189</td> <td>-0.862938</td> </tr> <tr> <th>1540</th> <td>330</td> <td>SENSOR</td> <td>7.0</td> <td>-0.035066</td> <td>0.975887</td> </tr> <tr> <th>1541</th> <td>330</td> <td>SENSOR</td> <td>8.0</td> <td>1.900675</td> <td>3.144946</td> </tr> <tr> <th>1542</th> <td>330</td> <td>SENSOR</td> <td>9.0</td> <td>4.171552</td> <td>0.072790</td> </tr> </tbody> </table> </div> Get the number of landmarks in the map ```python N = world_ldmrks.shape[0] N ``` 9 observedLandmarks is a vector that keeps track of which landmarks have been observed so far. observedLandmarks(i) will be true if the landmark i has been observed at some point ```python observedLandmarks = np.zeros((1,N)) ``` ### Initializing the belief Given that we know the number of landmarks in our map, we can define the shape of the mean and the covariance matrix: - mu: 2N+3x1 vector representing the mean of the normal distribution. The first 3 components of mu correspond to the pose of the robot, and the landmark poses (xi, yi) are stacked in ascending id order. - sigma: (2N+3)x(2N+3) covariance matrix of the normal distribution. Everything is completely unknown in the beginnig, so we define the starting point as our coordinate system. That is, $\mu = \textbf{0}$. Since we are completely certain about that (because we've defined it), the corresponding values in sigma are also zero $\sum_{xx} = \textbf{0}$. However, we don't know anything about the landmarks, because we haven't seen anything yet, so they have an infinite uncertainty. ```python # Initialize mu mu = np.zeros((2*N+3,1),dtype = 'float') # Initialize sigma robSigma = np.zeros((3,3)) robMapSigma = np.zeros((3,2*N)) mapSigma = np.identity((2*N))*1000 # 1000 as a "infinite" or just "high" value aux1 = np.concatenate((robSigma,robMapSigma),axis = 1) aux2 = np.concatenate((robMapSigma.T,mapSigma), axis = 1) sigma = np.concatenate((aux1,aux2),axis = 0) ``` The sigma values corresponding to the robot pose, as mentioned, have 0. 
value (no uncertainty) ```python sigma[0:3, 0:3] ``` array([[0., 0., 0.], [0., 0., 0.], [0., 0., 0.]]) However, the covariance associated to the landmark pose has infinite value(here 1000, just a high value), for instance, for the first landmark: ```python sigma[3,3] ``` 1000.0 ```python def ekf_loop(mu,sigma,observedLandmarks,fig,ax,camera): for t in range (0,timestepindex[-1]): # Perform the prediction step of the EKF mu,sigma = prediction_step(mu, sigma, data.loc[(data['timestep'] == t) & (data['sensor'] == 'ODOMETRY')]) #Perform the correction step of the EKF mu, sigma, observedLandmarks = correction_step(mu,sigma,data.loc[(data['timestep'] == t) & (data['sensor'] == 'SENSOR')],observedLandmarks) # Generate visualization plots fig = plot_state(mu,sigma,world_ldmrks,observedLandmarks,fig,ax) camera.snap() return camera, mu[0:3],sigma[0:3,0:3] ``` ```python observedLandmarks = np.zeros((1,N)) robSigma = np.zeros((3,3)) robMapSigma = np.zeros((3,2*N)) mapSigma = np.identity((2*N))*1000 # 1000 as a "infinite" or just "high" value aux1 = np.concatenate((robSigma,robMapSigma),axis = 1) aux2 = np.concatenate((robMapSigma.T,mapSigma), axis = 1) sigma = np.concatenate((aux1,aux2),axis = 0) mu = np.zeros((2*9+3,1),dtype = 'float') fig,ax = plt.subplots() camera = Camera(fig) camera = ekf_loop(mu,sigma,observedLandmarks,fig,ax,camera)[0] animation = camera.animate() HTML(animation.to_html5_video()) ``` ## Testing Here I compare the results obtained here with the results obtained by the original Octave code. Note that there are differences due to the different functions that the python and octave libraries have. ```python def test_total(decimal = 2): # input to function fig,ax = plt.subplots() camera = Camera(fig) muini = np.zeros((2*9+3,1),dtype = 'float') robSigma = np.zeros((3,3)) robMapSigma = np.zeros((3,2*N)) mapSigma = np.identity((2*N))*1000 # 1000 as a "infinite" or just "high" value aux1 = np.concatenate((robSigma,robMapSigma),axis = 1) aux2 = np.concatenate((robMapSigma.T,mapSigma), axis = 1) sigmaini = np.concatenate((aux1,aux2),axis = 0) observedLandmarks = np.zeros((1,9)) # desired output muresult = np.array([[5.0178],[4.6234],[1.5430]]) sigmaresult = np.array([[0.1607056, -0.0557768, -0.0105228],\ [-0.0557768, 0.1671933, 0.0119245],\ [-0.0105228, 0.0119245, 0.0037868]]) # function output _,testmu,testsigma = ekf_loop(muini,sigmaini,observedLandmarks,fig,ax,camera) # test try: print('====================================================================') np.testing.assert_almost_equal(testmu, muresult,decimal= decimal) print('\x1b[6;30;42m' + 'Test Success for mu in ekf_loop' + '\x1b[0m') except AssertionError as e: print('====================================================================') print('\x1b[0;30;41m' + 'Test Failed for mu in ekf_loop' + '\x1b[0m') print(e) try: np.testing.assert_almost_equal(testsigma, sigmaresult,decimal= decimal+1) print('====================================================================') print('\x1b[6;30;42m' + 'Test Success for sigma in ekf_loop' + '\x1b[0m') except AssertionError as e: print('====================================================================') print('\x1b[0;30;41m' + 'Test Failed for sigma in ekf_loop:' + '\x1b[0m') print(e) def test_prediction(decimal = 2): # input to function predmu = np.array([[5.04492],[4.42067],[1.55460],[1.80649],[0.83467],[-0.10844],[3.87679],\ [2.00993],[6.81895],[8.85230],[1.62635],[9.96294],[4.60379],[9.03066],[7.60910],[4.93532],\ [4.76744],[4.87090],[2.75450],[5.02812],[8.76238]]) u = 
data.loc[(data['timestep'] == 330) & (data['sensor'] == 'ODOMETRY')] presigma = np.array([[2.6108e-01, -5.6749e-02, -1.0594e-02, 1.1899e-01, -1.9931e-02, 1.5414e-01, 1.2071e-03, 1.8575e-01 , -2.2452e-02, 1.2952e-01, -9.6479e-02 , 1.6147e-01, -1.0754e-01, 1.9297e-01, -9.7449e-02, 1.6260e-01,-5.3947e-02, 1.4095e-01, -5.3326e-02, 2.0615e-01, -5.5533e-02],\ [ -5.6749e-02, 2.6897e-01, 1.2191e-02, -1.0836e-02, 1.2655e-01, -4.9441e-02, 1.0319e-01 , -8.4403e-02, 1.2944e-01, -2.2330e-02, 2.1129e-01, -5.7534e-02, 2.2349e-01, -9.2427e-02, 2.1233e-01, -5.8696e-02,1.6350e-01, -3.4450e-02, 1.6275e-01, -1.0540e-01, 1.6522e-01],\ [-1.0594e-02, 1.2191e-02, 1.3796e-02, -1.6596e-03, 3.8382e-03 , -9.4275e-03, -8.1671e-04, -1.6430e-02, 4.3996e-03, -3.9440e-03, 2.0860e-02, -1.1036e-02, 2.3321e-02, -1.8038e-02 , 2.1076e-02, -1.1296e-02,1.1375e-02, -6.4802e-03, 1.1235e-02, -2.0938e-02, 1.1734e-02],\ [1.1899e-01, -1.0836e-02, -1.6596e-03, 1.1266e-01, -5.2917e-03, 1.1804e-01, -1.9095e-03, 1.2291e-01, -5.6064e-03, 1.1426e-01, -1.6993e-02, 1.1916e-01, -1.8694e-02, 1.2403e-01, -1.7138e-02, 1.1932e-01,-1.0452e-02, 1.1600e-01, -1.0357e-02, 1.2602e-01, -1.0683e-02],\ [-1.9931e-02, 1.2655e-01, 3.8382e-03, -5.2917e-03, 1.1371e-01, -1.7745e-02, 1.0593e-01, -2.8980e-02, 1.1446e-01, -8.9791e-03, 1.4079e-01, -2.0313e-02, 1.4472e-01, -3.1570e-02, 1.4112e-01, -2.0700e-02,1.2566e-01, -1.3012e-02, 1.2544e-01, -3.6174e-02, 1.2619e-01],\ [1.5414e-01, -4.9441e-02, -9.4275e-03, 1.1804e-01, -1.7745e-02, 1.4908e-01, 7.4526e-04 , 1.7629e-01, -1.9837e-02, 1.2720e-01, -8.4449e-02, 1.5505e-01, -9.4097e-02, 1.8260e-01, -8.5284e-02, 1.5603e-01,-4.7269e-02, 1.3714e-01, -4.6730e-02, 1.9389e-01, -4.8597e-02],\ [1.2071e-03, 1.0319e-01 , -8.1671e-04, -1.9095e-03, 1.0593e-01, 7.4526e-04 , 1.0758e-01, 3.1316e-03 , 1.0577e-01, -1.1237e-03 , 1.0016e-01, 1.2881e-03, 9.9327e-02, 3.6820e-03, 1.0009e-01 , 1.3707e-03,1.0338e-01, -2.6522e-04, 1.0343e-01, 4.6616e-03, 1.0327e-01],\ [1.8575e-01, -8.4403e-02, -1.6430e-02, 1.2291e-01, -2.8980e-02, 1.7629e-01, 3.1316e-03, 2.2457e-01, -3.2848e-02, 1.3882e-01, -1.4541e-01, 1.8735e-01, -1.6223e-01, 2.3540e-01 , -1.4687e-01 , 1.8903e-01,-8.0603e-02, 1.5610e-01 , -7.9612e-02, 2.5507e-01, -8.2915e-02],\ [-2.2452e-02, 1.2944e-01, 4.3996e-03, -5.6064e-03, 1.1446e-01, -1.9837e-02, 1.0577e-01, -3.2848e-02, 1.1572e-01, -9.8738e-03, 1.4580e-01, -2.2882e-02, 1.5031e-01, -3.5759e-02, 1.4619e-01, -2.3327e-02,1.2843e-01, -1.4501e-02, 1.2814e-01, -4.1035e-02, 1.2905e-01],\ [1.2952e-01 , -2.2330e-02, -3.9440e-03, 1.1426e-01, -8.9791e-03, 1.2720e-01, -1.1237e-03, 1.3882e-01, -9.8738e-03, 1.1842e-01, -3.6883e-02, 1.2992e-01, -4.0882e-02, 1.4147e-01, -3.7199e-02, 1.3030e-01,-2.1392e-02, 1.2239e-01, -2.1184e-02, 1.4618e-01, -2.1935e-02],\ [-9.6479e-02, 2.1129e-01, 2.0860e-02 , -1.6993e-02, 1.4079e-01, -8.4449e-02, 1.0016e-01, -1.4541e-01, 1.4580e-01, -3.6883e-02, 2.8941e-01, -9.8511e-02, 3.1067e-01, -1.5933e-01, 2.9118e-01, -1.0063e-01,2.0647e-01, -5.8939e-02, 2.0514e-01, -1.8441e-01, 2.0949e-01],\ [1.6147e-01, -5.7534e-02 , -1.1036e-02, 1.1916e-01 , -2.0313e-02, 1.5505e-01, 1.2881e-03, 1.8735e-01, -2.2882e-02 , 1.2992e-01, -9.8511e-02, 1.6285e-01, -1.0988e-01, 1.9480e-01, -9.9591e-02, 1.6367e-01,-5.4965e-02, 1.4151e-01, -5.4295e-02, 2.0797e-01, -5.6577e-02],\ [-1.0754e-01, 2.2349e-01, 2.3321e-02 , -1.8694e-02, 1.4472e-01, -9.4097e-02, 9.9327e-02, -1.6223e-01, 1.5031e-01, -4.0882e-02, 3.1067e-01, -1.0988e-01, 3.3472e-01, -1.7781e-01, 3.1287e-01, -1.1218e-01,2.1812e-01, -6.5564e-02, 2.1662e-01, -2.0584e-01, 2.2152e-01],\ [1.9297e-01 , 
-9.2427e-02, -1.8038e-02, 1.2403e-01, -3.1570e-02, 1.8260e-01, 3.6820e-03, 2.3540e-01, -3.5759e-02, 1.4147e-01, -1.5933e-01, 1.9480e-01, -1.7781e-01, 2.4811e-01, -1.6079e-01, 1.9658e-01,-8.8286e-02, 1.6044e-01, -8.7155e-02, 2.6931e-01, -9.0957e-02],\ [-9.7449e-02 , 2.1233e-01, 2.1076e-02, -1.7138e-02, 1.4112e-01, -8.5284e-02, 1.0009e-01, -1.4687e-01, 1.4619e-01, -3.7199e-02, 2.9118e-01, -9.9591e-02, 3.1287e-01, -1.6079e-01, 2.9341e-01, -1.0164e-01,2.0748e-01, -5.9509e-02, 2.0612e-01, -1.8624e-01, 2.1054e-01],\ [1.6260e-01, -5.8696e-02, -1.1296e-02, 1.1932e-01, -2.0700e-02, 1.5603e-01, 1.3707e-03, 1.8903e-01, -2.3327e-02, 1.3030e-01, -1.0063e-01, 1.6367e-01, -1.1218e-01, 1.9658e-01, -1.0164e-01, 1.6488e-01,-5.6116e-02, 1.4217e-01, -5.5455e-02, 2.1010e-01 , -5.7733e-02],\ [-5.3947e-02, 1.6436e-01, 1.1394e-02, -1.0472e-02, 1.2570e-01, -4.7379e-02, 1.0337e-01, -8.0808e-02, 1.2849e-01 , -2.1446e-02, 2.0674e-01, -5.5111e-02, 2.1841e-01, -8.8508e-02, 2.0774e-01 , -5.6242e-02, 1.6184e-01, -3.3445e-02, 1.6102e-01, -1.0218e-01, 1.6339e-01],\ [1.4095e-01, -3.4946e-02, -6.4906e-03, 1.1601e-01, -1.3041e-02, 1.3721e-01, -2.5900e-04, 1.5622e-01, -1.4534e-02 , 1.2242e-01, -5.9092e-02, 1.4160e-01, -6.5735e-02, 1.6057e-01, -5.9664e-02, 1.4225e-01, -3.3445e-02, 1.2933e-01, -3.3040e-02, 1.6835e-01, -3.4366e-02],\ [-5.3326e-02, 1.6360e-01, 1.1250e-02 , -1.0376e-02, 1.2548e-01 , -4.6834e-02, 1.0342e-01, -7.9803e-02, 1.2819e-01, -2.1235e-02 , 2.0537e-01, -5.4431e-02, 2.1688e-01, -8.7360e-02, 2.0636e-01, -5.5574e-02,1.6102e-01, -3.3040e-02, 1.6050e-01, -1.0087e-01, 1.6259e-01],\ [2.0615e-01, -1.0699e-01, -2.0970e-02, 1.2606e-01, -3.6263e-02, 1.9411e-01, 4.6805e-03, 2.5546e-01, -4.1141e-02, 1.4628e-01, -1.8489e-01, 2.0825e-01, -2.0638e-01 , 2.6973e-01, -1.8673e-01, 2.1035e-01, -1.0218e-01, 1.6835e-01 , -1.0087e-01, 2.9488e-01, -1.0522e-01],\ [-5.5533e-02, 1.6612e-01, 1.1753e-02, -1.0704e-02, 1.2624e-01 , -4.8714e-02, 1.0326e-01, -8.3133e-02, 1.2911e-01, -2.1992e-02, 2.0978e-01, -5.6735e-02, 2.2183e-01, -9.1199e-02, 2.1083e-01, -5.7870e-02,1.6339e-01, -3.4366e-02, 1.6259e-01, -1.0522e-01, 1.6541e-01]]) # desired output predmuresult =np.array([[5.04656],[4.52065],[1.55445],[1.80649],[0.83467],[-0.10844],[3.87679],\ [2.00993],[6.81895],[8.85230],[1.62635],[9.96294],[4.60379],[9.03066],[7.60910],\ [4.93532],[4.76744],[4.87090],[2.75450],[5.02812],[8.76238]]) sigmaresult = np.array([[2.6108e-01, -5.6749e-02, -1.0594e-02, 1.1899e-01, -1.9931e-02, 1.5414e-01, 1.2071e-03, 1.8575e-01, -2.2452e-02, 1.2952e-01, -9.6479e-02, 1.6147e-01, -1.0754e-01, 1.9297e-01, -9.7449e-02, 1.6260e-01, -5.3573e-02, 1.4073e-01, -5.2965e-02, 2.0545e-01, -5.5147e-02],\ [-5.5777e-02, 1.6719e-01, 1.1924e-02, -1.0710e-02, 1.2625e-01, -4.8728e-02, 1.0326e-01, -8.3151e-02, 1.2910e-01, -2.2023e-02, 2.0970e-01, -5.6684e-02 , 2.2172e-01, -9.1057e-02, 2.1073e-01, -5.7846e-02, 1.6350e-01, -3.4450e-02, 1.6275e-01, -1.0540e-01, 1.6522e-01],\ [-1.0523e-02 , 1.1924e-02, 3.7868e-03, -1.6571e-03, 3.8324e-03, -9.4136e-03, -8.1547e-04, -1.6404e-02, 4.3922e-03, -3.9369e-03, 2.0828e-02, -1.1017e-02, 2.3286e-02, -1.8010e-02, 2.1044e-02, -1.1280e-02, 1.1375e-02, -6.4802e-03, 1.1235e-02, -2.0938e-02, 1.1734e-02],\ [1.1893e-01, -1.0710e-02, -1.6571e-03, 1.1266e-01, -5.2842e-03, 1.1802e-01, -1.9111e-03, 1.2287e-01, -5.5982e-03, 1.1425e-01, -1.6955e-02, 1.1913e-01, -1.8652e-02, 1.2399e-01, -1.7100e-02, 1.1930e-01, -1.0452e-02, 1.1600e-01, -1.0357e-02, 1.2602e-01, -1.0683e-02],\ [ -1.9801e-02, 1.2625e-01, 3.8324e-03, -5.2842e-03, 1.1370e-01, -1.7704e-02, 1.0593e-01, 
-2.8907e-02, 1.1444e-01, -8.9590e-03, 1.4070e-01, -2.0261e-02, 1.4462e-01, -3.1492e-02, 1.4103e-01, -2.0653e-02,1.2566e-01, -1.3012e-02 , 1.2544e-01 , -3.6174e-02 , 1.2619e-01],\ [1.5382e-01, -4.8728e-02, -9.4136e-03, 1.1802e-01, -1.7704e-02, 1.4898e-01, 7.3662e-04, 1.7611e-01, -1.9791e-02, 1.2716e-01, -8.4240e-02, 1.5492e-01, -9.3865e-02, 1.8241e-01, -8.5073e-02, 1.5592e-01,-4.7269e-02, 1.3714e-01, -4.6730e-02, 1.9389e-01, -4.8597e-02],\ [1.1794e-03, 1.0326e-01, -8.1547e-04, -1.9111e-03, 1.0593e-01, 7.3662e-04, 1.0757e-01, 3.1162e-03, 1.0577e-01, -1.1279e-03, 1.0018e-01, 1.2770e-03, 9.9347e-02, 3.6656e-03, 1.0011e-01, 1.3608e-03, 1.0338e-01, -2.6522e-04, 1.0343e-01, 4.6616e-03, 1.0327e-01],\ [1.8519e-01, -8.3151e-02, -1.6404e-02, 1.2287e-01, -2.8907e-02, 1.7611e-01, 3.1162e-03, 2.2425e-01, -3.2761e-02, 1.3873e-01, -1.4503e-01, 1.8713e-01, -1.6181e-01, 2.3506e-01, -1.4648e-01, 1.8883e-01, -8.0603e-02, 1.5610e-01, -7.9612e-02, 2.5507e-01, -8.2915e-02],\ [-2.2306e-02, 1.2910e-01, 4.3922e-03, -5.5982e-03, 1.1444e-01, -1.9791e-02, 1.0577e-01 , -3.2761e-02, 1.1570e-01, -9.8519e-03 , 1.4569e-01, -2.2823e-02, 1.5019e-01, -3.5667e-02, 1.4608e-01, -2.3275e-02,1.2843e-01, -1.4501e-02, 1.2814e-01, -4.1035e-02, 1.2905e-01],\ [1.2938e-01, -2.2023e-02, -3.9369e-03, 1.1425e-01, -8.9590e-03, 1.2716e-01, -1.1279e-03, 1.3873e-01, -9.8519e-03, 1.1840e-01, -3.6779e-02, 1.2986e-01, -4.0767e-02, 1.4138e-01, -3.7095e-02, 1.3024e-01,-2.1392e-02, 1.2239e-01, -2.1184e-02, 1.4618e-01, -2.1935e-02],\ [-9.5795e-02, 2.0970e-01, 2.0828e-02, -1.6955e-02, 1.4070e-01, -8.4240e-02, 1.0018e-01, -1.4503e-01, 1.4569e-01, -3.6779e-02, 2.8891e-01, -9.8239e-02, 3.1012e-01, -1.5891e-01, 2.9067e-01, -1.0039e-01,2.0647e-01, -5.8939e-02, 2.0514e-01, -1.8441e-01, 2.0949e-01],\ [1.6109e-01, -5.6684e-02, -1.1017e-02, 1.1913e-01, -2.0261e-02, 1.5492e-01, 1.2770e-03, 1.8713e-01, -2.2823e-02, 1.2986e-01, -9.8239e-02, 1.6268e-01, -1.0957e-01, 1.9456e-01, -9.9313e-02, 1.6352e-01, -5.4965e-02, 1.4151e-01, -5.4295e-02, 2.0797e-01, -5.6577e-02],\ [-1.0677e-01, 2.2172e-01, 2.3286e-02, -1.8652e-02, 1.4462e-01, -9.3865e-02, 9.9347e-02, -1.6181e-01, 1.5019e-01, -4.0767e-02, 3.1012e-01, -1.0957e-01, 3.3411e-01, -1.7735e-01, 3.1231e-01, -1.1192e-01, 2.1812e-01, -6.5564e-02, 2.1662e-01, -2.0584e-01, 2.2152e-01],\ [1.9236e-01, -9.1057e-02, -1.8010e-02, 1.2399e-01, -3.1492e-02, 1.8241e-01 , 3.6656e-03, 2.3506e-01, -3.5667e-02, 1.4138e-01, -1.5891e-01, 1.9456e-01, -1.7735e-01, 2.4774e-01, -1.6037e-01, 1.9637e-01,-8.8286e-02, 1.6044e-01, -8.7155e-02, 2.6931e-01, -9.0957e-02],\ [-9.6757e-02, 2.1073e-01, 2.1044e-02, -1.7100e-02, 1.4103e-01 , -8.5073e-02, 1.0011e-01, -1.4648e-01, 1.4608e-01, -3.7095e-02, 2.9067e-01, -9.9313e-02, 3.1231e-01, -1.6037e-01, 2.9291e-01, -1.0140e-01, 2.0748e-01, -5.9509e-02, 2.0612e-01, -1.8624e-01, 2.1054e-01],\ [1.6222e-01, -5.7846e-02, -1.1280e-02, 1.1930e-01, -2.0653e-02, 1.5592e-01, 1.3608e-03, 1.8883e-01, -2.3275e-02, 1.3024e-01, -1.0039e-01, 1.6352e-01, -1.1192e-01, 1.9637e-01, -1.0140e-01, 1.6475e-01, -5.6116e-02, 1.4217e-01, -5.5455e-02 , 2.1010e-01, -5.7733e-02],\ [-5.3573e-02, 1.6350e-01, 1.1375e-02, -1.0452e-02 , 1.2566e-01, -4.7269e-02, 1.0338e-01, -8.0603e-02, 1.2843e-01, -2.1392e-02, 2.0647e-01, -5.4965e-02, 2.1812e-01, -8.8286e-02, 2.0748e-01, -5.6116e-02,1.6184e-01, -3.3445e-02, 1.6102e-01, -1.0218e-01, 1.6339e-01],\ [ 1.4073e-01, -3.4450e-02, -6.4802e-03, 1.1600e-01, -1.3012e-02, 1.3714e-01, -2.6522e-04, 1.5610e-01, -1.4501e-02, 1.2239e-01, -5.8939e-02, 1.4151e-01, -6.5564e-02, 1.6044e-01, -5.9509e-02, 
1.4217e-01, -3.3445e-02, 1.2933e-01 , -3.3040e-02, 1.6835e-01, -3.4366e-02],\ [ -5.2965e-02, 1.6275e-01, 1.1235e-02, -1.0357e-02, 1.2544e-01, -4.6730e-02, 1.0343e-01, -7.9612e-02, 1.2814e-01, -2.1184e-02, 2.0514e-01, -5.4295e-02, 2.1662e-01, -8.7155e-02 , 2.0612e-01 , -5.5455e-02,1.6102e-01, -3.3040e-02, 1.6050e-01, -1.0087e-01, 1.6259e-01],\ [ 2.0545e-01 , -1.0540e-01, -2.0938e-02, 1.2602e-01, -3.6174e-02, 1.9389e-01, 4.6616e-03, 2.5507e-01, -4.1035e-02, 1.4618e-01, -1.8441e-01, 2.0797e-01, -2.0584e-01, 2.6931e-01, -1.8624e-01, 2.1010e-01,-1.0218e-01, 1.6835e-01, -1.0087e-01, 2.9488e-01, -1.0522e-01],\ [-5.5147e-02, 1.6522e-01, 1.1734e-02, -1.0683e-02, 1.2619e-01, -4.8597e-02, 1.0327e-01, -8.2915e-02, 1.2905e-01, -2.1935e-02, 2.0949e-01, -5.6577e-02, 2.2152e-01, -9.0957e-02, 2.1054e-01, -5.7733e-02, 1.6339e-01, -3.4366e-02, 1.6259e-01, -1.0522e-01, 1.6541e-01]]) # function output testmu, testsigma = prediction_step(predmu,presigma,u) # test try: print('====================================================================') np.testing.assert_almost_equal(testmu, predmuresult,decimal= decimal) print('\x1b[6;30;42m' + 'Test Success for mu in prediction_step' + '\x1b[0m') except AssertionError as e: print('====================================================================') print('\x1b[0;30;41m' + 'Test Failed for mu in prediction_step' + '\x1b[0m') print(e) try: np.testing.assert_almost_equal(testsigma, sigmaresult,decimal= decimal+1) print('====================================================================') print('\x1b[6;30;42m' + 'Test Success for sigma in prediction_step' + '\x1b[0m') except AssertionError as e: print('====================================================================') print('\x1b[0;30;41m' + 'Test Failed for sigma in prediction_step:' + '\x1b[0m') print(e) def test_correction(decimal = 2): # input values muinput = np.array([[5.04656],[4.52065],[1.55445],[1.80649],[0.83467],[-0.10844],\ [3.87679],[2.00993],[6.81895],[8.85230],[1.62635],[9.96294],[4.60379],\ [9.03066],[7.60910],[4.93532],[4.76744],[4.87090],[2.75450],[5.02812],[8.76238]]) sigmainput = np.array([[2.6108e-01, -5.6749e-02, -1.0594e-02, 1.1899e-01, -1.9931e-02, 1.5414e-01, 1.2071e-03, 1.8575e-01, -2.2452e-02, 1.2952e-01, -9.6479e-02, 1.6147e-01, -1.0754e-01, 1.9297e-01, -9.7449e-02, 1.6260e-01,-5.3947e-02, 1.4095e-01, -5.3326e-02, 2.0615e-01, -5.5533e-02],\ [-5.6749e-02, 2.6897e-01, 1.2191e-02, -1.0836e-02, 1.2655e-01, -4.9441e-02 , 1.0319e-01, -8.4403e-02, 1.2944e-01, -2.2330e-02, 2.1129e-01, -5.7534e-02, 2.2349e-01, -9.2427e-02, 2.1233e-01, -5.8696e-02, 1.6436e-01, -3.4946e-02, 1.6360e-01, -1.0699e-01, 1.6612e-01],\ [-1.0594e-02, 1.2191e-02, 1.3796e-02, -1.6596e-03, 3.8382e-03, -9.4275e-03, -8.1671e-04, -1.6430e-02, 4.3996e-03, -3.9440e-03, 2.0860e-02, -1.1036e-02, 2.3321e-02, -1.8038e-02, 2.1076e-02, -1.1296e-02,1.1394e-02, -6.4906e-03, 1.1250e-02, -2.0970e-02, 1.1753e-02],\ [1.1899e-01 , -1.0836e-02, -1.6596e-03, 1.1266e-01, -5.2917e-03, 1.1804e-01, -1.9095e-03, 1.2291e-01, -5.6064e-03, 1.1426e-01, -1.6993e-02, 1.1916e-01, -1.8694e-02, 1.2403e-01, -1.7138e-02, 1.1932e-01,-1.0472e-02, 1.1601e-01, -1.0376e-02, 1.2606e-01, -1.0704e-02],\ [-1.9931e-02, 1.2655e-01, 3.8382e-03, -5.2917e-03, 1.1371e-01, -1.7745e-02, 1.0593e-01, -2.8980e-02, 1.1446e-01, -8.9791e-03, 1.4079e-01, -2.0313e-02, 1.4472e-01, -3.1570e-02, 1.4112e-01, -2.0700e-02,1.2570e-01, -1.3041e-02 , 1.2548e-01, -3.6263e-02, 1.2624e-01],\ [1.5414e-01, -4.9441e-02, -9.4275e-03, 1.1804e-01, -1.7745e-02, 1.4908e-01, 7.4526e-04, 1.7629e-01, 
-1.9837e-02 , 1.2720e-01, -8.4449e-02, 1.5505e-01, -9.4097e-02, 1.8260e-01, -8.5284e-02, 1.5603e-01,-4.7379e-02, 1.3721e-01, -4.6834e-02, 1.9411e-01, -4.8714e-02],\ [1.2071e-03, 1.0319e-01, -8.1671e-04, -1.9095e-03, 1.0593e-01, 7.4526e-04, 1.0758e-01, 3.1316e-03, 1.0577e-01, -1.1237e-03, 1.0016e-01, 1.2881e-03, 9.9327e-02, 3.6820e-03, 1.0009e-01, 1.3707e-03,1.0337e-01, -2.5900e-04, 1.0342e-01, 4.6805e-03, 1.0326e-01],\ [1.8575e-01, -8.4403e-02, -1.6430e-02, 1.2291e-01, -2.8980e-02, 1.7629e-01, 3.1316e-03, 2.2457e-01, -3.2848e-02, 1.3882e-01, -1.4541e-01, 1.8735e-01, -1.6223e-01, 2.3540e-01, -1.4687e-01, 1.8903e-01,-8.0808e-02, 1.5622e-01, -7.9803e-02, 2.5546e-01, -8.3133e-02],\ [-2.2452e-02, 1.2944e-01, 4.3996e-03, -5.6064e-03, 1.1446e-01, -1.9837e-02, 1.0577e-01, -3.2848e-02, 1.1572e-01, -9.8738e-03, 1.4580e-01, -2.2882e-02, 1.5031e-01, -3.5759e-02, 1.4619e-01, -2.3327e-02,1.2849e-01, -1.4534e-02, 1.2819e-01, -4.1141e-02, 1.2911e-01],\ [1.2952e-01, -2.2330e-02, -3.9440e-03, 1.1426e-01, -8.9791e-03, 1.2720e-01, -1.1237e-03, 1.3882e-01, -9.8738e-03, 1.1842e-01, -3.6883e-02, 1.2992e-01, -4.0882e-02, 1.4147e-01, -3.7199e-02, 1.3030e-01,-2.1446e-02, 1.2242e-01, -2.1235e-02, 1.4628e-01, -2.1992e-02],\ [-9.6479e-02, 2.1129e-01, 2.0860e-02, -1.6993e-02, 1.4079e-01, -8.4449e-02 , 1.0016e-01, -1.4541e-01, 1.4580e-01 , -3.6883e-02, 2.8941e-01, -9.8511e-02, 3.1067e-01, -1.5933e-01, 2.9118e-01, -1.0063e-01,2.0674e-01, -5.9092e-02, 2.0537e-01, -1.8489e-01, 2.0978e-01],\ [1.6147e-01, -5.7534e-02, -1.1036e-02, 1.1916e-01, -2.0313e-02, 1.5505e-01, 1.2881e-03, 1.8735e-01, -2.2882e-02, 1.2992e-01, -9.8511e-02, 1.6285e-01, -1.0988e-01, 1.9480e-01, -9.9591e-02, 1.6367e-01,-5.5111e-02, 1.4160e-01, -5.4431e-02, 2.0825e-01, -5.6735e-02],\ [-1.0754e-01, 2.2349e-01, 2.3321e-02, -1.8694e-02, 1.4472e-01, -9.4097e-02, 9.9327e-02, -1.6223e-01, 1.5031e-01, -4.0882e-02, 3.1067e-01, -1.0988e-01, 3.3472e-01, -1.7781e-01, 3.1287e-01, -1.1218e-01,2.1841e-01, -6.5735e-02, 2.1688e-01, -2.0638e-01, 2.2183e-01],\ [1.9297e-01, -9.2427e-02, -1.8038e-02, 1.2403e-01, -3.1570e-02, 1.8260e-01, 3.6820e-03, 2.3540e-01, -3.5759e-02, 1.4147e-01, -1.5933e-01, 1.9480e-01, -1.7781e-01, 2.4811e-01, -1.6079e-01, 1.9658e-01,-8.8508e-02, 1.6057e-01, -8.7360e-02, 2.6973e-01, -9.1199e-02],\ [-9.7449e-02, 2.1233e-01, 2.1076e-02, -1.7138e-02, 1.4112e-01, -8.5284e-02 , 1.0009e-01, -1.4687e-01, 1.4619e-01, -3.7199e-02, 2.9118e-01, -9.9591e-02, 3.1287e-01, -1.6079e-01, 2.9341e-01, -1.0164e-01, 2.0774e-01, -5.9664e-02 , 2.0636e-01, -1.8673e-01, 2.1083e-01],\ [1.6260e-01, -5.8696e-02, -1.1296e-02, 1.1932e-01, -2.0700e-02, 1.5603e-01, 1.3707e-03, 1.8903e-01, -2.3327e-02, 1.3030e-01, -1.0063e-01, 1.6367e-01, -1.1218e-01, 1.9658e-01, -1.0164e-01, 1.6488e-01,-5.6242e-02, 1.4225e-01, -5.5574e-02, 2.1035e-01, -5.7870e-02],\ [-5.3947e-02, 1.6436e-01, 1.1394e-02, -1.0472e-02, 1.2570e-01, -4.7379e-02, 1.0337e-01, -8.0808e-02, 1.2849e-01, -2.1446e-02, 2.0674e-01, -5.5111e-02, 2.1841e-01, -8.8508e-02, 2.0774e-01, -5.6242e-02, 1.6198e-01, -3.3526e-02, 1.6115e-01, -1.0244e-01, 1.6354e-01],\ [1.4095e-01, -3.4946e-02, -6.4906e-03, 1.1601e-01, -1.3041e-02, 1.3721e-01, -2.5900e-04, 1.5622e-01, -1.4534e-02, 1.2242e-01, -5.9092e-02, 1.4160e-01, -6.5735e-02, 1.6057e-01, -5.9664e-02, 1.4225e-01, -3.3526e-02, 1.2938e-01, -3.3115e-02, 1.6851e-01, -3.4452e-02],\ [-5.3326e-02, 1.6360e-01, 1.1250e-02, -1.0376e-02, 1.2548e-01, -4.6834e-02 , 1.0342e-01, -7.9803e-02, 1.2819e-01, -2.1235e-02, 2.0537e-01, -5.4431e-02, 2.1688e-01, -8.7360e-02, 2.0636e-01, -5.5574e-02, 1.6115e-01, 
-3.3115e-02, 1.6062e-01, -1.0111e-01, 1.6273e-01],\ [2.0615e-01, -1.0699e-01, -2.0970e-02, 1.2606e-01, -3.6263e-02, 1.9411e-01, 4.6805e-03, 2.5546e-01, -4.1141e-02, 1.4628e-01, -1.8489e-01, 2.0825e-01, -2.0638e-01, 2.6973e-01, -1.8673e-01, 2.1035e-01, -1.0244e-01, 1.6851e-01, -1.0111e-01, 2.9537e-01, -1.0550e-01],\ [-5.5533e-02, 1.6612e-01, 1.1753e-02, -1.0704e-02, 1.2624e-01, -4.8714e-02, 1.0326e-01, -8.3133e-02, 1.2911e-01, -2.1992e-02, 2.0978e-01, -5.6735e-02, 2.2183e-01, -9.1199e-02, 2.1083e-01, -5.7870e-02, 1.6354e-01, -3.4452e-02, 1.6273e-01, -1.0550e-01, 1.6559e-01]]) observedLandmarks = np.ones((1,9)) # desired values corrmu = np.array([[5.01782],[4.62335],[1.54298],[1.80461],[0.83901],[-0.11898],[3.87586],\ [1.99148],[6.82428],[8.84647],[1.65308],[9.94389],[4.63452], [9.00673],\ [7.63752],[4.92296],[4.78357],[4.86310],[2.76855],[5.00123],[8.78071]]) corrsigma = np.array([[1.6071e-01, -5.5777e-02, -1.0523e-02, 1.1893e-01, -1.9801e-02, 1.5382e-01, 1.1794e-03, 1.8519e-01, -2.2306e-02, 1.2938e-01, -9.5795e-02, 1.6109e-01 , -1.0677e-01, 1.9236e-01, -9.6757e-02, 1.6222e-01, -5.3573e-02, 1.4073e-01, -5.2965e-02, 2.0545e-01, -5.5147e-02],\ [-5.5777e-02, 1.6719e-01, 1.1924e-02, -1.0710e-02, 1.2625e-01, -4.8728e-02, 1.0326e-01, -8.3151e-02, 1.2910e-01, -2.2023e-02, 2.0970e-01, -5.6684e-02, 2.2172e-01, -9.1057e-02, 2.1073e-01, -5.7846e-02, 1.6350e-01, -3.4450e-02, 1.6275e-01, -1.0540e-01, 1.6522e-01],\ [-1.0523e-02, 1.1924e-02, 3.7868e-03, -1.6571e-03, 3.8324e-03, -9.4136e-03, -8.1547e-04 , -1.6404e-02, 4.3922e-03, -3.9369e-03, 2.0828e-02 , -1.1017e-02, 2.3286e-02, -1.8010e-02, 2.1044e-02, -1.1280e-02,1.1375e-02, -6.4802e-03, 1.1235e-02, -2.0938e-02, 1.1734e-02],\ [1.1893e-01, -1.0710e-02, -1.6571e-03, 1.1266e-01, -5.2842e-03, 1.1802e-01, -1.9111e-03, 1.2287e-01, -5.5982e-03, 1.1425e-01, -1.6955e-02, 1.1913e-01, -1.8652e-02, 1.2399e-01, -1.7100e-02, 1.1930e-01,-1.0452e-02, 1.1600e-01, -1.0357e-02, 1.2602e-01, -1.0683e-02],\ [-1.9801e-02, 1.2625e-01, 3.8324e-03, -5.2842e-03 , 1.1370e-01, -1.7704e-02, 1.0593e-01, -2.8907e-02, 1.1444e-01, -8.9590e-03, 1.4070e-01 , -2.0261e-02, 1.4462e-01, -3.1492e-02, 1.4103e-01, -2.0653e-02,1.2566e-01, -1.3012e-02, 1.2544e-01, -3.6174e-02, 1.2619e-01],\ [1.5382e-01, -4.8728e-02, -9.4136e-03, 1.1802e-01, -1.7704e-02, 1.4898e-01, 7.3662e-04, 1.7611e-01, -1.9791e-02, 1.2716e-01, -8.4240e-02, 1.5492e-01, -9.3865e-02, 1.8241e-01, -8.5073e-02, 1.5592e-01,-4.7269e-02, 1.3714e-01, -4.6730e-02, 1.9389e-01, -4.8597e-02],\ [1.1794e-03, 1.0326e-01, -8.1547e-04, -1.9111e-03, 1.0593e-01, 7.3662e-04, 1.0757e-01, 3.1162e-03, 1.0577e-01, -1.1279e-03, 1.0018e-01, 1.2770e-03, 9.9347e-02, 3.6656e-03, 1.0011e-01, 1.3608e-03,1.0338e-01, -2.6522e-04, 1.0343e-01, 4.6616e-03, 1.0327e-01],\ [1.8519e-01, -8.3151e-02, -1.6404e-02, 1.2287e-01, -2.8907e-02, 1.7611e-01, 3.1162e-03, 2.2425e-01, -3.2761e-02, 1.3873e-01, -1.4503e-01, 1.8713e-01, -1.6181e-01, 2.3506e-01, -1.4648e-01, 1.8883e-01,-8.0603e-02, 1.5610e-01, -7.9612e-02, 2.5507e-01, -8.2915e-02],\ [-2.2306e-02, 1.2910e-01, 4.3922e-03, -5.5982e-03, 1.1444e-01, -1.9791e-02, 1.0577e-01, -3.2761e-02, 1.1570e-01, -9.8519e-03, 1.4569e-01, -2.2823e-02, 1.5019e-01, -3.5667e-02, 1.4608e-01, -2.3275e-02,1.2843e-01, -1.4501e-02, 1.2814e-01, -4.1035e-02, 1.2905e-01],\ [1.2938e-01, -2.2023e-02, -3.9369e-03, 1.1425e-01, -8.9590e-03, 1.2716e-01, -1.1279e-03, 1.3873e-01, -9.8519e-03, 1.1840e-01, -3.6779e-02, 1.2986e-01, -4.0767e-02, 1.4138e-01, -3.7095e-02, 1.3024e-01, -2.1392e-02, 1.2239e-01, -2.1184e-02, 1.4618e-01, -2.1935e-02],\ [-9.5795e-02, 
2.0970e-01, 2.0828e-02, -1.6955e-02, 1.4070e-01, -8.4240e-02, 1.0018e-01, -1.4503e-01, 1.4569e-01, -3.6779e-02, 2.8891e-01, -9.8239e-02, 3.1012e-01, -1.5891e-01, 2.9067e-01, -1.0039e-01,2.0647e-01, -5.8939e-02, 2.0514e-01, -1.8441e-01, 2.0949e-01],\ [1.6109e-01, -5.6684e-02, -1.1017e-02, 1.1913e-01, -2.0261e-02, 1.5492e-01, 1.2770e-03, 1.8713e-01, -2.2823e-02, 1.2986e-01, -9.8239e-02, 1.6268e-01, -1.0957e-01, 1.9456e-01, -9.9313e-02, 1.6352e-01,-5.4965e-02, 1.4151e-01, -5.4295e-02, 2.0797e-01, -5.6577e-02],\ [-1.0677e-01, 2.2172e-01, 2.3286e-02, -1.8652e-02, 1.4462e-01, -9.3865e-02, 9.9347e-02, -1.6181e-01, 1.5019e-01, -4.0767e-02, 3.1012e-01, -1.0957e-01, 3.3411e-01, -1.7735e-01, 3.1231e-01, -1.1192e-01,2.1812e-01, -6.5564e-02, 2.1662e-01, -2.0584e-01, 2.2152e-01],\ [1.9236e-01, -9.1057e-02, -1.8010e-02, 1.2399e-01, -3.1492e-02, 1.8241e-01, 3.6656e-03, 2.3506e-01, -3.5667e-02, 1.4138e-01, -1.5891e-01, 1.9456e-01, -1.7735e-01, 2.4774e-01, -1.6037e-01, 1.9637e-01,-8.8286e-02, 1.6044e-01, -8.7155e-02, 2.6931e-01, -9.0957e-02],\ [-9.6757e-02, 2.1073e-01, 2.1044e-02, -1.7100e-02, 1.4103e-01, -8.5073e-02, 1.0011e-01, -1.4648e-01, 1.4608e-01 , -3.7095e-02, 2.9067e-01, -9.9313e-02, 3.1231e-01, -1.6037e-01, 2.9291e-01, -1.0140e-01,2.0748e-01, -5.9509e-02, 2.0612e-01, -1.8624e-01, 2.1054e-01],\ [1.6222e-01, -5.7846e-02, -1.1280e-02, 1.1930e-01, -2.0653e-02, 1.5592e-01, 1.3608e-03, 1.8883e-01, -2.3275e-02, 1.3024e-01, -1.0039e-01, 1.6352e-01, -1.1192e-01, 1.9637e-01, -1.0140e-01, 1.6475e-01,-5.6116e-02, 1.4217e-01 , -5.5455e-02, 2.1010e-01, -5.7733e-02],\ [-5.3573e-02, 1.6350e-01 , 1.1375e-02, -1.0452e-02, 1.2566e-01, -4.7269e-02, 1.0338e-01, -8.0603e-02, 1.2843e-01, -2.1392e-02, 2.0647e-01, -5.4965e-02 , 2.1812e-01, -8.8286e-02, 2.0748e-01, -5.6116e-02,1.6184e-01, -3.3445e-02, 1.6102e-01, -1.0218e-01, 1.6339e-01],\ [1.4073e-01, -3.4450e-02, -6.4802e-03, 1.1600e-01, -1.3012e-02, 1.3714e-01, -2.6522e-04, 1.5610e-01, -1.4501e-02, 1.2239e-01, -5.8939e-02, 1.4151e-01, -6.5564e-02, 1.6044e-01, -5.9509e-02, 1.4217e-01,-3.3445e-02, 1.2933e-01, -3.3040e-02, 1.6835e-01, -3.4366e-02],\ [-5.2965e-02, 1.6275e-01, 1.1235e-02, -1.0357e-02, 1.2544e-01, -4.6730e-02, 1.0343e-01, -7.9612e-02, 1.2814e-01, -2.1184e-02, 2.0514e-01, -5.4295e-02, 2.1662e-01, -8.7155e-02, 2.0612e-01, -5.5455e-02, 1.6102e-01, -3.3040e-02, 1.6050e-01, -1.0087e-01, 1.6259e-01],\ [2.0545e-01, -1.0540e-01, -2.0938e-02, 1.2602e-01, -3.6174e-02, 1.9389e-01, 4.6616e-03, 2.5507e-01, -4.1035e-02, 1.4618e-01, -1.8441e-01, 2.0797e-01, -2.0584e-01, 2.6931e-01, -1.8624e-01, 2.1010e-01,-1.0218e-01, 1.6835e-01, -1.0087e-01, 2.9488e-01, -1.0522e-01],\ [-5.5147e-02, 1.6522e-01, 1.1734e-02 , -1.0683e-02, 1.2619e-01, -4.8597e-02, 1.0327e-01, -8.2915e-02 , 1.2905e-01, -2.1935e-02, 2.0949e-01, -5.6577e-02, 2.2152e-01, -9.0957e-02, 2.1054e-01, -5.7733e-02,1.6339e-01, -3.4366e-02 , 1.6259e-01, -1.0522e-01, 1.6541e-01]]) # test values testmu, testsigma, observedLandmarks = correction_step(mu,sigma,data.loc[(data['timestep'] == 330) & (data['sensor'] == 'SENSOR')],observedLandmarks) # test try: print('====================================================================') np.testing.assert_almost_equal(testmu, corrmu,decimal= decimal) print('\x1b[6;30;42m' + 'Test Success for mu in correction_step' + '\x1b[0m') except AssertionError as e: print('====================================================================') print('\x1b[0;30;41m' + 'Test Failed for mu in correction_step' + '\x1b[0m') print(e) try: np.testing.assert_almost_equal(testsigma, corrsigma,decimal= 
decimal) print('====================================================================') print('\x1b[6;30;42m' + 'Test Success for sigma in correction_step' + '\x1b[0m') except AssertionError as e: print('====================================================================') print('\x1b[0;30;41m' + 'Test Failed for sigma in correction_step:' + '\x1b[0m') print(e) ``` ```python test_total(decimal = 1) ``` ```python test_prediction(decimal = 1) ``` ==================================================================== Test Success for mu in prediction_step ==================================================================== Test Failed for sigma in prediction_step: Arrays are not almost equal to 2 decimals Mismatched elements: 3 / 441 (0.68%) Max absolute difference: 0.20181353 Max relative difference: 5.28393366 x: array([[ 3.63e-01, -5.80e-02, -1.20e-02, 1.19e-01, -2.03e-02, 1.55e-01, 1.29e-03, 1.87e-01, -2.29e-02, 1.30e-01, -9.86e-02, 1.63e-01, -1.10e-01, 1.95e-01, -9.96e-02, 1.64e-01, -5.51e-02, 1.42e-01,... y: array([[ 2.61e-01, -5.67e-02, -1.06e-02, 1.19e-01, -1.99e-02, 1.54e-01, 1.21e-03, 1.86e-01, -2.25e-02, 1.30e-01, -9.65e-02, 1.61e-01, -1.08e-01, 1.93e-01, -9.74e-02, 1.63e-01, -5.36e-02, 1.41e-01,... ```python test_correction(decimal = 1) ``` ==================================================================== ==================================================================== Test Failed for mu in correction_step Arrays are not almost equal to 1 decimals Mismatched elements: 17 / 21 (81%) Max absolute difference: 14.75384379 Max relative difference: 2.71482664 x: array([[ 0.1], [ 0. ], [ 0.1],... y: array([[ 5. ], [ 4.6], [ 1.5],... ==================================================================== Test Failed for sigma in correction_step: Arrays are not almost equal to 1 decimals Mismatched elements: 123 / 441 (27.9%) Max absolute difference: 999.89243 Max relative difference: 9295.27219485 x: array([[0.0e+00, 0.0e+00, 0.0e+00, 0.0e+00, 0.0e+00, 0.0e+00, 0.0e+00, 0.0e+00, 0.0e+00, 0.0e+00, 0.0e+00, 0.0e+00, 0.0e+00, 0.0e+00, 0.0e+00, 0.0e+00, 0.0e+00, 0.0e+00, 0.0e+00, 0.0e+00, 0.0e+00],... y: array([[ 1.6e-01, -5.6e-02, -1.1e-02, 1.2e-01, -2.0e-02, 1.5e-01, 1.2e-03, 1.9e-01, -2.2e-02, 1.3e-01, -9.6e-02, 1.6e-01, -1.1e-01, 1.9e-01, -9.7e-02, 1.6e-01, -5.4e-02, 1.4e-01,... ```python mu ``` array([[ 0.09956596], [ 0.01005956], [ 0.10086379], [ 1.78615901], [ 0.87720484], [-0.09141201], [ 3.85900198], [ 0. ], [ 0. ], [ 0. ], [ 0. ], [ 0. ], [ 0. ], [ 0. ], [ 0. ], [ 0. ], [ 0. ], [ 0. ], [ 0. ], [ 0. ], [ 0. 
]]) ```python sigma ``` array([[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 1000., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 1000., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 1000., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 1000., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 1000., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 1000., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 1000., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1000., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1000., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1000., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1000., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1000., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1000., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1000., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1000., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1000., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1000., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1000.]]) ```python ```
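The `mu` and `sigma` shown above are the belief before the correction test: the state vector stacks the robot pose (3 entries) with the 2-D positions of the 9 landmarks (18 entries). As a reading aid, here is a minimal sketch of how such an initial belief is typically constructed — this is my own illustration, and the variable names are assumptions, not code from this notebook:

```python
import numpy as np

n_landmarks = 9                      # matches observedLandmarks = np.ones((1, 9)) above
state_dim = 3 + 2 * n_landmarks      # robot pose (x, y, theta) plus (x, y) per landmark

# The robot pose starts perfectly known (zero covariance), while every landmark
# position is unknown, encoded here by a large variance (1000) on its diagonal.
mu_init = np.zeros((state_dim, 1))
sigma_init = np.zeros((state_dim, state_dim))
sigma_init[3:, 3:] = 1000.0 * np.eye(2 * n_landmarks)
```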
da879c29f1c5a512410969b9492dc8a3aa7ea6b5
522,842
ipynb
Jupyter Notebook
EFK_SLAM.ipynb
olayasturias/SLAM-course
a999ccb9e8154d33287542c2624a2fae826abe31
[ "MIT" ]
3
2021-05-17T07:52:04.000Z
2022-03-09T10:27:02.000Z
EFK_SLAM.ipynb
olayasturias/SLAM-course
a999ccb9e8154d33287542c2624a2fae826abe31
[ "MIT" ]
null
null
null
EFK_SLAM.ipynb
olayasturias/SLAM-course
a999ccb9e8154d33287542c2624a2fae826abe31
[ "MIT" ]
null
null
null
144.431492
112,268
0.847069
true
29,757
Qwen/Qwen-72B
1. YES 2. YES
0.831143
0.749087
0.622599
__label__eng_Latn
0.148626
0.284836
## 2-3 量子フーリエ変換 この節では、量子アルゴリズムの中でも最も重要なアルゴリズムの一つである量子フーリエ変換について学ぶ。 量子フーリエ変換はその名の通りフーリエ変換を行う量子アルゴリズムであり、様々な量子アルゴリズムのサブルーチンとしても用いられることが多い。 (参照:Nielsen-Chuang 5.1 `The quantum Fourier transform`) ※なお、最後のコラムでも多少述べるが、回路が少し複雑である・入力状態を用意することが難しいといった理由から、いわゆるNISQデバイスでの量子フーリエ変換の実行は難しいと考えられている。 ### 定義 まず、$2^n$成分の配列 $\{x_j\}$ に対して$(j=1,\cdots,2^n)$、その[離散フーリエ変換](https://ja.wikipedia.org/wiki/離散フーリエ変換)である配列$\{ y_k \}$を $$ y_k = \frac{1}{\sqrt{2^n}} \sum_{j=1}^{2^n} x_j e^{i\frac{2\pi kj}{2^n}} \tag{1} $$ で定義する$(k=1, \cdots 2^n)$。配列 $\{x_j\}$ は$\sum_{j=1}^{2^n} |x_j|^2 = 1$ と規格化されているものとする。 量子フーリエ変換アルゴリズムは、入力の量子状態 $$ |x\rangle := \sum_{j=1}^{2^n} x_j |j\rangle $$ を、 $$ |y \rangle := \sum_{k=1}^{2^n} y_k |k\rangle \tag{2} $$ となるように変換する量子アルゴリズムである。ここで、$|i \rangle$は、整数$i$の二進数での表示$i_1 \cdots i_n$ ($i_m = 0,1$)に対応する量子状態$|i_1 \cdots i_n \rangle$の略記である。(例えば、$|2 \rangle = |0\cdots0 10 \rangle, |7 \rangle = |0\cdots0111 \rangle$となる) ここで、式(1)を(2)に代入してみると、 $$ |y \rangle = \frac{1}{\sqrt{2^n}} \sum_{k=1}^{2^n} \sum_{j=1}^{2^n} x_j e^{i\frac{2\pi kj}{2^n}} |k\rangle = \sum_{j=1}^{2^n} x_j \left( \frac{1}{\sqrt{2^n}} \sum_{k=1}^{2^n} e^{i\frac{2\pi kj}{2^n}} |k\rangle \right) $$ となる。よって、量子フーリエ変換では、 $$ |j\rangle \to \frac{1}{\sqrt{2^n}} \sum_{k=1}^{2^n} e^{i\frac{2\pi kj}{2^n}} |k\rangle $$ を行う量子回路(変換)$U$を見つければ良いことになる。(余裕のある読者は、これがユニタリ変換であることを実際に計算して確かめてみよう) この式はさらに式変形できて(やや複雑なので最後の結果だけ見てもよい) $$ \begin{eqnarray} \sum_{k=1}^{2^n} e^{i\frac{2\pi kj}{2^n}} |k\rangle &=& \sum_{k_1=0}^1 \cdots \sum_{k_n=0}^1 e^{i\frac{2\pi (k_1 2^{n-1} + \cdots k_n 2^0 )\cdot j}{2^n}} |k_1 \cdots k_n\rangle \:\:\:\: \text{(kの和を2進数表示で書き直した)} \\ &=& \sum_{k_1=0}^1 \cdots \sum_{k_n=0}^1 e^{i 2\pi j (k_1 2^{-1} + \cdots k_n 2^{-n})} |k_1 \cdots k_n\rangle \\ &=& \left( \sum_{k_1=0}^1 e^{i 2\pi j k_1 2^{-1}} |k_1 \rangle \right) \otimes \cdots \otimes \left( \sum_{k_n=0}^1 e^{i 2\pi j k_n 2^{-n}} |k_n \rangle \right) \:\:\:\: \text{("因数分解"をして、全体をテンソル積で書き直した)} \\ &=& \left( |0\rangle + e^{i 2\pi 0.j_n} |1 \rangle \right) \otimes \left( |0\rangle + e^{i 2\pi 0.j_{n-1}j_n} |1 \rangle \right) \otimes \cdots \otimes \left( |0\rangle + e^{i 2\pi 0.j_1j_2\cdots j_n} |1 \rangle \right) \:\:\:\: \text{(カッコの中の和を計算した)} \end{eqnarray} $$ となる。ここで、 $$ 0.j_l\cdots j_n = \frac{j_l}{2} + \frac{j_{l-1}}{2^2} + \cdots + \frac{j_n}{2^{n-l+1}} $$ は二進小数であり、$e^{i 2\pi j/2^{-l} } = e^{i 2\pi j_1 \cdots j_l . j_{l-1}\cdots j_n } = e^{i 2\pi 0. j_{l-1}\cdots j_n }$となることを用いた。($e^{i2\pi}=1$なので、整数部分は関係ない) まとめると、量子フーリエ変換では、 $$ |j\rangle = |j_1 \cdots j_n \rangle \to \frac{ \left( |0\rangle + e^{i 2\pi 0.j_n} |1 \rangle \right) \otimes \left( |0\rangle + e^{i 2\pi 0.j_{n-1}j_n} |1 \rangle \right) \otimes \cdots \otimes \left( |0\rangle + e^{i 2\pi 0.j_1j_2\cdots j_n} |1 \rangle \right) }{\sqrt{2^n}} \tag{*} $$ という変換ができればよい。 ### 回路の構成 それでは、量子フーリエ変換を実行する回路を実際にどのように構成するかを見ていこう。 そのために、次のアダマールゲート$H$についての等式(計算すると合っていることが分かる) $$ |m \rangle = \frac{|0\rangle + e^{i 2\pi 0.m}|1\rangle }{\sqrt{2}} \:\:\: (m=0,1) $$ と、角度 $2\pi/2^l$ の一般位相ゲート $$ R_l = \begin{pmatrix} 1 & 0\\ 0 & e^{i \frac{2\pi}{2^l} } \end{pmatrix} $$ を多用する。 1. 
まず、状態$\left( |0\rangle + e^{i 2\pi 0.j_1j_2\cdots j_n} |1\rangle \right)$の部分をつくる。1番目の量子ビット$|j_1\rangle$にアダマールゲートをかけると $$ |j_1 \cdots j_n \rangle \to \frac{1}{\sqrt{2}} \left( |0\rangle + e^{i2\pi 0.j_1} |1\rangle \right) |j_2 \cdots j_n \rangle $$ となるが、ここで、2番目のビット$|j_2\rangle$を制御ビットとする一般位相ゲート$R_2$を1番目の量子ビットにかけると、$j_2=0$の時は何もせず、$j_2=1$の時のみ1番目の量子ビットの$|1\rangle$部分に位相 $2\pi/2^2 = 0.01$(二進小数)がつくから、 $$ \frac{1}{\sqrt{2}} \left( |0\rangle + e^{i2\pi 0.j_1} |1\rangle \right) |j_2 \cdots j_n \rangle \to \frac{1}{\sqrt{2}} \left( |0\rangle + e^{i2\pi 0.j_1j_2} |1\rangle \right) |j_2 \cdots j_n \rangle $$ となる。以下、$l$番目の量子ビット$|j_l\rangle$を制御ビットとする一般位相ゲート$R_l$をかければ($l=3,\cdots n$)、最終的に $$ \frac{1}{\sqrt{2}} \left( |0\rangle + e^{i2\pi 0.j_1\cdots j_n} \right) |j_2 \cdots j_n \rangle $$ が得られる。 2. 次に、状態$\left( |0\rangle + e^{i2\pi 0.j_{n-1} j_n} |1\rangle\right)$の部分をつくる。先ほどと同様に、2番目のビット$|j_2\rangle$にアダマールゲートをかければ $$ \frac{1}{\sqrt{2}} \left( |0\rangle + e^{i2\pi 0.j_1\cdots j_n} \right) \frac{1}{\sqrt{2}} \left( |0\rangle + e^{i2\pi 0.j_2} \right) |j_3 \cdots j_n \rangle $$ ができる。再び、3番目の量子ビットを制御ビット$|j_3\rangle$とする位相ゲート$R_2$をかければ $$ \frac{1}{\sqrt{2}} \left( |0\rangle + e^{i2\pi 0.j_1\cdots j_n} \right) \frac{1}{\sqrt{2}} \left( |0\rangle + e^{i2\pi 0.j_2j_3} \right) |j_3 \cdots j_n \rangle $$ となり、これを繰り返して $$ \frac{1}{\sqrt{2}} \left( |0\rangle + e^{i2\pi 0.j_1\cdots j_n} \right) \frac{1}{\sqrt{2}} \left( |0\rangle + e^{i2\pi 0.j_2\cdots j_n} \right) |j_3 \cdots j_n \rangle $$ を得る。 3. 1,2と同様の手順で、$l$番目の量子ビット$|j_l\rangle$にアダマールゲート・制御位相ゲート$R_l$をかけていく($l=3,\cdots,n$)。すると最終的に $$ |j_1 \cdots j_n \rangle \to \left( \frac{|0\rangle + e^{i 2\pi 0.j_1\cdots j_n} |1 \rangle}{\sqrt{2}} \right) \otimes \cdots \otimes \left( \frac{|0\rangle + e^{i 2\pi 0.j_{n-1}j_n} |1 \rangle}{\sqrt{2}} \right) \otimes \left( \frac{|0\rangle + e^{i 2\pi 0.j_n} |1 \rangle}{\sqrt{2}} \right) $$ が得られるので、最後にビットの順番をSWAPゲートで反転させてあげれば、量子フーリエ変換を実行する回路が構成できたことになる(式($*$)とはビットの順番が逆になっていることに注意)。 SWAPを除いた部分を回路図で書くと以下のようである。 ### SymPyを用いた実装 量子フーリエ変換への理解を深めるために、SymPyを用いて$n=3$の場合の回路を実装してみよう。 ```python from sympy import * from sympy.physics.quantum import * from sympy.physics.quantum.qubit import Qubit,QubitBra init_printing() # ベクトルや行列を綺麗に表示するため from sympy.physics.quantum.gate import X,Y,Z,H,S,T,CNOT,SWAP,CPHASE,CGateS ``` ```python # Google Colaboratory上でのみ実行してください from IPython.display import HTML def setup_mathjax(): display(HTML(''' ''')) get_ipython().events.register('pre_run_cell', setup_mathjax) ``` まず、フーリエ変換される入力$|x\rangle$として、 $$ |x\rangle = \sum_{j=1}^8 \frac{1}{\sqrt{8}} |i\rangle $$ という全ての状態の重ね合わせ状態を考える($x_1 = \cdots = x_8 = 1/\sqrt{8}$)。 ```python input = 1/sqrt(8) *( Qubit("000")+Qubit("001")+Qubit("010")+Qubit("011")+Qubit("100")+Qubit("101")+Qubit("110")+Qubit("111")) input ``` この状態に対応する配列をnumpyでフーリエ変換すると ```python import numpy as np input_np_array = 1/np.sqrt(8)*np.ones(8) print( input_np_array ) ## 入力 print( np.fft.ifft(input_np_array) * np.sqrt(8) ) ## 出力. 
ここでのフーリエ変換の定義とnumpyのifftの定義を合わせるため、sqrt(2^3)をかける ``` [0.35355339 0.35355339 0.35355339 0.35355339 0.35355339 0.35355339 0.35355339 0.35355339] [1.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j] となり、フーリエ変換すると $y_1=1,y_2=\cdots=y_8=0$ という簡単な配列になることが分かる。これを量子フーリエ変換で確かめてみよう。 まず、$R_1, R_2, R_3$ゲートはそれぞれ$Z, S, T$ゲートに等しいことに注意する($e^{i\pi}=-1, e^{i\pi/2}=i$)。 ```python represent(Z(0),nqubits=1), represent(S(0),nqubits=1), represent(T(0),nqubits=1) ``` $\displaystyle \left( \left[\begin{matrix}1 & 0\\0 & -1\end{matrix}\right], \ \left[\begin{matrix}1 & 0\\0 & i\end{matrix}\right], \ \left[\begin{matrix}1 & 0\\0 & e^{\frac{i \pi}{4}}\end{matrix}\right]\right)$ 量子フーリエ変換(Quantum Fourier TransformなのでQFTと略す)を実行回路を構成していく。 最初に、1番目(SymPyは右から0,1,2とビットを数えるので、SymPyでは2番目)の量子ビットにアダマール演算子をかけ、2番目・3番目のビットを制御ビットとする$R_2, R_3$ゲートをかける。 ```python QFT_gate = H(2) QFT_gate = CGateS(1, S(2)) * QFT_gate QFT_gate = CGateS(0, T(2)) * QFT_gate ``` 2番目(SymPyでは1番目)の量子ビットにもアダマールゲートと制御$R_2$演算を施す。 ```python QFT_gate = H(1) * QFT_gate QFT_gate = CGateS(0, S(1)) * QFT_gate ``` 3番目(SymPyでは0番目)の量子ビットにはアダマールゲートのみをかければ良い。 ```python QFT_gate = H(0) * QFT_gate ``` 最後に、ビットの順番を合わせるためにSWAPゲートをかける。 ```python QFT_gate = SWAP(0, 2) * QFT_gate ``` これで$n=3$の時の量子フーリエ変換の回路を構成できた。回路自体はやや複雑である。 ```python QFT_gate ``` 入力ベクトル$|x\rangle$ にこの回路を作用させると、以下のようになり、正しくフーリエ変換された状態が出力されていることが分かる。($y_1=1,y_2=\cdots=y_8=0$) ```python simplify( qapply( QFT_gate * input) ) ``` 読者は是非、入力を様々に変えてこの回路を実行し、フーリエ変換が正しく行われていることを確認してみてほしい。 --- ### コラム:計算量について 「量子コンピュータは計算を高速に行える」とは、どういうことだろうか。本節で学んだ量子フーリエ変換を例にとって考えてみる。 量子フーリエ変換を行うために必要なゲート操作の回数は、1番目の量子ビットに$n$回、2番目の量子ビットに$n-1$回、...、$n$番目の量子ビットに1回で合計$n(n-1)/2$回、そして最後のSWAP操作が約$n/2$回であるから、全て合わせると$\mathcal{O}(n^2)$回である($\mathcal{O}$記法について詳しく知りたい人は、下記セクションを参照)。 一方、古典コンピュータでフーリエ変換を行う[高速フーリエ変換](https://ja.wikipedia.org/wiki/高速フーリエ変換)は、同じ計算を行うのに$\mathcal{O}(n2^n)$の計算量を必要とする。この意味で、量子フーリエ変換は、古典コンピュータで行う高速フーリエ変換に比べて「高速」と言える。 これは一見喜ばしいことに見えるが、落とし穴がある。フーリエ変換した結果$\{y_k\}$は量子フーリエ変換後の状態$|y\rangle$の確率振幅として埋め込まれているが、この振幅を素直に読み出そうとすると、結局は**指数関数的な回数の観測を繰り返さなくてはならない**。さらに、そもそも入力$|x\rangle$を用意する方法も簡単ではない(素直にやると、やはり指数関数的な時間がかかってしまう)。 このように、量子コンピュータや量子アルゴリズムを「実用」するのは簡単ではなく、さまざまな工夫・技術発展がまだまだ求められている。 一体どのような問題で量子コンピュータが高速だと思われているのか、理論的にはどのように扱われているのかなど、詳しく学びたい方はこのQmediaの記事[「量子計算機が古典計算機より優れている」とはどういうことか](https://www.qmedia.jp/computational-complexity-and-quantum-computer/)(竹嵜智之)を参照されたい。 #### オーダー記法$\mathcal{O}$についての註 そもそも、アルゴリズムの性能はどのように定量評価できるのだろうか。ここでは、アルゴリズムの実行に必要な資源、主に時間をその基準として考える。とくに問題のサイズを$n$としたとき、計算ステップ数(時間)や消費メモリなど、必要な計算資源が$n$の関数としてどう振る舞うかを考える。(問題のサイズとは、例えばソートするデータの件数、あるいは素因数分解したい数の二進数表現の桁数などである。) 例えば、問題のサイズ$n$に対し、アルゴリズムの要求する計算資源が次の$f(n)$で与えられるとする。 $$ f(n) = 2n^2 + 5n + 8 $$ $n$が十分大きいとき(例えば$n=10^{10}$)、$2n^2$に比べて$5n$や$6$は十分に小さい。したがって、このアルゴリズムの評価という観点では$5n+8$という因子は重要ではない。また、$n^2$の係数が$2$であるという情報も、$n$が十分大きいときの振る舞いには影響を与えない。こうして、計算時間$f(n)$の一番**「強い」**項の情報が重要であると考えることができる。このような考え方を漸近的評価といい、計算量のオーダー記法では次の式で表す。 $$f(n) = \mathcal{O}(n^2)$$ 一般に$f(n) = \mathcal{O}(g(n))$とは、ある正の数$n_0, c$が存在して、任意の$n > n_0$に対して $$|f(n)| \leq c |g(n)|$$ が成り立つことである。上の例では、$n_0=7, c=3$とすればこの定義の通りである(グラフを描画してみよ)。練習として、$f(n) = 6n^3 +5n$のオーダー記法$f(n) = \mathcal{O}(n^3)$を与える$n_0, c$の組を考えてみよ。 アルゴリズムの性能評価では、その入力のサイズを$n$としたときに必要な計算資源を$n$の関数として表す。特にオーダー記法による漸近評価は、入力のサイズが大きくなったときの振る舞いを把握するときに便利である。そして、こうした漸近評価に基づいた計算量理論というものを用いて、様々なアルゴリズムの分類が行われている。詳細は上記のQmedia記事を参照されたい。
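As an added numerical cross-check of the points above (this sketch and the helper name `qft_matrix` are my own, not part of the original notebook): the $2^n \times 2^n$ matrix of the transform can be built directly with NumPy, checked for unitarity, and applied to the uniform superposition used in the SymPy example; the gate count of the circuit construction described above grows as $\Theta(n^2)$.

```python
import numpy as np

def qft_matrix(n):
    """2**n x 2**n matrix with entries exp(2j*pi*k*j/2**n) / sqrt(2**n)."""
    N = 2 ** n
    k = np.arange(N)
    return np.exp(2j * np.pi * np.outer(k, k) / N) / np.sqrt(N)

F = qft_matrix(3)
print(np.allclose(F @ F.conj().T, np.eye(8)))   # unitarity: True
x = np.ones(8) / np.sqrt(8)                     # uniform superposition over 3 qubits
print(np.round(F @ x, 6))                       # [1, 0, 0, 0, 0, 0, 0, 0], as above

n = 3
print(n * (n + 1) // 2 + n // 2)                # Hadamard/controlled-phase gates plus swaps
```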
2f2698adad3bdaab88ed2f356b15d91709855b5b
41,452
ipynb
Jupyter Notebook
notebooks/2.3_quantum_Fourier_transform.ipynb
kumagaimasahito/quantum-native-dojo
887dc0b9a8c50c43ab634adbea2c42e36c6d37c5
[ "BSD-3-Clause" ]
1
2019-12-05T06:52:15.000Z
2019-12-05T06:52:15.000Z
notebooks/2.3_quantum_Fourier_transform.ipynb
kumagaimasahito/quantum-native-dojo
887dc0b9a8c50c43ab634adbea2c42e36c6d37c5
[ "BSD-3-Clause" ]
null
null
null
notebooks/2.3_quantum_Fourier_transform.ipynb
kumagaimasahito/quantum-native-dojo
887dc0b9a8c50c43ab634adbea2c42e36c6d37c5
[ "BSD-3-Clause" ]
1
2022-03-06T17:53:36.000Z
2022-03-06T17:53:36.000Z
45.551648
3,472
0.533243
true
6,544
Qwen/Qwen-72B
1. YES 2. YES
0.937211
0.746139
0.69929
__label__yue_Hant
0.556787
0.463015
```python from IPython.core.display import HTML, Image css_file = 'style.css' HTML(open(css_file, 'r').read()) ``` <link href='http://fonts.googleapis.com/css?family=Alegreya+Sans:100,300,400,500,700,800,900,100italic,300italic,400italic,500italic,700italic,800italic,900italic' rel='stylesheet' type='text/css'> <link href='http://fonts.googleapis.com/css?family=Arvo:400,700,400italic' rel='stylesheet' type='text/css'> <link href='http://fonts.googleapis.com/css?family=PT+Mono' rel='stylesheet' type='text/css'> <link href='http://fonts.googleapis.com/css?family=Shadows+Into+Light' rel='stylesheet' type='text/css'> <link href='http://fonts.googleapis.com/css?family=Philosopher:400,700,400italic,700italic' rel='stylesheet' type='text/css'> <style> @font-face { font-family: "Computer Modern"; src: url('http://mirrors.ctan.org/fonts/cm-unicode/fonts/otf/cmunss.otf'); } /* Formatting for header cells */ .text_cell_render h1 { font-family: 'Philosopher', sans-serif; font-weight: 400; font-size: 2.2em; line-height: 100%; color: rgb(0, 80, 120); margin-bottom: 0.1em; margin-top: 0.1em; display: block; } .text_cell_render h2 { font-family: 'Philosopher', serif; font-weight: 400; font-size: 1.9em; line-height: 100%; color: rgb(245,179,64); margin-bottom: 0.1em; margin-top: 0.1em; display: block; } .text_cell_render h3 { font-family: 'Philosopher', serif; margin-top:12px; margin-bottom: 3px; font-style: italic; color: rgb(94,127,192); } .text_cell_render h4 { font-family: 'Philosopher', serif; } .text_cell_render h5 { font-family: 'Alegreya Sans', sans-serif; font-weight: 300; font-size: 16pt; color: grey; font-style: italic; margin-bottom: .1em; margin-top: 0.1em; display: block; } .text_cell_render h6 { font-family: 'PT Mono', sans-serif; font-weight: 300; font-size: 10pt; color: grey; margin-bottom: 1px; margin-top: 1px; } .CodeMirror{ font-family: "PT Mono"; font-size: 100%; } </style> ```python from sympy import init_printing, Matrix, symbols init_printing() ``` # Transposes, permutations and vector spaces ## The permutation matrices Permutation matrices, usually denoted as $P$, have the property that $P^{-1}=p^T$. This means that the inverse of the matrix is equal to the transpose of the matrix. We have not discussed inverses in detail, but for now, remember that it is easy to calculate the inverse of a square matrix (one that is invertible), by using the `.inv()` method. Permutation matrices allow for row exchanges as elementary operation. They are useful when dealing with $0$'s in pivot positions. Below, we create a matrix, `P`, which is an $n=3$ identity matrix with rows $1$ and $2$ exchanged. ```python P = Matrix([[0, 1, 0], [1, 0, 0], [0, 0, 1]]) P # Exchanging rows 1 and 2 ``` The transpose of a matrix interchanges the rows and columns. We can calculate the transpose using the `,transpose()` method. Below we call the inverse and transpose methods on the matrix `P`. ```python P.inv(), P.transpose() ``` They are indeed equal. ```python P.inv() == P.transpose() ``` True For a matrix of size $n \times n$ then there are $n!$ number of permutations. This is easy to conceptualize. When starting with an $n=3$ square matrix, there are $n$ rows available for interchanging. Once one is chosen, there are $n-1$ left, and so on, hence $n!$. ## The transpose of a matrix We have briefly mentioned transposes of a matrix, but what are they? They simply make rows of the column elements and columns of the row elements as in the example below. 
```python a11, a12, a13, a14, a21, a22, a23, a24, a31, a32, a33, a34 = symbols('a11, a12, a13, a14, a21, a22, a23, a24, a31, a32, a33, a34') # Creating symbolic scalars ``` ```python A = Matrix([[a11, a12, a13], [a21, a22, a23], [a31, a32, a33]]) A ``` ```python A.transpose() ``` Note how the elements in the first row became the elements of the first column. We need not stick with square matrices; this applies to a matrix of any size. ```python A = Matrix([[a11, a12, a13, a14], [a21, a22, a23, a24]]) A ``` ```python A.transpose() ``` This means that a matrix of size $m \times n$ becomes a matrix of size $n \times m$ after transposing it. Multiplying a matrix by its transpose results in a symmetric matrix (detail in the next section). ```python A * A.transpose() ``` ## Symmetric matrices A symmetric matrix is a square matrix in which each element equals the element mirrored across the main diagonal, i.e. $S = S^T$. Here is an example with integers as elements. Note the values across from each other along the main diagonal, i.e. the $29$ and the $29$, the $15$ and the $15$, the $43$ and the $43$, the $16$ and the $16$, and so on. ```python B = Matrix([[1, 5, 3], [3, 4, 2], [2, 2, 1], [4, 6, 3]]) S = B * B.transpose() S ``` Symmetric matrices are equal to their transposes. ```python S == S.transpose() ``` True ## Vector spaces A vector space consists of a set of vectors with useful properties. Think of the common $\mathbb{R}^2$. Every point $\left( a,b \right) \in \mathbb{R}^2$ is a vector. We can fill all (reach all of the points) of $\mathbb{R}^2$ with an infinite set of vectors. Note that these vector spaces also include the zero vector $\underline{0}$. When considering the familiar vector space $\mathbb{R}^2$ it is clear to see that we can take a subset of the vectors in the space and, through linear combinations of them, fill all of the space. Think of the two vectors $\left(1,0\right)$ and $\left(0,1\right)$. These are two vectors, each of length $1$, lying along the $x$ and $y$ axes respectively. A constant multiple of each, added together (what we call a linear combination), will fill the vector space. These two vectors in our example are then called _basis vectors_ and we say that they _span_ the vector space. Most texts denote a vector space as $V$. Basis vectors in $V$ allow us to use scalar multiplication and addition, which are then closed operations, that is, the result is still an element of the vector space. ### A subspace A subspace is a subset of a vector space. It must still allow for the closure property mentioned above. This means that one of the quadrants of $V=\mathbb{R}^2$ cannot be a vector subspace. Scalar multiplication and vector addition can result in a vector that is not in that quadrant. It is important to note that the set containing only the zero vector is a subspace, trivial though it may be. In fact, all vector subspaces must contain the zero vector. The other trivial subspace of a vector space, $V$, is $V$ itself. Another example of a subspace in $\mathbb{R}^2$ is a line through the origin. Addition of any vectors on that line, or a scalar multiple of any such vector, results in a vector that will still be on that line (closure). In $\mathbb{R}^3$, the zero vector, a line through the origin, and a plane through the origin are all vector subspaces. ## Column spaces of matrices The idea of the column space of a matrix is one that we have seen before. It is a very important way to consider a matrix. Here we see each column in a matrix as a vector. In (1) below, we depict $ A \underline{x}$. 
$$\begin{bmatrix}a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} \tag{1}$$ We can rewrite this as in (2). $$x_1 \begin{bmatrix} a_{11} \\ a_{21} \\ a_{31} \end{bmatrix} + x_2 \begin{bmatrix} a_{12} \\ a_{22} \\ a_{32} \end{bmatrix} + x_3 \begin{bmatrix} a_{13} \\ a_{23} \\ a_{33} \end{bmatrix}\tag{2}$$ This flows naturally when you consider $A \underline{x}$ as a linear set of three equations written in matrix form, i.e. we started as in (3) and created the matrix of coefficients $A$ and the column vector $\underline{x}$. $$a_{11}x_1 + a_{12}x_2 + a_{13}x_3 \\ a_{21}x_1 + a_{22}x_2 + a_{23}x_3 \\ a_{31}x_1 + a_{32}x_2 + a_{33}x_3 \tag{3}$$ Giving this linear system a right-hand side, as in (4), we create another column vector, $\underline{b}$. $$a_{11}x_1 + a_{12}x_2 + a_{13}x_3 = b_1 \\ a_{21}x_1 + a_{22}x_2 + a_{23}x_3 = b_2 \\ a_{31}x_1 + a_{32}x_2 + a_{33}x_3 = b_3 \tag{4}$$ In matrix notation we take (2) and end with (5) below. $$x_1 \begin{bmatrix} a_{11} \\ a_{21} \\ a_{31} \end{bmatrix} + x_2 \begin{bmatrix} a_{12} \\ a_{22} \\ a_{32} \end{bmatrix} + x_3 \begin{bmatrix} a_{13} \\ a_{23} \\ a_{33} \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}\tag{5}$$ We now see that we put constraints on $\underline{b}$. If we view the problem as a linear combination of the three column vectors that make up $A$, then $\underline{b}$ must be a vector in their span; in other words, $A\underline{x}=\underline{b}$ has a solution exactly when $\underline{b}$ lies in the column space of $A$. ```python ```
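To make the last point concrete, here is a small added check (a sketch of mine, reusing the symbols already defined above) that $A\underline{x}$ is exactly the linear combination of the columns of $A$ with weights $x_1, x_2, x_3$:

```python
x1, x2, x3 = symbols('x1, x2, x3')
A = Matrix([[a11, a12, a13], [a21, a22, a23], [a31, a32, a33]])
x_vec = Matrix([x1, x2, x3])

# The linear combination of the columns of A, as written in equation (2)
combination = x1 * A[:, 0] + x2 * A[:, 1] + x3 * A[:, 2]

# The difference with A * x_vec expands to the zero vector
(A * x_vec - combination).expand()
```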
be5b4929a3b9b2498af42ef6e85836aed5bd8b5d
36,670
ipynb
Jupyter Notebook
Mathematics/Linear Algebra/0.0 MIT-18.06 - Jupyter/Lecture_5_Transposes_Permutations_Spaces.ipynb
okara83/Becoming-a-Data-Scientist
f09a15f7f239b96b77a2f080c403b2f3e95c9650
[ "MIT" ]
null
null
null
Mathematics/Linear Algebra/0.0 MIT-18.06 - Jupyter/Lecture_5_Transposes_Permutations_Spaces.ipynb
okara83/Becoming-a-Data-Scientist
f09a15f7f239b96b77a2f080c403b2f3e95c9650
[ "MIT" ]
null
null
null
Mathematics/Linear Algebra/0.0 MIT-18.06 - Jupyter/Lecture_5_Transposes_Permutations_Spaces.ipynb
okara83/Becoming-a-Data-Scientist
f09a15f7f239b96b77a2f080c403b2f3e95c9650
[ "MIT" ]
2
2022-02-09T15:41:33.000Z
2022-02-11T07:47:40.000Z
52.088068
4,068
0.713526
true
2,675
Qwen/Qwen-72B
1. YES 2. YES
0.651355
0.863392
0.562374
__label__eng_Latn
0.986965
0.144914
# Papoulis TODO ```python import sympy import sympy.functions.elementary.exponential as symExp from sympy.plotting import plot import numpy as np import matplotlib.pyplot as plt from visuals import * ``` ```python c, t, U = sympy.symbols('c, t, U') ``` ```python f = c*symExp.exp(-c*t) f ``` c*exp(-c*t) ```python int_f = sympy.integrate(f, (t, 0, t)) int_f sympy.oo ``` oo ```python q = sympy.integrate(f, t) q ``` -exp(-c*t) ```python sympy.limit(q, t, sympy.oo) ``` $\infty$ ```python temp = int_f.subs({c:1, U:1}) temp ``` 1 - exp(-t) ```python p = plot(temp, (t, -2, 5), show=False) p.show() ``` ```python from sympy.utilities.lambdify import lambdify func = lambdify(t, temp, 'numpy') # returns a numpy-ready function ``` ```python interval = np.linspace(0, 5, num=1000) values = func(interval) ``` ```python rng ``` 3 ```python x = [0] np.isscalar(x) ``` False ```python len(rng) ``` ```python def pdfInfo(interval, values, rng): fig, ax = plt.subplots() ax.plot(interval, values, 'k') if not np.isscalar(rng) and len(rng)==1: rng = rng[0] if not np.isscalar(rng): x_1 = np.argmax(interval>rng[0]) x_2 = np.argmax(interval>rng[1]) ax.fill_between(np.linspace(rng[0], rng[1], num=len(values[x_1:x_2])), values[x_1:x_2], color='k') else: x = np.argmax(interval>rng) x_0 = interval[0] ax.fill_between(np.linspace(x_0, rng, num=len(values[0:x])), values[0:x], color='k') ax.axhline(0, c='k') pdfInfo(interval, values, [2, 3]) ``` ```python ``` ```python ``` ```python ```
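A possible usage note I have added (not part of the original scratch work): for the density $f(t) = c\,e^{-ct}$ with $c = 1$, the area `pdfInfo` is meant to shade over $[2, 3]$ is the probability $\int_2^3 e^{-t}\,dt$, which can be cross-checked symbolically and then drawn from the density values themselves rather than from the cumulative `temp`:

```python
# Density with c = 1 (note: 'temp' above is the cumulative 1 - exp(-t), not the density)
dens = f.subs(c, 1)
dens_vals = lambdify(t, dens, 'numpy')(interval)

# Exact probability of falling in [2, 3]
prob = sympy.integrate(dens, (t, 2, 3))
print(prob, float(prob))          # exp(-2) - exp(-3), approximately 0.0855

pdfInfo(interval, dens_vals, [2, 3])
```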
3fa7a10b6274472e1a1242a70f23f85e4a65969b
28,454
ipynb
Jupyter Notebook
Probability/Inicial-checkpoint.ipynb
carlos-faria/Stochastic-Processes
2ee57a1029566b606af781ec5d307eb33434fb79
[ "MIT" ]
null
null
null
Probability/Inicial-checkpoint.ipynb
carlos-faria/Stochastic-Processes
2ee57a1029566b606af781ec5d307eb33434fb79
[ "MIT" ]
null
null
null
Probability/Inicial-checkpoint.ipynb
carlos-faria/Stochastic-Processes
2ee57a1029566b606af781ec5d307eb33434fb79
[ "MIT" ]
null
null
null
84.937313
11,788
0.843678
true
547
Qwen/Qwen-72B
1. YES 2. YES
0.891811
0.847968
0.756227
__label__yue_Hant
0.289815
0.595301
# Blackbody A [blackbody](http://en.wikipedia.org/wiki/Black_body) or *planckian radiator* is an ideal thermal radiator that completely absorbs all incident radiation, whatever the wavelength, the direction of incidence or the polarization. <a name="back_reference_1"></a><a href="#reference_1">[1]</a> A *blackbody* in [thermal equilibrium](http://en.wikipedia.org/wiki/Thermal_equilibrium) emits electromagnetic radiation called [blackbody radiation](http://en.wikipedia.org/wiki/Black-body_radiation). ## Planck's Law The spectral radiance of a blackbody at thermodynamic temperature $T [K]$ in a medium having index of refraction $n$ is given by the [Planck's law](http://en.wikipedia.org/wiki/Planck%27s_law) equation: <a name="back_reference_2"></a><a href="#reference_2">[2]</a> $$ \begin{equation} L_{e\lambda}(\lambda,T)=\cfrac{C_1n^{-2}\lambda^{-5}}{\pi}{\Biggl[\exp\biggl(\cfrac{C_2}{n\lambda T}\biggr)-1\Biggr]^{-1}} \end{equation} $$ where $$ \begin{equation} \begin{aligned} C_1&=2\pi hc^2\\ C_2&=\cfrac{hc}{k} \end{aligned} \end{equation} $$ $h$ is Planck's constant, $c$ is the speed of light in vacuum, $k$ is the Boltzmann constant and $\lambda$ is the wavelength. As per the *CIE 015:2004 Colorimetry, 3rd Edition* recommendation, the value of $C_2$ used in colorimetry should be $C_2=1.4388\times10^{-2}\ m\cdot K$ as defined by the International Temperature Scale (ITS-90). The value of $C_1$ is given by the Committee on Data for Science and Technology (CODATA) and should be $C_1=3.741771\times10^{-16}\ W\cdot m^2$. In the current *CIE 015:2004 Colorimetry, 3rd Edition* recommendation, colour temperature and correlated colour temperature are calculated with $n=1$. [Colour](https://github.com/colour-science/colour/) implements various *blackbody* computation related objects in the `colour.colorimetry` sub-package: ```python import colour.colorimetry ``` > Note: `colour.colorimetry` package public API is also available from the `colour` namespace. *Planck's law* is evaluated using either the `colour.planck_law` or `colour.blackbody_spectral_radiance` definitions; as in the call below, they expect the wavelength $\lambda$ to be given in metres (note the `500 * 1e-9`) and the temperature $T$ to be given in kelvin: ```python import colour colour.colorimetry.planck_law(500 * 1e-9, 5500) ``` 20472701909806.578 Generating the spectral distribution of a *blackbody* is done using the `colour.sd_blackbody` definition: ```python with colour.utilities.suppress_warnings(python_warnings=True): colour.sd_blackbody(6500, colour.SpectralShape(0, 10000, 10)) ``` As its temperature decreases, the blackbody peak shifts to longer wavelengths while its intensity decreases: ```python from colour.plotting import * ``` ```python colour_style(); ``` ```python # Plotting various *blackbodies* spectral distributions. blackbodies_sds = [colour.sd_blackbody(i, colour.SpectralShape(0, 10000, 10)) for i in range(1000, 15000, 1000)] with colour.utilities.suppress_warnings(python_warnings=True): plot_multi_sds(blackbodies_sds, y_label='W / (sr m$^2$) / m', use_sds_colours=True, normalise_sds_colours=True, legend_location='upper right', bounding_box=[0, 1000, 0, 2.25e15]); ``` Let's plot the blackbody colours for temperatures in the domain [500, 12500, 50]: ```python plot_blackbody_colours(colour.SpectralShape(500, 12500, 50)); ``` ## Stars Colour Let's compare the extraterrestrial solar spectral irradiance to the blackbody spectral radiance of a thermal radiator with a temperature of 5778 K: ```python # Comparing theoretical and measured *Sun* spectral distributions. 
# Arbitrary ASTMG173_ETR scaling factor calculated with # :def:`colour.sd_to_XYZ` definition. ASTMG173_sd = ASTMG173_ETR.copy() * 1.37905559e+13 blackbody_sd = colour.sd_blackbody( 5778, ASTMG173_sd.shape) blackbody_sd.name = 'The Sun - 5778K' plot_multi_sds([ASTMG173_sd, blackbody_sd], y_label='W / (sr m$^2$) / m', legend_location='upper right'); ``` As you can see, the *Sun* spectral distribution is very close to that of a blackbody at a similar temperature $T$. Calculating the theoretical colour of any star is possible, for example the [VY Canis Majoris](http://en.wikipedia.org/wiki/VY_Canis_Majoris) red hypergiant in the constellation *Canis Major*. ```python plot_blackbody_spectral_radiance(temperature=3500, blackbody='VY Canis Majoris'); ``` Or [Rigel](http://en.wikipedia.org/wiki/Rigel), the brightest star in the constellation *Orion* and the seventh brightest star in the night sky. ```python plot_blackbody_spectral_radiance(temperature=12130, blackbody='Rigel'); ``` And finally the [Sun](http://en.wikipedia.org/wiki/Sun), our star: ```python plot_blackbody_spectral_radiance(temperature=5778, blackbody='The Sun'); ``` ## Bibliography 1. <a href="#back_reference_1">^</a> <a name="reference_1"></a>CIE. (n.d.). 17-960 Planckian radiator. Retrieved June 26, 2014, from http://eilv.cie.co.at/term/960 2. <a href="#back_reference_2">^</a> <a name="reference_2"></a>CIE TC 1-48. (2004). APPENDIX E. INFORMATION ON THE USE OF PLANCK'S EQUATION FOR STANDARD AIR. In *CIE 015:2004 Colorimetry, 3rd Edition* (pp. 77–82). ISBN:978-3-901-90633-6
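An additional cross-check (a sketch of mine, not part of the original notebook; it assumes SciPy is available alongside Colour, and `planck_radiance` is an assumed helper name): Planck's law as written in the first section can be evaluated directly from CODATA constants and compared with the `colour.colorimetry.planck_law` value printed earlier for 500 nm at 5500 K.

```python
import numpy as np
from scipy import constants

def planck_radiance(wl, T, n=1.0):
    """Spectral radiance from the equation above, with C1 = 2*pi*h*c**2 and C2 = h*c/k."""
    h, c, k = constants.h, constants.c, constants.k
    C1 = 2 * np.pi * h * c ** 2
    C2 = h * c / k
    return (C1 * n ** -2 * wl ** -5 / np.pi) / (np.exp(C2 / (n * wl * T)) - 1)

# Should be close to colour.colorimetry.planck_law(500 * 1e-9, 5500) above, up to the
# small difference between CODATA constants and the ITS-90 value of C2 used by the CIE.
print(planck_radiance(500 * 1e-9, 5500))
```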
3fccd065964c432decf673b02e0483af3873ad5c
439,639
ipynb
Jupyter Notebook
notebooks/colorimetry/blackbody.ipynb
colour-science/colour-notebooks
f227bb1ebc041812de4048ae20e2b702ffb3150d
[ "BSD-3-Clause" ]
13
2016-11-23T22:13:24.000Z
2021-09-28T14:52:13.000Z
notebooks/colorimetry/blackbody.ipynb
colour-science/colour-ipython
f227bb1ebc041812de4048ae20e2b702ffb3150d
[ "BSD-3-Clause" ]
2
2015-07-13T19:38:16.000Z
2015-12-14T06:30:04.000Z
notebooks/colorimetry/blackbody.ipynb
colour-science/colour-ipython
f227bb1ebc041812de4048ae20e2b702ffb3150d
[ "BSD-3-Clause" ]
9
2016-10-06T16:18:40.000Z
2020-08-01T10:04:27.000Z
1,144.893229
168,432
0.956844
true
1,504
Qwen/Qwen-72B
1. YES 2. YES
0.868827
0.849971
0.738478
__label__eng_Latn
0.750863
0.554063
## Week8 Version/Date: Nov 7, 2017 ### Exercise > PREDICT_400-DL_SEC56 > Week8 Discussion ### File(s) Fundamental Theorem of Calculus Example.ipynb ### Instructions The Fundamental Theorem of Calculus requires that the function be continuous on a closed interval before we can integrate. Find or create a function that is not continuous over some interval and explain how we might still be able to integrate the function. Using Python, incorporate a graph of your function that also indicates the area under the curve. Be sure to share your Python code and output. ### Description Fundamental Theorem of Calculus states the following is given in the cell below. In this example I will evaluate the improper integral of a function that is not continuous over the entire interval by breaking it into parts on each side of the undefined point. Evaluate each separately using the equation below and attempt to plot using python. ```python %%HTML <a href="https://www.codecogs.com/eqnedit.php?latex=\int_{a}^{b}f(x)dx&space;=&space;F(b)&space;-&space;F(a)&space;=&space;F(x)|\binom{b}{a}" target="_blank"></a> ``` <a href="https://www.codecogs.com/eqnedit.php?latex=\int_{a}^{b}f(x)dx&space;=&space;F(b)&space;-&space;F(a)&space;=&space;F(x)|\binom{b}{a}" target="_blank"></a> ```python import plotly.plotly from plotly.graph_objs import Scatter, Layout import numpy as np import sympy as sp from sympy.utilities.lambdify import lambdify x, y, z = sp.symbols('x y z') sp.init_printing() print('imports completed') ``` imports completed ```python #Note: Code in this cell is reused from my Wk 7 Example class MyFunctions: # class init fn def __init__(self, low_bound, high_bound, samples): self.low_bound = low_bound self.high_bound = high_bound self.samples = samples def get_space(self): return np.linspace(self.low_bound, self.high_bound, self.samples) # functions for homework def h(self, x): expr = 2*x ** 3 + 6*x ** 2 - 12 return expr def g(self, x): expr = 13*x ** 2 + 76*x expr_prime = sp.diff(expr) result = expr_prime.evalf(subs={x:100}) return expr_prime def eval_sympy_fn(self, expr_str, x_val): #sympy_expr = sp.parsing.parse_expr(expr_str) sympy_expr = expr_str print(sympy_expr) result = sympy_expr.evalf(subs={x:x_val}) return result # Test MyFunctions mf = MyFunctions(-10, 10, 20) # build data array test function def build_data(fn_name, space): tmparray = [] fn = getattr(mf, fn_name) for i in space: tmparray.append(float(fn(i))) #print(tmparray) return np.array(tmparray) # build sympy data array using lambda def test_impl_fn(space): f2 = implemented_function(sp.Function('f2'), lambda x: 2*x ** 3 + 6*x ** 2 - 12) lam_f2 = lambdify(x, f2(x), 'numpy') #print(lam_f2(a)) return lam_f2(space) def lambdify_fun(fn, space): lam = lambdify(x, fn, 'numpy') return lam(space) #new function added to give indefinite integral of original function def lambdify_indef_integr_fun(fn, space): expr_int = sp.integrate(fn) lam_p = lambdify(x, expr_int, 'numpy') return lam_p(space) #new function added to give definite integral of original function over limit range def lambdify_def_integr_fun(fn, ran_a, ran_b, space): expr_int = sp.integrate(fn) lam_p = lambdify(x, expr_int, 'numpy') return lam_p(space) ``` ```python # The expression #expression = sp.sympify(2*x / (x-2)) - test different functions expression = sp.sympify((2*pow(x,3) - 0.5 * x) / (x-8)) print('Original Function: ') print(expression) ``` Original Function: (2*x**3 - 0.5*x)/(x - 8) ```python # Instantiate mf = MyFunctions(0, 10, 100) # Required for displaying plotly in jupyter 
notebook plotly.offline.init_notebook_mode(connected=True) # Create traces the_space = mf.get_space() trace1 = Scatter(x=the_space, y=lambdify_fun(expression, the_space), name='f(x)', line=dict(color='#bc42f4'), fill='tonexty') #trace2 = Scatter(x=the_space, y=lambdify_fun(testresult, the_space), name='ex integral', line=dict(color='#52FF33')) # plot it plotly.offline.iplot({ "data": [trace1], "layout": Layout(title="Original Function") }) ``` <div id="8562730a-8111-45ba-9413-5ad8afd21b8e" style="height: 525px; width: 100%;" class="plotly-graph-div"></div> ```python # Sympy has a nice a simple integrate function. However, due to the discontinuity at x = 8, we get #complete = sp.integrate(expression, x) complete = sp.integrate(expression, (x, 0, 10)) #print(complete) print(sp.N(complete)) ``` 1327.64641832438 - 3204.42450666159*I Now, instead take the area under the curve between x = 0 and 8, and then subtract it from the area from 8 to 10 using definite integrals for the same function. This shows the value is the same taken on the two separate intervals and combined. Taking values arbitrarily close to the point of discontinuity, we can estimate the value using FTC. ```python # This is an alternative way to estimate the area under the curve soln1 = sp.integrate(expression, (x, 0, 7.9999)) #print(sp.N(soln1)) soln2 = sp.integrate(expression, (x, 8.0001, 10)) #print(sp.N(soln2)) totalsoln = soln2 - soln1 print(totalsoln) ``` 20612.1348550407 ```python # Plot separately # Instantiate mf1 = MyFunctions(0, 7.9999, 100) mf2 = MyFunctions(8.0001, 10, 100) # Create traces space1 = mf1.get_space() space2 = mf2.get_space() trace1 = Scatter(x=space1, y=lambdify_fun(expression, space1), name='left side - negative', line=dict(color='#33F9FF'), fill='tozeroy') trace2 = Scatter(x=space2, y=lambdify_fun(expression, space2), name='right side - positive', line=dict(color='#33A8FF'), fill='tozeroy') # plot it plotly.offline.iplot({ "data": [trace1, trace2], "layout": Layout(title="Split Plot") }) ``` <div id="7b4451a5-0f78-4c10-9487-1487da9f3d25" style="height: 525px; width: 100%;" class="plotly-graph-div"></div> ```python import sympy as sp x, y, z = sp.symbols('x y z') sp.init_printing() exp = 8 / (4*x + 5)**3 ans = sp.integrate(exp, x) print(ans) ``` -8/(128*x**2 + 320*x + 200) ```python ```
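One further check I have added (not in the original notebook): stopping the two pieces a distance $\varepsilon$ on either side of the discontinuity and shrinking $\varepsilon$, the *sum* of the two pieces (the Cauchy principal value) settles near $1327.65$ — the real part of the complex number SymPy returned for the full integral above.

```python
# Sum of the two pieces, stopping a distance eps on either side of x = 8.
# As eps shrinks, the logarithmic parts cancel and the sum approaches the
# Cauchy principal value of the integral over [0, 10].
for eps in [1e-1, 1e-2, 1e-3, 1e-4]:
    left = sp.integrate(expression, (x, 0, 8 - eps))
    right = sp.integrate(expression, (x, 8 + eps, 10))
    print(eps, sp.N(left + right))
```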
22e9173b9a8fbe12b1398ee133480a2360591255
55,495
ipynb
Jupyter Notebook
Wk8/Fundamental Theorem.ipynb
knightman/MSPA-PREDICT400
911196182b021caf6670e755ddf23a5fb495340a
[ "MIT" ]
null
null
null
Wk8/Fundamental Theorem.ipynb
knightman/MSPA-PREDICT400
911196182b021caf6670e755ddf23a5fb495340a
[ "MIT" ]
null
null
null
Wk8/Fundamental Theorem.ipynb
knightman/MSPA-PREDICT400
911196182b021caf6670e755ddf23a5fb495340a
[ "MIT" ]
null
null
null
55.218905
8,491
0.665069
true
1,830
Qwen/Qwen-72B
1. YES 2. YES
0.731059
0.853913
0.62426
__label__eng_Latn
0.816983
0.288696
# Laboratório 6: Pesca ### Referente ao capítulo 11 Suponha que uma população de peixes é introduzida em um tanque artificial ou em uma região de água com redes. Seja $x(t)$ o nível de peixes escalado em $t$, com $x(0) = x_0 > 0$. Os peixes inicialmente são pequenos e tem massa média um valor quase nula: trateremos como $0$. Após, a massa média é uma função $$ f_{massa}(t) = k\frac{t}{t+1}, $$ onde $k$ é o máximo de massa possivelmente atingido. Consideraremos $T$ suficientemente pequeno de forma que não haja reprodução de peixes. Seja $u(t)$ a taxa de colheita e $m$ a taxa de morte natural do peixe. Queremos maximizar a massa apanhada no intervalo, mas minimizando os custos envolvidos. Assim o problema é $$ \max_u \int_0^T Ak\frac{t}{t+1}x(t)u(t) - u(t)^2 dt, A \ge 0 $$ $$ \text{sujeito a }x'(t) = -(m + u(t))x(t), x(0) = x_0, $$ $$ 0 \le u(t) \le M, $$ onde $M$ é o limite físico da colheita. ## Condições Necessárias ### Hamiltoniano $$ H = Ak\frac{t}{t+1}x(t)u(t) - u(t)^2 - \lambda(t)\left(m + u(t)\right)x(t) $$ ### Equação adjunta $$ \lambda '(t) = - Ak\frac{t}{t+1}u(t) + \lambda(t)\left(m + u(t)\right) $$ ### Condição de transversalidade $$ \lambda(T) = 0 $$ ### Condição de otimalidade $$ H_u = Ak\frac{t}{t+1}x(t) - 2u(t) - \lambda(t)x(t) $$ $$ H_u < 0 \implies u^*(t) = 0 \implies x(t)\left(Ak\frac{t}{t+1} - \lambda(t)\right) < 0 $$ $$ H_u = 0 \implies 0 \le u^*(t) = 0.5x(t)\left(Ak\frac{t}{t+1} - \lambda(t)\right) \le M $$ $$ H_u > 0 \implies u^*(t) = M \implies 0.5x(t)\left(Ak\frac{t}{t+1} - \lambda(t)\right) > M $$ Assim $u^*(t) = \min\left\{M, \max\left\{0, 0.5x(t)\left(Ak\frac{t}{t+1} - \lambda(t)\right)\right\}\right\}$ ### Importanto as bibliotecas ```python import numpy as np import matplotlib.pyplot as plt from scipy.integrate import solve_ivp import sympy as sp import sys sys.path.insert(0, '../pyscripts/') from optimal_control_class import OptimalControl ``` ### Usando a biblitoca sympy ```python t_sp, x_sp,u_sp,lambda_sp, k_sp, A_sp, m_sp = sp.symbols('t x u lambda k A m') H = A_sp*k_sp*(t_sp/(t_sp+1))*x_sp*u_sp - u_sp**2 - lambda_sp*(m_sp + u_sp)*x_sp H ``` $\displaystyle \frac{A k t u x}{t + 1} - \lambda x \left(m + u\right) - u^{2}$ ```python print('H_x = {}'.format(sp.diff(H,x_sp))) print('H_u = {}'.format(sp.diff(H,u_sp))) print('H_lambda = {}'.format(sp.diff(H,lambda_sp))) ``` H_x = A*k*t*u/(t + 1) - lambda*(m + u) H_u = A*k*t*x/(t + 1) - lambda*x - 2*u H_lambda = -x*(m + u) Resolvendo para $H_u = 0$ ```python eq = sp.Eq(sp.diff(H,u_sp), 0) sp.solve(eq,u_sp) ``` [x*(A*k*t - lambda*t - lambda)/(2*(t + 1))] Aqui podemos descrever as funções necessárias para a classe. ```python parameters = {'A': None, 'k': None, 'm': None, 'M': None} diff_state = lambda t, x, u, par: -x*(par['m'] + u) diff_lambda = lambda t, x, u, lambda_, par: - par['A']*par['k']*t*u/(t + 1) + lambda_*(par['m'] + u) update_u = lambda t, x, lambda_, par: np.minimum(par['M'], np.maximum(0, 0.5*x*(par['A']*par['k']*t - lambda_*t - lambda_)/(t + 1))) ``` ## Aplicando a classe ao exemplo Vamos fazer algumas exeperimentações. Sinta-se livre para variar os parâmetros. Nesse caso passaremos os limites como parâmetro do `solve`. 
```python problem = OptimalControl(diff_state, diff_lambda, update_u) ``` ```python x0 = 0.4 T = 10 parameters['A'] = 5 parameters['k'] = 10 parameters['m'] = 0.2 parameters['M'] = 1 ``` ```python t,x,u,lambda_ = problem.solve(x0, T, parameters, bounds = [(0, parameters['M'])]) ax = problem.plotting(t,x,u,lambda_) for i in range(3): ax[i].set_xlabel('Semanas') plt.show() ``` A estratégia ótima nesse caso inicia em $0$ e logo aumenta muito rapidamente, com um declínio posterior suave. A população é praticamente extinta no período considerado. O limite superior não teve efeito, dado que foi bem alto. Por isso, podemos testar com outros valores. ```python parameters['M'] = 0.4 t,x,u,lambda_ = problem.solve(x0, T, parameters, bounds = [(0, parameters['M'])]) ax = problem.plotting(t,x,u,lambda_) for i in range(3): ax[i].set_xlabel('Semanas') plt.show() ``` Sugerimos que experimente a variação dos outros parâmetros. ## Experimentação ```python #N0 = 1 #T = 5 #parameters['r'] = 0.3 #parameters['a'] = 10 #parameters['delta'] = 0.4 # #t,x,u,lambda_ = problem.solve(N0, T, parameters) #roblem.plotting(t,x,u,lambda_) ``` ### Este é o final do notebook
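Added note (not from the original lab): the commented-out experimentation cell above still carries parameter names from a different problem (`r`, `a`, `delta`), which this model does not define. A sketch of a sensitivity experiment using the parameters actually defined here (`A`, `k`, `m`, `M`) could look like this:

```python
# Hypothetical sensitivity run: vary the natural mortality rate m and
# compare the optimal harvesting strategies (all other parameters as above).
for m_value in [0.1, 0.2, 0.4]:
    parameters['m'] = m_value
    t, x, u, lambda_ = problem.solve(x0, T, parameters, bounds=[(0, parameters['M'])])
    problem.plotting(t, x, u, lambda_)
```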
de5daf349f551fdae6fb17d0b0b68886ae9b3bb0
115,173
ipynb
Jupyter Notebook
notebooks/Laboratory6.ipynb
lucasmoschen/optimal-control-biological
642a12b6a3cb351429018120e564b31c320c44c5
[ "MIT" ]
1
2021-11-03T16:27:39.000Z
2021-11-03T16:27:39.000Z
notebooks/.ipynb_checkpoints/Laboratory6-checkpoint.ipynb
lucasmoschen/optimal-control-biological
642a12b6a3cb351429018120e564b31c320c44c5
[ "MIT" ]
null
null
null
notebooks/.ipynb_checkpoints/Laboratory6-checkpoint.ipynb
lucasmoschen/optimal-control-biological
642a12b6a3cb351429018120e564b31c320c44c5
[ "MIT" ]
null
null
null
332.869942
53,528
0.932962
true
1,654
Qwen/Qwen-72B
1. YES 2. YES
0.917303
0.917303
0.841444
__label__por_Latn
0.925886
0.79329
```python !pip install sympy tornado ``` # Symbolic Computation with Python ## Declaring Symbols, Printing Functions ```python import sympy as sp sp.init_printing() ``` ```python x=sp.Symbol('x') y,z=sp.symbols('y,z') f=sp.Function('f') g=x**2 + y**2 + z**2 ``` ```python print(g) ``` ```python g ``` ```python sp.pprint(g) ``` ## Math Expressions and Evaluating at certain values ```python h=x**2+2*x-5 h ``` ```python print(h.subs(x,2)) ``` ```python print(h.subs(x,y**2.5)) print(h.subs(x,z**2)) ``` ### Simplifying, Expanding and Factorizing Expressions ```python f=(x**2-x-6)/(x**2-3*x) sp.simplify(f) ``` ```python f=(x+1)**3*(x+2)**2 sp.expand(f) ``` ```python f=3*x**4-36*x**3+99*x**2-6*x-144 sp.factor(f) ``` ## Differentiating and Intergrating Functions ```python y=(sp.sin(x))**2 *sp.exp(2*x) y ``` ```python z=sp.diff(y,x) z ``` ```python z.subs(x,3.2) ``` ```python f=x**2*sp.sin(x**2) f ``` ```python g=sp.integrate(f,(x,0,5)) g ``` ```python g.evalf() ``` ## Solving Equations and Groups of Equations ```python y1=sp.Eq(x**3+15*x**2,3*x-10) y1 ``` ```python z=sp.solve(y1,x) z ``` ```python z[0].evalf() ``` ```python z[1].evalf() ``` ```python z[2].evalf() ``` ```python for w in z: sp.pprint(w.evalf()) ``` ```python x,y,z=sp.symbols('x,y,z') eq1=sp.Eq(x+y+z,0) eq1 ``` ```python eq2=sp.Eq(2*x-y-z,10) eq2 ``` ```python eq3=sp.Eq(y+2*z,5) eq3 ``` ```python sp.solve([eq1,eq2,eq3],[x,y,z]) ``` ### Verifying answer using scipy methods of solving equations ```python from scipy.optimize import fsolve def f(w): x=w[0] y=w[1] z=w[2] f1=x+y+z f2=2*x-y-z-10 f3=y+2*z-5 return [f1,f2,f3] result=fsolve(f,[0,0,0]) sp.pprint(result) ``` ## Solving differential equations symbolically using the ```dsolve``` function: $$\frac{df(x)}{dx} = x\cos(x).$$ ```python pf=sp.Function('pf') y=sp.dsolve(sp.Derivative(pf(x),x)-x*sp.cos(x),pf(x)) y ``` ```python z=sp.integrate(x*sp.cos(x)) z ``` ## Matrices, Vectors and Solving Equations of the form `Ax=b` : ```python from sympy import Matrix A=sp.Matrix([[1,2,5],[3,4,6],[-1,0,3]]) b=sp.Matrix([1,0,-2]) sp.pprint(A) sp.pprint(b) ``` ```python sp.pprint(A.inv()*b) sp.pprint(A.LUsolve(b)) ``` ```python sp.pprint(A[1:2,:]) ``` ```python sp.pprint(A[:,1:2]) ``` ```python M=sp.zeros(2,2) M[1,1]=3 M[1,0]=x**2 sp.pprint(M) ``` ```python M=sp.ones(2,2) M[1,1]=0 sp.pprint(M.inv()) ``` ```python !pip install keras ``` ```python !pip install rise ``` ```python !jupyter-nbextension install rise --py --sys-prefix ``` ```python !jupyter-nbextension enable rise --py --sys-prefix ``` ```python !pip install tensorflow ``` ```python !pip install plotly ``` ```python ```
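A small added check (mine, not part of the original walkthrough) that the two ways of solving `Ax = b` shown above agree and that the result actually satisfies the system:

```python
import sympy as sp

A = sp.Matrix([[1, 2, 5], [3, 4, 6], [-1, 0, 3]])
b = sp.Matrix([1, 0, -2])

x_lu = A.LUsolve(b)
x_inv = A.inv() * b

sp.pprint(sp.simplify(x_lu - x_inv))   # zero vector: both methods agree
sp.pprint(sp.simplify(A * x_lu - b))   # zero vector: the solution satisfies A x = b
```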
922f10a77b794a31c319b95419c043e82b240a9d
9,139
ipynb
Jupyter Notebook
SymbolicComputationAlgebraWithPythonSymPy.ipynb
TralahM/myjupyternotebooks
53f4ea8f64da90e1dcb5e67d0ac9db4c038d0e24
[ "MIT" ]
null
null
null
SymbolicComputationAlgebraWithPythonSymPy.ipynb
TralahM/myjupyternotebooks
53f4ea8f64da90e1dcb5e67d0ac9db4c038d0e24
[ "MIT" ]
null
null
null
SymbolicComputationAlgebraWithPythonSymPy.ipynb
TralahM/myjupyternotebooks
53f4ea8f64da90e1dcb5e67d0ac9db4c038d0e24
[ "MIT" ]
null
null
null
17.146341
88
0.46504
true
1,039
Qwen/Qwen-72B
1. YES 2. YES
0.92523
0.884039
0.81794
__label__eng_Latn
0.259911
0.738681
# Uso de simpy ## Conceptos basico: definición de simbolos y operaciones **Para representar valores se puede usar:** ```python # Salida Latex automatica from sympy import Symbol x = Symbol("x") x**2+2*x+1 ``` ```python #Salida comun from sympy import Symbol, pprint x = Symbol("x") ec = x**2+2*x+1 print(ec) ``` x**2 + 2*x + 1 ```python #Salida usando pprint import sympy as sp x = Symbol("x") ec = x**2+2*x+1 sp.pprint(ec) ``` 2 1 + 2⋅x + x ```python #Salida tipo Latex """Configuracion inicial""" from sympy.interactive import printing printing.init_printing(use_latex=True) """modulo necesario""" import sympy as sp x = sp.Symbol("x") ecuacion = x**2 +2*x +1 print("Ecuacion") display(ecuacion) ``` ### Definiendo ecuaciones ```python """Configuracion incial""" from sympy.interactive import printing printing.init_printing(use_latex=True) """modulos necesarios""" import sympy as sp #Declaracion de un simbolo x = sp.Symbol("x") #Declaracion de multiples simbolos y,z = sp.symbols("y,z") ecuacion = x**2 +2*y*z**2 +2*y**2*z +z**2 print("La ecuacion definida es:") display(ecuacion) ``` ### Sustituyendo expresiones **_Uso del metodo subs()_** ```python from sympy.interactive import printing printing.init_printing(use_latex=True) import sympy as sp x,y = sp.symbols("x,y") expresion = x*x +x*y +x*y +y*y print("Expresion: ", end="\n") display(expresion) print("\nSustituyendo valores y=2, x=1") est_valor = expresion.subs({x:1, y:2}) display(est_valor) print("\n\nSegunda sustitucion x = 1-y\n") est_valor = expresion.subs({x:y**2+2}) display(est_valor) ``` ### Factorizacion de expresiones ```markdown Uso de * factor() * expand() ``` ```python # Factorizacion simple from sympy.interactive import printing printing.init_printing(use_latex=True) import sympy as sp x,y = sp.symbols("x,y") expresion = x**3 +3*x**2*y +3*x*y**2 +y**3 display(sp.factor(expresion)) ``` ```python from sympy.interactive import printing printing.init_printing(use_latex=True) import sympy as sp x,y = sp.symbols("x,y") # Estableciendo la ecuacion con la que vamos a trabajar diff_cuadrados = x**2 - y**2 print("Diferencia de cuadrados") display(diff_cuadrados) print() # Uso de factor ec_factor = sp.factor(diff_cuadrados) print("Expresion factorizada") display(ec_factor) print() # Uso de expand print("Expresion regresada a su forma original") display(sp.expand(ec_factor)) ``` ### Uso de sympify() **LECTURA DE ECUACIONES INGRESADAS POR EL USUARIO** Usada para convertir un string en algo con lo que pueda trabajar la biblioteca sympy ```python # Definiendo expresiones desde teclado from sympy.interactive import printing printing.init_printing(use_latex=True) from sympy import sympify from sympy.core.sympify import SympifyError import sympy as sp expresion = input("Ingrese su expresion matematica: ") try: expresion = sp.sympify(expresion) print("La expresion multiplicada por dos es:") display(expresion * 2) except SympifyError: print("Valor invalido") ``` ```python # Multiplicacion de expresiones from sympy.interactive import printing printing.init_printing(use_latex=True) from sympy.core.sympify import SympifyError import sympy as sp def producto(expre1, expre2): print("\nProducto de:") prod =(expre1 * expre2) display(prod) display(prod.expand()) print("MULTIPLICACION DE ECUACIONES") expre1 = input("Ingrese su primera ecuacion") expre2 = input("Ingrese su segunda ecuacion") try: expre1 = sp.sympify(expre1) expre2 = sp.sympify(expre2) except SympifyError: print("Valor invalido") else: producto(expre1, expre2) ``` # Resolviendo ecuaciones ## Uso de 
solve() Usado para encontrar la solucionde a la ecuacion, la funcion resulve las expresiones entendiendo que son igual a cero ```python from sympy.interactive import printing printing.init_printing(use_latex=True) import sympy as sp x = sp.Symbol("x") ecuacion = x -10 -7 # Devuelve una lista con el valor que hace cero a la ecuacion. display(sp.solve(ecuacion)) display(sp.solve(ecuacion, dict=True)) ``` ```python # Resolucion de ecuaciones cuadraticas from sympy.interactive import printing printing.init_printing(use_latex=True) from sympy.core.sympify import SympifyError import sympy as sp ecuacion = input("ingrese la ecuacion a resolver") try: ecuacion = sp.sympify(ecuacion) print("Ecuacion\t") display(ecuacion) except SympifyError: print("Valor invalido") else: print("Soluciones:\t",end="") display(sp.solve(ecuacion, dict=True)) ``` ```python # Resolucion de una variable en terminos de otra ## Ejemplo de la ecuacion cuadratica from sympy.interactive import printing printing.init_printing(use_latex=True) import sympy as sp x,a,b,c = sp.symbols("x,a,b,c") exprecion = a*x**2 + b*x +c print("Las soluciones de una ecuacion cuadratica son: ") display(sp.solve(exprecion, x, dict=True)) ``` ### Resolucion de sistema de ecuaciones lineales ```python # Configuracion inicial from sympy.interactive import printing printing.init_printing(use_latex=True) # Importamos los paquetes que vamos a usar import sympy as sp x,y = sp.symbols("x,y") ecuacion_1 = 2*x + 3*y -6 ecuacion_2 = 3*x + 2*y -12 # llamada a solve con los dos elementos en una tupla print("Sistema de ecuaciones lineales") display(ecuacion_1, ecuacion_2) print("\nSolucion") display(sp.solve((ecuacion_1, ecuacion_2), dict=True)) print("\nComprando solucion") soluciones = sp.solve((ecuacion_1, ecuacion_2), dict= True) print("El valor devuelto es una lista",soluciones) soluciones = soluciones[0] display(ecuacion_1.subs({x: soluciones[x], y: soluciones[y]})) display(ecuacion_2.subs({x: soluciones[x], y: soluciones[y]})) ``` ## Trabajando con series ```python from sympy.interactive import printing printing.init_printing(use_latex=True) import sympy as sp def imprimir_serie(n): printing.init_printing(order="rev-lex") #Establece el orden de la impresion x = sp.Symbol("x") serie = x for i in range(2, n+1): serie = serie+((x**i)/i) display(serie) print() sp.pprint(serie) n = int(input("Ingrese el numero de terminos que quiere en la serie: ")) imprimir_serie(n) ``` ## Graficas usando sympy ```python # Graficando una funcion lineal from sympy.interactive import printing printing.init_printing(use_latex=True) import sympy as sp x = sp.Symbol("x") print("Graficando la funcion y=2x+3") display(sp.plot(2*x+3)) ``` ```python # Graficando delimintando valores de x # Estableciendo el titulo de los ejes # Graficando una funcion lineal from sympy.interactive import printing printing.init_printing(use_latex=True) import sympy as sp x = sp.Symbol("x") print("Graficando la funcion y=2x+3") display( sp.plot(2*x+3,x+1,(x, -5, 5), title="Grafico A", xlabel = "x ", ylabel="y=2*x+3") ) ``` ### Graficando expresiones ingresadas por el usuario ```python from sympy.interactive import printing printing.init_printing(use_latex=True) import sympy as sp # Como se devuelve una lista, tenemos que "sacar" la expresion de ella def graficar_expresion(expresion): y=sp.Symbol("y") soluciones=sp.solve(expresion, y) # Como se devuelve una lista, tenemos que "sacar" la expresion de ella y = soluciones[0] sp.plot(y) def main(): # Se convierte la expresion en terminos de x expresion = 
input("ingrese su ecuacion igualando a cero") try: expresion = sp.sympify(expresion) except sp.SympifyError: print("Entrada invalida") else: print("Expresion a graficar") display(expresion) graficar_expresion(expresion) main() ``` # Aplicaciones ```python # Encontrar el factor de una expresion from sympy.interactive import printing printing.init_printing(use_latex=True) import sympy as sp def factorizar(expresion): factores=sp.factor(expresion) display(factores) def inicio(): expresion = input("Ingrese la expresion a factorizar") try: expresion = sp.sympify(expresion) except sp.SympifyError: print("Entrada no valida") else: print("Factores de la expresion") display(expresion) factorizar(expresion) inicio() ``` ```python # Graficador de ecuaciones from sympy.interactive import printing printing.init_printing(use_latex=True) import sympy as sp def graficar(expresion): y=sp.Symbol("y") soluciones=sp.solve(expresion, y) # Como se devuelve una lista, tenemos que "sacar" la expresion de ella y = soluciones[0] sp.plot(y) def inicio(): expre1, expre2 = (input("Ingrese expresion igualando a cero: ") for _ in range(2)) try: expre1 = sp.sympify(expre1) expre2 = sp.sympify(expre2) except sp.SympifyError: print("Entrada invalida") else: # Se grafica cada expresion por separado graficar(expre1) graficar(expre2) inicio() ``` ```python ```
922d0a328a445b0cf23bd062903327a59ae60af5
130,445
ipynb
Jupyter Notebook
E-0 Uso de sympy.ipynb
MkCst/Python-numerico
7dd8b57b8161e5432f9b6d535228bad840e883f7
[ "CC0-1.0" ]
null
null
null
E-0 Uso de sympy.ipynb
MkCst/Python-numerico
7dd8b57b8161e5432f9b6d535228bad840e883f7
[ "CC0-1.0" ]
null
null
null
E-0 Uso de sympy.ipynb
MkCst/Python-numerico
7dd8b57b8161e5432f9b6d535228bad840e883f7
[ "CC0-1.0" ]
null
null
null
98.672466
22,040
0.851899
true
2,650
Qwen/Qwen-72B
1. YES 2. YES
0.91848
0.885631
0.813435
__label__spa_Latn
0.46218
0.728215
### Square Wave Packet Widget ```python # setup import numpy as np import sympy as sp sp.init_printing(use_latex='mathjax') import matplotlib.pyplot as plt plt.rcParams['figure.figsize'] = (20, 20) # (width, height) plt.rcParams['font.size'] = 20 plt.rcParams['legend.fontsize'] = 30 from matplotlib import patches #get_ipython().magic('matplotlib') # separate window get_ipython().magic('matplotlib inline') # inline plotting from __future__ import print_function from ipywidgets import interact, interactive, fixed, interact_manual import ipywidgets as widgets def f(numks, aa): lam = 1e-9 # wavelength kval = 2*np.pi/lam # wavenumber can be changed by changing wavelength #numks = x #numks = 10 # how many wavenumbers do you want in the sum? aa = aa*lam/100 # width of square wave as a fraction of wavelength xsquare = np.arange(-2*lam, 2*lam, lam/100) # The x values for our sine waves. arrlen = np.size(xsquare) ksquare = np.arange(1, numks+1, 1) # wavenumbers go from 1 to something nvals = np.arange(1, numks+1, 1) ksquare = ksquare * kval Ans = 2/np.pi/nvals * np.sin(np.pi*aa*nvals/lam) fxarg = np.ones((numks,arrlen)) # Create a matrix for kx values to simplify calculations later. fxarg = fxarg[:, 0:arrlen]*xsquare # Fill the kx matrix with x values along the rows. fxarg = fxarg[0:arrlen, :].T*ksquare # Multiply each column by its appropriate k value. fx = np.ones((numks,arrlen)) fx=Ans*np.cos(fxarg) # Fill the matrix with the appropriate cosine waves. ones = np.ones(numks) sqwv = np.matmul(fx, ones) plt.figure(1) plt.rcParams.update({'font.size': 20}) plt.rcParams.update({'legend.fontsize' : 24}) plt.subplot(211) plt.plot(xsquare,fx[:, :]) # Plot a few of the waves to check that they look right (see Fig. 6.11 in Taylor, Zafiratos, Dubson). plt.title('The waves') plt.xlabel('x (m)') plt.ylabel('Amplitude') plt.subplot(212) plt.plot(xsquare, sqwv) plt.title('The wave packet') plt.xlabel('x (m)') plt.ylabel('Amplitude') plt.subplots_adjust(top=3, bottom=0.1, left=0.1, right=3.0, hspace=0.25, wspace=0.1) plt.show() # return 0 interact(f, numks=widgets.IntSlider(min=1,max=30,step=1,value=10), aa=widgets.IntSlider(min=5,max=95,step=5,value=5)); ``` A Jupyter Widget ```python ```
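For reference, here is my own summary (not in the original notebook) of the series the widget sums, using the same symbols as the code: the coefficients `Ans` and the cosine terms implement the truncated Fourier series

$$
f_N(x) = \sum_{n=1}^{N} \frac{2}{n\pi}\,\sin\!\left(\frac{n\pi a}{\lambda}\right)\cos(nkx),
\qquad k = \frac{2\pi}{\lambda},
$$

which, apart from the omitted constant term $a/\lambda$, is the Fourier series of a periodic train of unit-height pulses of width $a$ and period $\lambda$. The slider `numks` sets $N$ (more terms give a sharper packet) and `aa` sets the pulse width $a$ as a percentage of the wavelength.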
d5eba7ff6bad91e174b419e24216499255faf3d0
3,914
ipynb
Jupyter Notebook
SquareWavePackets.ipynb
guygastineau/Jupyter
5d97e1a57cc812e1131f1d39edbd6b15341f38ed
[ "MIT" ]
null
null
null
SquareWavePackets.ipynb
guygastineau/Jupyter
5d97e1a57cc812e1131f1d39edbd6b15341f38ed
[ "MIT" ]
2
2018-08-06T17:55:54.000Z
2018-08-07T18:47:52.000Z
SquareWavePackets.ipynb
guygastineau/Jupyter
5d97e1a57cc812e1131f1d39edbd6b15341f38ed
[ "MIT" ]
1
2018-08-06T17:39:08.000Z
2018-08-06T17:39:08.000Z
32.616667
141
0.554676
true
724
Qwen/Qwen-72B
1. YES 2. YES
0.903294
0.857768
0.774817
__label__eng_Latn
0.721743
0.638492
<a href="https://colab.research.google.com/github/Taylor-X01/Optimization-Algorithms/blob/main_project/Line_search.ipynb" target="_parent"></a> ```python from sympy import * from sympy.abc import x,y # from sympy.plotting import plot as symplot from scipy.misc import derivative from scipy.linalg import * import numpy as np import pandas as pd import matplotlib.pyplot as plt from mpl_toolkits import mplot3d dx = 1e-6 ``` ```python def plot2D(f,lb,ub,points_x=None,points_y=None,Lambdify=False): if Lambdify: f_lambda = lambdify(x,f) else: f_lambda = f fig = plt.figure(figsize=(30, 10)) X = np.linspace(lb, ub, 5500) ax = fig.gca() plt.plot(X,f_lambda(X)) if (points_x != None) or (points_y != None): plt.plot(points_x,points_y,'xr',ms=10) plt.grid(b=True, which='major', color='#666666', linestyle='-') plt.minorticks_on() plt.grid(b=True, which='minor', color='#999999', linestyle='-', alpha=0.2) plt.show() ``` ## Bissection Method $$ Let\ f:\mathbb{R}\rightarrow\mathbb{R}\ and\ a<b \in\mathbb{R} : f'(a) \times f'(b)<0\\ \ $$ ```python def bissection(fct,a,b,err=0.005, verbose=False): f = fct f_prime_a = derivative(fct,a,dx=dx) f_prime_b = derivative(fct,b,dx=dx) result = [[a,b],[f(a),f(b)]] if np.dot(f_prime_a,f_prime_b)<0: print("Hypothesis match : Success") while (abs(b-a)>err) and (f_prime_a < 0 and f_prime_b > 0): c = (a+b)/2 result[0].append(c) result[1].append(f(c)) if verbose: print("c = ",c) f_prime_c = derivative(fct,c,dx=dx) if verbose: print("f'(c) = ",f_prime_c) if f_prime_c <= 0 : a = c else: b = c if verbose: print("2D: [x,f(x)] == ",[a,f(a)]) return result else: print(f"Hypothesis match : Failed for a = {a} and b = {b}") ``` ### Test functions : $$ f(x)=x^2+5x+2 $$ ```python f1 = lambda x: x**2+5*x+2 results = bissection(f1,-5,5,0.02) print("[x*,f(x*)] = ",[results[0][-1],results[1][-1]]) plot2D(f1,-5,5,results[0],results[1]) ``` ## Secant Method $$ Secant's\ Method\ :\ x_{k+1}=x_{k}-\frac{f(x_k)}{ \left[ \frac{f(x_k)-f(x_{k-1})}{x_k - x_{k-1}}\right]}\\ To\ find\ the\ minimal\ point\ of\ the\ function\ f:\mathbb{R}\rightarrow \mathbb{R}\\ x_{k+1}=x_{k}-\frac{f'(x_k)}{ \left[ \frac{f'(x_k)-f'(x_{k-1})}{f(x_k) - f(x_{k-1})}\right]} $$ ```python def secant(fct,x0,x1,err=0.005,verbose=False): if verbose: print("Function :",fct) #fct must be lambda f = fct x_k = x1 x_k_1 = x0 f_prime_xk = derivative(f,x_k,dx=dx) f_prime_xk_1 = derivative(f,x_k_1,dx=dx) result = [[x_k_1,x_k],[f(x_k_1),f(x_k)]] while (abs(x_k - x_k_1) > err) and (abs(f_prime_xk) > err): x_k1 = x_k-(f_prime_xk/ ( (f_prime_xk-f_prime_xk_1)/(f(x_k) - f(x_k_1)) ) ) x_k_1 = x_k x_k = x_k1 result[0].append(x_k) result[1].append(f(x_k)) f_prime_xk = derivative(fct,x_k,dx=dx) f_prime_xk_1 = derivative(fct,x_k_1,dx=dx) if verbose: print("x_k: ",x_k) print("x_k-1: ",x_k_1) print("f_prime_xk : ",f_prime_xk) print("f_prime_xk-1 : ",f_prime_xk_1) if abs(f_prime_xk) < err: print([x_k,f(x_k)]) return results print("ERREUR DE CALCUL") ``` ```python fun = lambda x: np.sin(np.cos(np.exp(x))) results_sec = secant(fun,1,1.5) plot2D(fun,-10,10,results_sec[0],results_sec[1]) ``` ## Newton-Raphson Method $$ Newton's\ Method\ :\ x_{k+1}=x_{k}-\frac{f(x_k)}{f'(x_k)}\\ To\ find\ the\ minimal\ point\ of\ the\ function\ f:\mathbb{R}\rightarrow \mathbb{R}\\ x_{k+1}=x_{k}-\frac{f'(x_k)}{f''(x_k)} $$ ```python def newton(fct,x0,err=0.005,verbose=False): cpt = 0 x_k = x0 f = fct f_prime_xk = derivative(fct,x_k,dx=dx) f_second_xk =derivative(fct,x_k,dx=dx,n=2) results_nwtn = [] if (np.any(f_prime_xk) != np.any(0)) and (np.any(f_second_xk) != np.any(0)): 
while abs(f_prime_xk) > err: results_nwtn = [[x_k],[f(x_k)]] if verbose: print ("cpt = ",cpt) cpt+=1 x_k1 = x_k-(f_prime_xk/f_second_xk) x_k = x_k1 results_nwtn[0].append(x_k) results_nwtn[1].append(f(x_k)) f_prime_xk =derivative(fct,x_k,dx=dx) f_second_xk =derivative(fct,x_k,dx=dx,n=2) if verbose : print("x_k: ",x_k) print("f(x_k) = ",f(x_k)) print("f_prime_xk : ",f_prime_xk) print("f_second_xk : ",f_second_xk) print("norm(f_prime(x_k)) : ",norm(f_prime_xk)) if verbose: print("[x,f(x)] = ",[x_k,fct(x_k)]) return results_nwtn print("ERREUR DE CALCUL") ``` ```python # f2 = lambda x : 1/(-np.sin(-x**2)+2) # # f2 = lambda x: np.sin(np.cos(np.exp(x))) results_newton = newton(f1,1,0.002) print(results_newton) print("[x*,f(x*)] = ",[results_newton[0][-1],results_newton[1][-1]]) plot2D(f1,-6,1,results_newton[0],results_newton[1]) # plot(f2,-1,1,results_newton[0],results_newton[1]) ``` ## False Position Method - Regula Falsi $$ False\ Position's\ method\ :\ x_{k+1}=x_k-f'(x_k)\times\frac{(x_k-x_{k-1})}{f'(x_k)-f'(x_{k-1})} $$ ```python def regula_falsi(fct,x0,x1,err=0.005,verbose=False): if verbose: print("f(x) = ",fct) f = fct x_k = x1 x_k_1 = x0 f_prime_xk = derivative(fct,x_k,dx=dx) f_prime_xk_1 = derivative(fct,x_k_1,dx=dx) res = [[x_k_1,x_k],[f(x_k_1),f(x_k)]] while (abs(x_k - x_k_1) > err) and (abs(f_prime_xk) > err): x_k1 = x_k-(f_prime_xk * (x_k - x_k_1)/ (f_prime_xk-f_prime_xk_1) ) x_k_1 = x_k x_k = x_k1 res[0].append(x_k) res[1].append(f(x_k)) f_prime_xk = derivative(fct,x_k,dx=dx) f_prime_xk_1 = derivative(fct,x_k_1,dx=dx) if verbose: print("x_k: ",x_k) print("x_k-1: ",x_k_1) print("f_prime_xk : ",f_prime_xk) print("f_prime_xk-1 : ",f_prime_xk_1) if abs(f_prime_xk) < err: print([res[0][-1],res[1][-1]]) return res print("ERREUR DE CALCUL") ``` ```python # f2 = lambda x: np.sin(np.power(x,2))+1 # f2 = lambda x: np.sin(np.cos(np.log(1/np.power(x,2)))) results_falsi = regula_falsi(f1,2,2.5) print(results_falsi) print("[x*,f(x*)]",[results_falsi[0][-1],results_falsi[1][-1]]) plot2D(f1,-10,5,results_falsi[0],results_falsi[1]) ``` ## Bissection - Newton's Method (Hybrid) ```python def bissection_newton(fct,a,b,err=.005): print("\n\nBissection :\n") results_biss = bissection(fct,a,b,err) # print(results_biss) [x0,f_x0] = [results_biss[0][-1],results_biss[1][-1]] print("\n\nNewton : \n") results_nwtn = newton(fct,x0,err) return results_nwtn ``` ## Comparison < Newton, Bissection, Hybrid > ```python f3 = lambda x: np.sin(np.cos(np.exp(x))) f3_prime = lambda x: -exp(x)*sin(exp(x))*cos(cos(exp(x))) print("\n\n\n ~~> Newton's Method \n") nwtn = newton(f3,0.8) print(f"f(x*={nwtn[0][-1]}) = {nwtn[1][-1]}") # result: 1.1494460852213106, -0.8414113953797868 # f'(x*) = 0.02533145117663931 print("\n\n\n ~~> Bissection's Method\n") biss = bissection(f3,0,1.7) print(f"f(x*={biss[0][-1]}) = {biss[1][-1]}") # result: 1.1421875, -0.8414537940888557 # f'(x*) = -0.013506376215777379 print("\n\n\n ~~> Bissection-Newton's Method\n") biss_nwtn = bissection_newton(f3,-2,1.8) # Precise et nous donne une marge plus grande pour choisir les points initiaux print(f"f(x*={biss_nwtn[0][-1]}) = {biss_nwtn[1][-1]}" ) # result: 1.1450208128703885, -0.8414707590717263 # f'(x*) = 0.0015520666672475564 plt.plot(nwtn[0][-1],nwtn[1][-1],'og',label="Newton") # Newton plt.plot(biss[0][-1],biss[1][-1],'xb',label="Bissection") # Bissection plt.plot(biss_nwtn[0][-1],biss_nwtn[1][-1],'.r',label="Bissection-Newton") # Bissection-Newton plt.legend() plt.show() # 
plot2D(f3,1.144,1.146,[nwtn[0][-1],biss[0][-1],biss_nwtn[0][-1]],[nwtn[1][-1],biss[1][-1],biss_nwtn[1][-1]]) plot2D(f3,0,2,[biss_nwtn[0]],[biss_nwtn[1]]) ```
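As a quick sanity check on the iterative methods above: the quadratic test function `f1(x) = x**2 + 5*x + 2` used earlier has a closed-form minimizer at x* = -5/2, which the sketch below confirms symbolically (it only assumes the definition of `f1` from this notebook).


```python
# Closed-form check for the quadratic test function f1(x) = x**2 + 5*x + 2:
# its derivative 2*x + 5 vanishes at x = -5/2, the point the numerical
# methods above should converge to.
import sympy as sp

xs = sp.symbols('x')
f1_sym = xs**2 + 5*xs + 2
x_star = sp.solve(sp.diff(f1_sym, xs), xs)[0]
print(x_star, f1_sym.subs(xs, x_star))   # -5/2  -17/4
```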
aa6f12940e92d26e8250522a09880851eea160d8
296,106
ipynb
Jupyter Notebook
Line_search.ipynb
Taylor-X01/Optimization-Algorithms
c21bcc8d16e344cbac1c6b65d12211075fd1ae2b
[ "MIT" ]
null
null
null
Line_search.ipynb
Taylor-X01/Optimization-Algorithms
c21bcc8d16e344cbac1c6b65d12211075fd1ae2b
[ "MIT" ]
null
null
null
Line_search.ipynb
Taylor-X01/Optimization-Algorithms
c21bcc8d16e344cbac1c6b65d12211075fd1ae2b
[ "MIT" ]
null
null
null
442.609865
62,454
0.932919
true
3,045
Qwen/Qwen-72B
1. YES 2. YES
0.879147
0.831143
0.730697
__label__eng_Latn
0.078815
0.535985
```python # Finite Difference Method import numpy as np from scipy.sparse import diags from tqdm import tqdm K_R = 2.596 # (J/s*m*C) thermal conductivity of rock U = 120 # (J/s*m^2*C) overall heat transfer coefficient between rock and water A = 10**3*10**3 # (m^2) area for heat transfer between rock and water m = 145 # (kg/s) mass flowrate of water rho_W = 1000 # kg/m^3 density of water c_W = 4184 # (J/kg*C) heat capacity of water rho_R = 2650 # (kg/m^3) density of rock c_R = 1050 # (J/kg*K) heat capacity of rock t_final = 100 # (years) maximum amount of time L = 40 # (m) maximum length of rock T_W0 = 65 # (deg. C) initial water temperature T_R0 = 300 # (deg. C) final rock temperature t_final = 31536000 * t_final # time in seconds dt = t_final/100 dx = L/100 w = m * c_W / U / A kappa = K_R / rho_R / c_R h = -U * A / K_R * (w / (w + 1/2)) s = kappa * dt / dx**2 # Backwards Euler Implicit Method t_array = np.arange(0, t_final+dt, dt) x_array = np.arange(0, L+dx, dx) len_t = len(t_array) len_x = len(x_array) T = np.zeros([len_t, len_x]) # temperature matrix # initial boundary condition T[0,:] = T_R0 - T_W0 # backward implicit for t in tqdm(range(0, len_t-1)): A = diags([-s, 1+2*s, -s], [-1, 0, 1], shape=(len_x, len_x)).toarray() B = T[t,:] # boundary condition on left side A[0,0] = A[0,0] - s * (1 + h*dx) # boundary condition on right side A[-1,-1] = A[-1,-1] - s T[t+1,:] = np.linalg.solve(A,B) # add initial temperature of water T = T + T_W0 ``` 100%|███████████████████████████████████████| 100/100 [00:00<00:00, 2482.56it/s] ```python import matplotlib.pyplot as plt from ipywidgets import interact @interact(t_=(0,t_final/31536000,t_final/31536000/100)) def plot(t_): time = int(t_*31536000/t_final*(len_t-1)) plt.plot( x_array, T[time,:] ) plt.ylim(T_W0,T_R0) plt.show() ``` interactive(children=(FloatSlider(value=50.0, description='t_', step=1.0), Output()), _dom_classes=('widget-in… ```python # plot average rock temperature over time plt.plot( t_array/31536000, np.average(T,axis=1) ) ``` ```python # Semi-Analytical Method import numpy as np from scipy.optimize import minimize_scalar from math import tan, pi from sympy import symbols, integrate, sin, cos from sympy.functions import exp x = symbols('x') t = symbols('t') f_x = T_R0 - T_W0 # 100*(1-x/3) def MSE_mu(mu, h, L): """ Mean squared error between -mu/h and cot(mu*L) """ linear = -mu/h nonlinear = 1/tan(mu*L) # cotangent MSE = (linear - nonlinear)**2 return MSE # generate table of equations num_constants = 10 equation = 0 for num_index in range(num_constants): ### mu_n n = num_index + 1 left_bound = pi*(n - 1)/L right_bound = pi*(n)/L answer = minimize_scalar(MSE_mu, method='bounded', bounds=(left_bound, right_bound), args=(h, L)) mu_n = answer.x ### A_n top = f_x * (sin(mu_n*x)-mu_n/h*cos(mu_n*x)) top = integrate(top, (x, L, 0)) bottom = (sin(mu_n*x)-mu_n/h*cos(mu_n*x))**2 bottom = integrate(bottom, (x, L, 0)) A_n = top/bottom ### lambda_n exponent = -mu_n**2 * kappa * t ### equation X_n = A_n*(sin(mu_n*x)-mu_n/h*cos(mu_n*x)) T_n = exp(exponent) equation += X_n*T_n # add initial water temperature equation += T_W0 # show equation equation ``` $\displaystyle \left(299.211289920126 \sin{\left(0.039269906993858 x \right)} + 2.53934732968718 \cdot 10^{-5} \cos{\left(0.039269906993858 x \right)}\right) e^{- 1.43876299925069 \cdot 10^{-9} t} + \left(99.7370815488355 \sin{\left(0.117809720981574 x \right)} + 2.53934694545887 \cdot 10^{-5} \cos{\left(0.117809720981574 x \right)}\right) e^{- 1.29488669932562 \cdot 10^{-8} t} + 
\left(59.8422670387347 \sin{\left(0.196349534969289 x \right)} + 2.53934771391485 \cdot 10^{-5} \cos{\left(0.196349534969289 x \right)}\right) e^{- 3.5969074981267 \cdot 10^{-8} t} + \left(42.744457053262 \sin{\left(0.274889348957004 x \right)} + 2.53934656123012 \cdot 10^{-5} \cos{\left(0.274889348957004 x \right)}\right) e^{- 7.0499386963283 \cdot 10^{-8} t} + \left(33.2457089407928 \sin{\left(0.353429162944719 x \right)} + 2.5393480981418 \cdot 10^{-5} \cos{\left(0.353429162944719 x \right)}\right) e^{- 1.16539802939304 \cdot 10^{-7} t} + \left(27.2010140090101 \sin{\left(0.431968976932434 x \right)} + 2.53934617700093 \cdot 10^{-5} \cos{\left(0.431968976932434 x \right)}\right) e^{- 1.7409032290933 \cdot 10^{-7} t} + \left(23.0162635185009 \sin{\left(0.510508790920148 x \right)} + 2.53934848236803 \cdot 10^{-5} \cos{\left(0.510508790920148 x \right)}\right) e^{- 2.43150946873361 \cdot 10^{-7} t} + \left(19.9474072550222 \sin{\left(0.589048604907861 x \right)} + 2.53934579277128 \cdot 10^{-5} \cos{\left(0.589048604907861 x \right)}\right) e^{- 3.23721674831396 \cdot 10^{-7} t} + \left(17.6006747655179 \sin{\left(0.667588418895575 x \right)} + 2.53934886659353 \cdot 10^{-5} \cos{\left(0.667588418895575 x \right)}\right) e^{- 4.15802506783436 \cdot 10^{-7} t} + \left(15.7479507132371 \sin{\left(0.746128232883288 x \right)} + 2.53934540854119 \cdot 10^{-5} \cos{\left(0.746128232883288 x \right)}\right) e^{- 5.19393442729479 \cdot 10^{-7} t} + 65$ ```python @interact(time=(0,t_final/31536000, t_final/31536000/1000)) def plot(time): x_range = np.arange(0, L, L/100) plt.plot( x_range, [equation.subs({'x':i, 't':time*31536000}) for i in x_range] ) plt.ylim(T_W0,T_R0) plt.show() ``` interactive(children=(FloatSlider(value=50.0, description='time'), Output()), _dom_classes=('widget-interact',… ```python # Error between analytical and numerical solution @interact(t_=(0,t_final/31536000, t_final/31536000/100)) def plot(t_): # analytical T_analytical = [equation.subs({'x':i, 't':t_*31536000}) for i in x_array] # numerical time = int(t_*31536000/t_final*(len_t-1)) T_numerical = T[time,:] # difference T_difference = T_analytical - T_numerical plt.plot( x_array, T_difference ) plt.show() ``` interactive(children=(FloatSlider(value=50.0, description='t_', step=1.0), Output()), _dom_classes=('widget-in… ```python # plot difference between average temperatures between analytical and finite difference method water_temp_diff = [] for times in tqdm(np.arange(0,31536000, 31536000/100)): T_analytical = [equation.subs({'x':i, 't':times}) for i in x_array] time = int(times/t_final*(len_t-1)) T_numerical = T[time,:] avg_T_analytical = sum(T_analytical)/len(T_analytical) avg_T_numerical = sum(T_numerical)/len(T_numerical) water_temp_diff.append(avg_T_analytical - avg_T_numerical) ``` 100%|█████████████████████████████████████████| 100/100 [02:28<00:00, 1.48s/it] ```python plt.plot( t_array[:-1]/31536000, np.array(water_temp_diff)/T_R0*100 ) plt.title('% Temperature Difference Between Analytical and FD') plt.xlabel('Time (years)') plt.ylabel('% Temperature Difference') plt.show() ``` ```python # plot nondimensionalized equation def make_nondimensional_equation(h, L): global T_W0, f_x equation = 0 for num_index in range(num_constants): ### mu_n n = num_index + 1 left_bound = pi*(n - 1)/L right_bound = pi*(n)/L answer = minimize_scalar(MSE_mu, method='bounded', bounds=(left_bound, right_bound), args=(h, L)) mu_n = answer.x ### A_n top = f_x * (sin(mu_n*x)-mu_n/h*cos(mu_n*x)) top = integrate(top, (x, L, 0)) bottom = 
(sin(mu_n*x)-mu_n/h*cos(mu_n*x))**2 bottom = integrate(bottom, (x, L, 0)) A_n = top/bottom ### lambda_n exponent = -mu_n**2 * kappa * t ### equation X_n = A_n*(sin(mu_n*x)-mu_n/h*cos(mu_n*x)) T_n = exp(exponent) equation += X_n*T_n # add initial water temperature equation += T_W0 # show equation return equation ``` ```python eq_1 = make_nondimensional_equation(h=-.1 , L=40) eq_2 = make_nondimensional_equation(h=-1 ,L=40) eq_3 = make_nondimensional_equation(h=-10,L=40) ``` ```python theta = 80000000*31536000/(L**2/(pi**2*kappa)) nondimensional_x_array = np.arange(0,1,1/100) plt.plot( nondimensional_x_array, (np.array([ [eq_1.subs({'x':i*L, 't':theta}) for i in nondimensional_x_array], [eq_2.subs({'x':i*L, 't':theta}) for i in nondimensional_x_array], [eq_3.subs({'x':i*L, 't':theta}) for i in nondimensional_x_array], ]).T - T_W0)/(T_R0-T_W0), ) plt.legend( [ 'h=-0.1', 'h=-10', 'h=-100', ]) plt.title('Effect of h parameter') plt.xlabel('Dimensionless Length') plt.ylabel('Dimensionless Temperature') ``` ```python eq_1 = make_nondimensional_equation(h=h, L=20) eq_2 = make_nondimensional_equation(h=h ,L=80) eq_3 = make_nondimensional_equation(h=h ,L=110) ``` ```python nondimensional_x_array = np.arange(0,1,1/100) plt.plot( nondimensional_x_array, (np.array([ [eq_1.subs({'x':i*L, 't':1200000000*31536000/(1**2/(pi**2*kappa))}) for i in nondimensional_x_array], [eq_2.subs({'x':i*L, 't':1200000000*31536000/(10**2/(pi**2*kappa))}) for i in nondimensional_x_array], [eq_3.subs({'x':i*L, 't':1200000000*31536000/(100**2/(pi**2*kappa))}) for i in nondimensional_x_array], ]).T - T_W0)/(T_R0-T_W0), ) plt.legend( [ 'L=20', 'L=80', 'L=110', ]) plt.title('Effect of L parameter') plt.xlabel('Dimensionless Length') plt.ylabel('Dimensionless Temperature') ``` ```python plt.plot( [i for i in np.arange(-10,0,1/100)], np.array([i/(i+1/2) for i in np.arange(-10,0,1/100)]) ) plt.ylim(1,2) plt.xlim(-10,-1) plt.title('Effect of mass flowrate on h parameter') plt.xlabel('m (kg/s)') plt.ylabel('h') ``` ```python # plot regular equation def make_equation(L, K_R, m): A = 10**6 w = m * c_W / U / A kappa = K_R / rho_R / c_R h = -U * A / K_R * (w / (w + 1/2)) equation = 0 for num_index in range(num_constants): ### mu_n n = num_index + 1 left_bound = pi*(n - 1)/L right_bound = pi*(n)/L answer = minimize_scalar(MSE_mu, method='bounded', bounds=(left_bound, right_bound), args=(h, L)) mu_n = answer.x ### A_n top = f_x * (sin(mu_n*x)-mu_n/h*cos(mu_n*x)) top = integrate(top, (x, L, 0)) bottom = (sin(mu_n*x)-mu_n/h*cos(mu_n*x))**2 bottom = integrate(bottom, (x, L, 0)) A_n = top/bottom ### lambda_n exponent = -mu_n**2 * kappa * t ### equation X_n = A_n*(sin(mu_n*x)-mu_n/h*cos(mu_n*x)) T_n = exp(exponent) equation += X_n*T_n # add initial water temperature equation += T_W0 # show equation return equation ``` ```python eq_1 = make_equation(L=4, K_R=K_R, m=m) eq_2 = make_equation(L=40, K_R=K_R, m=m) eq_3 = make_equation(L=110, K_R=K_R, m=m) ``` ```python time = 12 * 31536000 plt.plot( x_array, np.array([ [eq_1.subs({'x':i, 't':time}) for i in x_array], [eq_2.subs({'x':i, 't':time}) for i in x_array], [eq_3.subs({'x':i, 't':time}) for i in x_array], ]).T, ) plt.legend( [ 'L=4', 'L=40', 'L=110', ]) plt.title('Effect of L parameter') plt.xlabel('Length of Fracture (m)') plt.ylabel('Temperature (°C)') ``` ```python eq_1 = make_equation(L=L, K_R=K_R*.1, m=m) eq_2 = make_equation(L=L, K_R=K_R, m=m) eq_3 = make_equation(L=L, K_R=K_R*10, m=m) ``` ```python time = 12 * 31536000 plt.plot( x_array, np.array([ [eq_1.subs({'x':i, 't':time}) for i in 
x_array], [eq_2.subs({'x':i, 't':time}) for i in x_array], [eq_3.subs({'x':i, 't':time}) for i in x_array], ]).T, ) plt.legend( [ 'K_R=.26', 'K_R=2.6', 'K_R=26', ]) plt.title('Effect of K_R parameter') plt.xlabel('Length of Fracture (m)') plt.ylabel('Temperature (°C)') ``` ```python eq_1 = make_equation(L=L, K_R=K_R, m=m*0.1) eq_2 = make_equation(L=L, K_R=K_R, m=m) eq_3 = make_equation(L=L, K_R=K_R, m=m*10) ``` ```python time = 12 * 31536000 plt.plot( x_array, np.array([ [eq_1.subs({'x':i, 't':time}) for i in x_array], [eq_2.subs({'x':i, 't':time}) for i in x_array], [eq_3.subs({'x':i, 't':time}) for i in x_array], ]).T, ) plt.legend( [ 'm=14.5', 'm=145', 'm=1450', ]) plt.title('Effect of m parameter') plt.xlabel('Length of Fracture (m)') plt.ylabel('Temperature (°C)') ``` ```python print( m/10, m, m*10) ``` 14.5 145 1450 ```python ```
e915a36f288c7a3876125ce410f2570c76adc967
174,642
ipynb
Jupyter Notebook
Robin Conditions Final.ipynb
mathemusician/GeoHeat
be9d0ac94e0b4e169f4b091bffd2d1bcd06b28e1
[ "MIT" ]
null
null
null
Robin Conditions Final.ipynb
mathemusician/GeoHeat
be9d0ac94e0b4e169f4b091bffd2d1bcd06b28e1
[ "MIT" ]
null
null
null
Robin Conditions Final.ipynb
mathemusician/GeoHeat
be9d0ac94e0b4e169f4b091bffd2d1bcd06b28e1
[ "MIT" ]
null
null
null
197.112867
22,548
0.898197
true
4,448
Qwen/Qwen-72B
1. YES 2. YES
0.903294
0.76908
0.694706
__label__eng_Latn
0.351797
0.452366
# Symbolic Computation

Symbolic computation, also known as computer algebra, is, loosely speaking, the use of a computer to derive mathematical formulas: factoring and simplifying expressions, differentiating, integrating, solving algebraic equations, solving ordinary differential equations, and so on.

Symbolic computation works with mathematical objects and expressions directly. These objects and expressions stand for themselves; they are not estimates or approximations. Unevaluated expressions and variables are kept purely in symbolic form.

## Symbolic computation with SymPy

[SymPy](https://www.sympy.org/zh/index.html) is a symbolic computation library for the Python environment. It can be used to:

+ simplify mathematical expressions
+ compute derivatives, integrals, and limits
+ solve equations
+ perform matrix operations and work with a wide range of mathematical functions.

All of this is done with mathematical symbols.

Below is a comparison between ordinary numerical computation and symbolic computation with SymPy:

> Ordinary computation


```python
import math
math.sqrt(3)
```

    1.7320508075688772


```python
math.sqrt(27)
```

    5.196152422706632


> Symbolic computation with SymPy


```python
import sympy
sympy.sqrt(3)
```

$\displaystyle \sqrt{3}$


```python
sympy.sqrt(27)
```

$\displaystyle 3 \sqrt{3}$


The SymPy library consists of a set of core capabilities plus a large number of optional modules. Its main features are:

+ basic arithmetic and formula simplification, plus pattern-matching functions such as trigonometric, hyperbolic, exponential, and logarithmic functions (core)
+ polynomial operations, e.g. basic arithmetic, factorization, and various other operations (core)
+ calculus, including limits, derivatives, and integrals (core)
+ solving various types of equations, e.g. polynomial equations, systems of equations, and differential equations (core)
+ discrete mathematics (core)
+ matrix representation and operations (core)
+ geometric functions (core)
+ plotting, with the help of the external pyglet module
+ physics support
+ statistical operations, including probability and distribution functions
+ various printing capabilities
+ LaTeX code generation

## The SymPy workflow

Doing symbolic computation with SymPy differs from ordinary computation. The workflow is:

+ declare symbols before building expressions, then build expressions out of the declared symbols
+ use those expressions for symbolic operations such as derivation and computation
+ output the result

Here is a simple example; think of it as SymPy's hello world.


```python
import sympy as sp

x, y = sp.symbols('x y')  # declare the symbols x and y
expr = x + 2*y            # build an expression
expr
```

$\displaystyle x + 2 y$


```python
expr + 1  # build a new expression on top of the old one
```

$\displaystyle x + 2 y + 1$


```python
expr + x  # when the new expression has an obvious simplification, it is simplified automatically
```

$\displaystyle 2 x + 2 y$


```python
x*(expr)  # when there is no obvious simplification, as in this example, nothing happens automatically
```

$\displaystyle x \left(x + 2 y\right)$


```python
expand_expr = sp.expand(x*(expr))  # expand the new expression manually
expand_expr
```

$\displaystyle x^{2} + 2 x y$


```python
sp.factor(expand_expr)  # factor the expanded expression
```

$\displaystyle x \left(x + 2 y\right)$


```python
sp.latex(expand_expr)  # output the LaTeX code for the expression
```

    'x^{2} + 2 x y'
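To round out the workflow above, here is a short sketch of the calculus and equation-solving features listed earlier (derivatives, integrals, limits, and `solve`). The specific expressions are illustrative choices of ours, not from the original notebook.


```python
# A small sketch of the calculus / equation-solving capabilities listed above.
# The example expressions are arbitrary illustrations.
import sympy as sp

x = sp.symbols('x')
expr = x**2 + 2*x + 1

print(sp.diff(expr, x))             # derivative: 2*x + 2
print(sp.integrate(expr, x))        # antiderivative: x**3/3 + x**2 + x
print(sp.limit(sp.sin(x)/x, x, 0))  # limit as x -> 0: 1
print(sp.solve(sp.Eq(expr, 0), x))  # solve x**2 + 2*x + 1 = 0  ->  [-1]
```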
59fcffc75a6578d044be0c3fdf39804c9051a5c2
6,358
ipynb
Jupyter Notebook
src/数据分析篇/工具介绍/SymPy/README.ipynb
hsz1273327/TutorialForDataScience
1d8e72c033a264297e80f43612cd44765365b09e
[ "MIT" ]
1
2020-04-27T12:40:25.000Z
2020-04-27T12:40:25.000Z
README.ipynb
TutorialForPython/python-symbolic-computation
724e3a294e87eefe25dfe37c96c2606d24cfc767
[ "MIT" ]
3
2020-03-31T03:36:05.000Z
2020-03-31T03:36:21.000Z
src/数据分析篇/工具介绍/SymPy/README.ipynb
hsz1273327/TutorialForDataScience
1d8e72c033a264297e80f43612cd44765365b09e
[ "MIT" ]
null
null
null
17.710306
78
0.454545
true
1,072
Qwen/Qwen-72B
1. YES 2. YES
0.926304
0.888759
0.823261
__label__yue_Hant
0.910624
0.751043
# SIR model

## Notation

* $S(t)$ : number of susceptible (not yet infected) individuals at time $t$
* $I(t)$ : number of infectious individuals at time $t$
* $R(t)$ : number of recovered individuals at time $t$ (already infected, now immune)
* $S(t)+I(t)+R(t) \equiv N(t)=N$ : total population (assumed conserved, deaths included)
* $\beta$ : probability that a susceptible individual becomes infected through a single contact with an infectious individual
* $\gamma$ : probability that an infectious individual recovers, and loses infectivity, within one day

(Note) For convenience, the time axis is measured in days when defining these parameters.

## Infection dynamics

At time $t$, when one susceptible individual makes a total of $j$ contacts, the probability of **escaping infection** is

$$ \left( 1 - \frac{\beta I(t)}{N} \right)^{j} $$

If $P(j)$ denotes the probability that a given individual makes a total of $j$ contacts in one day, the expected number of new infections between time $t$ and time $t+1$ can be written as

$$ S(t)-S(t+1) = S(t) \sum^{\infty}_{j=0} \left[1- \left( 1 - \frac{\beta I(t)}{N} \right)^{j} \right]P(j) $$

Using this expected increment of infections together with the recovery rate $\gamma$, the SIR dynamics are expressed by the following difference equations:

$$
\begin{align}
S(t+1) &= S(t) - S(t) \sum^{\infty}_{j=0} \left[1- \left( 1 - \frac{\beta I(t)}{N} \right)^{j} \right]P(j) \\
I(t+1) &= I(t) + S(t) \sum^{\infty}_{j=0} \left[1- \left( 1 - \frac{\beta I(t)}{N} \right)^{j} \right]P(j) - \gamma I(t) \\
R(t+1) &= R(t) + \gamma I(t)
\end{align}
$$

Introducing the population fractions $\widehat{S}(t) \equiv \dfrac{S(t)}{N}$, $\widehat{I}(t) \equiv \dfrac{I(t)}{N}$, $\widehat{R}(t) \equiv \dfrac{R(t)}{N}$, the difference equations above reduce to:

$$
\begin{align}
\widehat{S}(t+1) &= \widehat{S}(t) - \widehat{S}(t) \sum^{\infty}_{j=0} \left[1- \left( 1 - \beta \widehat{I}(t) \right)^{j} \right]P(j) \\
\widehat{I}(t+1) &= \widehat{I}(t) + \widehat{S}(t) \sum^{\infty}_{j=0} \left[1- \left( 1 - \beta \widehat{I}(t) \right)^{j} \right]P(j) - \gamma \widehat{I}(t) \\
\widehat{R}(t+1) &= \widehat{R}(t) + \gamma \widehat{I}(t)
\end{align}
$$

> (Supplementary note) When $P(j)$ is a Poisson distribution with mean $\lambda$, $P(j)=\lambda^{j}\mathrm{e}^{-\lambda}/j!$, the model corresponds, in the limit of time-step length $\Delta t \rightarrow 0$, to the following continuous-time SIR model:
>
> $$ \begin{align} \dfrac{\mathrm{d} \widehat{S}(t)}{\mathrm{d}t} &= - \lambda \beta \widehat{S}(t) \widehat{I}(t) \\ \dfrac{\mathrm{d} \widehat{I}(t)}{\mathrm{d}t} &= \lambda \beta \widehat{S}(t) \widehat{I}(t) - \gamma \widehat{I}(t) \\ \dfrac{\mathrm{d} \widehat{R}(t)}{\mathrm{d}t} &= \gamma \widehat{I}(t) \end{align} $$
>
> Many continuous-time models define $\lambda \beta$ as the infection rate. Seen this way, the infection rate is not a property of the virus alone; it also carries information about how people make contact (the Poisson distribution). For details on the correspondence between the continuous and discrete models, see [Seno, 2011](https://repository.kulib.kyoto-u.ac.jp/dspace/bitstream/2433/171302/1/1757-07.pdf).
```python import math import matplotlib.pyplot as plt %matplotlib inline ``` ```python T_ini = 100 S_ini = 1 I_ini = 0 R_ini = 0 class Prameter: def __init__(self,N): self.N = N self.gamma = 0.2 self.beta = 0.3 self.m = 5 class Model: def __init__(self, prm, S=S_ini, I=I_ini, R=R_ini, T=T_ini): self.prm = prm self.S = float(S) self.I = float(I) self.R = float(R) self.T = T self.S_list = [] self.S_list.append(S) self.I_list = [] self.I_list.append(I) self.R_list = [] self.R_list.append(R) self.m_list = [] def solve_NC(self): for t in range(self.T): m = self.prm.m self.newI = (1-(1-self.prm.beta*self.I)**m)*self.S self.m_list.append(m) New_S = self.S_dynamics() New_I = self.I_dynamics() New_R = self.R_dynamics() self.Update(New_S, New_I, New_R) def S_dynamics(self): m = self.m_list[-1] New_S \ = self.S \ - (1-(1-self.prm.beta*self.I)**m)*self.S self.S_list.append(New_S) return New_S def I_dynamics(self): m = self.m_list[-1] New_I \ = self.I\ +(1-(1-self.prm.beta*self.I)**m)*self.S\ - self.I*self.prm.gamma self.I_list.append(New_I) return New_I def R_dynamics(self): New_R \ = self.R\ + self.I * self.prm.gamma self.R_list.append(New_R) return New_R def Update(self,New_S, New_I, New_R, ): self.S = New_S self.I = New_I self.R = New_R ``` ```python prm = Prameter(N=1) prm.gamma = 0.07 prm.beta = 0.1 prm.m =5 model = Model(prm=prm, T=150, I=10**(-3)) model.solve_NC() ``` ```python plt.plot(model.S_list, color='green', label='S(t) :Susceptible') plt.plot(model.I_list, color='pink', label='I(t) : Infectious') plt.plot(model.R_list, color='blue', label='R(t) : Recovered') plt.legend() ```
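A quick check of the discrete model above: the infection and recovery terms cancel when the three update equations are summed, so $S+I+R$ should stay constant over time. The snippet below verifies this numerically, assuming the `Prameter` and `Model` classes from the cells above have already been run.


```python
# Numerical check that S(t) + I(t) + R(t) is conserved by the updates above.
# Assumes the Prameter and Model classes defined earlier are in scope.
import numpy as np

prm = Prameter(N=1)
prm.gamma = 0.07
prm.beta = 0.1
prm.m = 5

model = Model(prm=prm, T=150, I=10**(-3))
model.solve_NC()

total = np.array(model.S_list) + np.array(model.I_list) + np.array(model.R_list)
print("max deviation from the initial total:", np.abs(total - total[0]).max())
```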
4d273696b7a49a0db1dc14852d701cf6d7adb142
29,400
ipynb
Jupyter Notebook
SIR/SIR_Discrete.ipynb
takala4/takala-bako
e7405e3240567860ce5b5164f2da9083787e6d18
[ "MIT" ]
1
2018-12-28T10:33:06.000Z
2018-12-28T10:33:06.000Z
SIR/SIR_Discrete.ipynb
takala4/takala-bako
e7405e3240567860ce5b5164f2da9083787e6d18
[ "MIT" ]
null
null
null
SIR/SIR_Discrete.ipynb
takala4/takala-bako
e7405e3240567860ce5b5164f2da9083787e6d18
[ "MIT" ]
null
null
null
102.797203
21,600
0.817823
true
2,089
Qwen/Qwen-72B
1. YES 2. YES
0.899121
0.774583
0.696444
__label__yue_Hant
0.213623
0.456405
###### Content under Creative Commons Attribution license CC-BY 4.0, code under MIT license (c)2014 L.A. Barba, C.D. Cooper, G.F. Forsyth. # Riding the wave ## Convection problems Welcome to *Riding the wave: Convection problems*, the third module of ["Practical Numerical Methods with Python"](http://openedx.seas.gwu.edu/courses/GW/MAE6286/2014_fall/about). In the [first module](https://github.com/numerical-mooc/numerical-mooc/tree/master/lessons/01_phugoid), we learned about numerical integration methods for the solution of ordinary differential equations (ODEs). The [second module](https://github.com/numerical-mooc/numerical-mooc/tree/master/lessons/02_spacetime) introduced the finite difference method for numerical solution of partial differential equations (PDEs), where we need to discretize both *space* and *time*. This module explores the convection equation in more depth, applied to a traffic-flow problem. We already introduced convection in [Lesson 1 of Module 2](http://nbviewer.ipython.org/github/numerical-mooc/numerical-mooc/blob/master/lessons/02_spacetime/02_01_1DConvection.ipynb). This hyperbolic equation is very interesting because the solution can develop *shocks*, or regions with very high gradient, which are difficult to resolve well with numerical methods. We will start by introducing the concept of a conservation law, closely related to the convection equation. Then we'll explore different numerical schemes and how they perform when shocks are present. ## Conservation laws You know from (non relativistic) physics that mass is _conserved_. This is one example of a conserved quantity, but there are others (like momentum and energy) and they all obey a _conservation law_. Let's start with the more intuitive case of conservation of mass. ### Conservation of mass In any closed system, we know that the mass $M$ in the system does not change, which we can write: $\frac{D\,M}{Dt} =0$. When we change the point of view from a closed system to what engineers call a _control volume_, mass can move in and out of the volume and conservation of mass is now expressed by: Let's imagine the control volume as a tiny cylinder of cross-section dA and length dx, like in the sketch below. #### Figure 1. Tiny control volume in the shape of a cylinder. If we represent the mass density by $\rho$, then mass is equal to $\rho\times$ volume. For simplicity, let's assume that mass flows in or out of the control volume only in one direction, say, the $x$-direction. Express the 1D velocity component by $u$, and conservation of mass for the control volume is translated to a mathematical expression as follows: \begin{equation} \frac{\partial}{\partial t}\int_{\text{cv}}\rho \, dV + \int_{\text{cs}}\rho \, u\, dA =0 \end{equation} where "cv" stands for control volume and "cs" stands for control surface. The first term represents the rate of change of mass in the control volume, and the second term is the rate of flow of mass, with velocity $u$, accross the control surface. Since the control volume is very small, we can take, to leading order, $\rho$ as a uniform quantity inside it, and the first term in equation (1) can be simplified to the time derivative of density multiplied by the volume of the tiny cylinder, $dAdx$: $$\frac{\partial}{\partial t}\int_{\text{cv}}\rho \, dV \rightarrow \frac{\partial \rho}{\partial t} dAdx$$ Now, for the second term in equation (1), we have to do a little more work. 
The quantity inside the integral is now $\rho u$ and, to leading order, we have to take into consideration that this quantity can change in the distance $dx$. Take $\rho u$ to be the value in the center of the cylinder. Then the flow of mass on each side is illustrated in the figure below, where we use a Taylor expansion of the quantity $\rho u$ around the center of the control volume (to first order). #### Figure 2. Flux terms on the control surfaces. Subtracting the negative flux on the left to the positive flux on the right, we arrive at the total flux of mass accross the control surfaces, the second term in equation (1): $$ \int_{\text{cs}}\rho \, u\, dA \rightarrow \frac{\partial}{\partial x}(\rho u) dAdx$$ We can now put together the equation of conservation of mass for the tiny cylindrical control volume, which after diving by $dAdx$ is: \begin{equation} \frac{\partial \rho}{\partial t} + \frac{\partial}{\partial x}(\rho u)=0 \end{equation} This is the 1D mass conservation equation in differential form. If we take $u$ to be a constant and take it out of the spatial derivative this equation looks the same as the first PDE we studied: the linear convection equation in [Lesson 1 of Module 2](http://nbviewer.ipython.org/github/numerical-mooc/numerical-mooc/blob/master/lessons/02_spacetime/02_01_1DConvection.ipynb). But in the form shown above, it is a typical _conservation law_. The term under the spatial derivative is called the _flux_, for reasons that should be clear from our discussion above: it represents amounts of the conserved quantity flowing across the boundary of the control volume. ##### Dig deeper You can follow the derivation of the full three-dimensional equation of conservation of mass for a flow on this screencast by Prof. Barba (duration 12:47). ```python from IPython.display import YouTubeVideo YouTubeVideo('35unQgSaT88') ``` ### General conservation laws All conservation laws express the same idea: the variation of a conserved quantity inside a control volume is due to the total flux of that quantity crossing the boundary surface (plus possibly the effect of any sources inside the volume, but let's ignore those for now). The _flux_ is a fundamental concept in conservation laws: it represents the amount of the quantity that crosses a surface per unit time. Our discussion above was limited to flow in one dimension, but in general the flux has any direction and is a vector quantity. Think about this: if the direction of flow is parallel to the surface, then no quantity comes in or out. We really only care about the component of flux perpendicular to the surface. Mathematically, for a vector flux $\vec{F}$, the amount of the conserved quantity crossing a small surface element is: $$\vec{F}\cdot d\vec{A}$$ where $d\vec{A}$ points in the direction of the outward normal to the surface. A general conservation law for a quantity $e$ is thus (still ignoring possible sources): \begin{equation} \frac{\partial}{\partial t}\int_{\text{cv}}e \, dV + \oint_{\text{cs}}\vec{F}\cdot d\vec{A} =0 \end{equation} To obtain a differential form of this conservation equation, we can apply the theorem of Gauss to the second integral, which brings the gradient of $\vec{F}$ into play. One way to recognize a conservation law in differential form is that the _fluxes appear only under the gradient operator_. 
Recall the non-linear convection equation from [Lesson 1 of Module 2](http://nbviewer.ipython.org/github/numerical-mooc/numerical-mooc/blob/master/lessons/02_spacetime/02_01_1DConvection.ipynb). It was: \begin{equation}\frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} = 0\end{equation} If we look closely at the spatial derivative, we can rewrite this equation as \begin{equation}\frac{\partial u}{\partial t} + \frac{\partial}{\partial x} \left(\frac{u^2}{2} \right) = 0, \end{equation} which is the *conservation form* of the non-linear convection equation, with flux $F=\frac{u^2}{2}$. ## Traffic flow model We've all experienced it: as rush hour approaches certain roads in or out of city centers start getting full of cars, and the speed of travel can reduce to a crawl. Sometimes, the cars stop altogether. If you're a driver, you know that the more cars on the road, the slower your trip will flow. Traffic flow models seek to describe these everyday experiences with mathematics, to help engineers design better road systems. Let's review the [Lighthill-Whitham-Richards](http://en.wikipedia.org/wiki/Macroscopic_traffic_flow_model) traffic model that was offered as an exercise at the end of Module 2. This model considers cars with a continuous *traffic density* (average number of cars per unit length of road) rather than keeping track of them individually. If $\rho(x)=0$, there are no cars at that point $x$ of the road. If $\rho(x) = \rho_{\rm max}$, traffic is literally bumper to bumper. If the number of cars on a bounded stretch of road changes, it means that cars are entering or leaving the road somehow. _Traffic density obeys a conservation law_ (!) where the flux is the number of cars leaving the road per unit time. It is given by $F=\rho u$—as with mass conservation, flux equals density times velocity. But don't forget your experience on the road: the speed of travel depends on the car density. Here, $u$ refers not to the speed of each individual car, but to the _traffic speed_ at a given point of the road. You know from experience that with more cars on the road, the speed of travel decreases. It is also true that if you are traveling at fast speed, you are advised to leave a larger gap with cars ahead. These two considerations lead us to propose a monotonically decreasing $u=u(\rho)$ function. As a first approximation, we may consider the linear function: \begin{equation}u(\rho) = u_{\rm max} \left(1-\frac{\rho}{\rho_{\rm max}}\right)\end{equation} #### Figure 3. Traffic speed vs. traffic density. The linear model of the behavior of drivers satisfies these experimental observations: 1. All drivers will approach a maximum velocity $u_{max}$ when the road is empty. 2. If the road is completely jampacked ($\rho \rightarrow \rho_{max}$), velocity goes to zero. That seems like a reasonable approximation of reality! Applying a conservation law to the vehicle traffic, the traffic density will obey the following transport equation: \begin{equation} \frac{\partial \rho}{\partial t} + \frac{\partial F}{\partial x} = 0 \end{equation} where $F$ is the *traffic flux*, which in the linear traffic-speed model is given by: \begin{equation} F = \rho u_{\rm max} \left(1-\frac{\rho}{\rho_{\rm max}}\right). \end{equation} We can now use our numerical kung-fu to solve some interesting traffic situations, and check if our simple model gives realistic results! ### Green light! 
Let's say that we are examining a road of length $4$ where the speed limit is $u_{\rm max}=1$, fitting $10$ cars per unit length $(\rho_{\rm max}=10)$. Now, imagine we have an intersection with a red light at $x=2$. At the stoplight, traffic is bumper-to-bumper, and the traffic density decreases linearly to zero as we approach the beginning of our road. Ahead of the stoplight, the road is clear. Mathematically, we can represent this situation with the following initial condition: \begin{equation}\rho(x,0) = \left\{ \begin{array}{cc} \rho_{\rm max}\frac{x}{2} & 0 \leq x < 2 \\ 0 & 2 \leq x \leq 4 \\ \end{array} \right.\end{equation} Let's see what a plot of that looks like. ```python %matplotlib inline import numpy from matplotlib import pyplot from matplotlib import rcParams rcParams['font.family'] = 'serif' rcParams['font.size'] = 16 ``` ```python def rho_green_light(nx, rho_light): """Computes "green light" initial condition with shock, and linear distribution behind Parameters ---------- nx : int Number of grid points in x rho_light : float Density of cars at stoplight Returns ------- rho: array of floats Array with initial values of density """ rho = numpy.arange(nx)*2./nx*rho_light # Before stoplight rho[int((nx-1)/2):] = 0 return rho ``` ```python #Basic initial condition parameters #defining grid size, time steps nx = 81 nt = 30 dx = 4.0/(nx-1) x = numpy.linspace(0,4,nx) rho_max = 10. u_max = 1. rho_light = 10. rho = rho_green_light(nx, rho_light) ``` ```python pyplot.plot(x, rho, color='#003366', ls='-', lw=3) pyplot.ylabel('Traffic density') pyplot.xlabel('Distance') pyplot.ylim(-0.5,11.); ``` **How does the traffic behave once the light turns green?** Cars should slowly start moving forward: the density profile should move to the right. Let's see if the numerical solution agrees with that! Before we start, let's define a function to calculate the traffic flux. We'll use it in each time step of our numerical solution. ```python def computeF(u_max, rho_max, rho): """Computes flux F=V*rho Parameters ---------- u_max : float Maximum allowed velocity rho : array of floats Array with density of cars at every point x rho_max: float Maximum allowed car density Returns ------- F : array Array with flux at every point x """ return u_max*rho*(1-rho/rho_max) ``` ### Forward-time/backward-space Start by using a forward-time, backward-space scheme, like you used in Module 2. The discretized form of our traffic model is: \begin{equation} \frac{\rho^{n+1}_i- \rho^n_{i}}{\Delta t}+ \frac{F^n_{i}-F^n_{i-1}}{\Delta x}=0 \end{equation} Like before, we'll step in time via a for-loop, and we'll operate on all spatial points simultaneously via array operations. In each time step, we also need to call the function that computes the flux. 
Here is a function that implements in code the forward-time/backward-space difference scheme: ```python def ftbs(rho, nt, dt, dx, rho_max, u_max): """ Computes the solution with forward in time, backward in space Parameters ---------- rho : array of floats Density at current time-step nt : int Number of time steps dt : float Time-step size dx : float Mesh spacing rho_max: float Maximum allowed car density u_max : float Speed limit Returns ------- rho_n : array of floats Density after nt time steps at every point x """ #initialize our results array with dimensions nt by nx rho_n = numpy.zeros((nt,len(rho))) #copy the initial u array into each row of our new array rho_n[0,:] = rho.copy() for t in range(1,nt): F = computeF(u_max, rho_max, rho) rho_n[t,1:] = rho[1:] - dt/dx*(F[1:]-F[:-1]) rho_n[t,0] = rho[0] rho = rho_n[t].copy() return rho_n ``` We're all good to go! **Note:** The code above saves the complete traffic density at each time-step—we'll use that in a second to create animations with our results. Running the numerical solution is easy now: we just need to call the function for evolving the initial condition with the forward-time/backward-space scheme. ```python sigma = 1. dt = sigma*dx rho_n = ftbs(rho, nt, dt, dx, rho_max, u_max) ``` Let's see how that looks. Below is another way to use the `matplotlib` animation routines. Instead of computing and animating at the same time, we can pass the results of our calculation, stored in the array `rho_n`, as individual frames. It doesn't make much of a difference for a computationally light example like this one, but if you want to animate more complicated problems, it's nice to separate the computation from the visualization. You don't want to re-run your entire simulation just so you can update the line style! ```python from matplotlib import animation from IPython.display import HTML ``` ```python fig = pyplot.figure(); ax = pyplot.axes(xlim=(0,4),ylim=(-.5,11.5),xlabel=('Distance'),ylabel=('Traffic density')); line, = ax.plot([],[],color='#003366', lw=2); def animate(data): x = numpy.linspace(0,4,nx) y = data line.set_data(x,y) return line, anim = animation.FuncAnimation(fig, animate, frames=rho_n, interval=50) ``` ```python HTML(anim.to_html5_video()) ``` **Yikes! The solution is blowing up.** This didn't happen in your traffic-flow exercise (coding assignment) for Module 2! (Thankfully.) What is going on? Is there a bug in the code? No need to panic. Let's take a closer look at the equation we are solving: \begin{equation}\frac{\partial \rho}{\partial t} + \frac{\partial F}{\partial x} = 0\end{equation} Using the chain rule of calculus, rewrite is as follows: \begin{equation} \frac{\partial \rho}{\partial t} + \frac{\partial F}{\partial \rho} \frac{\partial \rho}{\partial x} = 0\end{equation} This form of the equation looks like the nonlinear convection equation from [Lesson 1 of Module 2](http://nbviewer.ipython.org/github/numerical-mooc/numerical-mooc/blob/master/lessons/02_spacetime/02_01_1DConvection.ipynb), right? This is a wave equation where the wave speed is $u_{\rm wave} = \frac{\partial F}{\partial\rho}$. That term is: \begin{equation}u_{\rm wave} = \frac{\partial F}{\partial \rho} = u_{\rm max} \left( 1-2\frac{\rho}{\rho_{\rm max}} \right).\end{equation} See how the wave speed changes sign at $\rho = \rho_{\rm max}/2$? 
That means that for the initial conditions given for the green-light problem, the part of the wave under $\rho = \rho_{\rm max}/2$ will want to move right, whereas the part of the wave over this mark, will move left! There is no real problem with that in terms of the model, but a scheme that is backward in space is *unstable* for negative values of the wave speed. ## Upwind schemes Maybe you noticed that the backward-space discretization is spatially biased: we include the points $i$ and $i-1$ in the formula. Look again at the stencil and you'll see what we mean. #### Figure 4. Stencil of forward-time/backward-space. In fact, the spatial bias was meant to be in the direction of propagation of the wave—this was true when we solved the convection equation (with positive wave speed $c$), but now we have some problems. Discretization schemes that are biased in the direction that information propagates are called _upwind schemes_. Remember when we discussed the characteristic lines for the linear convection equation in [lesson 1 of the previous module](http://nbviewer.ipython.org/github/numerical-mooc/numerical-mooc/blob/wave_dev/lessons/02_spacetime/02_01_1DConvection.ipynb)? Compare the sketch of the characteristic lines with the stencil above. The point is that there is an inherent directionality in the physics, and we want the numerical scheme to have the same directionality. This is one example of _choosing an appropriate scheme_ for the physical problem. If we wanted to solve the convection equation with negative wave speed, $c<0$, we would need a spatial bias "slanting left," which we would obtain by using the points $i$ and $i+1$ in the formula. But if we have waves traveling in both directions, we are in a bit of a bind. One way to avoid this problem with our traffic flow model is to simply use an initial condition that doesn't produce negative speed. This should work. But later we will learn about other numerical schemes that are able to handle waves in both directions. Just for a sanity check, let's try the forward-time/backward-space scheme with the initial conditions \begin{equation}\rho(x,0) = \left\{ \begin{array}{cc} 2.5 x & 0 \leq x < 2 \\ 0 & 2 \leq x \leq 4 \\ \end{array} \right.\end{equation} If all values of $\rho \leq \rho_{\rm max}/2$, then $\frac{\partial F}{\partial \rho}$ is positive everywhere. For these conditions, our forward-time/backward-space scheme shouldn't have any trouble, as all wave speeds are positive. ```python rho_light = 5. nt = 40 rho = rho_green_light(nx, rho_light) rho_n = ftbs(rho, nt, dt, dx, rho_max, u_max) anim = animation.FuncAnimation(fig, animate, frames=rho_n, interval=50) HTML(anim.to_html5_video()) ``` Phew! It works! Try this out yourself with different initial conditions. Also, you can easily create a new function `ftfs` to do a forward-time/forward-space scheme, which is stable for negative wave speeds. Unfortunately, forward in space is unstable for positive wave speeds. If you don't want it blowing up, make sure the wave speed is negative everywhere: $u_{\rm wave} = \frac{\partial F}{\partial \rho} < 0 \ \forall \ x$. Look at that solution again, and you'll get some nice insights of the real physical problem. See how on the trailing edge, a shock is developing? In the context of the traffic flow problem, a shock is a sign of a traffic jam: a region where traffic is heavy and slow next to a region that is free of cars. In the initial condition, the cars in the rear end of the triangle see a mostly empty road (traffic density is low!). 
They see an empty road and speed up, accordingly. The cars in the peak of the triangle are moving pretty slowly because traffic density is higher there. Eventually the cars that started in the rear will catch up with the rest and form a traffic jam. ## Beware the CFL! [Lesson 2 of Module 2](http://nbviewer.ipython.org/github/numerical-mooc/numerical-mooc/blob/wave_dev/lessons/02_spacetime/02_02_CFLCondition.ipynb) discusses the CFL condition for the linear convection equation. To refresh your memory, for a constant wave speed $u_{\rm wave} = c$: \begin{equation} \sigma = c\frac{\Delta t}{\Delta x} < 1.\end{equation} What happens for non-linear equations? The wave speed is space- and time-dependent, $u_{\rm wave} = u_{\rm wave}(x,t)$, and the CFL condition needs to apply for every point in space, at every instant of time. We just need $\sigma>1$ in one spot, for the whole solution to blow up! Let's generalize the CFL condition to \begin{equation} \sigma = \max\left[ \left| u_{\rm wave} \right| \frac{\Delta t}{\Delta x} \right] < 1.\end{equation} which in our case is \begin{equation} \sigma = \max\left[ u_{\rm max} \left| 1-\frac{2 \rho}{\rho_{\rm max}} \right| \frac{\Delta t}{\Delta x} \right] < 1.\end{equation} Here, the closer $\rho$ is to zero, the more likely it is to be unstable. ### Green light and CFL We know that the green-light problem with density at the stop light $\rho = \rho_{\rm light} = 4$ is stable using a forward-time/backward -space scheme. Earlier, we used $u_{\rm max} = 1$, and $\Delta t/\Delta x=1$, which gives a CFL $= 1$, when $\rho = 0$. What if we change the conditions slightly, say $u_{\rm max} = 1.1$? ```python rho_light = 4. u_max = 1.1 nt = 40 rho = rho_green_light(nx, rho_light) rho_n = ftbs(rho, nt, dt, dx, rho_max, u_max) anim = animation.FuncAnimation(fig, animate, frames=rho_n, interval=50) HTML(anim.to_html5_video()) ``` That failed miserably! Only by changing $u_{\rm max}$ to $1.1$, even an algorithm that we know is stable for this problem, fails. Since we kept $\Delta t/\Delta x=1$, the CFL number for $\rho=0$ is $1.1$. See where the instability begins? Beware the CFL! ## References * Neville D. Fowkes and John J. Mahony, *"An Introduction to Mathematical Modelling,"* Wiley & Sons, 1994. Chapter 14: Traffic Flow. * M. J. Lighthill and G. B. Whitham (1955), On kinematic waves. II. Theory of traffic flow and long crowded roads, _Proc. Roy. Soc. A_, Vol. 229, pp. 317–345. [PDF from amath.colorado.edu](https://amath.colorado.edu/sites/default/files/2013/09/1710796241/PRSA_Lighthill_1955.pdf), checked Oct. 14, 2014. [Original source](http://rspa.royalsocietypublishing.org/content/229/1178/317.short) on the Royal Society site. --- ###### The cell below loads the style of the notebook. 
```python from IPython.core.display import HTML css_file = '../../styles/numericalmoocstyle.css' HTML(open(css_file, "r").read()) ``` <link href='http://fonts.googleapis.com/css?family=Alegreya+Sans:100,300,400,500,700,800,900,100italic,300italic,400italic,500italic,700italic,800italic,900italic' rel='stylesheet' type='text/css'> <link href='http://fonts.googleapis.com/css?family=Arvo:400,700,400italic' rel='stylesheet' type='text/css'> <link href='http://fonts.googleapis.com/css?family=PT+Mono' rel='stylesheet' type='text/css'> <link href='http://fonts.googleapis.com/css?family=Shadows+Into+Light' rel='stylesheet' type='text/css'> <link href='http://fonts.googleapis.com/css?family=Nixie+One' rel='stylesheet' type='text/css'> <link href='https://fonts.googleapis.com/css?family=Source+Code+Pro' rel='stylesheet' type='text/css'> <style> @font-face { font-family: "Computer Modern"; src: url('http://mirrors.ctan.org/fonts/cm-unicode/fonts/otf/cmunss.otf'); } #notebook_panel { /* main background */ background: rgb(245,245,245); } div.cell { /* set cell width */ width: 750px; } div #notebook { /* centre the content */ background: #fff; /* white background for content */ width: 1000px; margin: auto; padding-left: 0em; } #notebook li { /* More space between bullet points */ margin-top:0.8em; } /* draw border around running cells */ div.cell.border-box-sizing.code_cell.running { border: 1px solid #111; } /* Put a solid color box around each cell and its output, visually linking them*/ div.cell.code_cell { background-color: rgb(256,256,256); border-radius: 0px; padding: 0.5em; margin-left:1em; margin-top: 1em; } div.text_cell_render{ font-family: 'Alegreya Sans' sans-serif; line-height: 140%; font-size: 125%; font-weight: 400; width:600px; margin-left:auto; margin-right:auto; } /* Formatting for header cells */ .text_cell_render h1 { font-family: 'Nixie One', serif; font-style:regular; font-weight: 400; font-size: 45pt; line-height: 100%; color: rgb(0,51,102); margin-bottom: 0.5em; margin-top: 0.5em; display: block; } .text_cell_render h2 { font-family: 'Nixie One', serif; font-weight: 400; font-size: 30pt; line-height: 100%; color: rgb(0,51,102); margin-bottom: 0.1em; margin-top: 0.3em; display: block; } .text_cell_render h3 { font-family: 'Nixie One', serif; margin-top:16px; font-size: 22pt; font-weight: 600; margin-bottom: 3px; font-style: regular; color: rgb(102,102,0); } .text_cell_render h4 { /*Use this for captions*/ font-family: 'Nixie One', serif; font-size: 14pt; text-align: center; margin-top: 0em; margin-bottom: 2em; font-style: regular; } .text_cell_render h5 { /*Use this for small titles*/ font-family: 'Nixie One', sans-serif; font-weight: 400; font-size: 16pt; color: rgb(163,0,0); font-style: italic; margin-bottom: .1em; margin-top: 0.8em; display: block; } .text_cell_render h6 { /*use this for copyright note*/ font-family: 'PT Mono', sans-serif; font-weight: 300; font-size: 9pt; line-height: 100%; color: grey; margin-bottom: 1px; margin-top: 1px; } .CodeMirror{ font-family: "Source Code Pro"; font-size: 90%; } .alert-box { padding:10px 10px 10px 36px; margin:5px; } .success { color:#666600; background:rgb(240,242,229); } </style>
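Following the "Beware the CFL!" discussion above, here is a small helper that evaluates the generalized CFL number $\sigma = \max\left[ u_{\rm max} \left| 1-\frac{2 \rho}{\rho_{\rm max}} \right| \frac{\Delta t}{\Delta x} \right]$ for a given density array. The function name and interface are our own sketch, not part of the original lesson.


```python
import numpy

def traffic_cfl(rho, u_max, rho_max, dt, dx):
    """Generalized CFL number for the traffic model (sketch).

    Returns max( u_max * |1 - 2*rho/rho_max| ) * dt/dx, which must stay
    below 1 for the forward-time/backward-space scheme used above.
    """
    u_wave = u_max * numpy.abs(1 - 2 * rho / rho_max)
    return numpy.max(u_wave) * dt / dx

# For the last (unstable) run above (rho_light=4, u_max=1.1, dt/dx=1), the maximum
# wave speed occurs where rho=0, giving sigma = 1.1 > 1, consistent with the blow-up.
```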
a68517d2ab7d51fe85317cd6ce64e4e01fbf62f3
193,347
ipynb
Jupyter Notebook
lessons/03_wave/03_01_conservationLaw.ipynb
sergiommr/numerical-mooc
b088e9d205f15dbc22f83e45c2181a2c5809365f
[ "CC-BY-3.0" ]
1
2020-05-27T04:13:23.000Z
2020-05-27T04:13:23.000Z
lessons/03_wave/03_01_conservationLaw.ipynb
sergiommr/numerical-mooc
b088e9d205f15dbc22f83e45c2181a2c5809365f
[ "CC-BY-3.0" ]
1
2017-01-16T20:53:59.000Z
2017-01-16T20:53:59.000Z
lessons/03_wave/03_01_conservationLaw.ipynb
sergiommr/numerical-mooc
b088e9d205f15dbc22f83e45c2181a2c5809365f
[ "CC-BY-3.0" ]
1
2020-05-27T04:13:25.000Z
2020-05-27T04:13:25.000Z
92.510526
21,547
0.819061
true
7,111
Qwen/Qwen-72B
1. YES 2. YES
0.640636
0.843895
0.540629
__label__eng_Latn
0.991971
0.094393
```python from sympy import init_session, init_printing, Matrix init_session() init_printing() ``` IPython console for SymPy 0.7.6 (Python 2.7.8-64-bit) (ground types: python) These commands were executed: >>> from __future__ import division >>> from sympy import * >>> x, y, z, t = symbols('x y z t') >>> k, m, n = symbols('k m n', integer=True) >>> f, g, h = symbols('f g h', cls=Function) >>> init_printing() Documentation can be found at http://www.sympy.org ## Twiss Matrices ```python B, Bi, R, mu = symbols('B, B_{inv}, R, mu') a0, a1, b0, b1 = symbols('alpha_0, alpha_1, beta_0, beta_1', real=True, positive=True) # V, h, phi, eta, R, C = symbols('V, h, phi, eta, R, C') # z, dp = symbols('z, dp', real=True) # Hc = symbols('H_c') B = Matrix([[1/sqrt(b0), 0], [a0/sqrt(b0), sqrt(b0)]]) R = Matrix([[cos(mu), sin(mu)], [-sin(mu), cos(mu)]]) Bi = Matrix([[sqrt(b1), 0], [-a1/sqrt(b1), 1/sqrt(b1)]]) ``` ```python expand_trig(simplify(trigsimp(Bi * R * B))) ``` ```python simplify(Bi * Matrix([[1, 0],[0, 1]]) * B) ``` ```python simplify(Bi * Matrix([[0, 1],[-1, 0]]) * B) ``` ## Hamiltonian ```python e, beta, c = symbols('e, beta, c') V, h, phi, eta, R, C = symbols('V, h, phi, eta, R, C') z, dp = symbols('z, dp', real=True) Hc = symbols('H_c') zs = C/h/2. def H(z, dp): return -eta*beta*c*dp**2 + e*V/C * cos(h*z/R + phi) H(z, dp) ``` ```python def ps(z): return solve(Eq(H(z,dp), 0), dp)[0] ps(z) ``` ```python def ps(z): return sqrt(cos(z)) # return -sqrt(e*V/(eta*beta*c*C) * h*z/R) # return -sqrt(e*V/(eta*beta*c*C) * cos(h*z/R+phi)) ps(z) ``` ```python integrate(ps(z), (z, -zs, zs)) ```
b455c62a346add87dc88637d50975f99c1b88f0d
17,701
ipynb
Jupyter Notebook
Auxiliaries.ipynb
like2000/PyCOBRA
8cbac23da295b5d3b9e9a2b6c1560173f929eea1
[ "MIT" ]
null
null
null
Auxiliaries.ipynb
like2000/PyCOBRA
8cbac23da295b5d3b9e9a2b6c1560173f929eea1
[ "MIT" ]
null
null
null
Auxiliaries.ipynb
like2000/PyCOBRA
8cbac23da295b5d3b9e9a2b6c1560173f929eea1
[ "MIT" ]
null
null
null
63.444444
5,122
0.695328
true
639
Qwen/Qwen-72B
1. YES 2. YES
0.92944
0.845942
0.786253
__label__eng_Latn
0.139803
0.665062
# Maximum likelihood estimation: how neural networks learn This reading is a review of maximum likelihood estimation (MLE), an important learning principle used in neural network training. ```python from IPython.display import Image ``` ## Introduction Why are neural networks trained the way they are? For example, why do you use a mean squared error loss function for a regression task, but a sparse categorical crossentropy loss for classification? The answer lies in the *likelihood* function, with a long history in statistics. In this reading, we'll look at what this function is and how it leads to the loss functions used to train deep learning models. Since you're taking a course in Tensorflow Probability, I'll assume you already have some understanding of probability distributions, both discrete and continous. If you don't, there are countless resources to help you understand them. I find the [Wikipedia page](https://en.wikipedia.org/wiki/Probability_distribution) works well for an intuitive introduction. For a more solid mathematical description, see an introductory statistics course. ## Probability mass and probability density functions Every probability distribution has either a probability mass function (if the distribution is discrete) or a probability density function (if the distribution is continuous). This function roughly indicates the probability of a sample taking a particular value. We will denote this function $P(y | \theta)$ where $y$ is the value of the sample and $\theta$ is the parameter describing the probability distribution. Written out mathematically, we have: $$ P(y | \theta) = \text{Prob} (\text{sampling value $y$ from a distribution with parameter $\theta$}). $$ When more than one sample is drawn *independently* from the same distribution (which we usually assume), the probability mass/density function of the sample values $y_1, \ldots, y_n$ is the product of the probability mass/density functions for each individual $y_i$. Written formally: $$ P(y_1, \ldots, y_n | \theta) = \prod_{i=1}^n P(y_i | \theta). $$ This all sounds more complicated than it is: see the examples below for a more concrete illustration. ## The likelihood function Probability mass/density functions are usually considered functions of $y_1, \ldots, y_n$, with the parameter $\theta$ considered fixed. They are used when you know the parameter $\theta$ and want to know the probability of a sample taking some values $y_1, \ldots, y_n$. You use this function in *probability*, where you know the distribution and want to make deductions about possible values sampled from it. The *likelihood* function is the same, but with the $y_1, \ldots, y_n$ considered fixed and with $\theta$ considered the independent variable. You usually use this function when you know the sample values $y_1, \ldots, y_n$ (because you've observed them by collecting data), but don't know the parameter $\theta$. You use this function in *statistics*, where you know the data and want to make inferences about the distribution they came from. This is an important point, so I'll repeat it: $P(y_1, \ldots, y_n | \theta)$ is called the *probability mass/density function* when considered as a function of $y_1, \ldots, y_n$ with $\theta$ fixed. It's called the *likelihood* when considered as a function of $\theta$ with $y_1, \ldots, y_n$ fixed. 
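Here is a minimal numerical illustration of this distinction, written in plain Python so it does not depend on any particular library; the numbers are illustrative only.


```python
# The same function P(y | theta) = theta**y * (1 - theta)**(1 - y), read two ways.

def bernoulli_pmf(y, theta):
    return theta**y * (1.0 - theta)**(1 - y)

# As a probability mass function: theta is fixed, y varies.
theta = 0.3
print([bernoulli_pmf(y, theta) for y in (0, 1)])           # approximately [0.7, 0.3]

# As a likelihood: the observed data are fixed, theta varies.
data = [1, 0, 1]
def likelihood(theta, data=data):
    value = 1.0
    for y in data:
        value *= bernoulli_pmf(y, theta)
    return value

print([round(likelihood(t), 4) for t in (0.3, 2/3, 0.9)])  # largest near theta = 2/3
```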
For the likelihood, the convention is to use the letter $L$, so that

$$ \underbrace{L(y_1, \ldots, y_n | \theta)}_{\text{ likelihood,} \\ \text{function of $\theta$}} = \underbrace{P(y_1, \ldots, y_n | \theta)}_{\text{probability mass/density,} \\ \text{ function of $y_1, \ldots, y_n$}} $$

Let's see some examples of this below.

#### Bernoulli distribution

We'll start by looking at the [Bernoulli distribution](https://en.wikipedia.org/wiki/Bernoulli_distribution) with parameter $\theta$. It's the distribution of a random variable that takes value 1 with probability $\theta$ and 0 with probability $1-\theta$. Let $P(y | \theta)$ be the probability that the event returns value $y$ given parameter $\theta$. Then

$$
\begin{align}
L(y | \theta) = P(y | \theta) &= \begin{cases} 1 - \theta \quad \text{if} \, y = 0 \\ \theta \quad \quad \, \, \, \text{if} \, y = 1 \\ \end{cases} \\
&= (1 - \theta)^{1 - y} \theta^y \quad y \in \{0, 1\}
\end{align}
$$

If we assume samples are independent, we also have

$$ L(y_1, \ldots, y_n | \theta) = \prod_{i=1}^n (1 - \theta)^{1 - y_i} \theta^{y_i}. $$

For example, the probability of observing $0, 0, 0, 1, 0$ is

$$ L(0, 0, 0, 1, 0 | \theta) = (1 - \theta)(1 - \theta)(1 - \theta)\theta(1 - \theta) = \theta(1 - \theta)^4. $$

Note that, in this case, we have fixed the data, and are left with a function just of $\theta$. This is called the *likelihood* function. Let's plot the likelihood as a function of $\theta$ below.

```python
# Run this cell to download and view a figure to plot the Bernoulli likelihood function

!wget -q -O bernoulli_likelihood.png --no-check-certificate "https://docs.google.com/uc?export=download&id=1vX9ARfK3QU6ZqxUyMM63s2lKfdwx2Bj9"
Image("bernoulli_likelihood.png", width=500)
```

#### Normal (Gaussian) distribution

This idea also generalises naturally to the [Normal distribution](https://en.wikipedia.org/wiki/Normal_distribution) (also called the *Gaussian* distribution). This distribution has two parameters: a mean $\mu$ and a standard deviation $\sigma$. We hence let $\theta = (\mu, \sigma)$. The probability density function (the analogue of the probability mass function for continuous distributions) is:

$$ L(y | \theta) = P(y | \theta) = P(y | \mu, \sigma) = \frac{1}{\sqrt{2 \pi \sigma^2}} \exp \Big( - \frac{1}{2 \sigma^2} (y - \mu)^2 \Big). $$

For a sequence of independent observations $y_1, \ldots, y_n$, the likelihood is

$$ L(y_1, \ldots, y_n | \mu, \sigma) = \prod_{i=1}^n \frac{1}{\sqrt{2 \pi \sigma^2}} \exp \Big( - \frac{1}{2 \sigma^2} (y_i - \mu)^2 \Big). $$

The *likelihood* is hence the same, but viewed as a function of $\mu$ and $\sigma$, with $y_1, \ldots, y_n$ viewed as constants. For example, if the observed data is -1, 0, 1, the likelihood becomes

$$ L(-1, 0, 1 | \mu, \sigma) = (2 \pi \sigma^2)^{-3/2} \exp \Big( - \frac{1}{2 \sigma^2} \big( (\mu-1)^2 + \mu^2 + (\mu+1)^2 \big) \Big). $$

We can plot this as a function of $\mu$ and $\sigma$ below.

```python
# Run this cell to download and view a figure to plot the Gaussian likelihood function

!wget -q -O gaussian_likelihood.png --no-check-certificate "https://docs.google.com/uc?export=download&id=1vKOhDpuujwANx1dpAw5-CMLPeIiyDgEi"
Image("gaussian_likelihood.png", width=500)
```

## Maximum likelihood estimation

The likelihood function is commonly used in statistical inference when we are trying to fit a distribution to some data. This is usually done as follows.
Suppose we have observed data $y_1, \ldots, y_n$, assumed to be from some distribution with unknown parameter $\theta$, which we want to estimate. The likelihood is

$$ L(y_1, \ldots, y_n | \theta). $$

The *maximum likelihood estimate* $\theta_{\text{MLE}}$ of the parameter $\theta$ is then the value that maximises the likelihood $L(y_1, \ldots, y_n | \theta)$. For the example of the Bernoulli distribution with observed data 0, 0, 0, 1, 0 (as in the plot above), this gives us $\theta=\frac{1}{5}$, which is where the plot takes its maximum. For the normal distribution with data -1, 0, 1, this is the region where the plot is brightest (indicating the highest value), and this occurs at $\mu=0, \sigma=\sqrt{\frac{2}{3}}$. In this way, we *pick the values of the parameter that make the data we have observed the most likely*. Written in mathematical notation, this is

$$ \theta_{\text{MLE}} = \arg \max_{\theta} L(y_1, \ldots, y_n | \theta). $$

## The negative log-likelihood

Recall that, for independent observations, the likelihood becomes a product:

$$ L(y_1, \ldots, y_n | \theta) = \prod_{i=1}^n P(y_i | \theta). $$

Furthermore, since the $\log$ function increases with its argument, maximising the likelihood is equivalent to maximising the log-likelihood $\log L(y_1, \ldots, y_n | \theta)$. This changes the product into a sum:

$$
\begin{align}
\theta_{\text{MLE}} &= \arg \max_{\theta} L(y_1, \ldots, y_n | \theta) \\
&= \arg \max_{\theta} \log L(y_1, \ldots, y_n | \theta) \\
&= \arg \max_{\theta} \log \prod_{i=1}^n L(y_i | \theta) \\
&= \arg \max_{\theta} \sum_{i=1}^n \log L(y_i | \theta).
\end{align}
$$

Furthermore, the convention in optimisation is that we always *minimise* a function instead of maximising it. Hence, maximising the likelihood is equivalent to *minimising* the *negative log-likelihood*:

$$ \theta_{\text{MLE}} = \arg \min_{\theta} \text{NLL}(y_1, \ldots, y_n | \theta) $$

where the *negative log-likelihood* NLL is defined as

$$ \text{NLL}(y_1, \ldots, y_n | \theta) = - \sum_{i=1}^n \log L(y_i | \theta). $$

## Training neural networks

How is all this used to train neural networks? We do this, given some training data, by picking the weights of the neural network that maximise the likelihood (or, equivalently, minimise the negative log-likelihood) of having observed that data. More specifically, the neural network is a function that maps a data point $x_i$ to the parameter $\theta$ of some distribution. This parameter indicates the probability of seeing each possible label. We then use our true labels and the likelihood to find the best weights of the neural network.

Let's be a bit more precise about this. Suppose we have a neural network $\text{NN}$ with weights $\mathbf{w}$. Furthermore, suppose $x_i$ is some data point, e.g. an image to be classified, or an $x$ value for which we want to predict the $y$ value. The neural network prediction (the feedforward value) $\hat{y}_i$ is

$$ \hat{y}_i = \text{NN}(x_i | \mathbf{w}). $$

We can use this to train the neural network (determine its weights $\mathbf{w}$) as follows. We assume that the neural network prediction $\hat{y}_i$ forms part of a distribution that the true label is drawn from. Suppose we have some training data consisting of inputs and the associated labels. Let the data be $x_i$ and the labels $y_i$ for $i=1, \ldots, n$, where $n$ is the number of training samples.
The training data is hence

$$ \text{training data} = \{(x_1, y_1), \ldots, (x_n, y_n)\} $$

For each point $x_i$, we have the neural network prediction $\hat{y}_i = \text{NN}(x_i | \mathbf{w})$, which we assume specifies a distribution. We also have the true label $y_i$. The weights of the trained neural network are then those that minimise the negative log-likelihood:

$$
\begin{align}
\mathbf{w}^* &= \arg \min_{\mathbf{w}} \big( - \sum_{i=1}^n \log L(y_i | \hat{y}_i) \big) \\
&= \arg \min_{\mathbf{w}} \big( - \sum_{i=1}^n \log L(y_i | \text{NN}(x_i | \mathbf{w})) \big)
\end{align}
$$

In practice, determining the true optimum $\mathbf{w}^*$ is not always possible. Instead, an approximate value is sought using stochastic gradient descent, usually via a *backpropagation* of derivatives and some optimization algorithm such as `RMSprop` or `Adam`.

Let's see some examples to make this idea more concrete.

#### Bernoulli distribution: binary classifiers

Suppose we want a neural network NN that classifies images into either cats or dogs. Here, $x_i$ is an image of either a cat or a dog, and $\hat{y}_i$ is the probability that this image is either a cat (value 0) or a dog (value 1):

$$ \hat{y}_i = \text{NN}(x_i | \mathbf{w}) = \text{Prob}(\text{image is dog}). $$

Note that this is just a Bernoulli distribution with values 0 and 1 corresponding to cat and dog respectively, whose likelihood function we discussed above. Given training data $\{(x_1, y_1), \ldots, (x_n, y_n)\}$, with $y_i \in \{0, 1\}$, we have the negative log-likelihood

$$
\begin{align}
\text{NLL}((x_1, y_1), \ldots, (x_n, y_n) | \mathbf{w}) &= - \sum_{i=1}^n \log L(y_i | \hat{y}_i) \\
&= - \sum_{i=1}^n \log \big( (1 - \hat{y}_i)^{1 - y_i} \hat{y}_i^{y_i} \big) \\
&= - \sum_{i=1}^n \big( (1 - y_i) \log(1 - \hat{y}_i) + y_i \log \hat{y}_i \big) \\
&= - \sum_{i=1}^n \big( (1 - y_i) \log(1 - \text{NN}(x_i | \mathbf{w})) + y_i \log \text{NN}(x_i | \mathbf{w}) \big). \\
\end{align}
$$

This is exactly the binary cross-entropy loss function used when training a binary classification neural network. Hence, the reason why we typically use cross-entropy loss functions when training classification models is exactly because this is the negative log-likelihood under a Bernoulli (or, when there are more than 2 classes, a categorical) distribution.

#### Normal distribution: least squares regression

The idea works the same way in a regression task. Here, we have an $x$-value $x_i$ and want to predict the associated $y$-value $y_i$. We can use a neural network to do this, giving a prediction $\hat{y}_i$:

$$ \hat{y}_i = \text{NN}(x_i | \mathbf{w}). $$

For example, suppose we were doing linear regression with the following data.

```python
# Run this cell to download and view a figure to plot the example data

!wget -q -O linear_regression.png --no-check-certificate "https://docs.google.com/uc?export=download&id=13p6E1qKf92b7UIYOxkU_jPpu9R5rUWfz"
Image("linear_regression.png", width=500)
```

It's not possible to put a straight line through every data point. Furthermore, even points with the same $x$ value might not have the same $y$ value. We can interpret this as $y$ being linearly related to $x$ with some noise. More precisely, we may assume that

$$ y_i = f(x_i) + \epsilon_i \quad \quad \epsilon_i \sim N(0, \sigma^2) $$

where $f$ is some function we want to determine (the regression) and $\epsilon_i$ is some Gaussian noise with mean 0 and constant variance $\sigma^2$.
In deep learning, we might approximate $f(x_i)$ by a neural network $\text{NN}(x_i | \mathbf{w})$ with weights $\mathbf{w}$ and output $\hat{y}_i$. $$ \hat{y}_i = \text{NN}(x_i | \mathbf{w}) = f(x_i) $$ Under this assumption, we have $$ \epsilon_i = y_i - \hat{y}_i \sim N(0, \sigma^2) $$ and hence, given training data $\{(x_1, y_1), \ldots, (x_n, y_n)\}$, we have the negative log-likelihood (assuming the noise terms are independent): $$ \begin{align} \text{NLL}((x_1, y_1), \ldots, (x_n, y_n) | \mathbf{w}) &= - \sum_{i=1}^n \log L(y_i | \hat{y}_i) \\ &= - \sum_{i=1}^n \log \Big( \frac{1}{\sqrt{2\pi\sigma^2}} \exp \Big( - \frac{1}{2\sigma^2} (\hat{y}_i - y_i)^2 \Big) \Big) \\ &= \frac{n}{2} \log (2\pi\sigma^2) + \frac{1}{2\sigma^2} \sum_{i=1}^n (\hat{y}_i - y_i)^2 \\ &= \frac{n}{2} \log (2\pi\sigma^2) + \frac{1}{2\sigma^2} \sum_{i=1}^n (\text{NN}(x_i | \mathbf{w}) - y_i)^2. \end{align} $$ Note that only the last term includes the weights. Hence, minimising the negative log-likelihood is equivalent to minimising $$ \sum_{i=1}^n (\text{NN}(x_i | \mathbf{w}) - y_i)^2 $$ which is exactly the sum of squared errors. Hence, least squares regression (or training a neural network using the mean squared error) is equivalent to training a neural network to match the expected value of an output by minimising the negative log-likelihood assuming a Gaussian error term with constant variance. ## Conclusion This was a very short introduction to maximum likelihood estimation, which is essential for deep learning, especially of the probabilistic variety that we'll be doing in this course. The method of maximum likelihood estimation is key to training neural networks, and typically informs the choice of loss function. In fact, you have probably trained neural networks using maximum likelihood estimation without even knowing it! ## Further reading and resources I find that the Wikipedia pages for many statistical concepts offer excellent intuition. If you'd like to read up on these ideas in more detail, I'd recommend these: * The Wikipedia page for Probability Distribution: https://en.wikipedia.org/wiki/Probability_distribution * The Wikipedia page for Maximum Likelihood Estimation: https://en.wikipedia.org/wiki/Maximum_likelihood_estimation
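As a quick numerical check of the two estimates quoted in this reading (a minimal sketch, assuming only NumPy; it is not part of the original notebook), the Bernoulli negative log-likelihood for the data 0, 0, 0, 1, 0 is minimised near $\theta = 0.2$, and the normal MLE for the data -1, 0, 1 is $\mu = 0$, $\sigma = \sqrt{2/3}$:

```python
import numpy as np

# Bernoulli data 0, 0, 0, 1, 0: grid search over theta for the minimum NLL
data = np.array([0, 0, 0, 1, 0])
thetas = np.linspace(0.001, 0.999, 999)
nll = np.array([-np.sum(data * np.log(t) + (1 - data) * np.log(1 - t)) for t in thetas])
print(thetas[np.argmin(nll)])                  # approximately 0.2

# Normal data -1, 0, 1: closed-form MLE for mean and standard deviation
y = np.array([-1.0, 0.0, 1.0])
mu_mle = y.mean()
sigma_mle = np.sqrt(np.mean((y - mu_mle) ** 2))
print(mu_mle, sigma_mle, np.sqrt(2 / 3))       # 0.0, 0.8165..., 0.8165...
```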
53f249009b6bbdb212abeda1cd9e6d4127053dd8
70,542
ipynb
Jupyter Notebook
Week2/Maximum likelihood estimation.ipynb
stevensmiley1989/Prob_TF2_Examples
fa022e58a44563d09792070be5d015d0798ca00d
[ "MIT" ]
null
null
null
Week2/Maximum likelihood estimation.ipynb
stevensmiley1989/Prob_TF2_Examples
fa022e58a44563d09792070be5d015d0798ca00d
[ "MIT" ]
null
null
null
Week2/Maximum likelihood estimation.ipynb
stevensmiley1989/Prob_TF2_Examples
fa022e58a44563d09792070be5d015d0798ca00d
[ "MIT" ]
null
null
null
70,542
70,542
0.883615
true
4,668
Qwen/Qwen-72B
1. YES 2. YES
0.897695
0.826712
0.742135
__label__eng_Latn
0.988721
0.562561
# 13 Root Finding

An important tool in the computational tool box is to find roots of equations for which no closed form solutions exist: We want to find the roots $x_0$ of

$$ f(x_0) = 0 $$

## Problem: Projectile range

The equations of motion for the projectile with linear air resistance (see *12 ODE applications*) can be solved exactly. As a reminder: the linear drag force is

$$ \mathbf{F}_1 = -b_1 \mathbf{v}\\ b := \frac{b_1}{m} $$

Equations of motion with force due to gravity $\mathbf{g} = -g \hat{\mathbf{e}}_y$

\begin{align}
\frac{d\mathbf{r}}{dt} &= \mathbf{v}\\
\frac{d\mathbf{v}}{dt} &= - g \hat{\mathbf{e}}_y -b \mathbf{v}
\end{align}

### Analytical solution of the equations of motion

(Following Wang Ch 3.3.2)

Solve the $x$ component of the velocity

$$ \frac{dv_x}{dt} = -b v_x $$

by integration:

$$ v_x(t) = v_{0x} \exp(-bt) $$

The drag force reduces the forward velocity to 0. Integrate again to get the $x(t)$ component

$$ x(t) = x_0 + \frac{v_{0x}}{b} \left[1 - \exp(-bt)\right] $$

Integrating the $y$ component of the velocity

$$ \frac{dv_y}{dt} = -g - b v_y $$

gives

$$ v_y = \left(v_{0y} + \frac{g}{b}\right) \exp(-bt) - \frac{g}{b} $$

and integrating again

$$ y(t) = y_0 + \frac{v_{0y} + \frac{g}{b}}{b} \left[1 - \exp(-bt)\right] - \frac{g}{b} t $$

(Note: This shows immediately that the *terminal velocity* is

$$ \lim_{t\rightarrow\infty} v_y(t) = - \frac{g}{b}, $$

i.e., the force of gravity is balanced by the drag force.)

#### Analytical trajectory

To obtain the **trajectory $y(x)$**, eliminate time (and for convenience, use the origin as the initial starting point, $x_0 = 0$ and $y_0 = 0$). Solve $x(t)$ for $t$

$$ t = -\frac{1}{b} \ln \left(1 - \frac{bx}{v_{0x}}\right) $$

and insert into $y(t)$:

$$ y(x) = \frac{x}{v_{0x}} \left( v_{0y} + \frac{g}{b} \right) + \frac{g}{b^2} \ln \left(1 - \frac{bx}{v_{0x}}\right) $$

#### Plot

Plot the analytical solution for $\theta = 30^\circ$ and $v_0 = 100$ m/s.

```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
```

```python
def y_lindrag(x, v0, b1=0.2, g=9.81, m=0.5):
    b = b1/m
    v0x, v0y = v0
    return x/v0x * (v0y + g/b) + g/(b*b) * np.log(1 - b*x/v0x)

def initial_v(v, theta):
    x = np.deg2rad(theta)
    return v * np.array([np.cos(x), np.sin(x)])
```

The analytical function drops *very* rapidly towards the end ($> 42$ m – found by manual trial-and-error plotting...) so in order to nicely plot the function we use a fairly coarse sampling of points along $x$ for the range $0 \le x < 42$ and very fine sampling for the last 2 m ($42 \le x < 45$):

```python
X = np.concatenate([np.linspace(0, 42, 100), np.linspace(42, 45, 10000)])
```

Evaluate the function for all $x$ values:

```python
Y = y_lindrag(X, initial_v(100, 30), b1=1)
```

    /Users/oliver/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:4: RuntimeWarning: invalid value encountered in log
      after removing the cwd from sys.path.

(The warning can be ignored; it just means that some of our `X` values were not appropriate and fell outside the range of validity – there the argument of the logarithm becomes ≤ 0.)

To indicate the ground we also plot a dashed black line: note that the analytical solution goes below the dashed line.
```python plt.plot(X, Y) plt.xlabel("$x$ (m)") plt.ylabel("$y$ (m)") plt.hlines([0], X[0], X[-1], colors="k", linestyles="--"); ``` Compare to the numerical solution (from **12 ODE Applications**): ```python import ode def simulate(v0, h=0.01, b1=0.2, g=9.81, m=0.5): def f(t, y): # y = [x, y, vx, vy] return np.array([y[2], y[3], -b1/m * y[2], -g - b1/m * y[3]]) vx, vy = v0 t = 0 positions = [] y = np.array([0, 0, vx, vy], dtype=np.float64) while y[1] >= 0: positions.append([t, y[0], y[1]]) # record t, x and y y[:] = ode.rk4(y, f, t, h) t += h return np.array(positions) ``` ```python r = simulate(initial_v(100, 30), h=0.01, b1=1) ``` ```python plt.plot(X, Y, lw=2, label="analytical") plt.plot(r[:, 1], r[:, 2], '--', label="RK4") plt.legend(loc="best") plt.xlabel("$x$ (m)"); plt.ylabel("$y$ (m)") plt.hlines([0], X[0], X[-1], colors="k", linestyles="--"); ``` The RK4 solution tracks the analytical solution perfectly (and we also programmed it to not go below ground...) OPTIONAL: Show the residual $$ r = y_\text{numerical}(x) - y_\text{analytical} $$ ```python residual = r[:, 2] - y_lindrag(r[:, 1], initial_v(100, 30), b1=1) ``` ```python plt.plot(r[:, 1], residual) plt.xlabel("$x$ (m)"); plt.ylabel("residual $r$ (m)"); ``` ### Predict the range $R$ How far does the ball or projectile fly, i.e., that value $x=R$ where $y(R) = 0$: $$ \frac{R}{v_{0x}} \left( v_{0y} + \frac{g}{b} \right) + \frac{g}{b^2} \ln \left(1 - \frac{bR}{v_{0x}}\right) = 0 $$ This *transcendental equation* can not be solved in terms of elementary functions. Use a **root finding** algorithm. ## Root-finding with the Bisection algorithm **Bisection** is the simplest (but very robust) root finding algorithm that uses trial-and-error: * bracket the root * refine the brackets * see [13_Root-finding-algorithms (PDF)](13_Root-finding-algorithms.pdf) More specifically: 1. determine a bracket that contains the root: $a < x_0 < b$ (i.e., an interval $[a, b]$ with $f(a) > 0$ and $f(b) < 0$ or $f(a) < 0$ and $f(b) > 0$) 2. cut bracket in half: $x' = \frac{1}{2}(a + b)$ 3. determine in which half the root lies: either in $[a, x']$ or in $[x', b]$: If $f(a) f(x') > 0$ then the root lies in the right half $[x', b]$, otherwise the left half $[a, x']$. 4. Change the boundaries $a$ or $b$. 5. repeat until $|f(x')| < \epsilon$. ### Implementation of Bisection - Test that the initial bracket contains a root; if not, return `None` (and possibly print a warning). - If either of the bracket points is a root then return the bracket point. - Allow `Nmax` iterations or until the convergence criterion `eps` is reached. - Bonus: print a message if no root was found after `Nmax` iterations, but print the best guess and the error (but return `None`). ```python def bisection(f, a, b, Nmax=100, eps=1e-14): fa, fb = f(a), f(b) if (fa*fb) > 0: print("bisect: Initial bracket [{0}, {1}] " "does not contain a single root".format(a, b)) return None if np.abs(fa) < eps: return a if np.abs(fb) < eps: return b for iteration in range(Nmax): x = (a + b)/2 fx = f(x) if f(a) * fx > 0: # root is not between a and x a = x else: b = x if np.abs(fx) < eps: break else: print("bisect: no root found after {0} iterations (eps={1}); " "best guess is {2} with error {3}".format(Nmax, eps, x, fx)) x = None return x ``` ### Finding the range with the bisection algorithm Define the trial function `f`. Note that our `y_lindrag()` function depends on `x` **and** `v` but `bisect()` only accepts functions `f` that depend on a *single variable*, $f(x)$. 
We therefore have to wrap `y_lindrag(x, v)` into a function `f(x)` that sets `v` already to a value *outside* the function: [Python's scoping rules](https://stackoverflow.com/questions/291978/short-description-of-the-scoping-rules#292502) say that inside the function `f(x)`, the variable `x` has the value assigned to the argument of `f(x)` but any other variables such as `v` or `b1`, which were *not defined inside `f`*, will get the value that they had *outside `f`* in the *enclosing code*. ```python v = initial_v(100, 30) def f(x): return y_lindrag(x, v, b1=b1) ``` The initial bracket $[a_\text{initial}, b_\text{initial}]$ is a little bit difficult for this function: choose the right bracket near the point where the argument of the logarithm becomes 0 (which is actually the maximum $x$ value $\lim_{t\rightarrow +\infty} x(t) = \frac{v_{0x}}{b}$): $$ b_\text{initial} = \frac{v_{0x}}{b} - \epsilon' $$ where $\epsilon'$ is a small number. ```python b1 = 1. m = 0.5 b = b1/m bisection(f, 0.1, v[0]/b - 1e-12, eps=1e-6) ``` 43.300674233470772 Note that this solution is *not* the maximum value $\lim_{t\rightarrow +\infty} x(t) = \frac{v_{0x}}{b}$: ```python v[0]/b ``` 43.301270189221938 ### Find the range as a function of the initial angle ```python b1 = 1. m = 0.5 b = b1/m v0 = 100 u = [] for theta in np.arange(1, 90): v = initial_v(v0, theta) def f(x): return y_lindrag(x, v, b1=b1) R = bisection(f, 0.1, v[0]/b - 1e-16, eps=1e-5) if R is not None: u.append((theta, R)) u = np.array(u) ``` /Users/oliver/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:4: RuntimeWarning: divide by zero encountered in log after removing the cwd from sys.path. ```python plt.plot(u[:, 0], u[:, 1]) plt.xlabel(r"launch angle $\theta$ ($^\circ$)") plt.ylabel(r"range $R$ (m)"); ``` Write a function `find_range()` to calculate the range for a given initial velocity $v_0$ and plot $R(\theta)$ for $10\,\text{m/s} ≤ v_0 ≤ 100\,\text{m/s}$. ```python def find_range(v0, b1=1, m=0.5): b = b1/m u = [] for theta in np.arange(1, 90): v = initial_v(v0, theta) def f(x): return y_lindrag(x, v, b1=b1) R = bisection(f, 0.1, v[0]/b - 1e-16, eps=1e-5) if R is not None: u.append((theta, R)) return np.array(u) ``` ```python for v0 in (10, 25, 50, 75, 100): u = find_range(v0) plt.plot(u[:, 0], u[:, 1], label="{} m/s".format(v0)) plt.legend(loc="best") plt.xlabel(r"$\theta$ (degrees)") plt.ylabel(r"range $R$ (m)"); ``` As a bonus, find the dependence of the *optimum launch angle* on the initial velocity, i.e., that angle that leads to the largest range: ```python np.argmax(u[:, 1]) ``` 10 ```python velocities = np.linspace(5, 100, 100) results = [] # (v0, theta_opt) for v0 in velocities: u = find_range(v0) thetas, ranges = u.transpose() # find index for the largest range and pull corresponding theta theta_opt = thetas[np.argmax(ranges)] results.append((v0, theta_opt)) results = np.array(results) plt.plot(results[:, 0], results[:, 1]) plt.xlabel(r"velocity $v_0$ (m/s)") plt.ylabel(r"$\theta_\mathrm{best}$ (degrees)"); ``` The launch angle decreases with the velocity. The steps in the graph are an artifact of choosing to only calculate the trajectories for integer angles (see `for theta in np.arange(1, 90)` in `find_range()`). ## Newton-Raphson algorithm (see derivation in class and in the PDF or [Newton's Method](http://mathworld.wolfram.com/NewtonsMethod.html) on MathWorld) ### Activity: Implement Newton-Raphson 1. Implement the Newton-Raphson algorithm 2. Test with $g(x)$. $$ g(x) = 2 \cos x - x $$ 3. 
Bonus: test performance of `newton_raphson()` against `bisection()`. ```python def g(x): return 2*np.cos(x) - x ``` ```python xvals = np.linspace(0, 7, 30) plt.plot(xvals, np.zeros_like(xvals), 'k--') plt.plot(xvals, g(xvals)) ``` ```python def newton_raphson(f, x, h=1e-3, Nmax=100, eps=1e-14): """Find root x0 so that f(x0)=0 with the Newton-Raphson algorithm""" for iteration in range(Nmax): fx = f(x) if np.abs(fx) < eps: break df = (f(x + h/2) - f(x - h/2))/h Delta_x = -fx/df x += Delta_x else: print("Newton-Raphson: no root found after {0} iterations (eps={1}); " "best guess is {2} with error {3}".format(Nmax, eps, x, fx)) x = None return x ``` ```python x0 = newton_raphson(g, 2) print(x0) ``` 1.02986652932 ```python g(x0) ``` 6.6613381477509392e-16 But note that the algorithm only converges well near the root. With other values it might not converge at all: ```python newton_raphson(g, 3) ``` Newton-Raphson: no root found after 100 iterations (eps=1e-14); best guess is -9322690.425062027 with error 16607175.892541457 ```python newton_raphson(g, 10) ``` 1.0298665293222589 ```python newton_raphson(g, 15) ``` Newton-Raphson: no root found after 100 iterations (eps=1e-14); best guess is 52.96931452309739 with error -117.5878129311007 Let's look how Newton-Raphson iterates: also return all intermediate $x$ values: ```python def newton_raphson_with_history(f, x, h=1e-3, Nmax=100, eps=1e-14): xvals = [] for iteration in range(Nmax): fx = f(x) if np.abs(fx) < eps: break df = (f(x + h/2) - f(x - h/2))/h Delta_x = -fx/df x += Delta_x xvals.append(x) else: print("Newton-Raphson: no root found after {0} iterations (eps={1}); " "best guess is {2} with error {3}".format(Nmax, eps, x, fx)) x = None return x, np.array(xvals) ``` ```python x = {} x0, xvals = newton_raphson_with_history(g, 1.5) x[1.5] = xvals print("root x0 = {} after {} iterations".format(x0, len(xvals))) x0, xvals = newton_raphson_with_history(g, 5) x[5] = xvals print("root x0 = {} after {} iterations".format(x0, len(xvals))) x0, xvals = newton_raphson_with_history(g, 10) x[10] = xvals print("root x0 = {} after {} iterations".format(x0, len(xvals))) ``` root x0 = 1.0298665293222589 after 4 iterations root x0 = 1.0298665293222589 after 58 iterations root x0 = 1.0298665293222589 after 21 iterations ```python for xstart in sorted(x.keys()): plt.semilogx(x[xstart], label=str(xstart)) plt.legend(loc="best") plt.xlabel("iteration") plt.ylabel("current guess for root $x_0$"); ```
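As a rough sketch of the bonus item above (assuming the cells defining `g()`, `bisection()`, and `newton_raphson()` have been run), both methods should agree on the root near $x_0 \approx 1.0299$, and the `%timeit` magic gives a crude performance comparison; Newton-Raphson is typically much faster here because it converges in only a handful of iterations when started close to the root:

```python
# Both methods find the same root of g(x) = 2*cos(x) - x
print(bisection(g, 0, 2, eps=1e-12))
print(newton_raphson(g, 1.5, eps=1e-12))

# Crude timing comparison (IPython magic)
%timeit bisection(g, 0, 2, eps=1e-12)
%timeit newton_raphson(g, 1.5, eps=1e-12)
```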
7b8f56758808f490dc96147c7c1895c394c82b4f
171,196
ipynb
Jupyter Notebook
13_root_finding/13-Root-finding.ipynb
ASU-CompMethodsPhysics-PHY494/PHY494-resources-2019
e6114b49d28df887abe37c8144df8f4ae8cf6419
[ "CC-BY-4.0" ]
null
null
null
13_root_finding/13-Root-finding.ipynb
ASU-CompMethodsPhysics-PHY494/PHY494-resources-2019
e6114b49d28df887abe37c8144df8f4ae8cf6419
[ "CC-BY-4.0" ]
null
null
null
13_root_finding/13-Root-finding.ipynb
ASU-CompMethodsPhysics-PHY494/PHY494-resources-2019
e6114b49d28df887abe37c8144df8f4ae8cf6419
[ "CC-BY-4.0" ]
null
null
null
159.846872
37,968
0.889425
true
4,535
Qwen/Qwen-72B
1. YES 2. YES
0.793106
0.845942
0.670922
__label__eng_Latn
0.889031
0.397108
# Inference Overview and Variable Elimination Algorithm

> So far, we have talked about probabilistic graphical model representation (PGM).
> - We've seen that there are two main types: Bayesian networks (directed) and Markov networks (undirected).
> - We've also seen that they encode different **independence assumptions**.
>
> In this module, we will operationalize the PGMs and study how to use these representations (models) to answer actual queries or questions.

> **Objectives:**
> - To learn the different types of queries one can perform to a PGM.
> - To describe the Variable Elimination algorithm.
> - To analyze the computational complexity of the Variable Elimination algorithm.
> - To learn how to use the Variable Elimination algorithm to answer a query to an actual network.

> **References:**
> - Probabilistic Graphical Models: Principles and Techniques, By Daphne Koller and Nir Friedman. Ch. 9.
> - Mastering Probabilistic Graphical Models Using Python, By Ankur Ankan and Abinash Panda. Ch. 3.
> - Probabilistic Graphical Models Specialization, offered through Coursera. Prof. Daphne Koller.

<p style="text-align:right;"> Image retrieved from: https://static.thenounproject.com/png/542457-200.png.</p>

___

## 1. Inference - Overview

In the previous module, we looked at the different types of representations and how to use them to create models for some problems. We also saw how the probabilities of variables change when we incorporate evidence, in an intuitive way.

In this module, we will describe several algorithms to compute these probabilities and how they may change. Similarly, we will see how to use inference algorithms to predict the values of variables based on our model.

### 1.1. Conditional probability queries

These are the most common queries. Given:

- A joint probability distribution $$P(X_1,\dots, X_n),$$ (which is modeled through a BN or an MN);
- a set of query variables $\bar{Y}\subseteq \left\{X_1,\dots, X_n\right\}$;
- and a set of observed variables (evidence) $\bar{E} = \bar{e}$, with $\bar{E}\subseteq \left\{X_1,\dots, X_n\right\}$,

we define $\bar{W} = \left\{X_1,\dots, X_n\right\} \setminus \left\{\bar{Y} \cup \bar{E}\right\}$ as the rest of the variables.

The task is to compute the *conditional probability query* $P(\bar{Y} | \bar{E}=\bar{e})$.

**Applications:**

- In the credit-worthiness model of the exam, we saw that the bank was able to observe some variables. Then we would be interested in computing the probability of the unobserved variables given the evidence that the bank observes.
- In a medical diagnosis system, we observe certain symptoms and some test results and we are interested in computing the probability of different diseases.

By definition of conditional probability:

$$P(\bar{Y} | \bar{E}=\bar{e}) = \frac{P(\bar{Y}, \bar{e})}{P(\bar{e})}.$$

In this expression:

- $P(\bar{Y}, \bar{e}) = \sum_{\bar{W}} P(\bar{Y}, \bar{e}, \bar{W})$. Recall that $\left\{X_1,\dots, X_n\right\} = \bar{Y} \cup \bar{E} \cup \bar{W}$. Then, the terms in the summation on the right-hand side are joint probabilities of all the variables.
- $P(\bar{e}) = \sum_{\bar{Y}} P(\bar{Y}, \bar{e})$. This is simply a normalizing constant for converting $P(\bar{Y}, \bar{e})$ into $P(\bar{Y} | \bar{e})$.

Hence, in principle, we could:

- Take a PGM;
- multiply all of its factors to obtain the joint distribution;
- sum out (marginalize) the unwanted variables in the joint distribution;
- and, that's it!

**Examples:**

1. Bayesian networks: restaurant example.
Random variables: - Location $L$ (Bad: $l^0$, Good: $l^1$). - Quality $Q$ (Bad: $q^0$, Normal: $q^1$, Good: $q^2$). - Cost $C$ (Low: $c^0$, High: $c^1$). - Number of people $N$ (Low: $n^0$, High: $n^1$). ```python from IPython.display import Image ``` ```python Image("figures/restaurant.png") ``` We will forget about if factors represent CPDs or general affinity functions. In this case: $$P(L,Q,C,N)= \phi_L(L)\phi_Q(Q)\phi_C(C,L,Q)\phi_N(N,L,C).$$ We'd like, for example: $$P(N)=\sum_{L,Q,C}\phi_L(L)\phi_Q(Q)\phi_C(C,L,Q)\phi_N(N,L,C).$$ > Sum-product algorithm. ```python # Import pgmpy.factors.discrete.DiscreteFactor from pgmpy.factors.discrete import DiscreteFactor ``` ```python # Define factors P_L = DiscreteFactor(variables=["L"], cardinality=[2], values=[0.4, 0.6]) P_Q = DiscreteFactor(variables=["Q"], cardinality=[3], values=[0.2, 0.5, 0.3]) P_C_given_LQ = DiscreteFactor(variables=["C", "L", "Q"], cardinality=[2, 2, 3], values=[0.95, 0.4, 0.4, 0.9, 0.4, 0.2, 0.05, 0.6, 0.6, 0.1, 0.6, 0.8]) P_N_given_LC = DiscreteFactor(variables=["N", "L", "C"], cardinality=[2, 2, 2], values=[0.4, 0.9, 0.2, 0.4, 0.6, 0.1, 0.8, 0.6]) ``` ```python # Joint probability joint = P_L * P_Q * P_C_given_LQ * P_N_given_LC print(joint) ``` +------+------+------+------+----------------+ | L | Q | C | N | phi(L,Q,C,N) | +======+======+======+======+================+ | L(0) | Q(0) | C(0) | N(0) | 0.0304 | +------+------+------+------+----------------+ | L(0) | Q(0) | C(0) | N(1) | 0.0456 | +------+------+------+------+----------------+ | L(0) | Q(0) | C(1) | N(0) | 0.0036 | +------+------+------+------+----------------+ | L(0) | Q(0) | C(1) | N(1) | 0.0004 | +------+------+------+------+----------------+ | L(0) | Q(1) | C(0) | N(0) | 0.0320 | +------+------+------+------+----------------+ | L(0) | Q(1) | C(0) | N(1) | 0.0480 | +------+------+------+------+----------------+ | L(0) | Q(1) | C(1) | N(0) | 0.1080 | +------+------+------+------+----------------+ | L(0) | Q(1) | C(1) | N(1) | 0.0120 | +------+------+------+------+----------------+ | L(0) | Q(2) | C(0) | N(0) | 0.0192 | +------+------+------+------+----------------+ | L(0) | Q(2) | C(0) | N(1) | 0.0288 | +------+------+------+------+----------------+ | L(0) | Q(2) | C(1) | N(0) | 0.0648 | +------+------+------+------+----------------+ | L(0) | Q(2) | C(1) | N(1) | 0.0072 | +------+------+------+------+----------------+ | L(1) | Q(0) | C(0) | N(0) | 0.0216 | +------+------+------+------+----------------+ | L(1) | Q(0) | C(0) | N(1) | 0.0864 | +------+------+------+------+----------------+ | L(1) | Q(0) | C(1) | N(0) | 0.0048 | +------+------+------+------+----------------+ | L(1) | Q(0) | C(1) | N(1) | 0.0072 | +------+------+------+------+----------------+ | L(1) | Q(1) | C(0) | N(0) | 0.0240 | +------+------+------+------+----------------+ | L(1) | Q(1) | C(0) | N(1) | 0.0960 | +------+------+------+------+----------------+ | L(1) | Q(1) | C(1) | N(0) | 0.0720 | +------+------+------+------+----------------+ | L(1) | Q(1) | C(1) | N(1) | 0.1080 | +------+------+------+------+----------------+ | L(1) | Q(2) | C(0) | N(0) | 0.0072 | +------+------+------+------+----------------+ | L(1) | Q(2) | C(0) | N(1) | 0.0288 | +------+------+------+------+----------------+ | L(1) | Q(2) | C(1) | N(0) | 0.0576 | +------+------+------+------+----------------+ | L(1) | Q(2) | C(1) | N(1) | 0.0864 | +------+------+------+------+----------------+ ```python # P(N) P_N = joint.marginalize(variables=["L", "Q", "C"], inplace=False) ``` ```python # Show print(P_N) ``` 
    +------+----------+
    | N    |   phi(N) |
    +======+==========+
    | N(0) |   0.4452 |
    +------+----------+
    | N(1) |   0.5548 |
    +------+----------+

2. Markov networks: pairwise.

```python
Image("figures/pairwiseMN.png")
```

$$ P(A, B, C, D) = \frac{1}{Z} \phi_1(A,B)\phi_2(B,C)\phi_3(C,D)\phi_4(A,D) $$

For example

$$P(D) = \frac{1}{Z}\sum_{A,B,C}\phi_1(A,B)\phi_2(B,C)\phi_3(C,D)\phi_4(A,D).$$

> Sum-product algorithm.

**So, what is the problem?**

*There is an exponential blow-up of the joint distribution that the graphical model representation was precisely designed to avoid*.

In fact, the problem of inference in PGMs (just like most interesting problems) is $\mathcal{NP}-$hard. *What does this mean?*

- If a problem is $\mathcal{NP}-$hard, it is very unlikely that we can come up with an efficient solution (in the general case).
- This means that **all the algorithms** that people have constructed to solve this problem require a time (or number of operations) which is at least exponential in the size of the representation of the problem.

However, we will be seeing a variety of algorithms for both exact and approximate inference that can do considerably better than this worst-case result.

### 1.3. Maximum A-Posteriori (MAP) Inference

Given:

- A joint probability distribution $$P(X_1,\dots, X_n),$$ (which is modeled through a BN or an MN);
- a set of query variables $\bar{Y}\subseteq \left\{X_1,\dots, X_n\right\}$;
- and a set of observed variables (evidence) $\bar{E} = \bar{e}$, with $\bar{E}\subseteq \left\{X_1,\dots, X_n\right\}$. In this case, we assume for simplicity that $\bar{Y}\cup \bar{E} = \left\{X_1,\dots, X_n\right\}$.

The task is to compute

$$MAP(\bar{Y} | \bar{E}=\bar{e}) = \arg\max_{\bar{y}\in\mathrm{Val}(\bar{Y})} P(\bar{Y} | \bar{E}=\bar{e}).$$

> There may be multiple solutions!

**Application:**

- Classifier: Once we learn a model from data, we would like to compute the most likely assignment for a set of variables given some observed variables (evidence).

**MAP $\neq$ Max over marginals:**

```python
Image("figures/simple.png")
```

Again, this problem is $\mathcal{NP}-$hard.

___

## 2. Variable Elimination

This is the simplest and most fundamental algorithm.

### 2.1. Initial examples

We will introduce the **variable elimination (VE) algorithm** through a number of simple examples.
**Elimination in a chain** Consider the following pairwise chain MN: ```python Image("figures/MNchain.png") ``` We have that: $$P(A,B,C,D,E) \propto \phi_1(A,B)\phi_2(B,C)\phi_3(C,D)\phi_4(D,E)$$ Our objective is to calculate $P(E)$ (<font color=blue>in the whiteboard, then show</font>): $$\begin{align}P(E) \propto & \sum_{D,C,B,A} \phi_1(A,B)\phi_2(B,C)\phi_3(C,D)\phi_4(D,E) \\ = & \sum_{D}\sum_{C}\sum_{B}\sum_{A} \phi_1(A,B)\phi_2(B,C)\phi_3(C,D)\phi_4(D,E) \\ = & \sum_{D}\sum_{C}\sum_{B} \phi_2(B,C)\phi_3(C,D)\phi_4(D,E) \underbrace{\sum_{A} \phi_1(A,B)}_{\tau_1(B)} \\ = & \sum_{D}\sum_{C}\sum_{B} \phi_2(B,C)\phi_3(C,D)\phi_4(D,E) \tau_1(B) \\ = & \sum_{D}\sum_{C} \phi_3(C,D)\phi_4(D,E) \underbrace{\sum_{B} \phi_2(B,C)\tau_1(B)}_{\tau_2(C)} \\ = & \sum_{D}\phi_4(D,E) \underbrace{\sum_{C} \phi_3(C,D)\tau_2(C)}_{\tau_3(D)} \\ = & \sum_{D}\phi_4(D,E)\tau_3(D) = \tau_4(E) \end{align}$$ **Elimination in a BN** ```python Image("figures/restaurant.png") ``` The joint probability is: $$P(L,Q,C,N)= \phi_L(L)\phi_Q(Q)\phi_C(C,L,Q)\phi_N(N,L,C).$$ If we want to calculate $P(N)$ (<font color=blue>in the whiteboard, then show</font>): $$\begin{align} P(N) = & \sum_{L,C,Q}\phi_L(L)\phi_Q(Q)\phi_C(C,L,Q)\phi_N(N,L,C) \\ = & \sum_{L}\sum_{C}\sum_{Q} \phi_L(L)\phi_Q(Q)\phi_C(C,L,Q)\phi_N(N,L,C) \\ = & \sum_{L}\sum_{C} \phi_L(L)\phi_N(N,L,C) \underbrace{\sum_{Q} \phi_Q(Q) \phi_C(C,L,Q)}_{\tau_1(C,L)} \\ = & \sum_{L} \phi_L(L) \underbrace{\sum_{C}\phi_N(N,L,C)\tau_1(C,L)}_{\tau_2(N,L)} \\ = & \sum_{L} \phi_L(L)\tau_2(N,L) = \tau_3(N) \end{align}$$ ```python # Import pgmpy.models.BayesianModel from pgmpy.models import BayesianModel ``` ```python # Import pgmpy.factors.discrete.TabularCPD from pgmpy.factors.discrete import TabularCPD ``` ```python # Define model skeleton restaurant_model = BayesianModel([("L", "C"), ("Q", "C"), ("L", "N"), ("C", "N")]) ``` ```python # Define CPDs cpd_L = TabularCPD(variable="L", variable_card=2, values=[[0.4], [0.6]]) cpd_Q = TabularCPD(variable="Q", variable_card=3, values=[[0.2], [0.5], [0.3]]) cpd_C = TabularCPD(variable="C", variable_card=2, evidence=["L", "Q"], evidence_card=[2, 3], values=[[0.95, 0.4, 0.4, 0.9, 0.4, 0.2], [0.05, 0.6, 0.6, 0.1, 0.6, 0.8]]) cpd_N = TabularCPD(variable="N", variable_card=2, evidence=["L", "C"], evidence_card=[2, 2], values=[[0.4, 0.9, 0.2, 0.4], [0.6, 0.1, 0.8, 0.6]]) ``` ```python # Attach CPDs to the model restaurant_model.add_cpds(cpd_L, cpd_Q, cpd_C, cpd_N) ``` ```python # Check if the model is correctly defined restaurant_model.check_model() ``` True ```python # Import pgmpy.inference.VariableElimination from pgmpy.inference import VariableElimination ``` ```python # Create inference object inference = VariableElimination(restaurant_model) ``` ```python # Perform query P_N = inference.query(variables=["N"]) ``` Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1532.07it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 233.88it/s] ```python # Show print(P_N) ``` +------+----------+ | N | phi(N) | +======+==========+ | N(0) | 0.4452 | +------+----------+ | N(1) | 0.5548 | +------+----------+ **What happens if we have evidence?** If we now want to calculate $P(N, L=l^1)$: - First, we should reduce our factors according to the given evidence: $\phi_L(l^1)=\phi_L'()$, $\phi_C(C,l^1,Q)=\phi_C'(C,Q)$, $\phi_N(N,l^1,C)=\phi_N'(N,C)$. 
- Then, we run the VE algorithm as usual:

$$\begin{align}
P(N, l^1) = & \sum_{C,Q}\phi_L'()\phi_Q(Q)\phi_C'(C,Q)\phi_N'(N,C) \\
= & \sum_{C}\sum_{Q} \phi_L'()\phi_Q(Q)\phi_C'(C,Q)\phi_N'(N,C) \\
= & \sum_{C} \phi_L'()\phi_N'(N,C) \underbrace{\sum_{Q} \phi_Q(Q) \phi_C'(C,Q)}_{\tau_1(C)} \\
= & \sum_{C} \phi_L'()\phi_N'(N,C) \tau_1(C)
\end{align}$$

**And, if we want the conditional probability?**

You only have to take the above and renormalize:

$$P(N|l^1)=\frac{P(N,l^1)}{P(l^1)}.$$

```python
# Perform query
P_N_given_l1 = inference.query(variables=["N"], evidence={"L": 1})
```

    Finding Elimination Order: : 100%|██████████| 2/2 [00:00<00:00, 698.47it/s]
    Eliminating: C: 100%|██████████| 2/2 [00:00<00:00, 364.45it/s]

```python
# Show
print(P_N_given_l1)
```

    +------+----------+
    | N    |   phi(N) |
    +======+==========+
    | N(0) |   0.3120 |
    +------+----------+
    | N(1) |   0.6880 |
    +------+----------+

### 2.2. Algorithm description and computational complexity

#### Algorithm description

For the algorithm description, we have a set of factors $\bar{\Phi}$. In the examples above:

1. MN: $\bar{\Phi}=\{\phi_1(A,B), \phi_2(B,C), \phi_3(C,D), \phi_4(D,E)\}$.
2. BN: $\bar{\Phi}=\{\phi_L(L), \phi_Q(Q), \phi_C(C,L,Q), \phi_N(N,L,C)\}$.

We will describe the steps for eliminating some variable $Z$ from $\bar{\Phi}$:

*Eliminate Var-$Z$ from $\bar{\Phi}$*.

1. Determine the set of factors that involve $Z$: $$\Phi' = \left\{\phi_i \in \Phi : Z \in \mathrm{scope}[\phi_i]\right\}$$
2. Compute: $$\psi = \prod_{\phi_i \in \Phi'} \phi_i$$
3. Compute: $$\tau = \sum_Z \psi$$
4. Overwrite: $$\bar{\Phi} := \left(\bar{\Phi}\setminus \Phi'\right) \cup \{\tau\}$$

Thus, the complete algorithm can be described as:

1. The first step is to reduce all factors in $\bar{\Phi}$ according to the given evidence, if any.
2. For each non-query variable $Z$:
   - Eliminate Var-$Z$ from $\bar{\Phi}$.
3. Multiply all the remaining factors.
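To make the elimination step above concrete, here is a minimal sketch of a single Eliminate Var-$Z$ step (with $Z = Q$) for the restaurant network; it assumes the `DiscreteFactor` objects `P_L`, `P_Q`, `P_C_given_LQ`, and `P_N_given_LC` created earlier in this notebook are still in scope:

```python
# Current set of factors
Phi = [P_L, P_Q, P_C_given_LQ, P_N_given_LC]

# 1. Factors whose scope contains Q
Phi_prime = [phi for phi in Phi if "Q" in phi.scope()]

# 2. Multiply them: psi has scope {C, L, Q}
psi = Phi_prime[0]
for phi in Phi_prime[1:]:
    psi = psi * phi

# 3. Sum out Q: tau_1(C, L)
tau_1 = psi.marginalize(["Q"], inplace=False)

# 4. Keep the untouched factors and add tau_1
Phi = [phi for phi in Phi if "Q" not in phi.scope()] + [tau_1]
print(tau_1)
```

Repeating this step for $C$ and then for $L$, and multiplying whatever remains, reproduces the marginal $P(N)$ computed earlier.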
#### Computational complexity

We analyze the computational complexity of Eliminate Var-$Z$ from $\bar{\Phi}$.

1. Determine the set of factors that involve $Z$: $$\Phi' = \left\{\phi_i \in \Phi : Z \in \mathrm{scope}[\phi_i]\right\}$$
2. Compute: $$\psi = \prod_{\phi_i \in \Phi'} \phi_i$$
3. Compute: $$\tau = \sum_Z \psi$$
4. Overwrite: $$\Phi := \left(\Phi\setminus \Phi'\right) \cup \{\tau\}$$

**Complexity of the second step:**

Assume that $|\Phi'| = m_k$ (the number of factors that involve $Z$ is $m_k$). Thus,

$$\psi(\bar{X}_k) = \prod_{i=1}^{m_k} \phi_i.$$

Now, assuming that $N_k = |\mathrm{Val}(\bar{X}_k)|$ (the number of rows in the factor resulting from the multiplication), the computational cost of this step is:

$$N_k (m_k - 1)$$

(each row in the resulting factor is produced by $m_k - 1$ multiplications). <font color=blue>See this on the whiteboard</font>

**Complexity of the third step:**

$$\tau_k (\bar{X}_k \setminus \{Z\}) = \sum_Z \psi(\bar{X}_k)$$

Assuming that $|\mathrm{Val}(Z)|=z$ (the number of possible values of the variable $Z$), the computational cost of this step is (each row of the resulting factor involves $z-1$ summations):

$$N_k \frac{(z-1)}{z} \leq N_k.$$

<font color=blue>See this on the whiteboard</font>

**The whole algorithm:**

1. At most, $n$ elimination steps ($n$ is the number of variables).
2. Start with $m = |\bar{\Phi}|$ factors.
3. At each elimination step, generate $1$ factor ($\psi$).
4. Total number of factors is $m^{\ast}\leq m+n$.
5. $N:=\max_k N_k$: size of the biggest factor.
6. Number of product operations (each factor is multiplied only once): $$\sum_k N_k(m_k-1)\leq N m^{\ast}.$$
7. Number of marginalization operations: $$\sum_k N_k\leq N n.$$

> Up to this point, the computational cost is linear in $N$ and in $m^\ast$.

8. However, $N_k=|\mathrm{Val}(\bar{X}_k)|=\mathcal{O}(d^{r_k})$, where
   - $d:=\max_i |\mathrm{Val}(X_i)|$;
   - $r_k=|\bar{X}_k|$.

> Exponential blowup!

**In the BN example**

- Operation: $\tau_1(C,L)=\sum_{Q}\phi_Q(Q)\phi_C(C,L,Q)$ - $r_1=3$.
- Operation: $\tau_2(N,L)=\sum_{C}\phi_N(N,L,C)\tau_1(C,L)$ - $r_2=3$.
- Operation: $\tau_3(N)=\sum_{L}\phi_L(L)\tau_2(N,L)$ - $r_3=2$.

On the other hand, $d=3$. Total computational complexity $\leq (n+m^\ast)\, d^{\max_k r_k}=(4+7)\, 3^3 = 297$.

```python
# Perform query with given elimination order
%timeit inference.query(variables=["N"], elimination_order=["Q", "C", "L"])
```
    22.3 ms ± 1.13 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

**A different elimination order**

$$\begin{align}
P(N) = & \sum_{L,C,Q}\phi_L(L)\phi_Q(Q)\phi_C(C,L,Q)\phi_N(N,L,C) \\
= & \sum_{L}\sum_{C}\sum_{Q} \phi_L(L)\phi_Q(Q)\phi_C(C,L,Q)\phi_N(N,L,C) \\
= & \sum_{Q}\sum_{C} \phi_Q(Q) \underbrace{\sum_{L} \phi_L(L) \phi_C(C,L,Q) \phi_N(N,L,C)}_{\tau_1(Q,C,N)} \\
= & \sum_{Q} \phi_Q(Q) \underbrace{\sum_{C} \tau_1(Q,C,N)}_{\tau_2(Q,N)} \\
= & \sum_{Q} \phi_Q(Q)\tau_2(Q,N)
\end{align}$$

- Operation: $\tau_1(Q,C,N) = \sum_{L} \phi_L(L) \phi_C(C,L,Q) \phi_N(N,L,C)$ - $r_1=4$
- Operation: $\tau_2(Q,N) = \sum_{C} \tau_1(Q,C,N)$ - $r_2=3$
- Operation: $\tau_3(N) = \sum_{Q} \phi_Q(Q)\tau_2(Q,N)$ - $r_3=2$

On the other hand, $d=3$. Total computational complexity $\leq (n+m^\ast)\, d^{\max_k r_k}=(4+7)\, 3^4 = 891$.
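We can see this blow-up directly in the sizes of the intermediate factors (a small sketch, again assuming the `DiscreteFactor` objects defined earlier are still in scope): eliminating $L$ first produces $\tau_1(Q,C,N)$ with $3 \times 2 \times 2 = 12$ entries, whereas eliminating $Q$ first produces $\tau_1(C,L)$ with only $4$ entries:

```python
# Intermediate factor when L is eliminated first: tau_1(Q, C, N)
tau_1_large = (P_L * P_C_given_LQ * P_N_given_LC).marginalize(["L"], inplace=False)

# Intermediate factor when Q is eliminated first: tau_1(C, L)
tau_1_small = (P_Q * P_C_given_LQ).marginalize(["Q"], inplace=False)

print(tau_1_large.scope(), tau_1_large.values.size)  # 3 variables, 12 entries
print(tau_1_small.scope(), tau_1_small.values.size)  # 2 variables, 4 entries
```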
```python
# Perform query with given elimination order
%timeit inference.query(variables=["N"], elimination_order=["L", "Q", "C"])
```
    24 ms ± 368 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

```python
# Show
P_N = inference.query(variables=["N"], elimination_order=["L", "Q", "C"])
print(P_N)
```

    Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 239.77it/s]

    +------+----------+
    | N    |   phi(N) |
    +======+==========+
    | N(0) |   0.4452 |
    +------+----------+
    | N(1) |   0.5548 |
    +------+----------+

The computational complexity strongly depends on the elimination order! The result is the same for any elimination order, though.

### 2.3. Finding elimination orderings

The VE algorithm works well no matter the selected ordering. However, we have shown that the ordering significantly affects the computational complexity of the algorithm.

**How do we find a good elimination ordering?**

- Good ideas can be obtained from the graphical representation. We won't cover that here, but you can take a look at Section 9.4, pages 305-315.

In practice, one often performs **greedy search using heuristic cost functions** (at each step, eliminate the variable with the smallest cost according to the chosen heuristic).

> These algorithms are not optimal, but perform sufficiently well.

Possible cost functions:

- min-weight: # of values of the factor formed ($N_k$).
- min-neighbors: # of resulting neighbor nodes after variable elimination.
- min-fill: # of new fill edges.
```python
# min-weight
inference.query(variables=["N"], elimination_order="MinWeight")
```

    Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1788.87it/s]
    Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 380.65it/s]

    <DiscreteFactor representing phi(N:2) at 0x7fb9ab3ea9d0>

```python
# min-neighbors
inference.query(variables=["N"], elimination_order="MinNeighbors")
```

    Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 705.52it/s]
    Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 327.27it/s]

    <DiscreteFactor representing phi(N:2) at 0x7fb9ac499bd0>

```python
# min-fill (default)
inference.query(variables=["N"], elimination_order="MinFill")
```

    Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1933.75it/s]
    Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 247.69it/s]

    <DiscreteFactor representing phi(N:2) at 0x7fb9ac56ebd0>

```python
# Compare execution times using timeit
%timeit inference.query(variables=["N"], elimination_order="MinWeight")
```
100%|██████████| 3/3 [00:00<00:00, 1686.72it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 307.30it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1681.98it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 403.62it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 889.13it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 427.66it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1243.49it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 270.73it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1218.80it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 405.56it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 776.39it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 281.34it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 604.08it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 378.02it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1771.74it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 423.85it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1114.81it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 191.88it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 2034.42it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 362.82it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 578.37it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 320.97it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 991.72it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 312.01it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1260.18it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 331.03it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 539.09it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 271.18it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 473.38it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 265.60it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1448.81it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 349.61it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1027.09it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 293.25it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1185.05it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 318.35it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1051.20it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 326.38it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 997.38it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 450.55it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1140.07it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 261.08it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 773.81it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 331.57it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 640.65it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 217.50it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 828.10it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 360.71it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1446.48it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 279.58it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 2323.71it/s] Eliminating: C: 
100%|██████████| 3/3 [00:00<00:00, 349.63it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 791.68it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 279.98it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 797.30it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 270.17it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 795.93it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 279.99it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 738.35it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 336.02it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 843.64it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 262.66it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1428.09it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 316.36it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 914.72it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 256.32it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1608.86it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 265.66it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1240.18it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 255.50it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 612.25it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 249.28it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 998.56it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 292.69it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 945.80it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 429.73it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1008.25it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 343.70it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 840.54it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 408.43it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1345.05it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 333.01it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1166.27it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 340.44it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1974.41it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 443.09it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1577.40it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 248.92it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1271.00it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 546.96it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1440.19it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 330.15it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1803.74it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 385.20it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 987.90it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 282.64it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1214.45it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 303.83it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 878.94it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 324.03it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1139.03it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 368.10it/s] Finding Elimination 
Order: : 100%|██████████| 3/3 [00:00<00:00, 661.21it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 419.25it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 2087.76it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 297.14it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 779.66it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 310.38it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 870.55it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 317.73it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1482.09it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 278.00it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1932.86it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 256.93it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 770.68it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 326.46it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 987.67it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 174.48it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 605.33it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 436.12it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 902.58it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 290.01it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 656.21it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 339.95it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1626.54it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 322.57it/s] 31.6 ms ± 1.75 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) ```python %timeit inference.query(variables=["N"], elimination_order="MinNeighbors") ``` Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1432.81it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 403.69it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1533.19it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 257.47it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1097.89it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 277.25it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 462.98it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 425.92it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1107.94it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 382.31it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 862.97it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 339.90it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 522.72it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 237.99it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 944.45it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 294.48it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1109.90it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 451.58it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 753.51it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 377.24it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 889.19it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 219.89it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1035.63it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 271.66it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 
2085.34it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 272.66it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 747.16it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 257.16it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1053.05it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 205.82it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1739.89it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 278.99it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 843.70it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 276.29it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1635.42it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 301.95it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1442.00it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 259.75it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1003.50it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 411.14it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1367.11it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 165.28it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1344.04it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 365.37it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1134.21it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 195.61it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1520.41it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 366.69it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 566.03it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 241.75it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1902.18it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 321.54it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1077.40it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 263.53it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 924.40it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 381.45it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 659.31it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 335.37it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 565.52it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 188.28it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1037.51it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 266.68it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1035.03it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 349.11it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1260.69it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 117.76it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1139.55it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 481.79it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1596.21it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 313.96it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1497.43it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 353.01it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 478.84it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 320.08it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 812.12it/s] Eliminating: C: 100%|██████████| 3/3 
[00:00<00:00, 310.10it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 413.60it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 459.88it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1338.18it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 322.68it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 849.16it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 244.04it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 734.81it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 281.77it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 885.56it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 299.49it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1014.01it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 299.99it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1240.18it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 286.62it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 932.21it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 456.46it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1644.18it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 496.92it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 945.44it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 208.82it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1413.02it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 381.99it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 446.68it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 293.27it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 429.79it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 264.71it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1305.55it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 202.91it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1158.33it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 198.90it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1478.43it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 475.56it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1800.13it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 280.80it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1213.51it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 282.64it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1317.44it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 346.97it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1452.99it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 339.27it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 2406.37it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 318.29it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 891.58it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 129.48it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 826.41it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 298.30it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 790.98it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 322.26it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1314.69it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 298.93it/s] Finding Elimination Order: : 
100%|██████████| 3/3 [00:00<00:00, 646.90it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 263.35it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1585.35it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 297.77it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1360.61it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 421.65it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1742.79it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 339.89it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 881.09it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 330.74it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 906.16it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 258.07it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1265.50it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 283.05it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 741.83it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 232.78it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1110.98it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 272.22it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1585.15it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 203.68it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 326.98it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 316.60it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1129.83it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 167.24it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1540.14it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 334.84it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 689.32it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 265.67it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1464.32it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 365.95it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 677.81it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 242.02it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 962.22it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 198.71it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1193.15it/s] Eliminating: C: 100%|██████████| 3/3 [00:00<00:00, 233.22it/s] 33.7 ms ± 1.5 ms per loop (mean ± std. dev. 
of 7 runs, 10 loops each) ```python %timeit inference.query(variables=["N"], elimination_order="MinFill") ``` Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1667.72it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 269.30it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 829.08it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 325.59it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1252.65it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 341.71it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1140.79it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 329.12it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1768.50it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 242.31it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1180.50it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 337.60it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1104.64it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 328.24it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1015.41it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 298.44it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1360.46it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 341.19it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 882.76it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 257.07it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 764.08it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 432.36it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 534.08it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 433.50it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1354.02it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 388.21it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 731.39it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 382.40it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1764.78it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 296.28it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 850.94it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 381.43it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1166.49it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 279.08it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 601.88it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 296.84it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1433.79it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 361.55it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 2926.26it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 321.18it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 766.41it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 394.15it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1008.25it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 372.39it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1236.53it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 377.66it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 991.09it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 373.91it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1603.74it/s] Eliminating: L: 
100%|██████████| 3/3 [00:00<00:00, 343.46it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 982.58it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 227.10it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1688.98it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 234.53it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 923.72it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 285.55it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1246.82it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 347.05it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1661.11it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 214.93it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 771.96it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 350.86it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1183.49it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 217.98it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1004.06it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 267.22it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 2016.82it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 347.12it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1802.71it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 331.95it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1095.88it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 340.32it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1007.36it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 360.65it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1120.17it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 219.74it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1411.27it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 256.37it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1114.72it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 265.60it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1394.85it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 287.75it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 672.02it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 337.01it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1366.22it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 321.97it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 2611.10it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 277.43it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 429.14it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 239.03it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 962.59it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 331.17it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 938.60it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 249.78it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 899.94it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 310.70it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1554.79it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 177.15it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1456.02it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 370.31it/s] Finding 
Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1487.69it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 331.11it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1171.48it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 270.25it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1391.15it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 283.74it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 657.55it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 302.06it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1286.99it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 267.53it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1281.09it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 248.93it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 2230.22it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 376.55it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1522.07it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 348.62it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1423.25it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 297.32it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 966.88it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 446.58it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1387.62it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 292.39it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 970.83it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 288.94it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 890.45it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 421.92it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1071.25it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 215.86it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 2556.46it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 359.60it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 563.67it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 310.99it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1244.35it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 398.09it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 2202.51it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 231.93it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 800.34it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 255.17it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 2100.30it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 439.53it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 641.04it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 214.06it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1665.51it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 270.52it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1439.36it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 308.45it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1752.98it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 343.90it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 617.38it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 338.65it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 
477.24it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 263.16it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1905.06it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 321.09it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1538.25it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 358.27it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 397.16it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 240.35it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 2467.72it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 263.49it/s] Finding Elimination Order: : 100%|██████████| 3/3 [00:00<00:00, 1479.47it/s] Eliminating: L: 100%|██████████| 3/3 [00:00<00:00, 265.41it/s] 30.3 ms ± 640 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) # Announcements ## 1. Quiz. <footer id="attribution" style="float:right; color:#808080; background:#fff;"> Created with Jupyter by Esteban Jiménez Rodríguez. </footer>
bfd8c743deab640bd26acf7639fa8b26d2e0d481
262,209
ipynb
Jupyter Notebook
Modulo2/Clase5/VariableEliminationAlgorithm.ipynb
AdrianRamosDS/mgpo2021
1004b3fff386b389594fd2b0f995756dd1b93d6e
[ "MIT" ]
null
null
null
Modulo2/Clase5/VariableEliminationAlgorithm.ipynb
AdrianRamosDS/mgpo2021
1004b3fff386b389594fd2b0f995756dd1b93d6e
[ "MIT" ]
null
null
null
Modulo2/Clase5/VariableEliminationAlgorithm.ipynb
AdrianRamosDS/mgpo2021
1004b3fff386b389594fd2b0f995756dd1b93d6e
[ "MIT" ]
6
2021-08-18T01:07:56.000Z
2021-09-07T04:06:28.000Z
126.854862
59,764
0.794732
true
29,886
Qwen/Qwen-72B
1. YES 2. YES
0.637031
0.865224
0.551174
__label__eng_Latn
0.160126
0.118892
# Spectral Estimation of Random Signals

*This jupyter notebook is part of a [collection of notebooks](../index.ipynb) on various topics of Digital Signal Processing. Please direct questions and suggestions to [[email protected]](mailto:[email protected]).*

## The Welch Method

In the previous section it has been shown that the [periodogram](periodogram.ipynb), as a non-parametric estimator of the power spectral density (PSD) $\Phi_{xx}(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ of a random signal $x[k]$, is not consistent. This is due to the fact that its variance does not converge towards zero even when the length of the random signal is increased towards infinity. In order to overcome this problem, the [Bartlett method](https://en.wikipedia.org/wiki/Bartlett's_method) and [Welch method](https://en.wikipedia.org/wiki/Welch's_method)

1. split the random signal into segments,
2. estimate the PSD for each segment, and
3. average over these local estimates.

The averaging reduces the variance of the estimated PSD. While Bartlett's method uses non-overlapping segments, Welch's method is a generalization using windowed overlapping segments. For the discussion of Welch's method we assume a wide-sense ergodic real-valued random process.

### Derivation

The random signal $x[k]$ is split into $L$ overlapping segments of length $N$, starting at multiples of the step size $M \in \{1, 2, \dots, N\}$. These segments are windowed by the window $w[k]$ of length $N$, resulting in a windowed $l$-th segment $x_l[k]$ with $0 \leq l \leq L-1$. The discrete-time Fourier transformation (DTFT) $X_l(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ of the windowed $l$-th segment is then given as

\begin{equation}
X_l(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \sum_{k = 0}^{N-1} x[k + l \cdot M] \, w[k] \; \mathrm{e}^{\,-\mathrm{j}\,\Omega\,k}
\end{equation}

where the window $w[k]$ defined within $0 \leq k \leq N-1$ should be normalized as $\frac{1}{N} \sum\limits_{k=0}^{N-1} | w[k] |^2 = 1$. The latter condition ensures that the power of the signal is maintained in the estimate. The stepsize $M$ determines the overlap between the segments. In general, $N-M$ samples overlap between adjacent segments; for $M = N$ no overlap occurs. The overlap is sometimes given as the ratio $\frac{N-M}{N}\cdot 100\%$.

Introducing $X_l(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ into the definition of the periodogram yields the periodogram of the $l$-th segment

\begin{equation}
\hat{\Phi}_{xx,l}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \frac{1}{N} \, | X_l(\mathrm{e}^{\,\mathrm{j}\,\Omega}) |^2
\end{equation}

The estimated PSD is then given by averaging over the segments' periodograms $\hat{\Phi}_{xx,l}(\mathrm{e}^{\,\mathrm{j}\,\Omega})$

\begin{equation}
\hat{\Phi}_{xx}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \frac{1}{L} \sum_{l = 0}^{L-1} \hat{\Phi}_{xx,l}(\mathrm{e}^{\,\mathrm{j}\,\Omega})
\end{equation}

Note that the total number $L$ of segments has to be chosen such that the last required sample $(L-1)\cdot M + N - 1$ does not exceed the total length of the random signal. Otherwise the last segment $x_{L-1}[k]$ may be zero-padded to length $N$.

The Bartlett method uses a rectangular window $w[k] = \text{rect}_N[k]$ and non-overlapping segments $M=N$. The Welch method uses overlapping segments and a window that must be chosen according to the intended spectral analysis task.

### Example

The following example is equivalent to the previous [periodogram example](periodogram.ipynb#Example---Periodogram).
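Before turning to `scipy.signal`, here is a minimal NumPy sketch of the three formulas above (segmenting, windowing, local periodograms, averaging). It is an illustrative addition, not part of the original notebook: the Hann window and the window normalization are assumptions made for the sketch, while `scipy.signal.welch` handles these details (with its own normalization conventions) internally.

```python
import numpy as np

def welch_psd(x, N=128, M=64, window=None):
    """Didactic Welch estimate: average periodograms of windowed, overlapping segments."""
    if window is None:
        window = np.hanning(N)                 # assumed window for this sketch
    w = window / np.sqrt(np.mean(window**2))   # enforce (1/N) * sum |w[k]|^2 = 1
    L = (len(x) - N) // M + 1                  # number of complete segments
    Phi = np.zeros(N)
    for l in range(L):
        x_l = x[l*M:l*M + N] * w               # windowed l-th segment
        X_l = np.fft.fft(x_l)                  # DTFT sampled at Omega = 2*pi*mu/N
        Phi += np.abs(X_l)**2 / N              # periodogram of the l-th segment
    return Phi / L                             # average over the L segments

# white Gaussian noise with unit variance -> true PSD equals one
x = np.random.normal(size=100*64)
print(np.mean(welch_psd(x)))                   # should be close to 1
```

The cell below produces the same kind of estimate with `scipy.signal.welch`.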
We aim at estimating the PSD of a random process which draws samples from normally distributed white noise with zero-mean and unit variance. The true PSD is consequently given as $\Phi_{xx}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = 1$.

```python
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import scipy.signal as sig

N = 128  # length of segment
M = 64   # stepsize
L = 100  # total number of segments

# generate random signal
np.random.seed(5)
x = np.random.normal(size=L*M)

# compute periodogram by Welch's method
nf, Pxx = sig.welch(x, window='hamming', nperseg=N, noverlap=(N-M))
Pxx = .5*Pxx  # due to normalization in scipy.signal
Om = 2*np.pi*nf

# plot results
plt.figure(figsize=(10, 4))
plt.stem(Om, Pxx, 'b', label=r'$\hat{\Phi}_{xx}(e^{j \Omega})$', basefmt=' ')
plt.plot(Om, np.ones_like(Pxx), 'r', label=r'$\Phi_{xx}(e^{j \Omega})$')
plt.title('Estimated and true PSD')
plt.xlabel(r'$\Omega$')
plt.axis([0, np.pi, 0, 2])
plt.legend()

# compute mean value of the periodogram
print('Mean value of the periodogram: %f' % np.mean(np.abs(Pxx)))
```

**Exercise**

* Compare the results to the periodogram example. Is the variance of the estimator lower?
* Change the number of segments `L`. What changes?
* Change the segment length `N` and stepsize `M`. What changes?

Solution: When comparing the estimates of the PSD in the previous periodogram example and in the example above, it is obvious that the variance of the Welch estimator is lower. Increasing the number of segments `L` lowers the variance further. Increasing the segment length `N` increases the total number of discrete frequencies in the estimated PSD; since in the above example the total number of segments is kept constant, the variance increases. Lowering the stepsize `M` has the same effect, since the total number of samples is reduced for a fixed number of segments.

### Evaluation

It is shown in [[Stoica et al.](../index.ipynb#Literature)] that Welch's method is asymptotically unbiased. Under the assumption of a wide-sense stationary (WSS) random process, the periodograms $\hat{\Phi}_{xx,l}(e^{j \Omega})$ of the segments can be assumed to be approximately uncorrelated. Hence, averaging over these reduces the overall variance of the estimator. It can be shown formally that in the limiting case of an infinite number of segments (an infinitely long signal) the variance tends towards zero. As a result, Welch's method is an asymptotically consistent estimator of the PSD.

Note that for a finite segment length $N$ the properties of the estimated PSD $\hat{\Phi}_{xx}(e^{j \Omega})$ depend on the length $N$ of the segments and the window function $w[k]$ due to the [leakage effect](../spectral_analysis_deterministic_signals/leakage_effect.ipynb).

**Copyright**

This notebook is provided as [Open Educational Resource](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebook for your own purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Sascha Spors, Digital Signal Processing - Lecture notes featuring computational examples, 2016-2018*.
7171119fea3b392602b6c61d316d7051952130df
24,618
ipynb
Jupyter Notebook
spectral_estimation_random_signals/welch_method.ipynb
hustcxl/digital-signal-processing-lecture
1d6d9af39ed8cc2fc768a9af523cfa97ec4123f8
[ "MIT" ]
2
2018-12-29T19:13:49.000Z
2020-05-25T09:53:21.000Z
spectral_estimation_random_signals/welch_method.ipynb
cphysics/signal
2e47bb4f0cf368418ee9a1108f0cea24a5dc812d
[ "MIT" ]
null
null
null
spectral_estimation_random_signals/welch_method.ipynb
cphysics/signal
2e47bb4f0cf368418ee9a1108f0cea24a5dc812d
[ "MIT" ]
3
2020-10-17T07:48:22.000Z
2022-03-17T06:28:58.000Z
134.52459
15,520
0.846373
true
1,827
Qwen/Qwen-72B
1. YES 2. YES
0.851953
0.888759
0.757181
__label__eng_Latn
0.984779
0.597516
```python
# autoreload nangs
%reload_ext autoreload
%autoreload 2
%matplotlib inline
```

# Driven Cavity

In this example we solve the two-dimensional incompressible steady Navier-Stokes equations in the Driven Cavity problem

\begin{equation}
\mathbf{u} \cdot \nabla \mathbf{u} = \frac{1}{Re} \nabla^2 \mathbf{u} - \nabla p
\end{equation}

where $\mathbf{u} = (u, v)$ is the velocity of the fluid and $Re$ is the Reynolds number. The geometry of the problem is the unit square cavity: no-slip walls on the left, right and bottom, and a lid moving with constant velocity $U$ along the top wall (see the boundary conditions imposed below).

```python
# imports
import numpy as np
import matplotlib.pyplot as plt
import torch
import pandas as pd

import nangs
from nangs import *

device = "cuda" if torch.cuda.is_available() else "cpu"

nangs.__version__, torch.__version__
```

    ('0.1.2', '1.5.0')

```python
U = 1.
Re = 100

class NavierStokes2d(PDE):
    def computePDELoss(self, inputs, outputs):
        u, v, p = outputs[:, 0], outputs[:, 1], outputs[:, 2]
        # first order derivatives
        grads = self.computeGrads(u, inputs)
        dudx, dudy = grads[:, 0], grads[:, 1]
        grads = self.computeGrads(v, inputs)
        dvdx, dvdy = grads[:, 0], grads[:, 1]
        grads = self.computeGrads(p, inputs)
        dpdx, dpdy = grads[:, 0], grads[:, 1]
        # second order derivatives (full Laplacian of each velocity component)
        du2dx2 = self.computeGrads(dudx, inputs)[:, 0]
        du2dy2 = self.computeGrads(dudy, inputs)[:, 1]
        dv2dx2 = self.computeGrads(dvdx, inputs)[:, 0]
        dv2dy2 = self.computeGrads(dvdy, inputs)[:, 1]
        # compute losses (residuals of the mass and momentum equations)
        return {
            'mass': dudx + dvdy,
            'mom_x': u*dudx + v*dudy + dpdx - (1./Re)*(du2dx2 + du2dy2),
            'mom_y': u*dvdx + v*dvdy + dpdy - (1./Re)*(dv2dx2 + dv2dy2)
        }

# instantiate pde
pde = NavierStokes2d(inputs=('x', 'y'), outputs=('u', 'v', 'p'))
```

```python
# mesh
x = np.linspace(0, 1, 30)
y = np.linspace(0, 1, 30)
mesh = Mesh({'x': x, 'y': y}, device=device)
pde.set_mesh(mesh)
```

```python
# left and right walls: no-slip velocity, zero pressure gradient in x
class NeumannX(Neumann):
    def computeBocoLoss(self, inputs, outputs):
        dpdx = self.computeGrads(outputs[:, 2], inputs)[:, 0]
        return {'gradX': dpdx}

u0 = np.zeros(2*len(y))
v0 = np.zeros(2*len(y))
boco = Dirichlet({'x': np.array([0, 1]), 'y': y}, {'u': u0, 'v': v0}, name='x_w_d', device=device)
pde.add_boco(boco)
boco = NeumannX({'x': np.array([0, 1]), 'y': y}, name='x_w_n', device=device)
pde.add_boco(boco)
```

    Boco x_w_d with different outputs ! ('u', 'v', 'p') vs ('u', 'v')

```python
# bottom wall: no-slip velocity, zero pressure gradient in y
class NeumannY(Neumann):
    def computeBocoLoss(self, inputs, outputs):
        dpdy = self.computeGrads(outputs[:, 2], inputs)[:, 1]
        return {'gradY': dpdy}

u0 = np.zeros(len(x))
v0 = np.zeros(len(x))
boco = Dirichlet({'x': x, 'y': np.array([0])}, {'u': u0, 'v': v0}, name='b_w_d', device=device)
pde.add_boco(boco)
boco = NeumannY({'x': x, 'y': np.array([0])}, name='b_w_n', device=device)
pde.add_boco(boco)
```

    Boco b_w_d with different outputs ! ('u', 'v', 'p') vs ('u', 'v')

```python
# top wall: moving lid with u = U, zero pressure gradient in y
u0 = np.full(len(x), U)
v0 = np.zeros(len(x))
boco = Dirichlet({'x': x, 'y': np.array([1])}, {'u': u0, 'v': v0}, name='t_w_d', device=device)
pde.add_boco(boco)
boco = NeumannY({'x': x, 'y': np.array([1])}, name='t_w_n', device=device)
pde.add_boco(boco)
```

    Boco t_w_d with different outputs ! ('u', 'v', 'p') vs ('u', 'v')

```python
from nangs import MLP

BATCH_SIZE = 512
LR = 1e-2
EPOCHS = 500
NUM_LAYERS = 5
NUM_HIDDEN = 256

mlp = MLP(len(pde.inputs), len(pde.outputs), NUM_LAYERS, NUM_HIDDEN).to(device)
optimizer = torch.optim.Adam(mlp.parameters())
scheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr=LR, pct_start=0.1, total_steps=EPOCHS)

pde.compile(mlp, optimizer, scheduler)
%time hist = pde.solve(EPOCHS, BATCH_SIZE)
```

```python
# evaluate the solution
x = np.linspace(0, 1, 50)
y = np.linspace(0, 1, 50)
eval_mesh = Mesh({'x': x, 'y': y}, device=device)
outputs = pde.eval(eval_mesh).cpu()
u = outputs[:, 0].view(len(x), len(y)).numpy()
v = outputs[:, 1].view(len(x), len(y)).numpy()
p = outputs[:, 2].view(len(x), len(y)).numpy()
```

```python
# plot results
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(10, 8))

vel = np.sqrt(u**2 + v**2)
im = ax1.imshow(vel, vmin=0, vmax=1, origin='lower', extent=[x.min(), x.max(), y.min(), y.max()])
fig.colorbar(im, ax=ax1)
ax1.set_xlabel("x", fontsize=14)
ax1.set_ylabel("y", fontsize=14, rotation=np.pi/2)
ax1.set_title("Vel")
ax1.axis(False)

im = ax2.imshow(u, vmin=u.min(), vmax=u.max(), origin='lower', extent=[x.min(), x.max(), y.min(), y.max()])
fig.colorbar(im, ax=ax2)
ax2.set_title("U")

im = ax3.imshow(v, vmin=v.min(), vmax=v.max(), origin='lower', extent=[x.min(), x.max(), y.min(), y.max()])
fig.colorbar(im, ax=ax3)
ax2.axis(False)
ax3.axis(False)
ax3.set_title("V")

im = ax4.imshow(p, vmin=p.min(), vmax=p.max(), origin='lower', extent=[x.min(), x.max(), y.min(), y.max()])
fig.colorbar(im, ax=ax4)
ax4.axis(False)
ax4.set_title("P")

plt.tight_layout()
plt.show()
```

```python
# profiles: u along the vertical centerline and v along the horizontal centerline,
# compared with reference data
mid_u = u[:, len(u)//2]
mid_v = v[len(v)//2, :]

exp_u = pd.read_csv('data/dc_100_ux.csv', header=None).values
exp_v = pd.read_csv('data/dc_100_uy.csv', header=None).values

fig = plt.figure(figsize=(15, 5))

ax1 = plt.subplot(121)
ax1.plot(mid_u, y, label="numeric")
ax1.plot(exp_u[:, 0], exp_u[:, 1], '.k', label="experimental")
ax1.set_xlabel('u')
ax1.set_ylabel('y')
ax1.legend()
ax1.grid(True)

ax2 = plt.subplot(122)
ax2.plot(x, mid_v, label="numeric")
ax2.plot(exp_v[:, 0], exp_v[:, 1], '.k', label="experimental")
ax2.legend()
ax2.set_xlabel('x')
ax2.set_ylabel('v')
ax2.grid(True)

plt.show()
```
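As a quick sanity check on the `'mass'` residual, the divergence of the evaluated velocity field can be approximated with finite differences. This is a hypothetical addition, not part of the original notebook: it reuses the `x`, `y`, `u`, `v` arrays from the evaluation cell above and assumes the first array axis runs along `y` and the second along `x` (the `imshow` convention); swap the `axis` arguments if the mesh uses the opposite ordering.

```python
# Approximate div(u) = du/dx + dv/dy on the evaluated grid (assumed axis ordering,
# see the note above). A well-trained solution should give values close to zero.
import numpy as np

dudx = np.gradient(u, x, axis=1)
dvdy = np.gradient(v, y, axis=0)
div = dudx + dvdy
print("max |div(u)| on the grid:", np.abs(div).max())
```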
cd062a2819c6645647872581a62175fdb0711053
168,925
ipynb
Jupyter Notebook
examples/06_DrivenCavity_steady.ipynb
SanjuSoni/nangs
fe7474fad91c87ecd083538637caa2e590c0ce15
[ "Apache-2.0" ]
1
2021-02-22T11:17:22.000Z
2021-02-22T11:17:22.000Z
examples/06_DrivenCavity_steady.ipynb
panghalshagun/nangs
fe7474fad91c87ecd083538637caa2e590c0ce15
[ "Apache-2.0" ]
null
null
null
examples/06_DrivenCavity_steady.ipynb
panghalshagun/nangs
fe7474fad91c87ecd083538637caa2e590c0ce15
[ "Apache-2.0" ]
2
2020-07-23T09:10:23.000Z
2021-02-22T11:14:24.000Z
344.744898
88,140
0.931668
true
1,967
Qwen/Qwen-72B
1. YES 2. YES
0.828939
0.709019
0.587733
__label__eng_Latn
0.239429
0.203832
```python
# https://socratic.org/questions/a-fence-8-ft-tall-runs-parallel-to-a-tall-building-at-a-distance-of-4-ft-from-th-1
from IPython.display import Image
from IPython.core.display import HTML
from sympy import *

x, h, y, n, t, d, c, A, r = symbols("x h y n t d c A r")
C, D = symbols("C D", real=True)

Image(url= "https://i.imgur.com/XYvZXIq.png")
```

```python
Image(url= "https://i.imgur.com/2DeA9ZT.png")
# change 4 to 5: in this variant the fence stands 5 ft from the building
```

```python
Image(url= "https://i.imgur.com/um3ZnlW.png")
```

```python
# height at which the ladder meets the building (similar triangles),
# with x the distance from the foot of the ladder to the fence
h = factor(8*(5+x))/x
h
```

$\displaystyle \frac{8 \left(x + 5\right)}{x}$

```python
# L holds the *squared* ladder length
L = h**2 + (x+5)**2
Eq(symbols("L")**2, L)
```

$\displaystyle L^{2} = \left(x + 5\right)^{2} + \frac{64 \left(x + 5\right)^{2}}{x^{2}}$

```python
diff(L)
```

$\displaystyle 2 x + 10 + \frac{64 \left(2 x + 10\right)}{x^{2}} - \frac{128 \left(x + 5\right)^{2}}{x^{3}}$

```python
# only the real positive root is physically meaningful
solve(diff(L), x)
```

    [-5, 4*5**(1/3), -2*5**(1/3) - 2*sqrt(3)*5**(1/3)*I, -2*5**(1/3) + 2*sqrt(3)*5**(1/3)*I]

```python
# squared ladder length at the critical point
L.subs(x, 4*5**(1/3))
```

$\displaystyle 331.951408234819$

```python
# shortest ladder length
sqrt(L.subs(x, 4*5**(1/3)))
```

$\displaystyle 18.2195336997087$

```python
Image(url= "https://i.imgur.com/yafiIYG.png")
```
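A quick verification, added here for illustration. It reuses the symbols `L` and `x` from the cells above: a positive second derivative at the critical point confirms a minimum, and evaluating `sqrt(L)` there recovers the shortest ladder length computed above.

```python
# Confirm that x = 4*5**(1/3) minimizes the squared length L, then evaluate sqrt(L).
x_star = 4*5**Rational(1, 3)
print(diff(L, x, 2).subs(x, x_star).evalf() > 0)   # positive second derivative -> minimum
print(sqrt(L.subs(x, x_star)).evalf())             # shortest ladder, ~18.22 ft
```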
390d432c583a83772d5e6237fbc7c685f8678c01
5,539
ipynb
Jupyter Notebook
Calculus_Homework/WWB14.6.ipynb
NSC9/Sample_of_Work
8f8160fbf0aa4fd514d4a5046668a194997aade6
[ "MIT" ]
null
null
null
Calculus_Homework/WWB14.6.ipynb
NSC9/Sample_of_Work
8f8160fbf0aa4fd514d4a5046668a194997aade6
[ "MIT" ]
null
null
null
Calculus_Homework/WWB14.6.ipynb
NSC9/Sample_of_Work
8f8160fbf0aa4fd514d4a5046668a194997aade6
[ "MIT" ]
null
null
null
20.141818
124
0.455136
true
500
Qwen/Qwen-72B
1. YES 2. YES
0.91611
0.845942
0.774976
__label__yue_Hant
0.387495
0.638861
### The beauty of the Gershgorin disk theorem

In this post I'll talk about one of the most beautiful theorems I've encountered while studying linear algebra. I bumped into it while taking the ACM104 Applied Linear Algebra course at Caltech, taught by Prof. Kostia Zuev. Btw he is amazing - check out his math videos on Youtube!

I find this theorem very aesthetically pleasing because it has a visual representation, and also a remarkable number of applications. Moreover, I think that the wit with which it came about is to be praised. It is a clear example of how mathematics is *just right there*, waiting to be discovered.

We follow the derivation of the book *Applied Linear Algebra* by Shakiban.

### The statement

Before stating the theorem, we'll need some definitions. Also, recall that for a complex number $z = a + ib$ its magnitude is defined by $|z| = \sqrt{a^2 + b^2}$. A nice equality is also that if we define the conjugate $\bar{z} = a - ib$, then $|z|^2 = z\bar{z}$.

**Definition:** *Gershgorin disk*. Let $A \in \mathbb{M}_{n \times n}$ be a square matrix (with real or complex entries). For each $1 \le i \le n$, define the $i$-th Gershgorin disk as:

$$ D_i = \{ z \in \mathbb{C} : |z - A_{ii}| \le r_i \} $$

where $r_i = \sum_{j = 1, j \neq i}^{n} |A_{ij}|$.

**Definition:** *Gershgorin domain*. The union of the $n$ Gershgorin disks is the Gershgorin domain:

$$ \mathfrak{D}_A = \bigcup_{i = 1}^n D_i \subseteq \mathbb{C} $$

**Definition:** *Spectrum of a matrix*. We call the set of eigenvalues associated with A the spectrum of A. We denote it as $\mathrm{spec}\, A$.

**Theorem.**

$$ \mathrm{spec}\, A \subseteq \mathfrak{D}_A \subseteq \mathbb{C} $$

*Proof.* The constructive proof is surprisingly straightforward. Let $\lambda$ be an eigenvalue of A and $\vec{v}$ its associated eigenvector. Let $\vec{u} = \frac{\vec{v}}{\| \vec{v} \|_{\infty}}$ be the corresponding unit eigenvector with respect to (w.r.t.) the $\infty$ norm, i.e.

$$ \| u \|_\infty = \mathrm{max} \{ |u_1|, ..., |u_n| \} = 1 $$

Let $u_i$ be an entry of $\vec{u}$ that achieves the maximum, $|u_i| = 1$. Writing out the $i$-th component of the equation $\mathbf{A}\vec{u} = \lambda \vec{u}$ we obtain:

$$ \sum_{j = 1}^n \mathbf{A}_{ij} u_j = \lambda u_i $$

which we can rewrite as:

$$ \sum_{j \neq i} \mathbf{A}_{ij} u_j = \lambda u_i - \mathbf{A}_{ii} u_i = (\lambda - \mathbf{A}_{ii}) u_i $$

Note: in the last step we just subtracted $\mathbf{A}_{ii} u_i$ from both sides. Thus, since all $|u_j| \le 1$ while $|u_i| = 1$, we have that

\begin{align}
|\mathbf{A}_{ii} - \lambda| &= |\lambda - \mathbf{A}_{ii}|\\[1em]
&= |\lambda - \mathbf{A}_{ii}| |u_i|\\[1em]
&= | (\lambda - \mathbf{A}_{ii}) u_i | \\[.7em]
&= \Big| \sum_{j \neq i} \mathbf{A}_{ij} u_j \Big| \\[.7em]
&\le \sum_{j \neq i } |\mathbf{A}_{ij}||u_j| \\[.7em]
&\le \sum_{j \neq i} |\mathbf{A}_{ij}|\\[1em]
&= r_i.
\end{align}

This implies that $|\lambda - \mathbf{A}_{ii}| \le r_i$, i.e. $\lambda \in D_i$.

The third step holds because the modulus is multiplicative, $|zw| = |z||w|$. The fourth step just substitutes the equation above. The fifth step holds by the triangle inequality, $|x+y| \le |x| + |y|$. The sixth step holds as $|u_j| \le 1$ for all $j \neq i$ by construction of $\vec{u}$.

**Definition.** A square matrix is called strictly diagonally dominant if

$$ |a_{ii}| > \sum_{j \neq i} |a_{ij}| \quad \text{for all } i = 1, ..., n. $$

**Corollary.** A strictly diagonally dominant matrix is **nonsingular**. Indeed, strict diagonal dominance means $0 \notin D_i$ for every $i$, so $0$ is not in the Gershgorin domain; by the theorem, $0$ cannot be an eigenvalue, hence the matrix is invertible.
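Before the visualizations, here is a quick numerical check of the inclusion $\mathrm{spec}\,A \subseteq \mathfrak{D}_A$, added for illustration (the random matrix and tolerance are arbitrary choices):

```python
# Verify numerically that every eigenvalue lies in at least one Gershgorin disk.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))

centers = np.diag(A)
radii = np.abs(A).sum(axis=1) - np.abs(centers)   # r_i = sum_{j != i} |A_ij|

for lam in np.linalg.eigvals(A):
    in_some_disk = np.any(np.abs(lam - centers) <= radii + 1e-12)
    print(f"lambda = {lam:.3f} lies in the Gershgorin domain: {in_some_disk}")
```

By the theorem, the check is expected to print `True` for every eigenvalue.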
### Examples

Here are some visualizations in a jupyter notebook if you want to get a feel of the theorem.

```python
import numpy as np
import matplotlib.pyplot as plt
from numpy import linalg as la
from matplotlib import rc
import panel as pn
pn.extension()

#rc('font',**{'family':'sans-serif','sans-serif':['Helvetica']})
## for Palatino and other serif fonts use:
#rc('font',**{'family':'serif','serif':['Palatino']})
#rc('text', usetex=True)
```

```python
%config InlineBackend.figure_format = "svg"
```

Let's test the hypothesis that a random matrix is invertible/non-singular with high probability.

```python
rng = np.random.default_rng(seed=47)
```

```python
A = rng.random((3, 3))
```

```python
A
```

We can quickly check if the matrix is non-singular by checking that the determinant is not zero.

```python
la.det(A)
```

Indeed it is not singular, but close to being singular. Let's plot its Gershgorin disks.

```python
def get_gershgorin_disk_radius(A, idx):
    """Returns the Gershgorin disk radius r_i = sum_{j != i} |A_ij| for a square matrix."""
    r_i = np.abs(np.delete(A[idx, :], idx)).sum()
    return r_i
```

```python
def plot_gershgorin_domain(A):
    n = A.shape[0]
    fig, ax = plt.subplots(figsize=(4, 4))

    # Extract diagonal (A_ii's), i.e. the disk centers
    diag = np.diag(A)
    diag_x, diag_y = np.real(diag), np.imag(diag)
    radius = np.zeros(n)

    for i in range(n):
        a_ii = diag[i]
        r_i = get_gershgorin_disk_radius(A, i)
        radius[i] = r_i  # Save radius
        real_aii, imag_aii = diag_x[i], diag_y[i]
        plt.scatter(real_aii, imag_aii, marker="D", color="lightgrey", s=5)
        ax.add_patch(
            plt.Circle(
                (real_aii, imag_aii), r_i,
                alpha=0.3, facecolor="lightblue", edgecolor="lightgrey"
            )
        )

    plt.scatter(0, 0, marker="X", color="k", label="origin", s=10)

    # Make symmetric limits
    max_radius = max(radius) + 0.5
    maxmin_centers = max(diag_x), min(diag_x), max(diag_y), min(diag_y)
    maxmin_centers = [abs(x) for x in maxmin_centers]
    lim = np.max(maxmin_centers)
    plt.xlim(-lim - max_radius, lim + max_radius)
    plt.ylim(-lim - max_radius, lim + max_radius)

    plt.legend()
    plt.xlabel("$\mathrm{Re}(z)$", fontsize=18)
    plt.ylabel("$\mathrm{Im}(z)$", fontsize=18)
    plt.title("$\mathcal{D}_A$", fontsize=20)
    plt.close(fig);
    return fig
```

```python
fig = plot_gershgorin_domain(A)
```

```python
fig
```

Indeed, we can see that the origin is in the Gershgorin domain $\mathcal{D}_A$. We can use our corollary by increasing the values along the diagonal.

```python
def get_diag_dominant(A_, eps):
    """Scales values along the diagonal of a square matrix."""
    return A_ + np.diag(np.diag(A_)*eps)
```

```python
def get_diag_dominant_iden(A_, eps):
    """Regularization, applies A = A_ + ϵI."""
    n = A_.shape[0]
    return A_ + eps*np.eye(n)
```

```python
eps = 20  # multiplier
A_diag_dom = get_diag_dominant(A, eps)
```

```python
fig = plot_gershgorin_domain(A_diag_dom)
```

```python
fig
```

By heavily increasing the diagonal we can help make a matrix invertible. This process is usually called regularization. Let's visualize how both the Gershgorin disks and the determinant change as we increase the value of our multiplier.

```python
data_dict = {
    "A": A,
    # add another matrix here
    # "name_of_mat": np.array
}
```

```python
eps_slider = pn.widgets.FloatSlider(start=0.0, end=20, step=0.5, value=1)
data_widget = pn.widgets.Select(
    name='data',
    options=list(data_dict.keys())
)
```

```python
@pn.depends(eps_slider.param.value, data_widget.param.value)
def diag_dom_gershgorin(eps, data_name):
    """
    Visualize "diagonally dominant-alizing" a matrix.
    """
    A_ = data_dict[data_name]
    A = get_diag_dominant(A_, eps)
    n = A.shape[0]

    #fig, ax1 = plt.subplots(figsize=(4,4))
    fig = plt.figure(figsize=(10, 5))
    ax1 = fig.add_subplot(121)

    diag = np.diag(A)  # Extract diagonal (A_ii's)
    diag_x, diag_y = np.real(diag), np.imag(diag)
    radius = np.zeros(n)

    for i in range(n):
        a_ii = diag[i]
        r_i = get_gershgorin_disk_radius(A, i)
        radius[i] = r_i  # Save radius
        real_aii, imag_aii = diag_x[i], diag_y[i]
        plt.scatter(real_aii, imag_aii, marker="D", color="lightgrey", s=5)
        ax1.add_patch(
            plt.Circle(
                (real_aii, imag_aii), r_i,
                alpha=0.3, facecolor="lightblue", edgecolor="lightgrey"
            )
        )

    plt.scatter(0, 0, marker="X", color="k", label="origin", s=5)

    # Make symmetric limits
    max_radius = max(radius) + 0.5
    maxmin_centers = max(diag_x), min(diag_x), max(diag_y), min(diag_y)
    maxmin_centers = [abs(x) for x in maxmin_centers]
    lim = np.max(maxmin_centers)
    plt.xlim(-lim - max_radius, lim + max_radius)
    plt.ylim(-lim - max_radius, lim + max_radius)

    plt.legend()
    plt.xlabel("$\mathrm{Re}(z)$", fontsize=18)
    plt.ylabel("$\mathrm{Im}(z)$", fontsize=18)
    plt.title("$\mathcal{D}_A$", fontsize=20)

    ax2 = fig.add_subplot(122)
    evals = np.linspace(0, eps, 500)
    dets = [la.det(get_diag_dominant(A_, eps)) for eps in evals]
    #plt.scatter(evals, dets, s = 5, color = "lightgrey")
    plt.semilogy(evals, np.abs(dets), color="lightblue", lw=1)
    plt.xlim(0, 23)
    plt.ylim(0, 500)
    plt.xlabel("$\epsilon$", fontsize=18)
    plt.ylabel("$|\mathrm{det}(A)|$", fontsize=18)
    #plt.yscale("log")

    plt.tight_layout()
    plt.close(fig)
    return fig
```

```python
pn.Column(
    eps_slider,
    data_widget,
    diag_dom_gershgorin
)
```

Feel free to visualize another matrix by adding values to the data dictionary `data_dict`! You could also try out another function like `get_diag_dominant_iden` above.

Can you think of other applications of the theorem?
8e0f334caf2c9781a1eed54e7f96644225f2bcc6
18,743
ipynb
Jupyter Notebook
notebooks/gershgorin.ipynb
manuflores/sandbox
27b44dfb6bea20d56ece640c5db9d842cbfc424b
[ "MIT" ]
null
null
null
notebooks/gershgorin.ipynb
manuflores/sandbox
27b44dfb6bea20d56ece640c5db9d842cbfc424b
[ "MIT" ]
null
null
null
notebooks/gershgorin.ipynb
manuflores/sandbox
27b44dfb6bea20d56ece640c5db9d842cbfc424b
[ "MIT" ]
null
null
null
29.423862
312
0.515713
true
2,971
Qwen/Qwen-72B
1. YES 2. YES
0.766294
0.901921
0.691136
__label__eng_Latn
0.869914
0.444072
# Assignment 2
- toc: true
- badges: true
- comments: true
- categories: [jupyter]

```python
import numpy as np
import pandas as pd
from decimal import *
import math
import unittest
from sympy import *
```

### Find the first 10-digit prime in the decimal expansion of $17\pi$

The first 5 digits in the decimal expansion of $\pi$ are 14159. The first 4-digit prime in the decimal expansion of $\pi$ is 4159. You are asked to find the first 10-digit prime in the decimal expansion of $17\pi$.

#### Task 1: Write a function to generate an arbitrarily large expansion of a mathematical expression like $\pi$.

The 3rd-party library `sympy` has a function called `N(expr, <args>)` that allows us to directly expand the expression to a certain precision.

```python
>> N(pi, 5)
3.1416
```

```python
def generate_expansion(precision, expression):
    """
    This function returns an expansion of a mathematical expression given the precision.
    """
    return N(expression, precision)
```

```python
>> a = generate_expansion(5, pi)
>> a
3.1416
>> type(a)
sympy.core.numbers.Float
```

* Then we can run a unit test to test the `generate_expansion` function

```python
class TestNotebook(unittest.TestCase):
    def test_genexp(self):
        self.assertEqual(str(generate_expansion(10, pi)), '3.141592654')

unittest.main(argv=[''], verbosity=2, exit=False)
```

    test_genexp (__main__.TestFunc) ... ok
    test_genexp (__main__.TestNotebook) ... ok

    ----------------------------------------------------------------------
    Ran 2 tests in 0.002s

    OK

    <unittest.main.TestProgram at 0x23203530ca0>

#### Task 2: Write a function to check if a number is a prime number

* Based on the definition of a prime number, we need to check whether the number can be divided by any integer between 2 and the square root of the number (inclusive).

```python
def is_prime(num):
    """
    The function returns whether a given number is prime or not
    """
    if num <= 1:  # 0, 1 and negative numbers are not prime
        return False
    for i in range(2, int(math.sqrt(num)) + 1):
        if (num % i) == 0:
            return False
    return True
```

```python
>> is_prime(40)
False
```

* Then we can run a unit test to test the `is_prime` function

```python
class TestNotebook(unittest.TestCase):
    def test_isprime(self):
        self.assertEqual(is_prime(29), True)   # 29 is a prime number
        self.assertEqual(is_prime(30), False)  # 30 is not a prime number

unittest.main(argv=[''], verbosity=2, exit=False)
```

    test_genexp (__main__.TestFunc) ... ok
    test_isprime (__main__.TestNotebook) ... ok

    ----------------------------------------------------------------------
    Ran 2 tests in 0.005s

    OK

    <unittest.main.TestProgram at 0x2320352b910>

#### Task 3: Slicing Window

```python
def slicing_window(number_str, idx):
    """
    The function returns a 10-digit window from a long digit string; the inputs are the
    string form of a number and the starting index of the window.
    """
    return int(number_str[idx: idx+10])
```

The `slicing_window` works like below

```python
>> slicing_window('12345678899443878169846', 1)
2345678899
```

* Then we can run a unit test to test the `slicing_window` function

```python
class TestNotebook(unittest.TestCase):
    def test_slicingwindow(self):
        self.assertEqual(slicing_window('12345667788990', 2), 3456677889)

unittest.main(argv=[''], verbosity=2, exit=False)
```

    test_genexp (__main__.TestFunc) ... ok
    test_slicingwindow (__main__.TestNotebook) ... ok

    ----------------------------------------------------------------------
    Ran 2 tests in 0.003s

    OK

    <unittest.main.TestProgram at 0x23203542760>

```python
# Find the first 10-digit prime
def find_prime(precision, expression):
    """The function returns the first 10-digit prime in the expansion of the expression"""
    expansion = generate_expansion(precision, expression)
    formula = str(expansion)  # generate_expansion returns a Float, so convert it to a string first
    string = formula.replace(".", "")  # remove the decimal point from the expansion
    for idx in range(len(string) - 9):  # stop once fewer than 10 digits remain in the window
        if is_prime(slicing_window(string, idx)):
            print(f'The first 10-digit prime of {expression} is {slicing_window(string, idx)} and the start point is at {idx}')
            return slicing_window(string, idx)
```

* Then we can run a unit test to test the `find_prime` function

```python
class TestNotebook(unittest.TestCase):
    def test_slicingwindow(self):
        self.assertEqual(find_prime(500, exp(1)), 7427466391)

unittest.main(argv=[''], verbosity=2, exit=False)
```

    test_genexp (__main__.TestFunc) ... ok
    test_slicingwindow (__main__.TestNotebook) ... The first 10-digit prime of E is 7427466391 and the start point is at 99
    ok

    ----------------------------------------------------------------------
    Ran 2 tests in 0.015s

    OK

    <unittest.main.TestProgram at 0x23203542b20>

* The final solution to the problem is:

```python
find_prime(500, 17*pi)
```

    The first 10-digit prime of 17*pi is 8649375157 and the start point is at 20

    8649375157

```python

```
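As an extra cross-check (an addition, not part of the assignment): since this notebook already does `from sympy import *`, sympy's built-in `isprime` can confirm that the two reported windows are indeed prime. Note this only confirms primality, not that they are the *first* such windows.

```python
from sympy import isprime

# Confirm primality of the values reported above.
print(isprime(7427466391))  # window found in the expansion of e
print(isprime(8649375157))  # window reported above for 17*pi
```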
116eda86275204aac7e354786cfb50ae8cbeea18
10,225
ipynb
Jupyter Notebook
_notebooks/2021-09-17-Assignment2_Yili Lin.ipynb
lucylin1997/fastpage_copy
8bc6c543c20e57444e7ee248cb1c63b4157bfd23
[ "Apache-2.0" ]
null
null
null
_notebooks/2021-09-17-Assignment2_Yili Lin.ipynb
lucylin1997/fastpage_copy
8bc6c543c20e57444e7ee248cb1c63b4157bfd23
[ "Apache-2.0" ]
null
null
null
_notebooks/2021-09-17-Assignment2_Yili Lin.ipynb
lucylin1997/fastpage_copy
8bc6c543c20e57444e7ee248cb1c63b4157bfd23
[ "Apache-2.0" ]
null
null
null
25
225
0.503178
true
1,327
Qwen/Qwen-72B
1. YES 2. YES
0.884039
0.774583
0.684762
__label__eng_Latn
0.930562
0.429263
```python
import sympy as sym
from functools import reduce
from sympy.matrices import Matrix, MatrixSymbol
```

```python
sym.init_printing()
```

```python
y = MatrixSymbol('y', 1, 1)
x = MatrixSymbol('x', 1, 3)
#x = Matrix([[1,2,3]])
theta = Matrix([[1],[2],[3]])

prediction = x * theta
loss = (prediction - y)**2
loss
```

```python
sym.diff(loss, x)
```

```python

```
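For context (an addition, not part of the original notebook): for this squared-error loss the hand-derived gradient with respect to the row vector $x$ is $2\,(x\theta - y)\,\theta^{\top}$, and sympy's symbolic answer should be equivalent to it up to transposition conventions. A minimal NumPy sketch, using made-up numeric values for `x_n` and `y_n` (assumptions for illustration), checks that formula against central finite differences:

```python
import numpy as np

theta_n = np.array([[1.0], [2.0], [3.0]])   # same theta as above, as floats
x_n = np.array([[0.5, -1.0, 2.0]])          # arbitrary test point (assumption)
y_n = np.array([[4.0]])                     # arbitrary target (assumption)

def loss_n(x_row):
    r = x_row @ theta_n - y_n
    return (r ** 2).item()

# Hand-derived gradient: 2 * (x @ theta - y) * theta^T, shape (1, 3)
analytic = 2.0 * (x_n @ theta_n - y_n) * theta_n.T

# Central finite differences for comparison
eps = 1e-6
numeric = np.zeros_like(x_n)
for j in range(x_n.shape[1]):
    xp, xm = x_n.copy(), x_n.copy()
    xp[0, j] += eps
    xm[0, j] -= eps
    numeric[0, j] = (loss_n(xp) - loss_n(xm)) / (2 * eps)

print(analytic)
print(numeric)  # the two should agree to roughly 1e-6
```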
1f891a8965fd05f3d9734ed890a9a440c5685510
19,734
ipynb
Jupyter Notebook
smoothness-writeup/notebooks/Matrix Test.ipynb
twosixlabs/csl
bd59181f0caa65c4f8e59433425beed612296cb4
[ "MIT" ]
null
null
null
smoothness-writeup/notebooks/Matrix Test.ipynb
twosixlabs/csl
bd59181f0caa65c4f8e59433425beed612296cb4
[ "MIT" ]
23
2022-02-09T03:47:46.000Z
2022-02-09T03:47:54.000Z
smoothness-writeup/notebooks/Matrix Test.ipynb
twosixlabs/csl
bd59181f0caa65c4f8e59433425beed612296cb4
[ "MIT" ]
null
null
null
164.45
2,120
0.715972
true
122
Qwen/Qwen-72B
1. YES 2. YES
0.92523
0.763484
0.706398
__label__eng_Latn
0.353698
0.479531
```python
# conda install -c anaconda sympy
```

> NOTE: `sympy` is not a package that comes included in the `learn-env`. You can install it by running the command above, either here in the notebook or in your terminal (with `learn-env` activated). You might have to restart the kernel after installation.

```python
import numpy as np
from matplotlib import pyplot as plt
from sympy import *
from sympy.abc import x, y
from scipy import stats
```

# Objectives

- Perform simple derivatives and indefinite integrals
- Use the Chain Rule to construct derivatives of complex functions
- Construct partial derivatives for functions of multiple variables

# What Has Calculus Done For You

We have already had occasion to use calculus in a few places. Calculus shows us:

- that the mean of a group of numbers is the number $n$ that minimizes the sum of squared differences $\Sigma(p-n)^2$ for each number $p$ in the group;
- that the median of a group of numbers is the number $n$ that minimizes the sum of absolute differences $\Sigma|p-n|$ for each number $p$ in the group;
- how to find the coefficients for a linear regression optimization problem.

The two main tools of calculus are **differentiation** and **integration**. For functions of one dimension:

- Differentiation gives us the *slope* of the function at any point.
- Integration gives us the *area under the curve* of the function between any two points.

Surprisingly, these two operations turn out to be inverses of one another in the sense that the derivative of the integral of a given function takes us back to the initial function: $\frac{d}{dx}[\int^x_a f(t) dt] = f(x)$. This is known as the First Fundamental Theorem of Calculus.

# Differentiation

To find the slope of a function *at a point*, we imagine calculating the slope of the function between two points, and then gradually bringing those two points together.

Consider the slope of the function $y=x^2$ at the point $x=100$. We'll calculate the slope of the parabola between $x_1=100$ and $x_2=1000$, and then slowly move $x_2$ close to $x_1$:

```python
10**(7/3)
```

    215.44346900318845

```python
X = np.logspace(3, 2, 11)

fig, ax = plt.subplots()
ax.plot(X, X**2, lw=5, c='k', alpha=0.4)
for x_ in X[:-1]:
    ax.plot([100, x_], [10000, x_**2], 'r-.')
```

```python
# This will show how the slope approaches the value
# of the derivative

slopes = []
for x_ in X[:-1]:
    slopes.append((x_**2 - 10000) / (x_ - 100))

fig, ax = plt.subplots()
ax.plot(X[:-1], slopes, label='approximation')
ax.scatter(X[-1], 200, label='value of derivative at $x=100$', c='r')
ax.set_xlabel('X')
ax.set_ylabel('slope')
ax.set_title('slope between x=100 and X')
plt.legend();
```

## Common Derivatives

Here is a list of rules for some common derivative patterns.

```python
# The first two and the third from the bottom are good to remember.
# You'll probably also see the 2nd from the bottom (the product rule).
# The bottom one is the quotient rule, good to know as well.
```

$\large\frac{d}{dx}[cf(x)] = cf'(x)$ $\rightarrow$ Example: $\frac{d}{dx}[2x] = 2\frac{d}{dx}[x] = (2)(1) = 2$

$\large\frac{d}{dx}[x^n] = nx^{n - 1}$ $\rightarrow$ Example: $\frac{d}{dx}[x^4] = 4x^3$

$\large\frac{d}{dx}[sin(x)] = cos(x)$

$\large\frac{d}{dx}[cos(x)] = -sin(x)$

$\large\frac{d}{dx}[a^x] = a^xln(a)$

$\large\frac{d}{dx}[log_b(x)] = \frac{1}{xln(b)}$ $\rightarrow$ Example: $\frac{d}{dx}[ln(x)] = \frac{1}{xln(e)} = \frac{1}{x}$

$\large\frac{d}{dx}[f(x) + g(x)] = f'(x) + g'(x)$ $\rightarrow$ Example: $\frac{d}{dx}[2x + 4x^2] = 2 + 8x$

$\large\frac{d}{dx}[f(x)g(x)] = f(x)g'(x) + g(x)f'(x)$ $\rightarrow$ Example: $\frac{d}{dx}[(x+1)(x-1)] = (x+1)(1) + (x-1)(1) = 2x$

$\large\frac{d}{dx}\left[\frac{f(x)}{g(x)}\right] = \frac{g(x)f'(x) - f(x)g'(x)}{(g(x))^2}$ $\rightarrow$ Example: $\frac{d}{dx}\left[\frac{x+1}{x-1}\right] = \frac{(x-1)(1) - (x+1)(1)}{(x-1)^2} = -\frac{2}{(x-1)^2}$

## `sympy`

The `sympy` package can be helpful:

```python
diff(sin(x), x)
```

$\displaystyle \cos{\left(x \right)}$

```python
diff(exp(2*x))
```

$\displaystyle 2 e^{2 x}$

# Integration

Integration is how we calculate the area under a curve.
If the curve is a probability density function, then the area under this curve will be equal to 1:

```python
X = np.linspace(0, 5, 51)
for pt in X:
    print(stats.norm.cdf(pt))
```

    0.5
    0.539827837277029
    0.579259709439103
    0.6179114221889527
    0.6554217416103242
    0.6914624612740131
    0.7257468822499265
    0.758036347776927
    0.7881446014166034
    0.8159398746532405
    0.8413447460685429
    0.8643339390536173
    0.8849303297782918
    0.9031995154143897
    0.9192433407662289
    0.9331927987311419
    0.945200708300442
    0.955434537241457
    0.9640696808870742
    0.9712834401839983
    0.9772498680518208
    0.9821355794371834
    0.9860965524865014
    0.9892758899783242
    0.9918024640754038
    0.9937903346742238
    0.9953388119762813
    0.9965330261969594
    0.997444869669572
    0.998134186699616
    0.9986501019683699
    0.9990323967867817
    0.9993128620620841
    0.9995165758576162
    0.9996630707343231
    0.9997673709209645
    0.9998408914098424
    0.9998922002665226
    0.9999276519560749
    0.9999519036559824
    0.9999683287581669
    0.9999793424930875
    0.9999866542509841
    0.999991460094529
    0.9999945874560923
    0.9999966023268753
    0.9999978875452975
    0.9999986991925461
    0.999999206671848
    0.9999995208167234
    0.9999997133484281

How do you calculate the area of a shape with a curvy side? Imagine approximating the shape with rectangles, and then imagine making those rectangles narrower and narrower.

Again, let's work with the parabola $y=x^2$ between $x=100$ and $x=1000$:

```python
# This will show how we imagine ever narrower rectangles
# under the curve to approximate the area underneath it.

spacing = np.arange(3, 13)
X = [np.linspace(100, 1000, step) for step in spacing]
X_curve = np.linspace(100, 1000, 10000)

fig, ax = plt.subplots(10, figsize=(10, 30))
for num in spacing:
    ax[num-3].plot(X_curve, X_curve**2)
    for j in range(1, len(X[num-3])-1):
        ax[num-3].hlines(X[num-3][j]**2, X[num-3][j], X[num-3][j+1])
        ax[num-3].vlines(X[num-3][j], 0, X[num-3][j]**2)
    ax[num-3].set_xlabel(f'Area = {900/(num-1) * sum(X[num-3][1:-1]**2)}\n\
For a=100, b=1000, $\int^b_ax^2=333000000$')
plt.tight_layout()
```

```python
# This will show the area of the rectangles as the number
# of rectangles increases.

spacing_longer = np.arange(3, 100)
X_longer = [np.linspace(100, 1000, step) for step in spacing_longer]
areas = [900 / (num-1) * sum(X_longer[num-3][1:-1]**2) for num in spacing_longer]

fig, ax = plt.subplots()
ax.hlines(333000000, 3, 99, label='333000000', color='r')
ax.plot(spacing_longer, areas, label='approximation')
ax.set_title('Area as a function of number of rectangles')
ax.set_xlabel('Number of rectangles')
ax.set_ylabel('Area')
plt.legend();
```

## Common Integrals

$\large\int cf(x)dx = c\int f(x)dx$

$\large\int x^ndx = \frac{x^{n+1}}{n+1}$

$\large\int sin(x)dx = -cos(x)$

$\large\int cos(x)dx = sin(x)$

$\large\int a^xdx = \frac{a^x}{ln(a)}$

$\large\int (f(x)+g(x))dx = \int f(x)dx + \int g(x)dx$

## `sympy`

```python
integrate(cos(x), x)
```

$\displaystyle \sin{\left(x \right)}$

```python
integrate(exp(2*x), x)
```

$\displaystyle \frac{e^{2 x}}{2}$

# The Chain Rule

$\large\frac{d}{dx}[f(g(x))] = f'(g(x))g'(x)$

That is, the derivative of a *composition* of functions is the derivative of the first applied to the second, multiplied by the derivative of the second.

So if we know e.g. that $\frac{d}{dx}[e^x] = e^x$ and $\frac{d}{dx}[x^2] = 2x$, then we can use the Chain Rule to calculate $\frac{d}{dx}[e^{x^2}]$. We set $f(x) = e^x$ and $g(x) = x^2$, so the derivative must be:

$\large\frac{d}{dx}[e^{x^2}] = (e^{x^2})(2x) = 2xe^{x^2}$.

## Exercise:

Calculate the derivatives for the following compositions:

1. $\frac{d}{dx}[sin(4x)]$

<details>
    <summary> Answer </summary>
    $f(x) = sin(x)$ <br/>
    $g(x) = 4x$ <br/>
    So the derivative will be: $cos(4x)*4 = 4cos(4x)$
</details>

2. $\frac{d}{dx}[e^{sin(x)}]$

<details>
    <summary> Answer </summary>
    $f(x) = e^x$ <br/>
    $g(x) = sin(x)$ <br/>
    So the derivative will be: $e^{sin(x)}*cos(x) = cos(x)e^{sin(x)}$
</details>

# Partial Differentiation

Partial differentiation is required for functions of multiple variables. If e.g. I have some function $h = h(a, b)$, then I can consider how $h$ changes with respect to $a$ (while keeping $b$ constant)––that's $\frac{\partial h}{\partial a}$, and I can consider how $h$ changes with respect to $b$ (while keeping $a$ constant)––that's $\frac{\partial h}{\partial b}$.

And so the rule is simple enough: If I'm differentiating my function with respect to some variable, I'll **treat all other variables as constants**.

Consider the following function: $\large\xi(x, y, z) = x^2y^5z^3 - ze^{cos(xy)} + (yz)^3$; for some parameters $x$, $y$, and $z$. What are the partial derivatives of this function?

$\large\frac{\partial\xi}{\partial x} = ?$
<br/>
<details>
    <summary> Check </summary>
    <br/>
    $2xy^5z^3 + yze^{cos(xy)}sin(xy)$
</details>
<br/>

$\large\frac{\partial\xi}{\partial y} = ?$
<br/>
<details>
    <summary> Check </summary>
    <br/>
    $5x^2y^4z^3 + xze^{cos(xy)}sin(xy) + 3y^2z^3$
</details>
<br/>

$\large\frac{\partial\xi}{\partial z} = ?$
<br/>
<details>
    <summary> Check </summary>
    <br/>
    $3x^2y^5z^2 - e^{cos(xy)} + 3y^3z^2$
</details>

```python
# cost is like RMSE
# gradient descent is like one big game of Marco Polo.
# say we wanted price from sqft_living
# is each value of sqft_living randint(dollars)?
# then try each of these and call predict. What's our RMSE if we do this?
# instead of trying values randomly, you get feedback from the model. Partial differentiation.
# you throw it in, 'Marco!', and the model gives you an answer, 'Polo!'.
# it tells you which direction you should step in.
# the idea is that you toss in a value, then get feedback and figure out which direction to move it.
# the way you know this is by taking a partial derivative of the cost function.
# when you take a gradient there is a value that tells you whether to take a large or small step.
# lots of trial/feedback and repeat. Update it, don't go too fast/too far. Little by little you get closer.
# either way, you should get approximately the same answer.
# why one over the other?
# they take different amounts of time to run.
# linear regression is the model; there are different ways to solve it.
```
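As an added cross-check (not in the original notes), the hidden answers above can be confirmed symbolically with sympy. Note this cell re-binds `x` and `y` as plain `Symbol`s, shadowing the `sympy.abc` imports from the top of the notebook:

```python
from sympy import symbols, exp, cos, diff

x, y, z = symbols('x y z')
xi = x**2 * y**5 * z**3 - z * exp(cos(x*y)) + (y*z)**3

# Each line should match the corresponding hidden answer above.
print(diff(xi, x))
print(diff(xi, y))
print(diff(xi, z))
```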
176ca7cc2c1cb8a475aaf2e347d368c62e1a9349
229,855
ipynb
Jupyter Notebook
Phase_3/ds-calculus-main/calculus_for_data_scientists.ipynb
ismizu/ds-east-042621-lectures
3d962df4d3cb19a4d0c92c8246ec251a5969f644
[ "MIT" ]
1
2021-08-12T21:48:21.000Z
2021-08-12T21:48:21.000Z
Phase_3/ds-calculus-main/calculus_for_data_scientists.ipynb
ismizu/ds-east-042621-lectures
3d962df4d3cb19a4d0c92c8246ec251a5969f644
[ "MIT" ]
null
null
null
Phase_3/ds-calculus-main/calculus_for_data_scientists.ipynb
ismizu/ds-east-042621-lectures
3d962df4d3cb19a4d0c92c8246ec251a5969f644
[ "MIT" ]
null
null
null
246.625536
161,900
0.922081
true
4,214
Qwen/Qwen-72B
1. YES 2. YES
0.754915
0.828939
0.625778
__label__eng_Latn
0.887647
0.292223
## Gaussian Integration

So far, we have focussed on integration schemes where the function is sampled at regular intervals. These are very useful for data that comes to us in this form, but if we wish to integrate a known function as accurately as possible it is better to take advantage of the freedom we have to choose our nodes, just as we did for interpolation where selecting Chebyshev points was optimal.

Consider that quadrature formulas are typically of the form

$$ \int_a^b f(x) dx = \sum_{i=0}^{n-1} c_i f(x_i) + error, $$

which has $2n$ parameters (note for this section we are stopping the sum at $n-1$ rather than $n$ as this is conventional for Gaussian quadrature formulas). The idea of Gaussian integration is that with these $2n$ parameters it should be possible to choose them to ensure that we can integrate all polynomials of degree less than or equal to $2n-1$ *exactly*.

Before we do this, it is useful to first transform to a standard interval, namely $y$ in $[-1,1]$, from the original $x$ in $[a,b]$. Similar to what we did with the Chebyshev interpolation, we use the change of variables

$$ x = \frac{b-a}{2}y+\frac{b+a}{2}. $$

We must also remember to perform the appropriate change of the integral measure,

$$\int_a^b f(x) dx = \int_{-1}^1 f\left(\frac{b-a}{2}y+\frac{b+a}{2}\right) \frac{dx}{dy} dy,$$

with $\frac{dx}{dy}=\frac{b-a}{2}$. The $n$-point Gaussian quadrature routine will then result in

$$\int_a^b f(x) dx \approx \frac{b-a}{2}\sum_{i=0}^{n-1} c_i f\left(\frac{b-a}{2}y_i+\frac{b+a}{2}\right).$$

So, we will focus on integration over $[-1,1]$ below.

**Theorem** Suppose $\{x_0,x_1,\cdots,x_{n-1}\}$ are the roots of the $n$th Legendre polynomial $P_n(x)$ and

$$ c_i=\int_{-1}^1 L_i^{n-1}(x) dx,\quad i=0,1,\cdots,n-1$$

where $L_i^{n-1}(x)$ is the Lagrange polynomial (from interpolation). Then if $p(x)$ is any polynomial of degree less than or equal to $2n-1$, then

$$ \int_{-1}^1 p(x) dx = \sum_{i=0}^{n-1} c_i p(x_i),$$

exactly.

**Proof:** First look at the case where the degree of $p(x)$ is less than $n$. Then

$$ p(x) = \sum_{i=0}^{n-1} L_i^{n-1}(x) p(x_i) $$

is an exact interpolation of $p(x)$ (the [interpolation error](../InterpFit/InterpErrors) involves $\frac{d^n p(x)}{dx^n}$, which is zero for a polynomial of degree less than $n$). Hence,

$$\int_{-1}^1 p(x)dx = \int_{-1}^1 \sum_{i=0}^{n-1} L_i^{n-1}(x) p(x_i) dx = \sum_{i=0}^{n-1} c_i p(x_i),$$

as claimed in the theorem.

Now suppose that the degree of $p(x)$ is at least $n$ but less than or equal to $2n-1$. Then we can write

$$p(x) = q(x)P_n(x) + r(x),$$

where $P_n(x)$ is the $n$th Legendre polynomial (which has degree $n$) and $q(x)$ and $r(x)$ are polynomials of degree less than $n$. (To construct $q(x)$ you pick its coefficients so that $q(x)P_n(x)$ has coefficients that match the coefficients of $p(x)$ for the terms $x^{2n-1}$ down to $x^n$; $r(x)$ is then whatever is left over.) Now,

$$ p(x_i) = q(x_i)P_n(x_i) + r(x_i) = r(x_i),$$

where the last step follows from the fact that the $x_i$ are the roots of $P_n(x)$. As a result,

$$\sum_{i=0}^{n-1} c_i p(x_i) = \sum_{i=0}^{n-1} c_i r(x_i).$$

Also,

$$ \begin{align} \int_{-1}^1 p(x) dx &= \int_{-1}^1\left(q(x)P_n(x) + r(x)\right)dx,\\ &= \int_{-1}^1 q(x)P_n(x) dx + \int_{-1}^1 r(x) dx. \end{align} $$

Now $q(x)$ has degree less than $n$, so it can be written using the Legendre polynomials as a basis (they are an orthogonal basis set for polynomials), i.e.

$$ q(x) = \sum_{i=0}^{n-1} a_i P_i(x), $$

for some constants $a_i$. As a result,

$$ \int_{-1}^1 q(x)P_n(x) dx = \sum_{i=0}^{n-1} a_i \int_{-1}^1 P_i(x)P_n(x) dx = 0, $$

as $P_n(x)$ is orthogonal to all the $P_i(x)$ with $i<n$. As a result, we now have

$$ \begin{align} \int_{-1}^1 p(x) dx &= \int_{-1}^1 r(x) dx,\\ &= \sum_{i=0}^{n-1} c_i r(x_i), \end{align} $$

where the last step is exact as $r(x)$ is of degree less than $n$ (and we showed this was exact for such polynomials in the first step of the proof above).

--------

A general closed-form expression for the weights and nodes of Gauss-Legendre integration is not available. However, the first few have all been tabulated and are listed below for reference:

| Number of nodes, $n$ | Points, $x_i$ | Weights, $w_i$ |
| :------------------: | :-----------: | :------------: |
| 1 | 0 | 2 |
| 2 | $\pm \frac{1}{\sqrt{3}}$ | 1 |
| 3 | 0, | $\frac{8}{9},$ |
|   | $\pm \sqrt{\frac{3}{5}}$ | $\frac{5}{9}$ |
| 4 | $\pm \sqrt{\frac{3}{7}-\frac{2}{7}\sqrt{\frac{6}{5}}},$ | $\frac{18+\sqrt{30}}{36},$ |
|   | $\pm \sqrt{\frac{3}{7}+\frac{2}{7}\sqrt{\frac{6}{5}}}$ | $\frac{18-\sqrt{30}}{36}$ |
| 5 | 0, | $\frac{128}{225},$ |
|   | $\pm \frac{1}{3}\sqrt{5-2\sqrt{\frac{10}{7}}},$ | $\frac{322+13\sqrt{70}}{900},$ |
|   | $\pm \frac{1}{3}\sqrt{5+2\sqrt{\frac{10}{7}}}$ | $\frac{322-13\sqrt{70}}{900}$ |
| 7 | 0, | 0.41795 91836 73469, |
|   | $\pm$0.40584 51513 77397, | 0.38183 00505 05119, |
|   | $\pm$0.74153 11855 99394, | 0.27970 53914 89277, |
|   | $\pm$0.94910 79123 42759 | 0.12948 49661 68870 |

SciPy has an implementation of Gauss-Legendre integration, illustrated in the example below:

```python
import numpy as np
from scipy import integrate

normaldist = lambda x: 1/np.sqrt(np.pi) * np.exp(-x**2)

print("n=1 result = ", integrate.fixed_quad(normaldist, 0.0, 2.0, n=1))
print("n=2 result = ", integrate.fixed_quad(normaldist, 0.0, 2.0, n=2))
print("n=3 result = ", integrate.fixed_quad(normaldist, 0.0, 2.0, n=3))
print("n=4 result = ", integrate.fixed_quad(normaldist, 0.0, 2.0, n=4))
print("n=5 result = ", integrate.fixed_quad(normaldist, 0.0, 2.0, n=5))
print("n=7 result = ", integrate.fixed_quad(normaldist, 0.0, 2.0, n=7))
```

    n=1 result =  (0.4151074974205947, None)
    n=2 result =  (0.5187644892255823, None)
    n=3 result =  (0.4958462362382512, None)
    n=4 result =  (0.49774446622792934, None)
    n=5 result =  (0.49765922944191265, None)
    n=7 result =  (0.4976611363409025, None)

We also integrated this same function in our example for [Romberg integration](./HigherOrderInt). The accuracy of our result for $n=7$ here (so 7 function evaluations) was only obtained in the row for 16 steps (17 function evaluations) using Romberg integration!

```python

```
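To connect the change-of-variables formula above to code, here is a small sketch (an addition, not from the original notes) that builds the $n$-point rule directly from NumPy's tabulated Legendre nodes and weights (`numpy.polynomial.legendre.leggauss`) and maps them from $[-1,1]$ to $[a,b]$; it should reproduce the `fixed_quad` results above.

```python
import numpy as np

def gauss_legendre(f, a, b, n):
    """n-point Gauss-Legendre quadrature of f on [a, b]."""
    y, w = np.polynomial.legendre.leggauss(n)   # nodes and weights on [-1, 1]
    x = 0.5 * (b - a) * y + 0.5 * (b + a)       # map nodes to [a, b]
    return 0.5 * (b - a) * np.sum(w * f(x))     # include the Jacobian (b-a)/2

normaldist = lambda x: 1/np.sqrt(np.pi) * np.exp(-x**2)

for n in (1, 2, 3, 4, 5, 7):
    print(f"n={n} result = {gauss_legendre(normaldist, 0.0, 2.0, n)}")
```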
31dec91c9df8b5498dd95c78ca8923cfa1f5977f
8,833
ipynb
Jupyter Notebook
class/NDiffInt/GaussianInt.ipynb
CDenniston/NumericalAnalysis
8f4ccaa864461c36e269824a0e9038bc14ef10b1
[ "MIT" ]
null
null
null
class/NDiffInt/GaussianInt.ipynb
CDenniston/NumericalAnalysis
8f4ccaa864461c36e269824a0e9038bc14ef10b1
[ "MIT" ]
null
null
null
class/NDiffInt/GaussianInt.ipynb
CDenniston/NumericalAnalysis
8f4ccaa864461c36e269824a0e9038bc14ef10b1
[ "MIT" ]
null
null
null
46.489474
396
0.52802
true
2,299
Qwen/Qwen-72B
1. YES 2. YES
0.83762
0.938124
0.785791
__label__eng_Latn
0.973511
0.663989