Dataset column summary:

| Column | Type | Range / values |
|---|---|---|
| text | string | lengths 87 – 777k |
| meta.hexsha | string | lengths 40 – 40 |
| meta.size | int64 | 682 – 1.05M |
| meta.ext | string | 1 distinct value |
| meta.lang | string | 1 distinct value |
| meta.max_stars_repo_path | string | lengths 8 – 226 |
| meta.max_stars_repo_name | string | lengths 8 – 109 |
| meta.max_stars_repo_head_hexsha | string | lengths 40 – 40 |
| meta.max_stars_repo_licenses | list | lengths 1 – 5 |
| meta.max_stars_count | int64 | 1 – 23.9k (nullable) |
| meta.max_stars_repo_stars_event_min_datetime | string | lengths 24 – 24 (nullable) |
| meta.max_stars_repo_stars_event_max_datetime | string | lengths 24 – 24 (nullable) |
| meta.max_issues_repo_path | string | lengths 8 – 226 |
| meta.max_issues_repo_name | string | lengths 8 – 109 |
| meta.max_issues_repo_head_hexsha | string | lengths 40 – 40 |
| meta.max_issues_repo_licenses | list | lengths 1 – 5 |
| meta.max_issues_count | int64 | 1 – 15.1k (nullable) |
| meta.max_issues_repo_issues_event_min_datetime | string | lengths 24 – 24 (nullable) |
| meta.max_issues_repo_issues_event_max_datetime | string | lengths 24 – 24 (nullable) |
| meta.max_forks_repo_path | string | lengths 8 – 226 |
| meta.max_forks_repo_name | string | lengths 8 – 109 |
| meta.max_forks_repo_head_hexsha | string | lengths 40 – 40 |
| meta.max_forks_repo_licenses | list | lengths 1 – 5 |
| meta.max_forks_count | int64 | 1 – 6.05k (nullable) |
| meta.max_forks_repo_forks_event_min_datetime | string | lengths 24 – 24 (nullable) |
| meta.max_forks_repo_forks_event_max_datetime | string | lengths 24 – 24 (nullable) |
| meta.avg_line_length | float64 | 15.5 – 967k |
| meta.max_line_length | int64 | 42 – 993k |
| meta.alphanum_fraction | float64 | 0.08 – 0.97 |
| meta.converted | bool | 1 class |
| meta.num_tokens | int64 | 33 – 431k |
| meta.lm_name | string | 1 distinct value |
| meta.lm_label | string | 3 distinct values |
| meta.lm_q1_score | float64 | 0.56 – 0.98 |
| meta.lm_q2_score | float64 | 0.55 – 0.97 |
| meta.lm_q1q2_score | float64 | 0.5 – 0.93 |
| text_lang | string | 53 distinct values |
| text_lang_conf | float64 | 0.03 – 1 |
| label | float64 | 0 – 1 |
# Frequent sub-population testing for COVID-19
_[Boaz Barak](https://boazbarak.org), [Mor Nitzan](https://mornitzan.wixsite.com/nitzan), Neta Ravid Tannenbaum, [Janni Yuval](https://eapsweb.mit.edu/people/janniy)_
[Working paper on arXiv](http://arxiv.org/abs/2007.04827) | [Open this notebook in colab](https://colab.research.google.com/drive/1ZyUylIfo3tSwDHsv942Xl6lgMxAcjGCP?usp=sharing) (see also the ["playground" notebook](https://colab.research.google.com/drive/1pgpVBqchciVYl2qnb6q_L33xLmXogmVo?usp=sharing)) | [Code on GitHub](https://github.com/boazbk/seirsplus)
Consider an institution of $N$ people (think business, school, ...) that
was shut down due to COVID-19 and that we would like to open it back up.
_Testing_ will clearly play a significant role in any opening strategy, but what's the best way to use a limited testing budget?
For example, suppose that we have a budget of $N$ tests per month.
Is it better to test all $N$ members of the institution once per month? Or is it better to test $\approx N/4$ of them per week?
Or perhaps test $N/30$ members per day?
Moreover, if we are smart about testing, could we avoid the added risk to society from
opening the business?
It turns out that it is almost always better to increase the test frequency (e.g., test $25\%$ per week as opposed to $100\%$ per month),
and sometimes dramatically so. Moreover, under reasonable assumptions, a moderate amount of frequent testing (coupled with mitigation once an outbreak is detected)
can __completely offset__ the added risk from opening the business.
In this notebook, we describe this in a simple example. See the [paper](http://arxiv.org/abs/2007.04827)
for more details and this [repository](https://github.com/boazbk/seirsplus) for the code.
__NOTE:__ This paper has not yet been peer reviewed.
## Setup
Suppose we have an institution of $N=1000$ members, and the prevalence
of COVID-19 in the community is such that each member has probability
$1/5000=0.05\%$ to be infected each day.
(For example, our own community of
[Cambridge, MA](https://cityofcambridge.shinyapps.io/COVID19/?tab=new)
has about 100K people and the number of daily reported cases has ranged from
$\approx 50$ at the mid April peak to $\approx 2$ recently.
Assuming there are $5$ to $10$ true cases for each reported one,
the daily probability of infection was something like $0.25\%$ to $0.5\%$ at the peak
and is roughly $0.01\%$ to $0.02\%$ now.)
Suppose that if we open up a business, then in expectation each infected member will infect
$R$ individuals within the business, in addition to the people they
will infect in the external society. For example, if each community member infects in expectation $0.9$ people in
external society, and $0.5$ people within their workplace, then opening all businesses can and will
make the difference between a controlled and an uncontrolled outbreak.
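(Concretely: with businesses closed, each infection leads to $0.9 < 1$ new infections on average and the epidemic shrinks; with them open it leads to $0.9 + 0.5 = 1.4 > 1$ and the epidemic grows.)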
However _testing_ can mitigate this. If we discover an outbreak in a business through randomized testing, we can quarantine
those individuals and ensure they do not continue to infect others.
Moreover, since quarantined individuals also don't infect people in external society,
discovery through workplace testing can cut short their "effective infectious period" and hence
reduce the number of people they would have infected __even compared to the baseline when the workplace was not
open at all!__
We show this using both math (for a simple exponential spread model) and code (building on an SEIR model of [McGee et al](https://github.com/ryansmcgee/seirsplus)).
### Frequent testing outperforms infrequent testing in simulations
Let us run a simulation for the setting we just described.
We use the standard SEIR (susceptible-exposed-infected-recovered) model with COVID-19 parameters
taken from the literature. (We expect that as we learn more about COVID-19 and test technology improves, some parameters will change but the
qualitative conclusions will remain the same; at the moment we assume the mean incubation period is $5.2$ days,
the infectious period is $14$ days, the probability for a false negative for an individual in the incubation period is $67\%$
and is $20\%$ in the infectious period, with no false positives.)
For this particular run, let's assume that each infected person infects on average two people in their workplace.
Here are two executions of such a simulation: the first, where we test $100\%$ of the population every $28$ days, and the second, where
we test $25\%$ every week. We keep track of the number of __E__xposed, __I__nfected, and __R__ecovered individuals, and stop the
simulation once we get the first positive result.
```python
%load_ext autoreload
%autoreload 2
from util import * # Python file containing all code, available on https://github.com/boazbk/seirsplus
```
```python
print("Test 100% of people every 28 days")
sim(base,R=2,period=28,fraction_tested=1);
print("Test 25% of people every 7 days")
log = sim(base,R=2, period=7,fraction_tested=1/4)
print(f"Outbreak detected at {log['t']} days")
```
In this particular instance, testing every week resulted in discovery of the outbreak at 14 days (the last time we ran this), leading to a better outcome. But this does not always have to be the case - there is a significant amount of stochasticity here,
and sometimes the fact that we only test a fraction of the members in each round can cause us to miss an infection.
To compare the two parameter settings, we need to define a cost function. We define the __societal risk__ as
the average number of infected people per day until the first detection.
The rationale is that this corresponds to the number of _infection opportunities_: chances
for individuals to infect both co-workers and members of the external society until the outbreak
was detected. (We do not model what happens after an outbreak is detected since that would depend
strongly on the particular institution; such detection may trigger a number of mitigation mechanisms including contact tracing, isolation,
temporary closures, widespread testing, and more.)
Let's use a "violin graph" to compare the distribution of this societal risk under both testing regimes.
```python
infrequent = [sim(base,R=2,period=28,fraction_tested=1, plot=False)["risk"] for i in range(50)];
frequent = [sim(base,R=2,period=7,fraction_tested=1/4, plot=False)["risk"] for i in range(50)];
```
```python
violins([infrequent,frequent], ["100% / 28 days", "25% / week"])
```
We see that frequent testing reduces the mean societal risk,
but more than that, it also significantly drops the "tail" of the distribution - the chance that
the societal risk will be much higher than (in this case) an average of 6 infected days.
Moreover, we can also compare to the setting when the business is closed but we don't test at all.
```python
closed = [sim({**base , "T":56 },R=0,period=28,fraction_tested=0, plot=False)["risk"] for i in range(50)];
```
```python
violins([closed,infrequent,frequent], ["Business closed", "100% / 28 days", "25% / week"])
```
We see that testing $25\%$ of the population per week is not only better than testing everyone
once every four weeks, but in fact even reduces the infection opportunities compared to the baseline
where the business is closed!
Overall this conclusion holds for many (but not all!) parameters for the external infection probability
and internal reproductive number.
<small>Figure: Comparing the societal risk for the 28/4 testing policy (i.e., 25% each week) against the baseline when the business is closed, the 28/1 policy (i.e., 100% every 4 weeks), as well as the case when the business is open and no testing takes place.
For every value of the external probability and internal reproductive number we track the percentage of the 28/4 risk with respect to the comparands (less than 100% means that the 28/4 policy improves on the comparand).
</small>
### Is more frequent testing always better?
Since we saw that increasing the frequency to once a week gives a better outcome,
you might wonder if increasing it even more to daily testing would be even better.
This is sometimes the case, but not always, and regardless we often see rapidly "diminishing returns"
with increasing frequency.
```python
veryfrequent = [sim(base,R=2,period=1,fraction_tested=1/28, plot=False)["risk"] for i in range(50)];
```
```python
violins([closed,infrequent,frequent, veryfrequent], ["Business closed", "100% / 28 days", "25% / week", "3.6% / day"])
```
## Can we prove this?
The above are results from simulations, but can we _prove_ that more frequent testing always helps?
To do so, we consider a simpler setting where an initial infection arrives at time $t_0$,
and at time $t \geq t_0$ there are $C^{t-t_0}$ infected individuals.
The factor $C$ is the growth per time unit (for example, if units are months then $C=10$
might be reasonable for COVID-19, though it of course depends on how well connected the workplace
or institution is).
In this model we ignore factors such as incubation period, false negatives, and simply ask, assuming the following
question: if we have a budget to test $p$ fraction of the individuals per time unit,
and use that by testing $\epsilon p$ fraction every $\epsilon$ time units, what is
the expected number of infected people at detection?
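Before doing the calculation, here is a rough Monte Carlo sketch of this simplified model (this snippet is illustrative and not part of the original code; it assumes the first test falls at a uniformly random time after the infection arrives). It suggests numerically that, for a fixed budget $p$, splitting the tests into more frequent rounds lowers the expected number of infected people at detection.
```python
# Rough Monte Carlo sketch of the simplified exponential-spread model (illustrative only).
# Assumption: the first test occurs at a uniformly random offset within one testing period.
import numpy as np

rng = np.random.default_rng(0)

def expected_risk(C=10, p=1.0, eps=1.0, trials=20_000):
    """Average number of infected individuals at first detection,
    when an eps*p fraction is tested every eps time units."""
    risks = []
    for _ in range(trials):
        t = rng.uniform(0, eps)            # time of the first test after the infection
        while True:
            infected = C ** t              # C^t infected individuals at time t
            if rng.random() < 1 - (1 - eps * p) ** infected:
                risks.append(infected)     # outbreak detected; record the risk
                break
            t += eps                       # missed; wait for the next testing round
    return np.mean(risks)

for eps in (1.0, 0.25, 0.05):
    print(f"eps = {eps:<4}  expected infected at detection ~ {expected_risk(eps=eps):.2f}")
```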
Let's assume $t_0=0$ for simplicity. If every $\epsilon$ time units we test each individual
with probability $\epsilon p$ independently, then the probability that at time $t$ we miss all infected
individuals is $(1 - \epsilon p)^{C^t}$. The expected risk is the sum over all $n$ of the probability
we missed detection up to time $(n-1)\epsilon$, times the probability we detected at time $n\epsilon$,
times $C^{n \epsilon}$, or in other words
$$\sum_{n=0}^{\infty} \left( \prod_{i=0}^{n-1} (1 - \epsilon p)^{C^{\epsilon i}} \right) \cdot \left(1 - (1-\epsilon p)^{C^{\epsilon n}}\right) \cdot C^{\epsilon n}$$
using the formula for geometric progressions this is
$$\sum_{n=0}^{\infty} (1 - \epsilon p)^{\frac{C^{\epsilon n} - 1}{C^\epsilon - 1}} \cdot \left(1 - (1-\epsilon p)^{C^{\epsilon n}}\right) \cdot C^{\epsilon n}$$
now letting $\epsilon \rightarrow 0$, $(1-\epsilon p)^{1/\epsilon} \rightarrow \exp(-p)$,
while $\tfrac{1}{\epsilon}(C^{\epsilon} -1 ) \rightarrow \ln C$
and $1-(1-\epsilon p)^{C^{\epsilon n}} \approx pC^t\,dt$ (where $dt = \epsilon$ and $t=\epsilon n$).
Hence the total expected cost converges to the integral
$$\int_{t=0}^\infty \exp(\tfrac{-p(C^t-1)}{\ln C}) p \cdot C^{2t} dt$$
Luckily, [SymPy](https://www.sympy.org/) knows how to solve this integral:
```python
import sympy as sp
t = sp.Symbol('t')
C = sp.Symbol('C')
p = sp.Symbol('p')
# Note: these predicates are constructed but not registered as assumptions on the symbols,
# so the integral below is evaluated for general (unconstrained) p and C.
sp.Q.is_true(p>0)
sp.Q.is_true(p<1)
sp.Q.is_true(C>1)
sp.integrate(sp.exp(-p*(C**t - 1)/sp.log(C))*p*C**(2*t),t)
```
$\displaystyle \begin{cases} \frac{\left(- C^{t} p - \log{\left(C \right)}\right) e^{- \frac{p \left(C^{t} - 1\right)}{\log{\left(C \right)}}}}{p} & \text{for}\: p \neq 0 \\\begin{cases} \frac{C^{2 t} p}{2 \log{\left(C \right)}} & \text{for}\: 2 \log{\left(C \right)} \neq 0 \\p t & \text{otherwise} \end{cases} & \text{otherwise} \end{cases}$
Hence the total expected cost equals
$$-\exp\left(\tfrac{p\cdot(1-C^t)}{\ln C}\right)\left( \tfrac{\ln C}{p} + C^t \right)\Bigr|^{t=\infty}_{t=0} = 1 +\tfrac{\ln C}{p}$$
We see that with frequent enough testing, the cost scales only _logarithmically_ with $C$.
In contrast, consider the case $p=1$. If we test the population every time unit, then
since the initial infection can come at any point, the risk would be $\int_0^1 C^t dt = (C-1)/\ln C$,
which scales _nearly linearly_ with $C$.
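As a quick numerical sanity check (not in the original notebook; it assumes SciPy is available), we can evaluate the integral directly and compare the two closed-form expressions for a sample value of $C$:
```python
# Numerical check of the two risk expressions for sample values C=10, p=1 (illustrative only).
import numpy as np
from scipy.integrate import quad

C, p = 10.0, 1.0

# Frequent-testing risk: the integral derived above vs. its closed form 1 + ln(C)/p.
# (The integrand is negligible beyond small t, so a finite upper limit suffices.)
integral, _ = quad(lambda t: p * C**(2*t) * np.exp(-p*(C**t - 1)/np.log(C)), 0, 20)
print("integral:", integral, "  closed form 1 + ln(C)/p:", 1 + np.log(C)/p)

# Testing everyone once per time unit (p=1): (C-1)/ln(C).
print("infrequent-testing risk (C-1)/ln(C):", (C - 1)/np.log(C))
```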
However, for small values of $C$, it can be the case that the risk when testing everyone once per time unit
is lower than the risk in the very frequent setting. But this difference is limited:
[Wolfram Alpha](https://www.wolframalpha.com/input/?i=max+%281+%2B+ln%28x%29+-+%28x-1%29%2Fln%28x%29%29)
tells us that the maximum of $1 + \ln C - (C-1)/\ln C$ for $C>1$ is $3-e \approx 0.28$.
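We can verify this claim symbolically as well (a small check, not in the original notebook):
```python
# Check that f(C) = 1 + ln(C) - (C-1)/ln(C) has a critical point at C = e with value 3 - e.
import sympy as sp

C = sp.Symbol('C', positive=True)
f = 1 + sp.ln(C) - (C - 1)/sp.ln(C)

print(sp.simplify(sp.diff(f, C).subs(C, sp.E)))               # -> 0 (critical point at C = e)
print(sp.simplify(f.subs(C, sp.E)), float(f.subs(C, sp.E)))   # -> 3 - E, approximately 0.2817
```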
As usual, see the [paper](http://arxiv.org/abs/2007.04827) for more details. We also created a ["playground" notebook](https://colab.research.google.com/drive/1pgpVBqchciVYl2qnb6q_L33xLmXogmVo?usp=sharing) where you can test out simulations of different scenarios.
*[Dataset row metadata: `readme.ipynb` from `boazbk/seirsplus-old` at commit `9b69e913f63f70bd0cccefec206f3ef3b6564a3a`, MIT license, 185,733 bytes, 1 star.]*
# The transport-length hillslope diffuser
# The basics:
This component uses an approach similar to Davy and Lague (2009)'s equation for fluvial erosion and transport, and applies it to hillslope diffusion.
Formulation and implementation were inspired by Carretier et al. (2016); see this paper and references therein for justification.
## Theory
The elevation z of a point of the landscape (grid node) changes according to:
\begin{equation}
\frac{\partial z}{\partial t} = -\epsilon + D + U \tag{1}\label{eq:1}
\end{equation}
and we define:
\begin{equation}
D = \frac{q_s}{L} \tag{2}\label{eq:2}
\end{equation}
where $\epsilon$ is the local erosion rate [*L/T*], *D* the local deposition rate [*L/T*], *U* the uplift (or subsidence) rate [*L/T*], $q_s$ the incoming sediment flux per unit width [*L$^2$/T*] and *L* is the **transport length**.
We specify the erosion rate $\epsilon$ and the transport length *L*:
\begin{equation}
\epsilon = \kappa S \tag{3}\label{eq:3}
\end{equation}
\begin{equation}
L = \frac{dx}{1-({S}/{S_c})^2} \tag{4}\label{eq:4}
\end{equation}
where $\kappa$ [*L/T*] is an erodibility coefficient, $S$ is the local slope [*L/L*] and $S_c$ is the critical slope [*L/L*].
Thus, the elevation variation results from the difference between local rates of detachment and deposition.
The detachment rate is proportional to the local gradient. However, the deposition rate (*$q_s$/L*) depends on the local slope and the critical slope:
- when $S \ll S_c$, most of the sediment entering a node is deposited there; this is the pure diffusion case. In this case, the sediment flux $q_s$ does not include sediment eroded from above and is thus "local".
- when $S \approx S_c$, *L* tends to infinity and there is no redeposition on the node; the sediments are transferred further downstream. This behaviour corresponds to mass wasting: grains can travel a long distance before being deposited. In that case, the flux $q_s$ is "non-local" as it incorporates sediments that have both been detached locally and transited from upslope.
- for an intermediate $S$, there is a progressive transition between pure creep and "ballistic" transport of the material. This is consistent with experiments (Roering et al., 2001; Gabet and Mendoza, 2012).
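This transition can be made concrete with a small numerical sketch (not part of the original notebook): with $L = dx/(1-(S/S_c)^2)$, the quantity $dx/L = 1-(S/S_c)^2$ can be read as the fraction of the incoming flux that is redeposited within a single cell of width $dx$.
```python
# Illustrative sketch: fraction of incoming sediment flux redeposited in one cell, dx/L = 1 - (S/Sc)^2.
Sc = 0.6   # critical slope (same value as used later in this notebook)
for S in (0.01, 0.1, 0.3, 0.5, 0.59, 0.6):
    frac = 1 - (S / Sc)**2   # 1 -> purely local (diffusive) deposition, 0 -> pure downslope transfer
    print(f"S = {S:<5}  deposited fraction = {frac:.3f}")
```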
## Contrast with the non-linear diffusion model
Previous models typically use a "non-linear" diffusion model proposed by different authors (e.g. Andrews and Hanks, 1985; Hanks, 1999; Roering et al., 1999) and supported by $^{10}$Be-derived erosion rates (e.g. Binnie et al., 2007) or experiments (Roering et al., 2001). It is usually presented in the following form:
\begin{equation}
\frac{\partial z}{\partial t} = \frac{\partial q_s}{\partial x} \tag{5}\label{eq:5}
\end{equation}
\begin{equation}
q_s = \frac{\kappa' S}{1-({S}/{S_c})^2} \tag{6}\label{eq:6}
\end{equation}
where $\kappa'$ [*L$^2$/T*] is a diffusion coefficient.
This description is thus based on the definition of a flux of transported sediment parallel to the slope:
- when the slope is small, this flux refers to diffusion processes such as soil creep, rain splash or diffuse runoff
- when the slope gets closer to the critical slope, the flux increases dramatically, simulating on average the cumulative effect of mass wasting events.
Despite these conceptual differences, Eq ($\ref{eq:3}$) and ($\ref{eq:4}$) predict similar topographic evolution to the 'non-linear' diffusion equations for $\kappa' = \kappa dx$, as shown in the following example.
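A quick symbolic check of this correspondence (a sketch, not part of the original notebook; it assumes the equilibrium flux $q_s = \epsilon L$, i.e. the flux carried is the locally detached material over one transport length):
```python
# Symbolic sketch: with the equilibrium flux q_s = epsilon * L, the transport-length formulation
# reproduces the non-linear diffusion flux of Eq. (6) with kappa' = kappa * dx.
import sympy as sp

kappa, dx, S, Sc = sp.symbols('kappa dx S S_c', positive=True)

epsilon = kappa * S                 # Eq. (3): detachment rate
L = dx / (1 - (S/Sc)**2)            # Eq. (4): transport length
q_s = epsilon * L                   # equilibrium flux

q_s_nonlinear = (kappa*dx) * S / (1 - (S/Sc)**2)   # Eq. (6) with kappa' = kappa*dx
print(sp.simplify(q_s - q_s_nonlinear))            # -> 0
```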
# Example 1:
First, we import what we'll need:
```python
%matplotlib inline
import numpy as np
from matplotlib.pyplot import figure, show, plot, xlabel, ylabel, title
import pymt.models
```
➡ models: Avulsion, Plume, Sedflux3D, Subside, FrostNumber, Ku, ExponentialWeatherer, Flexure, FlowAccumulator, FlowDirectorD8, FlowDirectorDINF, FlowDirectorSteepest, FlowRouter, LinearDiffuser, OverlandFlow, SoilMoisture, StreamPowerEroder, TransportLengthHillslopeDiffuser, Vegetation, Hydrotrend, Child, Cem, Waves
Set the initial and run conditions:
```python
total_t = 2000000. # total run time (yr)
dt = 1000. # time step (yr)
nt = int(total_t // dt) # number of time steps
uplift_rate = 0.0001 # uplift rate (m/yr)
kappa = 0.001 # erodibility (m/yr)
Sc = 0.6 # critical slope
```
Instantiate the components:
The hillslope diffusion component must be used together with a flow router/director that provides the steepest downstream slope for each node, with a D4 method (creates the field *topographic__steepest_slope* at nodes).
```python
fdir = pymt.models.FlowDirectorSteepest()
tl_diff = pymt.models.TransportLengthHillslopeDiffuser()
```
```python
config_file, config_dir = fdir.setup(
grid_row_spacing=10.,
grid_column_spacing=10.,
grid_rows=100,
grid_columns=100,
clock_start=0.0,
clock_stop=total_t,
clock_step=dt,
)
fdir.initialize(config_file, config_dir)
```
```python
config_file, config_dir = tl_diff.setup(
grid_row_spacing=10.,
grid_column_spacing=10.,
grid_rows=100,
grid_columns=100,
clock_start=0.0,
clock_stop=total_t,
clock_step=dt,
erodibility=kappa,
slope_crit=Sc,
)
tl_diff.initialize(config_file, config_dir)
```
Set the boundary conditions. The **FlowDirector** component uses a variable called *boundary_condition_flag* to set its boundary conditions. A value of 1, means the boundary is open and sediment is free to leave the grid. A value of 4 means the nodes are closed and so there is no flux through them. The **TransportLengthHillslopeDiffuser** uses these boundary conditions as input so we'll set them both here.
```python
status = fdir.get_value("boundary_condition_flag").reshape((100, 100))
status[:, (0, -1)] = 1 # E and W boundaries are open
status[(0, -1), :] = 4 # N and S boundaries are closed
```
```python
fdir.set_value("boundary_condition_flag", status)
tl_diff.set_value("boundary_condition_flag", status)
```
Start with an initial surface that's just random noise.
```python
z = np.random.rand(100 * 100)
```
```python
fdir.set_value("topographic__elevation", z)
```
Get the input values for **TransportLengthHillslopeDiffuser** from the flow director.
```python
tl_diff.set_value("topographic__elevation", z)
tl_diff.set_value("flow__receiver_node", fdir.get_value("flow__receiver_node"))
tl_diff.set_value("topographic__steepest_slope", fdir.get_value("topographic__steepest_slope"))
```
Run the components for 2 Myr and trace an East-West cross-section of the topography every 100 kyr:
```python
for t in range(nt - 1):
fdir.update()
tl_diff.set_value("topographic__elevation", z)
tl_diff.set_value(
"flow__receiver_node", fdir.get_value("flow__receiver_node")
)
tl_diff.set_value(
"topographic__steepest_slope",
fdir.get_value("topographic__steepest_slope"),
)
tl_diff.update()
z = tl_diff.get_value("topographic__elevation").reshape((100, 100))
z[1:-1, 1:-1] += uplift_rate * dt # add the uplift
fdir.set_value("topographic__elevation", z)
# add some output to let us see we aren't hanging:
if t % 100 == 0:
print(t * dt)
# plot east-west cross-section of topography:
x_plot = range(0, 1000, 10)
z_plot = z[1, :]
figure('cross-section')
plot(x_plot, z_plot)
```
And plot final topography:
```python
tl_diff.quick_plot("topographic__elevation")
```
# Example 2
In this example, we show that when the slope is steep ($S \geq S_c$), the transport-length hillslope diffusion simulates mass wasting, with long transport distances.
First, we create a grid: the western half of the grid is flat at 0m of elevation, the eastern half is a 45-degree slope.
```python
fdir = pymt.models.FlowDirectorSteepest()
tl_diff = pymt.models.TransportLengthHillslopeDiffuser()
```
```python
total_t = 1000000. # total run time (yr)
dt = 1000. # time step (yr)
nt = int(total_t // dt) # number of time steps
kappa = 0.001 # erodibility (m / yr)
Sc = 0.6 # critical slope
```
```python
grid_params = {
"grid_row_spacing": 10.,
"grid_column_spacing": 10.,
"grid_rows": 100,
"grid_columns": 100,
}
clock_params = {
"clock_start": 0.0,
"clock_stop": total_t,
"clock_step": dt,
}
```
```python
config_file, config_dir = fdir.setup(**grid_params, **clock_params)
fdir.initialize(config_file, config_dir)
```
```python
config_file, config_dir = tl_diff.setup(
**grid_params,
**clock_params,
erodibility=kappa,
slope_crit=Sc,
)
tl_diff.initialize(config_file, config_dir)
```
As before, set the boundary conditions for both components.
```python
status = fdir.get_value("boundary_condition_flag").reshape((100, 100))
status[:, (0, -1)] = 1 # E and W boundaries are open
status[(0, -1), :] = 4 # N and S boundaries are closed
```
```python
fdir.set_value("boundary_condition_flag", status)
tl_diff.set_value("boundary_condition_flag", status)
```
In this example, we'll use a different initial surface: a dipping plane.
```python
grid = fdir.var_grid("topographic__elevation")
n_vals = fdir.grid_size(grid)
x, y = fdir.grid[0].node_x, fdir.grid[0].node_y
```
```python
z = np.zeros(n_vals)
z[x > 500] = x[x < 490] / 10.0
```
To make sure we've set things up correctly, plot the initial topography.
```python
fdir.set_value("topographic__elevation", z)
fdir.quick_plot("topographic__elevation")
```
Now time step through the model, plotting things along the way.
```python
for t in range(1000):
fdir.update()
tl_diff.set_value("topographic__elevation", fdir.get_value("topographic__elevation"))
tl_diff.set_value("flow__receiver_node", fdir.get_value("flow__receiver_node"))
tl_diff.set_value("topographic__steepest_slope", fdir.get_value("topographic__steepest_slope"))
tl_diff.update()
fdir.set_value("topographic__elevation", tl_diff.get_value("topographic__elevation"))
# add some output to let us see we aren't hanging:
if t % 100 == 0:
print(t * dt)
z = tl_diff.get_value("topographic__elevation").reshape((100, 100))
# plot east-west cross-section of topography:
x_plot = range(0, 1000, 10)
z_plot = z[1, :]
figure('cross-section')
plot(x_plot, z_plot)
```
```python
fdir.quick_plot("topographic__elevation")
```
The material is diffused from the top and along the slope and it accumulates at the bottom, where the topography flattens.
# Example 3
As a comparison, the following code uses linear diffusion on the same slope. Instead of using the **TransportLengthHillslopeDiffuser** component, we'll swap in the **LinearDiffuser** component. Everything else will be pretty much the same.
```python
fdir = pymt.models.FlowDirectorSteepest()
diff = pymt.models.LinearDiffuser()
```
Setup and initialize the models.
```python
config_file, config_dir = fdir.setup(**grid_params, **clock_params)
fdir.initialize(config_file, config_dir)
```
```python
config_file, config_dir = diff.setup(
**grid_params,
**clock_params,
linear_diffusivity=0.1,
)
diff.initialize(config_file, config_dir)
```
Set boundary conditions.
```python
status = fdir.get_value("boundary_condition_flag").reshape((100, 100))
status[:, (0, -1)] = 1 # E and W boundaries are open
status[(0, -1), :] = 4 # N and S boundaries are closed
```
```python
fdir.set_value("boundary_condition_flag", status)
diff.set_value("boundary_condition_flag", status)
```
Set the initial topography.
```python
grid = fdir.var_grid("topographic__elevation")
n_vals = fdir.grid_node_count(grid)
x, y = fdir.grid[0].node_x, fdir.grid[0].node_y
```
```python
z = np.zeros(n_vals)
z[x > 500] = x[x < 490] / 10.0
```
```python
fdir.set_value("topographic__elevation", z)
fdir.quick_plot("topographic__elevation")
```
Run the model!
```python
for t in range(1000):
fdir.update()
diff.set_value("topographic__elevation", fdir.get_value("topographic__elevation"))
diff.update()
fdir.set_value("topographic__elevation", diff.get_value("topographic__elevation"))
# add some output to let us see we aren't hanging:
if t % 100 == 0:
print(t * dt)
z = diff.get_value("topographic__elevation").reshape((100, 100))
# plot east-west cross-section of topography:
x_plot = range(0, 1000, 10)
z_plot = z[1, :]
figure('cross-section')
plot(x_plot, z_plot)
```
```python
fdir.quick_plot("topographic__elevation")
```
```python
```
*[Dataset row metadata: `nb/transport_length_hillslope_diffuser.ipynb` from `mcflugen/pymt-live` / `csdms/pymt-live` at commit `dcfc430a4953ec33bcf3c6efce0fe4b35d3358a9`, MIT license, 97,828 bytes.]*
# Electronics Demos
See also some of the circuit diagram demos in the *3.2.0 Generating Embedded Diagrams.ipynb* notebook.
This notebook demonstrates how we can use a range of techniques to script the creation of electrical circuit diagrams, as well as creating models of circuits that can be rendered as a schematic circuit diagram and analysed as a computational model. This means we can:
- create a model of a circuit as a computational object through a simple description language;
- render a schematic diagram of the circuit from the model;
- display analytic equations describing the model that represent particular quantities such as currents and voltages as a function of component variables;
- automatically calculate the values of voltages and currents from the model based on provided component values.
The resulting document is self-standing in terms of creating the media assets that are displayed from within the document itself. In addition, analytic treatments and exact calculations can be performed on the same model, which means that diagrams, analyses and calculations will always be consistent, automatically derived as they are from the same source. This compares to a traditional production route where the different components of the document may be created independently of each other.
A full treatment would require a notebook environment with various notebook extensions enabled so that things like code cells could be hidden, or generated equations and diagrams could be embedded directly in markdown cells.
Cells could also be annotated with metadata identifying them as cells to be used in a slideshow/presentation style view using the RISE notebook extension. (*You could do this yourself now, it's just taking me some time working through all the things that are possible and actually marking the notebook up!*)
## `lcapy`
`lcapy` is a linear circuit analysis package that can be used to describe, display and analyse the behaviour of a wide range of linear analogue electrical circuits.
The *3.2.0 Generating Embedded Diagrams.ipynb* notebook demonstrates how electrical circuit diagrams can be written using the `circuitikz` *TeX* package. Among other things, `lcapy` can generate circuit diagrams using `circuitikz` scripts generated from a simpler Python grammar.
`lcapy` provides a far more powerful approach, by using a circuit description that can be used to generate a circuit diagram as the basis for a wide range of analyses. For example, `lcapy` can be used to describe equivalent circuits (such as Thevenin or Norton equivalent circuits), or generate Bode plots.
*There are some further examples not yet featuring in these Azure notebooks linked to from [An Easier Approach to Electrical Circuit Diagram Generation – lcapy](https://blog.ouseful.info/2018/08/07/an-easier-approach-to-electrical-circuit-diagram-generation-lcapy/).*
```python
%%capture
try:
%load_ext tikz_magic
except:
!conda config --add channels conda-forge
!conda install -y imagemagick
!pip install --user git+https://github.com/innovationOUtside/ipython_magic_tikz
```
```python
%%capture
try:
import lcapy
except:
!pip install git+https://github.com/mph-/lcapy.git
```
Let's see how far we can get doing a simple re-representation of an OpenLearn module on electronics.
## OpenLearn Example
*The following section is a reworking of http://www.open.edu/openlearn/science-maths-technology/introduction-electronics/content-section-3.1 .*
```python
import lcapy
from lcapy import Circuit
from IPython.display import display, Latex
%matplotlib inline
```
### Voltage dividers
Voltage dividers are widely used in electronic circuits to create a reference voltage, or to reduce the amplitude of a signal. The figure below shows a voltage divider. The value of $V_{out}$ can be calculated from the values of $V_S$, $R_1$ and $R_2$.
```python
#We can create a schematic for the voltage divider using lcapy
#This has the advantage that circuit description is also a model
#The model can be analysed and used to calculate voltages and currents, for example,
# across components if component values and the source voltage are defined
#Figure: A voltage divider circuit
sch='''
VS 1 0 ; down
W 1 2 ; right, size=2
R1 2 3 ; down
R2 3 4; down
W 3 5; right
P1 5 6; down,v=V_{out}
W 4 6; right
W 4 0; left
'''
#Demonstrate that we can write the description to a file
fn="voltageDivider.sch"
with open(fn, "w") as text_file:
text_file.write(sch)
# and then create the circuit model from the (persisted) file
cct = Circuit(fn)
```
```python
#Draw the circuit diagram that corresponds to the schematic description
cct.draw(style='american', draw_nodes=False, label_nodes=False) #american, british, european
#Draw function is defined in https://github.com/mph-/lcapy/blob/master/lcapy/schematic.py
#The styles need tweaking to suit OU convention - this requires a minor patch to lcapy
#Styles defined in https://github.com/mph-/lcapy/blob/master/lcapy/schematic.py#Schematic.tikz_draw
```
In the first instance, let's assume that $V_{out}$ is not connected to anything (for voltage dividers it is always assumed that negligible current flows out of the output terminal). This means that, according to Kirchhoff's first law, the current flowing through $R_1$ is the same as the current flowing through $R_2$. Ohm's law allows you to calculate the current through $R_1$: it is the potential difference across that resistor, divided by its resistance. Since the voltage is distributed over two resistors, the potential drop over $R_1$ is $V_{R_1}=V_S - V_{out}$.
```python
#The equation at the end of the last paragraph is written explicitly as LateX
# But we can also analyse the circuit using lcapy to see what the equation *should* be
#The voltage across R_2, V_out, is given as:
cct.R1.v
#We can't do anything about the order of the variables in the output expression, unfortunately
#It would be neater if sympy sorted fractional terms last but it doesn't...
#We can get an expression for the output voltage, Vout, or its calculated value in a couple of ways:
#- find the voltage across the appropriately numbered nodes
# (the node numbers can be displayed on the schematic if required:
# simply set label_nodes=True in the draw() statement.)
cct.Voc(3,4)['t']
#- the output voltage can also be obtained by direct reference to the appropriate component:
cct.R2.v
#sympy is a symbolic maths package
from sympy import Symbol, Eq
#If we add .expr to the voltages, we can get the sympy representation of voltage and current equations
# that are automatically derived from the model.
vout_expr=cct.R2.v.expr
v_r1_expr=cct.R1.v.expr
#I don't know how to get the symbols from the circuit as sympy symbols so create them explicitly
vout=Symbol('V_out')
v_r1=Symbol("V_{R_1}")
#Working with sympy symbols, we can perform a substitution if expressions match exactly
#In this case, we can swap in V_out for the expression returned from the analysis
# to give us an expression in the form we want
Eq( v_r1, v_r1_expr.subs(vout_expr,vout) )
#This is rendered below - and created through symbolic maths analysis of the circuit model.
```
*The following expressions are hand written using LaTeX*
The current through $R_1$ ($I_{R_1}$) is given by
$I_{R_1}=\displaystyle\frac{(V_S-V_{out})}{R_1}$
Similarly, the current through $R_2$ is given by
$I_{R_2}=\displaystyle\frac{V_{out}}{R_2}$
Kirchhoff's first law tells you that $I_{R_1}=I_{R_2}$, and therefore
$\displaystyle\frac{V_{out}}{R_2}=\frac{(V_S-V_{out})}{R_1}$
Multiplying both sides by $R_1$ and by $R_2$ gives
$R_1V_{out}=R_2(V_S-V_{out})$
Then multiplying out the brackets on the right-hand side gives
$R_1V_{out}=R_2V_S-R_2V_{out}$
This can be rearranged to
$R_1V_{out}+R_2V_{out}=R_2V_S$
giving
$(R_1+R_2)V_{out}=R_2V_S$
and therefore the fundamental result is obtained:
$V_{out}=\displaystyle\frac{R_2V_S}{(R_1+R_2)}$
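This chain of algebra can also be checked symbolically (a small sketch, not part of the original notebook) by solving $I_{R_1}=I_{R_2}$ directly:
```python
# Symbolic check of the voltage-divider result, starting from I_R1 = I_R2.
import sympy as sp

V_S, V_out, R_1, R_2 = sp.symbols('V_S V_out R_1 R_2', positive=True)

balance = sp.Eq((V_S - V_out)/R_1, V_out/R_2)   # I_R1 = I_R2
sp.solve(balance, V_out)[0]                     # -> R_2*V_S/(R_1 + R_2)
```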
```python
#We can find this from quantities we have derived through analysis of the presented circuit
Eq(vout,vout_expr)
#The following equation is automatically derived.
#Note that it could be embedded in the markdown cell if we enable the Python-Markdown notebook extension
```
```python
#It's not obvious how to get the same expression for each of the currents from sympy
#Could we force lcapy to use the V_out value somehow?
#sympy can be blocked from simplifying expressions using evaluate=False
# but I don't think we can pass this parameter using lcapy?
#In passing, the .expr rendering looks nicer in notebooks - does it use \displaystyle on the fraction?
Eq(Symbol('I_{R_1}'),cct.R1.i.expr)
#The simplified version is correct but not very intuitive...
#And doesn't help the flow of the materials... but it might be useful later on?
#The following equation is generated by the symbolic analysis...
```
```python
#We get the following from the circuit analysis, as above...
cct.R2.i.expr
#We note that the circuit analysis returns equal expressions for I_R_1 and I_R_2
# which gives some sort of reinforcement to the idea of Kirchoff's Law...
#The following equation is generated by the symbolic analysis...
```
#### Exercise
Suppose $V_S= 24 V$ and $R_2 = 100\Omega$. You want $V_{out} = 6 V$. What value of $R_1$ do you need?
#### Answer
Rearranging the equation for $V_{out}$ gives
$V_{out}(R_1+R_2)=R_2V_S$
and therefore
$(R_1+R_2)=\displaystyle\frac{R_2V_S}{V_{out}}$
which means the equation for $R_1$ is
$R_1=\displaystyle\frac{R_2V_S}{V_{out}}-R_2$
Substituting in the values given,
$R_1=\displaystyle\frac{100\Omega \times 24V}{6V}-100\Omega = 400\Omega-100\Omega=300\Omega$
```python
#We essentially want to solve the following
#Note that the expression is derived automatically from analysis of the circuit provided
Eq(vout,vout_expr)
#We don't necessarily know / can't control what the form of the present solution will be though?
#The following equation is generated by the symbolic analysis...
```
```python
#Anyway... we can start to substitute values into the expression...
from sympy import sympify
#This is clunky - is there a proper way of substituting values into lcapy expressions?
Eq(6,sympify(str(vout_expr)).subs([('V_S',24), ('R_2',100)]))
#The following equation is generated by the symbolic analysis...
```
```python
#Rearranging, we need to solve the following for R_1
Eq(vout_expr-vout,0)
#The following equation is generated by the symbolic analysis...
```
```python
#sympy can solve such equations for us
from sympy import solve
#Solve for R_1 - this gives us an alternative form of the result above
Eq(Symbol('R_1'),solve(sympify(str(vout_expr-vout)),'R_1')[0])
#The following equation is generated by the symbolic analysis...
```
```python
#To solve the equation, we can substitute values into the sympy expression as follows
#solve(sympify(str(vout_expr-vout)).subs([('V_S',24), ('R_2',100),('V_out',6)]),'R_1')[0]
```
```python
Eq(Symbol('R_1'),solve(sympify(str(vout_expr-vout)),'R_1')[0].subs([('V_S',24),
('R_2',100),
('V_out',6)]))
#A key point about this is that we can script in different component values and display the correct output
#We should be able to use the python-markdown extension to render py variables inside markdown cells
# but the extension seems to be conflicting with something else in this notebook?
#If it was working, we should be able to write something like the following in a markdown cell:
# For R_2={{R2=100;R2}}, V_S={{Vs=20;Vs}} and V_out={{Vout=5;Vout}},
# we need R1={{solve( ..., 'R_1').subs([('V_S',Vs),('R_2',R2),('V_out',Vout)])}}.
#The following result is calculated by the symbolic analysis...
```
```python
#We can also do partial solutions
Vs=20; Vout=5; R2 = Symbol('R_2')
R1=solve(sympify(str(vout_expr-vout)),'R_1')[0].subs([('V_S',Vs),('V_out',Vout)])
print('For V_S={Vs}V and V_out={Vout}V, we need R1={R1}.'.format(R2=R2,Vs=Vs,Vout=Vout,R1=R1))
```
For V_S=20V and V_out=5V, we need R1=3*R_2.
```python
#Alternatively, we can create a function to solve for any single missing value
#The following will calculate the relevant solution
def soln(values=None):
if values is None:
values={'V_S':24, 'R_1':'', 'R_2':100, 'V_out':6}
outval=[v for v in values if not values[v]]
invals=[(v,values[v]) for v in values if values[v] ]
if len(outval)!=1 or len(invals)!=3:
return 'oops'
outval=outval[0]
print(invals)
return 'Value of {} is {}'.format(outval,
solve(sympify(str(vout_expr-vout)).subs(invals),outval)[0])
```
```python
soln()
```
[('V_out', 6), ('V_S', 24), ('R_2', 100)]
'Value of R_1 is 300'
```python
soln({'V_S':24,'R_2':'', 'R_1':300,'V_out':6})
```
[('R_1', 300), ('V_out', 6), ('V_S', 24)]
'Value of R_2 is 100'
```python
#We can also explore a simple thing to check the value from a circuit analysis
def cct1(V='24',R1='100',R2='100'):
R1 = '' if R1 and float(R1) <=0 else R1
sch='''
VS 1 0 {V}; down
W 1 2 ; right, size=2
R1 2 3 {R1}; down
R2 3 4 {R2}; down
W 3 5; right, size=2
P1 5 6; down,v=V_{{out}}
W 4 6; right, size=2
W 4 0; left
'''.format(V=V,R1=R1,R2=R2)
cct = Circuit()
cct.add(sch)
cct.draw(label_nodes=False)
#The output voltage, V_out is the voltage across R2
txt='The output voltage, $V_{{out}}$ across $R_2$ is {}V.'.format(cct.R2.v if R1 else V)
display(Latex(txt))
return
```
```python
cct1()
```
```python
#It's trivial to make an interactive widget built around the previous function
#This then lets us select R and V values and calculate the result automatically
from ipywidgets import interact_manual
@interact_manual
def i_cct1(V='24',R1='',R2='100'):
cct1(V=V,R1=R1,R2=R2)
```
interactive(children=(Text(value='24', description='V'), Text(value='', description='R1'), Text(value='100', d…
```python
# We could also plot V_out vs R_1 for given V_S and R_2?
```
### The Wheatstone bridge
*http://www.open.edu/openlearn/science-maths-technology/introduction-electronics/content-section-3.2*
Originally developed in the nineteenth century, a Wheatstone bridge provided an accurate way of measuring resistances without being able to measure current or voltage values, but only being able to detect the presence or absence of a current. A simple galvanometer, as illustrated in the figure below, could show the absence of a current through the Wheatstone bridge in either direction. The long needle visible in the centre of the galvanometer would deflect to one side or the other if any current was detected, but show no deflection in the absence of a current.
*An early D'Arsonval galvanometer showing magnet and rotating coil ([Wikipedia](https://commons.wikimedia.org/wiki/File:A_moving_coil_galvanometer._Wellcome_M0016397.jpg))*
The figures below shows two equivalent circuits made of four resistors forming a Wheatstone bridge. Its purpose here is to show whether there is any current flowing between $V_{left}$ and $V_{right}$.
```python
%load_ext tikz_magic
```
```python
%%tikz -p circuitikz -s 0.4
%The following creates two diagrams side by side
%The script could be improved by specifying some parameters to identify component sizes
% and calculate node locations relatively.
%Select the resistor style
\ctikzset{resistor = european}
%Create the left hand diagram
\draw (0,1) to[R, l=$R_2$] (-2,3) to[R, l=$R_1$] (0,5) -- (0,6);
%can't get the R_2 and R_4 labels onto the other side of the resistor?
\draw (0,1) to[R, l=$R_4$] (2,3) to[R, l=$R_3$] (0,5);
\draw(-2,3)to[ammeter] (2,3);
\draw (0,1) to (0,0) node[ground]{};
\draw (0,6) node[above] {$V_s$};
\node[circle,draw=black, fill=black, inner sep=0pt,minimum size=3pt,label=left:{$V_{left}$}] (vl2) at (-2,3) {};
\node[circle,draw=black, fill=black, inner sep=0pt,minimum size=3pt,label=right:{$V_{right}$}] (vr2) at (2,3) {};
\node[circle,draw=black, fill=black, inner sep=0pt,minimum size=3pt,label=right:{}] (g) at (0,1) {};
\node[circle,draw=black, fill=black, inner sep=0pt,minimum size=3pt,label=right:{}] (g) at (0,5) {};
%Create the right hand diagram
\begin{scope}[xshift=7cm]
\draw (0,1)--(-2,1) to[R, l=$R_2$] (-2,3) to[R, l=$R_1$] (-2,5) -- (0,5)--(0,6);
\draw (0,1)--(2,1) to[R, l_=$R_4$] (2,3) to[R, l_=$R_3$] (2,5)--(0,5);
\draw (0,1) to (0,0) node[ground]{};
\draw(-2,3)to[ammeter] (2,3);
\draw (0,6) node[above] {$V_s$};
\node[circle,draw=black, fill=black, inner sep=0pt,minimum size=3pt,label=left:{$V_{left}$}] (vl2) at (-2,3) {};
\node[circle,draw=black, fill=black, inner sep=0pt,minimum size=3pt,label=right:{$V_{right}$}] (vr2) at (2,3) {};
\node[circle,draw=black, fill=black, inner sep=0pt,minimum size=3pt,label=right:{}] (g2) at (0,1) {};
\node[circle,draw=black, fill=black, inner sep=0pt,minimum size=3pt,label=right:{}] (g) at (0,5) {};
\end{scope}
```
The bridge is said to be balanced (that is, no current flows through the bridge and the needle of the galvanometer shows no deflection) if the voltages $V_{left}$ and $V_{right}$ are equal.
It can be shown that the bridge is balanced if, and only if, $\frac{R_1}{R_2}=\frac{R_3}{R_4}$, as follows.
When $V_{left}-V_{right}=0$ then $V_{left}=V_{right}$. Then the Wheatstone bridge can be viewed as two voltage dividers, $R_1$ and $R_2$ on the left and $R_3$ and $R_4$ on the right. Applying the voltage divider equation gives $V_{left}=\frac{R_2}{(R_1+R_2)}V_S$ and $V_{right}=\frac{R_4}{(R_3+R_4)}V_S$.
So
$\displaystyle\frac{R_2}{(R_1+R_2)}=\frac{R_4}{(R_3+R_4)}$
and
$R_2(R_3+R_4)=R_4(R_1+R_2)$
Multiplying out the brackets gives
$R_2R_3+R_2R_4=R_4R_1+R_4R_2$
which simplifies to
$R_2R_3=R_4R_1$
and
$\displaystyle\frac{R_3}{R_4}=\frac{R_1}{R_2}$
So, if $R_4$ were unknown, $R_1$, $R_2$ and $R_3$ could be chosen so that the needle of a galvanometer showed no deflection due to the current. Then
$R_4=\displaystyle\frac{R_2 \times R_3}{R_1}$
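The balance condition can also be confirmed symbolically (a small sketch, not part of the original notebook):
```python
# Symbolic check of the Wheatstone balance condition: solve V_left = V_right for R_4.
import sympy as sp

V_S, R_1, R_2, R_3, R_4 = sp.symbols('V_S R_1 R_2 R_3 R_4', positive=True)

balance = sp.Eq(R_2/(R_1 + R_2)*V_S, R_4/(R_3 + R_4)*V_S)   # V_left = V_right
sp.solve(balance, R_4)[0]                                   # -> R_2*R_3/R_1
```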
```python
#We can actually demonstrate the current flow with some lcapy analysed examples
#We can get an indication of the sign of the current across the ammeter in the following way
from numpy import sign
sign(-1),sign(0),sign(1)
```
(-1, 0, 1)
```python
%%capture
#This package lets us work with SI units, suitably quantified
!pip install quantiphy
```
```python
#Define a function that creates - and analyses - a Wheatstone bridge circuit
#We'll model the ammeter as a low value resistor
from quantiphy import Quantity
def wheatstone(R1=10,R2=10, R3=1e6,R4=1e6, diag=True):
sch='''
W 1 0; down
W 1 2; left
R2 2 3 {R2}; up
R1 3 4 {R1}; up
W 4 5; right
W 1 6; right
R4 6 7 {R4}; up
R3 7 8 {R3}; up
W 8 5; left
RA 7 3 1e-6; left
V 9 5 dc 10; down
W 9 10; right, size=3
RL 10 11 1e6; down
W 11 0; left, size=3
'''.format(R1=R1,R2=R2,R3=R3,R4=R4)
#We model the ammeter as a low value resistor
_cctw = Circuit()
_cctw.add(sch)
if diag:
_cctw.draw(label_nodes=False, draw_nodes=False, style='european')
def _qR(R):
return '$'+Quantity(R, '\Omega').render()+'$'
display(Latex('Resistor values: R1: {}, R2: {}, R3: {}, R4:{}'.format(_qR(R1),_qR(R2),_qR(R3),_qR(R4))))
display(Latex('$\\frac{{R1}}{{R2}}$ = {}, $\\frac{{R3}}{{R4}}$ = {}'.format(R1/R2,R3/R4)))
signer = '=' if (R1/R2)==(R3/R4) else '<' if (R1/R2)<(R3/R4) else '>'
display(Latex('$\\frac{{R1}}{{R2}}$ {} $\\frac{{R3}}{{R4}}$'.format(signer)))
display(Latex('Sign of current across $R_A$: {}'.format(sign(_cctw.RA.i.n(2)))))
return _cctw
```
```python
cctw=wheatstone()
```
```python
wheatstone(R1=5,diag=False);
#The display breaks in nbpreview? The < is treated as an HTML open bracket maybe?
```
Resistor values: R1: $5 \Omega$, R2: $10 \Omega$, R3: $1 M\Omega$, R4:$1 M\Omega$
$\frac{R1}{R2}$ = 0.5, $\frac{R3}{R4}$ = 1.0
$\frac{R1}{R2}$ < $\frac{R3}{R4}$
Sign of current across $R_A$: 1
```python
wheatstone(R3=5e5,diag=False);
```
Resistor values: R1: $10 \Omega$, R2: $10 \Omega$, R3: $500 k\Omega$, R4:$1 M\Omega$
$\frac{R1}{R2}$ = 1.0, $\frac{R3}{R4}$ = 0.5
$\frac{R1}{R2}$ > $\frac{R3}{R4}$
Sign of current across $R_A$: -1
```python
```
```python
```
```python
```
### FRAGMENTS - bits and pieces I've found out along the way that may be useful later
```python
#It's easy enough to pass in values to a defined circuit and calculate a desired component value
#This means we can let students check their own answers...
#The following renders the circuit with specified component values and then calculates and displays V_out
#This approach can also be used to generate assessment material
# for activities that take the same form year on year, for example, but use different values.
#The wrapper function could also be extended to allow users to enter 3 of 4 values and calculate the fourth.
cctx=cct1(V=24,R2=100,R1=100)
```
##### Example of a step response
Different inputs can be applied to a circuit - which means we can use a step input, for example, and then analyse / calculate the step response.
```python
from lcapy import Circuit, j, omega
cct = Circuit()
cct.add("""
Vi 1 0_1 step 20; down
R1 1 2; right, size=1.5
C1 2 0; down
W 0_1 0; right
W 0 0_2; right, size=0.5
P1 2_2 0_2; down
W 2 2_2;right, size=0.5""")
cct.draw()
```
```python
cct.C1.v
```
$\left(20 - 20 e^{- \frac{t}{C_{1} R_{1}}}\right) u\left(t\right)$
```python
cct.R1.v
```
$12$
```python
cct.C1.i
```
$\frac{20 u\left(t\right)}{R_{1}} e^{- \frac{t}{C_{1} R_{1}}}$
```python
cct.R1.I.s
```
$\frac{20}{R_{1} \left(s + \frac{1}{C_{1} R_{1}}\right)}$
```python
#s-domain voltage across R1
cct.R1.V.s
```
$\frac{20 s^{2}}{s^{3} + \frac{s^{2}}{C_{1} R_{1}}}$
```python
#time domain voltage across R1
cct.R1.v
```
$20 e^{- \frac{t}{C_{1} R_{1}}} u\left(t\right)$
```python
cct.s_model().draw()
```
```python
#impedance between nodes 2 and 0
cct.impedance(2, 0)
```
$\frac{1}{C_{1} \left(s + \frac{1}{C_{1} R_{1}}\right)}$
```python
#open circuit voltage between nodes 2 and 0
cct.Voc(2, 0).s
```
$\frac{20}{C_{1} R_{1} \left(s^{2} + \frac{s}{C_{1} R_{1}}\right)}$
```python
#equiv cct between nodes 2 and 0
cct.thevenin(2, 0)
```
$\mathrm{V}(\frac{20}{C_{1} R_{1} \left(s^{2} + \frac{s}{C_{1} R_{1}}\right)}) + \mathrm{Z}(\frac{1}{C_{1} \left(s + \frac{1}{C_{1} R_{1}}\right)})$
```python
cct.thevenin(2, 0).Z
```
$\frac{1}{C_{1} \left(s + \frac{1}{C_{1} R_{1}}\right)}$
```python
cct.thevenin(2, 0).Z.latex()
```
'\\frac{1}{C_{1} \\left(s + \\frac{1}{C_{1} R_{1}}\\right)}'
```python
cct.thevenin(2, 0).Voc.s
```
$\frac{20}{C_{1} R_{1} \left(s^{2} + \frac{s}{C_{1} R_{1}}\right)}$
```python
cct.norton(2,0)
```
$\mathrm{I}(\frac{20}{R_{1} s}) | \mathrm{Y}(C_{1} \left(s + \frac{1}{C_{1} R_{1}}\right))$
```python
cct.norton(2,0).Z
```
$\frac{1}{C_{1} \left(s + \frac{1}{C_{1} R_{1}}\right)}$
```python
#Y is reciprocal of Z
cct.norton(2,0).Y
```
$C_{1} \left(s + \frac{1}{C_{1} R_{1}}\right)$
```python
cct.norton(2,0).Isc.s
```
$\frac{20}{R_{1} s}$
```python
#Add component values
from lcapy import Circuit
cct = Circuit()
cct.add("""
Vi 1 0_1 ; down
R1 1 2 4.7e3; right, size=1.5
C1 2 0 47e-9; down
W 0_1 0; right
W 0 0_2; right, size=0.5
P1 2_2 0_2; down
W 2 2_2;right, size=0.5""")
cct.draw()
```
```python
cct.Voc(2,0).s
```
$0$
```python
from lcapy import Vdc, R
c = Vdc(10)+R(100)
c.Voc.dc
```
$10$
```python
c.Isc.dc
```
$\frac{1}{10}$
```python
from numpy import logspace, linspace, pi
from lcapy import Vac, Vstep, R, C, L, sin, t, s , omega, f
n = Vstep(20) + R(4.7e3) + C(4.7e-9)
n.draw()
vf =logspace(-1, 3, 4000)
n.Isc.frequency_response().plot(vf, log_scale=True);
```
```python
type(n)
```
lcapy.oneport.Ser
```python
#Look like we can pass stuff in to the expression?
#so for a first order low pass filter eg https://web.stanford.edu/~boyd/ee102/conv_demo.pdf
X=(1/(1+s/500))(j * 2 * pi * f)
fv = logspace(-2, 4, 400)
X.plot(fv, log_scale=True)
X.phase_degrees.plot(fv,log_scale=True);
```
```python
cct.Voc(2,0).s
```
$\frac{200000000}{2209 s^{2} + 10000000 s}$
```python
X=cct.Voc(2,0).s(j * 2 * pi * f)
fv = logspace(-2, 4, 400)
X.plot(fv, log_scale=True)
X.phase_degrees.plot(fv,log_scale=True);
```
```python
from numpy import logspace
from lcapy import pi, f, Hs, H, s, j
#HOw might we relate this to circuit description?
H = Hs((s - 2) * (s + 3) / (s * (s - 2 * j) * (s + 2 * j)))
A = H(j * 2 * pi * f)
fv = logspace(-3, 6, 400)
A.plot(fv, log_scale=True)
A.phase_degrees.plot(fv,log_scale=True);
```
```python
A
```
$- \frac{j \left(2 j \pi f - 2\right) \left(2 j \pi f + 3\right)}{2 \pi f \left(2 j \pi f - 2 j\right) \left(2 j \pi f + 2 j\right)}$
```python
H
```
$\frac{\left(s - 2\right) \left(s + 3\right)}{s \left(s - 2 j\right) \left(s + 2 j\right)}$
```python
H = (cct.R1.V('s') / cct.Vi.V('s')).simplify()
H
```
$\frac{C_{1} R_{1} s}{C_{1} R_{1} s + 1}$
```python
##fragments
```
```python
#schemdraw https://cdelker.bitbucket.io/SchemDraw/SchemDraw.html
```
```python
#online examples - tangentially relevant as examples of what can be done elsewhere
#- https://www.circuitlab.com/
#- https://github.com/willymcallister/circuit-sandbox
```
```python
```
*[Dataset row metadata: `Getting Started With Notebooks/3.6.0 Electronics.ipynb` from `ouseful-demos/getting-started-with-notebooks` at commit `a1419706a899eb49f86ff27e03c476d12da598f4`, MIT license, 305,215 bytes.]*
# Calculating Spin-Weighted Spherical Harmonics
## Authors: Zach Etienne & Brandon Clark
[comment]: <> (Abstract: TODO)
**Notebook Status:** <font color='green'><b> Validated </b></font>
**Validation Notes:** This tutorial notebook has been confirmed to be self-consistent with its corresponding NRPy+ module, as documented [below](#code_validation). In addition, its results have been validated against a [trusted Mathematica notebook](https://demonstrations.wolfram.com/versions/source.jsp?id=SpinWeightedSphericalHarmonics&version=0012).
### NRPy+ Source Code for this module: [SpinWeight_minus2_SphHarmonics/SpinWeight_minus2_SphHarmonics.py](../edit/SpinWeight_minus2_SphHarmonics/SpinWeight_minus2_SphHarmonics.py)
## Introduction:
This tutorial notebook defines a Python function for computing spin-weighted spherical harmonics using Sympy. Spin-weight $s=-2$ spherical harmonics are the natural basis for decomposing gravitational wave data.
The tutorial contains code necessary to validate the resulting expressions assuming $s=-2$ against a trusted Mathematica notebook (validated for all $(\ell,m)$ up to $\ell=8$). Finally it outputs a C code capable of computing $_{-2}Y_{\ell m} (\theta, \phi)$ for all $(\ell,m)$ for $\ell=0$ up to `maximum_l`.
<a id='toc'></a>
# Table of Contents
$$\label{toc}$$
This notebook is organized as follows:
1. [Step 1](#initializenrpy): Initialize needed Python/NRPy+ modules
1. [Step 2](#gbf): Defining the Goldberg function
1. [Step 3](#math_code_validation): Code Validation against Mathematica script
1. [Step 4](#ccode): Generate C-code function for computing s=-2 spin-weighted spherical harmonics, using NRPy+
1. [Step 5](#code_validation): Code Validation against SpinWeight_minus2_SphHarmonics/SpinWeight_minus2_SphHarmonics NRPy+ module
1. [Step 6](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file
<a id='initializenrpy'></a>
# Step 1: Initialize needed Python/NRPy+ modules [Back to [top](#toc)\]
$$\label{initializenrpy}$$
Let's start by importing all the needed modules from NRPy+:
```python
# Step 1: Initialize needed Python/NRPy+ modules
from outputC import outputC # NRPy+: Core C code output module
import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends
import os, sys # Standard Python modules for multiplatform OS-level functions
# Step 1.a: Set maximum l to which we will validate the spin-weighted spherical harmonics with s=-2:
maximum_l = 4 # Note that we have validated against Mathematica up to and including l=8 -- perfect agreement.
```
<a id='gbf'></a>
# Step 2: Defining the Goldberg function [Back to [top](#toc)\]
$$\label{gbf}$$
One way to calculate the spin-weighted spherical harmonics is using the following formula
from [Goldberg et al. (1967)](https://aip.scitation.org/doi/10.1063/1.1705135):
$$ _sY_{\ell m} (\theta, \phi) = \left(-1\right)^m \sqrt{ \frac{(\ell+m)! (\ell-m)! (2\ell+1)} {4\pi (\ell+s)! (\ell-s)!} } \sin^{2\ell} \left( \frac{\theta}{2} \right) \times\sum_{r=0}^{\ell-s} {\ell-s \choose r} {\ell+s \choose r+s-m} \left(-1\right)^{\ell-r-s} e^{i m \phi} \cot^{2r+s-m} \left( \frac{\theta} {2} \right)$$
```python
# Step 2: Defining the Goldberg function
# Step 2.a: Declare SymPy symbols:
th, ph = sp.symbols('th ph',real=True)
# Step 2.b: Define the Goldberg formula for spin-weighted spherical harmonics
# (https://aip.scitation.org/doi/10.1063/1.1705135);
# referenced & described in Wikipedia Spin-weighted spherical harmonics article:
# https://en.wikipedia.org/w/index.php?title=Spin-weighted_spherical_harmonics&oldid=853425244
def Y(s, l, m, th, ph, GenerateMathematicaCode=False):
Sum = 0
for r in range(l-s + 1):
if GenerateMathematicaCode == True:
# Mathematica needs expression to be in terms of cotangent, so that code validation below
# yields identity with existing Mathematica notebook on spin-weighted spherical harmonics.
Sum += sp.binomial(l-s, r)*sp.binomial(l+s, r+s-m)*(-1)**(l-r-s)*sp.exp(sp.I*m*ph)*sp.cot(th/2)**(2*r+s-m)
else:
# SymPy C code generation cannot handle the cotangent function, so define cot(th/2) as 1/tan(th/2):
Sum += sp.binomial(l-s, r)*sp.binomial(l+s, r+s-m)*(-1)**(l-r-s)*sp.exp(sp.I*m*ph)/sp.tan(th/2)**(2*r+s-m)
return (-1)**m*sp.simplify(sp.sqrt(sp.factorial(l+m)*sp.factorial(l-m)*(2*l+1)/(4*sp.pi*sp.factorial(l+s)*sp.factorial(l-s)))*sp.sin(th/2)**(2*l)*Sum)
```
<a id='math_code_validation'></a>
# Step 3: Code Validation against Mathematica script \[Back to [top](#toc)\]
$$\label{math_code_validation}$$
To validate the code, we compare it with an existing [Mathematica notebook](https://demonstrations.wolfram.com/versions/source.jsp?id=SpinWeightedSphericalHarmonics&version=0012). We validate the code using a spin value of $s=-2$ and $\ell = 8,7,6,5,4,3,2,1,0$, while leaving $m$, $\theta$, and $\phi$ unknown.
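As an extra sanity check (an addition to this tutorial, not part of the original Mathematica validation), the $(s,\ell,m)=(-2,2,2)$ case can also be compared against the closed-form expression commonly quoted in the gravitational-wave literature, $_{-2}Y_{22} = \sqrt{5/(64\pi)}\,(1+\cos\theta)^2 e^{2i\phi}$; the two expressions should agree at any sample point:
```python
# Added spot check (not part of the original Mathematica validation):
# compare the (s, l, m) = (-2, 2, 2) harmonic against its well-known closed form.
closed_form = sp.sqrt(sp.Rational(5, 64) / sp.pi) * (1 + sp.cos(th))**2 * sp.exp(2*sp.I*ph)
sample = {th: 0.7, ph: 0.3}
print(sp.N(Y(-2, 2, 2, th, ph).subs(sample)))  # value from the Goldberg formula
print(sp.N(closed_form.subs(sample)))          # value from the closed form; should match
```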
```python
# Step 3: Code Validation against Mathematica notebook:
# https://demonstrations.wolfram.com/versions/source.jsp?id=SpinWeightedSphericalHarmonics&version=0012
# # For the l=0 case m=0, otherwise there is a divide-by-zero in the Y() function above.
# print("FullSimplify[Y[-2, 0, 0, th, ph]-"+str(sp.mathematica_code(sp.simplify(Y(-2, 0, 0, th, ph,GenerateMathematicaCode=True))))+"] \n") # Agrees with Mathematica notebook for l = 0
# # Check the other cases
# for l in range(1,maximum_l+1): # Agrees with Mathematica notebook for l = 1, 2, 4, 5, 6, 7, 8;
# print("FullSimplify[Y[-2, "+str(l)+", m, th, ph]-("+
# str(sp.mathematica_code(sp.simplify(Y(-2, l, m, th, ph, GenerateMathematicaCode=True)))).replace("binomial","Binomial").replace("factorial","Factorial")+")] \n")
```
<a id='ccode'></a>
# Step 4: Generate C-code function for computing s=-2 spin-weighted spherical harmonics, using NRPy+ \[Back to [top](#toc)\]
$$\label{ccode}$$
```python
# Step 4: Generating C Code function for computing
# s=-2 spin-weighted spherical harmonics,
# using NRPy+'s outputC() function.
outCparams = "preindent=3,outCfileaccess=a,outCverbose=False,includebraces=True"
with open(os.path.join("SpinWeight_minus2_SphHarmonics","SpinWeight_minus2_SphHarmonics.h"), "w") as file:
file.write("""
void SpinWeight_minus2_SphHarmonics(const int l, const int m, const REAL th, const REAL ph,
REAL *reYlmswm2_l_m, REAL *imYlmswm2_l_m) {
if(l<0 || l>"""+str(maximum_l)+""" || m<-l || m>+l) {
printf("ERROR: SpinWeight_minus2_SphHarmonics handles only l=[0,"""+str(maximum_l)+"""] and only m=[-l,+l] is defined.\\n");
printf(" You chose l=%d and m=%d, which is out of these bounds.\\n",l,m);
exit(1);
}\n""")
file.write("switch(l) {\n")
for l in range(maximum_l+1): # Output values up to and including l=8.
file.write(" case "+str(l)+":\n")
file.write(" switch(m) {\n")
for m in range(-l,l+1):
file.write(" case "+str(m)+":\n")
Y_m2_lm = Y(-2, l, m, th, ph)
Cstring = outputC([sp.re(Y_m2_lm),sp.im(Y_m2_lm)],["*reYlmswm2_l_m","*imYlmswm2_l_m"],
"returnstring",outCparams)
file.write(Cstring)
file.write(" return;\n")
file.write(" } /* End switch(m) */\n")
file.write(" } /* End switch(l) */\n")
file.write("} /* End function SpinWeight_minus2_SphHarmonics() */\n")
```
<a id='code_validation'></a>
# Step 5: Code Validation against `SpinWeight_minus2_SphHarmonics.SpinWeight_minus2_SphHarmonics` NRPy+ module \[Back to [top](#toc)\]
$$\label{code_validation}$$
As additional validation, we verify agreement in the SymPy expressions for the spin-weight -2 spherical harmonics expressions between
1. this tutorial and
2. the NRPy+ [`SpinWeight_minus2_SphHarmonics.SpinWeight_minus2_SphHarmonics`](../edit/SpinWeight_minus2_SphHarmonics/SpinWeight_minus2_SphHarmonics.py) module.
```python
import SpinWeight_minus2_SphHarmonics.SpinWeight_minus2_SphHarmonics as swm2
swm2.SpinWeight_minus2_SphHarmonics(maximum_l=4,filename=os.path.join("SpinWeight_minus2_SphHarmonics","SpinWeight_minus2_SphHarmonics-NRPymodule.h"))
print("\n\n### BEGIN VALIDATION TESTS ###")
import filecmp
fileprefix = os.path.join("SpinWeight_minus2_SphHarmonics","SpinWeight_minus2_SphHarmonics")
if filecmp.cmp(fileprefix+"-NRPymodule.h",fileprefix+".h") == False:
print("VALIDATION TEST FAILED ON file: "+fileprefix+".h"+".")
sys.exit(1)
print("VALIDATION TEST PASSED on file: "+fileprefix+".h")
print("### END VALIDATION TESTS ###")
```
### BEGIN VALIDATION TESTS ###
VALIDATION TEST PASSED on file: SpinWeight_minus2_SphHarmonics/SpinWeight_minus2_SphHarmonics.h
### END VALIDATION TESTS ###
<a id='latex_pdf_output'></a>
# Step 6: Output this notebook to $\LaTeX$-formatted PDF \[Back to [top](#toc)\]
$$\label{latex_pdf_output}$$
The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
[Tutorial-SpinWeighted_Spherical_Harmonics.pdf](Tutorial-SpinWeighted_Spherical_Harmonics.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
```python
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-SpinWeighted_Spherical_Harmonics")
```
Created Tutorial-SpinWeighted_Spherical_Harmonics.tex, and compiled LaTeX
file to PDF file Tutorial-SpinWeighted_Spherical_Harmonics.pdf
| 463d0bd8b2c51ff7ddd1c6ba073a73b6d9716849 | 15,115 | ipynb | Jupyter Notebook | Tutorial-SpinWeighted_Spherical_Harmonics.ipynb | fedelopezar/nrpytutorial | 753acd954be4a2f99639c9f9fd5e623689fc7493 | ["BSD-2-Clause"] | 66 | 2018-06-26T22:18:09.000Z | 2022-02-09T21:12:33.000Z | Tutorial-SpinWeighted_Spherical_Harmonics.ipynb | fedelopezar/nrpytutorial | 753acd954be4a2f99639c9f9fd5e623689fc7493 | ["BSD-2-Clause"] | 14 | 2020-02-13T16:09:29.000Z | 2021-11-12T14:59:59.000Z | Tutorial-SpinWeighted_Spherical_Harmonics.ipynb | fedelopezar/nrpytutorial | 753acd954be4a2f99639c9f9fd5e623689fc7493 | ["BSD-2-Clause"] | 30 | 2019-01-09T09:57:51.000Z | 2022-03-08T18:45:08.000Z | 43.811594 | 366 | 0.59656 | true | 2,877 | Qwen/Qwen-72B | 1. YES 2. YES | 0.83762 | 0.803174 | 0.672754 | __label__eng_Latn | 0.620921 | 0.401365 |
<a href="https://colab.research.google.com/github/davy-datascience/ml_algorithms/blob/master/LinearRegression/Approach-1/Linear%20Regression.ipynb" target="_parent"></a>
# Linear Regression - with single variable
## Intro
I first tried coding the linear regression algorithm taught by Luis Serrano. Luis produces YouTube videos on data-science subjects with easy-to-understand visualizations. In his video [Linear Regression: A friendly introduction](https://www.youtube.com/watch?v=wYPUhge9w5c) he uses the following approach:
<br/>
**Note:**
- The dataset we're using contains the salary of some people and their number of years of experience.
- We are trying to predict the salary given the number of years of experience, so the number of years of experience is the independent variable and the salary is the dependent variable.
- The x-axis corresponds to the number of years of experience, and the y-axis to the salary.
- The y-intercept is the point that satisfies x = 0, in other words the point where the line intersects the y-axis.
- Increasing the y-intercept translates the line up, and decreasing it translates the line down.
## Implementation
Run the following cell to import all needed modules, you must have opened this document on Google Colab before doing so: <a href="https://colab.research.google.com/github/davy-datascience/ml_algorithms/blob/master/LinearRegression/Approach-1/Linear%20Regression.ipynb" target="_parent"></a>
```
import pandas as pd
from sympy.geometry import Point, Line
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
import progressbar
```
I used the component Line from the module sympy.geometry. To create a Line I need to specify two Points. The line is also characterized by 3 coefficients (a, b and c) that match the following equation:

a * x + b * y + c = 0

In my approach I am dealing with a line equation of this sort:

y = slope * x + y_intercept

So I translated the first equation to match my equation requirement:

y = -(a/b) * x - (c/b)
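The small illustration below (with arbitrarily chosen example points, added for clarity) shows how the coefficients relate to the slope and y-intercept; `Point` and `Line` are imported in the cell above.
```
# Illustration only: recover slope and y-intercept from sympy's (a, b, c) coefficients.
demo_line = Line(Point(0, 1), Point(2, 5))
a, b, c = demo_line.coefficients
print(a, b, c)         # coefficients of a*x + b*y + c = 0
print(-a / b, -c / b)  # slope and y-intercept: here y = 2*x + 1
```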
Run the following cell. It contains the functions that will be used in the program:
```
def drawAll(X, Y, line):
""" plot the points from the dataset and draw the actual Line """
coefs = line.coefficients
x = np.linspace(X.min(),X.max())
y = (-coefs[0] * x - coefs[2]) / coefs[1]
plt.plot(x, y)
plt.scatter(X, Y, color = 'red')
plt.show()
def transformLine(point, line, x_median, learning_rate):
""" According to the random point, update the Line """
# We take the median of the x values for better results for the calculations of the horizontal distances
# Creation of the vertical line passing through the new point
ymin = line.points[0] if line.direction.y > 0 else line.points[1]
ymax = line.points[1] if line.direction.y > 0 else line.points[0]
vertical_line = Line(Point(point.x,ymin.y), Point(point.x,ymax.y))
# Find the intersection with our line (to calculate the vertical distance)
I = line.intersection(vertical_line)
vertical_distance = point.y - I[0].y
horizontal_distance = point.x - x_median
coefs = line.coefficients
a = coefs[0]
b = coefs[1]
c = coefs[2]
# Calculation of the points which constitute the new line
# Reminder: we add (learning_rate * vertical_distance * horizontal_distance) to the slope and we add (learning_rate * vertical_distance) to y-intercept
# The equation now looks like :
# y = - (a/b)*x + (learning_rate * vertical_distance * horizontal_distance) * x - (c/b) + learning_rate * vertical_distance
# We keep the same scope of the line so the min value of x and the max value of x don't change
x_min = line.points[0].x
y_min = - (a/b)*x_min + (learning_rate * vertical_distance * horizontal_distance * x_min) - (c/b) + learning_rate * vertical_distance
x_max = line.points[1].x
y_max = - (a/b)*x_max + (learning_rate * vertical_distance * horizontal_distance * x_max) - (c/b) + learning_rate * vertical_distance
newLine = Line(Point(x_min, y_min), Point(x_max, y_max))
return newLine
def predict(X, line):
""" I use my model (the equation of the line) to predict new values """
prediction = []
coefs = line.coefficients
a = coefs[0]
b = coefs[1]
c = coefs[2]
for x in X.values:
y = - (a/b)*x - (c/b)
prediction.append(y)
return prediction
```
Run the following cell to launch the linear regression program:
```
# Set the learning rate and the number of iterations
learning_rate = 0.01
nb_epochs = 1000
# Read the data
dataset = pd.read_csv("https://raw.githubusercontent.com/davy-datascience/ml_algorithms/master/LinearRegression/Approach-1/dataset/Salary_Data.csv")
# Separate the dataset into a training set and a test set
train, test = train_test_split(dataset, test_size = 0.2)
# Separation independent variable X - dependent variable y for the train set & the test set
X_train = train.YearsExperience
y_train = train.Salary
X_test = test.YearsExperience
y_test = test.Salary
# Looking for 1st line equation
# The line must have the same scope as the scatter plot of the dataset
# I decided to build the line choosing the point that has the max x-value and the point that has the min x-value
# Find the point with the maximum value of x in the dataset
idx_max = X_train.idxmax()
x_max = Point(X_train.loc[idx_max], y_train.loc[idx_max])
# Find the point with the minimum value of x in the dataset
idx_min = X_train.idxmin()
x_min = Point(X_train.loc[idx_min], y_train.loc[idx_min])
# Build the line with the 2 points
line = Line(x_min, x_max)
drawAll(X_train, y_train, line)
# Iterate choosing a random point and moving the line with the function transformLine
for i in progressbar.progressbar(range(nb_epochs)):
sample = train.sample()
point = Point(sample.YearsExperience, sample.Salary)
line = transformLine(point, line, X_train.median(), learning_rate)
#drawAll(X_train, y_train, line) # Uncomment this line to see the line at each iteration
drawAll(X_train, y_train, line)
# Predict the test set with my model and see
y_pred = predict(X_test, line)
print("MAE (Mean Absolute Error) is used to evaluate the model accuracy")
print("MAE for my model: {}".format(mean_absolute_error(y_pred, y_test)))
# Predict the test set with the sklearn algorithm
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(X_train.to_frame(), y_train)
y_pred2 = regressor.predict(X_test.to_frame())
print("MAE for the algorithm of the sklearn module: {}".format(mean_absolute_error(y_pred2, y_test)))
```
| 4ae27012e943834dfd2910aaba2aa49907541b04 | 11,323 | ipynb | Jupyter Notebook | LinearRegression/Approach-1/Linear Regression.ipynb | davy-datascience/portfolio | 818689d290e732309e603ba7b720e2a5a20ac564 | ["MIT"] | null | null | null | LinearRegression/Approach-1/Linear Regression.ipynb | davy-datascience/portfolio | 818689d290e732309e603ba7b720e2a5a20ac564 | ["MIT"] | null | null | null | LinearRegression/Approach-1/Linear Regression.ipynb | davy-datascience/portfolio | 818689d290e732309e603ba7b720e2a5a20ac564 | ["MIT"] | null | null | null | 43.053232 | 397 | 0.565928 | true | 1,630 | Qwen/Qwen-72B | 1. YES 2. YES | 0.909907 | 0.7773 | 0.707271 | __label__eng_Latn | 0.977827 | 0.481558 |
###### Content under Creative Commons Attribution license CC-BY 4.0, code under MIT license (c)2015 L.A. Barba, Pi-Yueh Chuang.
# Exercise: Derivation of the vortex-source panel method
The potential at location $(x, y)$ induced by a uniform flow, a source sheet, and a vortex sheet can be represented as
$$
\begin{equation}
\begin{split}
\phi(x, y)
&= \phi_{uniform\ flow}(x, y) \\
&+ \phi_{source\ sheet}(x, y) + \phi_{vortex\ sheet}(x, y)
\end{split}
\end{equation}
$$
That is
$$
\begin{equation}
\begin{split}
\phi(x, y) &= xU_{\infty}\cos(\alpha) + yU_{\infty}\sin(\alpha) \\
&+
\frac{1}{2\pi} \int_{sheet} \sigma(s)\ln\left[(x-\xi(s))^2+(y-\eta(s))^2\right]^{\frac{1}{2}}ds \\
&-
\frac{1}{2\pi} \int_{sheet} \gamma(s)\tan^{-1} \frac{y-\eta(s)}{x-\xi(s)}ds
\end{split}
\end{equation}
$$
where $s$ is local coordinate on the sheet, and $\xi(s)$ and $\eta(s)$ are coordinate of the infinite source and vortex on the sheet. In the above equation, we assume the source sheet and the vortex sheet overlap.
------------------------------------------------------
### Q1:
If we discretize the sheet into $N$ panels, re-write the above equation using a discretized integral. Assume $l_j$ represents the length of panel $j$, so that
$$
\begin{equation}
\left\{
\begin{array}{l}
\xi_j(s)=x_j-s\sin\beta_j \\
\eta_j(s)=y_j+s\cos\beta_j
\end{array}
,\ \ \
0\le s \le l_j
\right.
\end{equation}
$$
The following figure shows the panel $j$:
*(figure omitted in this text version: geometry of panel $j$)*
HINT: for example, consider the integral $\int_0^L f(x) dx$; if we discretize the domain $0\sim L$ into 3 panels, the integral can be written as:
$$
\int_0^L f(x) dx = \int_0^{L/3} f(x)dx+\int_{L/3}^{2L/3} f(x)dx+\int_{2L/3}^{L} f(x)dx \\
= \sum_{j=1}^3 \int_{l_j}f(x)dx
$$
----------------------------
Now let's assume
1. $\sigma_j(s) = constant = \sigma_j$
2. $\gamma_1(s) = \gamma_2(s) = ... = \gamma_N(s) = \gamma$
------------------------------------------------
### Q2:
Apply the above assumptions to the equation for $\phi(x, y)$ you derived in Q1.
---------------------------
The normal velocity $U_n$ can be derived from the chain rule:
$$
\begin{equation}
\begin{split}
U_n &= \frac{\partial \phi}{\partial \vec{n}} \\
&=
\frac{\partial \phi}{\partial x}\frac{\partial x}{\partial \vec{n}}
+
\frac{\partial \phi}{\partial y}\frac{\partial y}{\partial \vec{n}} \\
&=
\frac{\partial \phi}{\partial x}\nabla x\cdot \vec{n}
+
\frac{\partial \phi}{\partial y}\nabla y\cdot \vec{n} \\
&=
\frac{\partial \phi}{\partial x}n_x
+
\frac{\partial \phi}{\partial y}n_y
\end{split}
\end{equation}
$$
The tangential velocity can also be obtained using the same technique. So we can have the normal and tangential velocity at the point $(x, y)$ using:
$$
\begin{equation}
\left\{
\begin{array}{l}
U_n(x, y)=\frac{\partial \phi}{\partial x}(x, y) n_x(x, y)+\frac{\partial \phi}{\partial y}(x, y) n_y(x, y) \\
U_t(x, y)=\frac{\partial \phi}{\partial x}(x, y) t_x(x, y)+\frac{\partial \phi}{\partial y}(x, y) t_y(x, y)
\end{array}
\right.
\end{equation}
$$
-------------------------------------
### Q3:
Using the above equation, derive the $U_n(x,y)$ and $U_t(x,y)$ from the equation you obtained in Q2.
-----------------------------------------
### Q4:
Consider the normal velocity at the center of the $i$-th panel, i.e., $(x_{c,i}, y_{c,i})$. After replacing $(x, y)$ with $(x_{c,i}, y_{c,i})$ in the equation you derived in Q3, we can re-write the equation in matrix form:
$$
\begin{equation}
\begin{split}
U_n(x_{c,i}, y_{c,i}) &= U_{n,i} \\
&= b^n_i + \left[\begin{matrix} A^n_{i1} && A^n_{i2} && ... && A^n_{iN}\end{matrix}\right]\left[\begin{matrix} \sigma_1 \\ \sigma_2 \\ \vdots \\ \sigma_N \end{matrix}\right] + \left(\sum_{j=1}^N B^n_{ij}\right)\gamma \\
&= b^n_i + \left[\begin{matrix} A^n_{i1} && A^n_{i2} && ... && A^n_{iN} && \left(\sum_{j=1}^N B^n_{ij}\right) \end{matrix}\right]\left[\begin{matrix} \sigma_1 \\ \sigma_2 \\ \vdots \\ \sigma_N \\ \gamma \end{matrix}\right]
\end{split}
\end{equation}
$$
$$
\begin{equation}
\begin{split}
U_t(x_{c,i}, y_{c,i}) &= U_{t,i} \\
&= b^t_i + \left[\begin{matrix} A^t_{i1} && A^t_{i2} && ... && A^t_{iN}\end{matrix}\right]\left[\begin{matrix} \sigma_1 \\ \sigma_2 \\ \vdots \\ \sigma_N \end{matrix}\right] + \left(\sum_{j=1}^N B^t_{ij}\right)\gamma \\
&= b^t_i + \left[\begin{matrix} A^t_{i1} && A^t_{i2} && ... && A^t_{iN} && \left(\sum_{j=1}^N B^t_{ij}\right) \end{matrix}\right]\left[\begin{matrix} \sigma_1 \\ \sigma_2 \\ \vdots \\ \sigma_N \\ \gamma \end{matrix}\right]
\end{split}
\end{equation}
$$
What are the $b^n_i$, $A^n_{ij}$, $B^n_{ij}$, $b^t_i$, $A^t_{ij}$, and $B^t_{ij}$?
-----------------------
Given the fact that (from the Fig. 1)
$$
\begin{equation}
\left\{\begin{matrix} \vec{n}_i=n_{x,i}\vec{i}+n_{y,i}\vec{j} = \cos(\beta_i)\vec{i}+\sin(\beta_i)\vec{j} \\ \vec{t}_i=t_{x,i}\vec{i}+t_{y,i}\vec{j} = -\sin(\beta_i)\vec{i}+\cos(\beta_i)\vec{j} \end{matrix}\right.
\end{equation}
$$
we have
$$
\begin{equation}
\left\{
\begin{matrix}
n_{x,i}=t_{y,i} \\
n_{y,i}=-t_{x,i}
\end{matrix}
\right.
,\ or\
\left\{
\begin{matrix}
t_{x,i}=-n_{y,i} \\
t_{y,i}=n_{x,i}
\end{matrix}
\right.
\end{equation}
$$
-----------------------
### Q5:
Applying the above relationship between $\vec{n}_i$ and $\vec{t}_i$ to your answer to Q4, you should find that relationships exist between $B^n_{ij}$ and $A^t_{ij}$ and between $B^t_{ij}$ and $A^n_{ij}$. This means that, in your code, you don't have to actually calculate $B^n_{ij}$ and $B^t_{ij}$. What are these relationships?
-------------------------
Now, note that when $i=j$, there is a singular point in the integration domain when calculating $A^n_{ii}$ and $A^t_{ii}$. This singular point occurs when $s=l_i/2$, i.e., $\xi_i(l_i/2)=x_{c,i}$ and $\eta_i(l_i/2)=y_{c,i}$. This means we need to calculate $A^n_{ii}$ and $A^t_{ii}$ analytically.
--------------------------
### Q6:
What are the exact values of $A^n_{ii}$ and $A^t_{ii}$?
------------------------------
In our problem, there are $N+1$ unknowns, that is, $\sigma_1, \sigma_2, ..., \sigma_N, \gamma$. We'll need $N+1$ linear equations to solve the unknowns. The first $N$ linear equations can be obtained from the non-penetration condition on the center of each panel. That is
$$
\begin{equation}
\begin{split}
U_{n,i} &= 0 \\
&= b^n_i + \left[\begin{matrix} A^n_{i1} && A^n_{i2} && ... && A^n_{iN} && \left(\sum_{j=1}^N B^n_{ij}\right) \end{matrix}\right]\left[\begin{matrix} \sigma_1 \\ \sigma_2 \\ \vdots \\ \sigma_N \\ \gamma \end{matrix}\right] \\
&,\ \ for\ i=1\sim N
\end{split}
\end{equation}
$$
or
$$
\begin{equation}
\begin{split}
&\left[\begin{matrix} A^n_{i1} && A^n_{i2} && ... && A^n_{iN} && \left(\sum_{j=1}^N B^n_{ij}\right) \end{matrix}\right]\left[\begin{matrix} \sigma_1 \\ \sigma_2 \\ \vdots \\ \sigma_N \\ \gamma \end{matrix}\right] =-b^n_i \\
&,\ \ for\ i=1\sim N
\end{split}
\end{equation}
$$
For the last equation, we use the Kutta condition.
$$
\begin{equation}
U_{t,1} = - U_{t,N}
\end{equation}
$$
----------------------
### Q7:
Apply the matrix forms of $U_{t,1}$ and $U_{t,N}$ to the Kutta condition and obtain the last linear equation. Re-arrange the equation so that the unknowns are on the LHS and the knowns on the RHS.
---------------------
### Q8:
Now you have $N+1$ linear equations and can solve for the $N+1$ unknowns. Try to combine the first $N$ linear equations with the last one (i.e., the Kutta condition from Q7) and obtain the matrix form of the whole system of linear equations.
----------------------------
The equations can be solved now! This is the vortex-source panel method.
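To make this last step concrete, here is a hedged sketch (added for illustration, not the official solution to the exercise; all array names are placeholders) of how the assembled $(N+1)\times(N+1)$ system could be solved once the influence coefficients and right-hand sides from Q4 have been computed:
```python
# Hedged sketch: assemble and solve the vortex-source panel system.
# A_n, B_n, A_t, B_t are (N, N) arrays and b_n, b_t are (N,) arrays, filled
# elsewhere by whatever quadrature you implemented for Q4.
import numpy as np

def solve_vortex_source_system(A_n, B_n, b_n, A_t, B_t, b_t):
    N = A_n.shape[0]
    M = np.zeros((N + 1, N + 1))
    rhs = np.zeros(N + 1)
    # first N rows: non-penetration condition U_{n,i} = 0
    M[:N, :N] = A_n
    M[:N, N] = B_n.sum(axis=1)
    rhs[:N] = -b_n
    # last row: Kutta condition U_{t,1} + U_{t,N} = 0
    M[N, :N] = A_t[0, :] + A_t[N - 1, :]
    M[N, N] = B_t[0, :].sum() + B_t[N - 1, :].sum()
    rhs[N] = -(b_t[0] + b_t[N - 1])
    solution = np.linalg.solve(M, rhs)
    return solution[:N], solution[N]   # (sigma_1 ... sigma_N, gamma)
```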
--------------------
Please ignore the cell below. It just loads our style for the notebook.
```python
from IPython.core.display import HTML
def css_styling(filepath):
styles = open(filepath, 'r').read()
return HTML(styles)
css_styling('../styles/custom.css')
```
| 72771973d337dd50de924f999e5e9b92bfdd5091 | 20,263 | ipynb | Jupyter Notebook | lessons/11_Lesson11_Exercise.ipynb | josecarloszart/AeroPython | 73057ca0532b000365f9bf707726d35d75a28448 | ["CC-BY-4.0"] | null | null | null | lessons/11_Lesson11_Exercise.ipynb | josecarloszart/AeroPython | 73057ca0532b000365f9bf707726d35d75a28448 | ["CC-BY-4.0"] | null | null | null | lessons/11_Lesson11_Exercise.ipynb | josecarloszart/AeroPython | 73057ca0532b000365f9bf707726d35d75a28448 | ["CC-BY-4.0"] | 1 | 2021-01-31T22:54:57.000Z | 2021-01-31T22:54:57.000Z | 28.823613 | 337 | 0.457188 | true | 3,599 | Qwen/Qwen-72B | 1. YES 2. YES | 0.880797 | 0.771844 | 0.679838 | __label__eng_Latn | 0.719866 | 0.417822 |
# Neuromatch Academy: Week 3, Day 1, Tutorial 1
# Real Neurons: The Leaky Integrate-and-Fire (LIF) Neuron Model
__Content creators:__ Qinglong Gu, Songtin Li, John Murray, Richard Naud, Arvind Kumar
__Content reviewers:__ Maryam Vaziri-Pashkam, Ella Batty, Lorenzo Fontolan, Richard Gao, Matthew Krause, Spiros Chavlis, Michael Waskom
---
# Tutorial Objectives
This is Tutorial 1 of a series on implementing realistic neuron models. In this tutorial, we will build up a leaky integrate-and-fire (LIF) neuron model and study its dynamics in response to various types of inputs. In particular, we are going to write a few lines of code to:
- simulate the LIF neuron model
- drive the LIF neuron with external inputs, such as direct currents, Gaussian white noise, and Poisson spike trains, etc.
- study how different inputs affect the LIF neuron's output (firing rate and spike time irregularity)
Here, we will especially emphasize identifying conditions (input statistics) under which a neuron can spike at low firing rates and in an irregular manner. The reason for focusing on this is that in most cases, neocortical neurons spike in an irregular manner.
---
# Setup
```python
# Imports
import numpy as np
import matplotlib.pyplot as plt
```
```python
# @title Figure Settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
# use NMA plot style
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
my_layout = widgets.Layout()
```
```python
# @title Helper functions
def plot_volt_trace(pars, v, sp):
"""
Plot trajectory of membrane potential for a single neuron
Expects:
pars : parameter dictionary
v : voltage trajectory
sp : spike train
Returns:
figure of the membrane potential trajectory for a single neuron
"""
V_th = pars['V_th']
dt, range_t = pars['dt'], pars['range_t']
if sp.size:
sp_num = (sp / dt).astype(int) - 1
v[sp_num] += 20 # draw nicer spikes
plt.plot(pars['range_t'], v, 'b')
plt.axhline(V_th, 0, 1, color='k', ls='--')
plt.xlabel('Time (ms)')
plt.ylabel('V (mV)')
plt.legend(['Membrane\npotential', r'Threshold V$_{\mathrm{th}}$'],
loc=[1.05, 0.75])
plt.ylim([-80, -40])
def plot_GWN(pars, I_GWN):
"""
Args:
pars : parameter dictionary
I_GWN : Gaussian white noise input
Returns:
figure of the gaussian white noise input
"""
plt.figure(figsize=(12, 4))
plt.subplot(121)
plt.plot(pars['range_t'][::3], I_GWN[::3], 'b')
plt.xlabel('Time (ms)')
plt.ylabel(r'$I_{GWN}$ (pA)')
plt.subplot(122)
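  # note: v and sp are not arguments of this helper; they are read from the
  # globals defined in the calling cell before plot_GWN is invoked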
plot_volt_trace(pars, v, sp)
plt.tight_layout()
def my_hists(isi1, isi2, cv1, cv2, sigma1, sigma2):
"""
Args:
isi1 : vector with inter-spike intervals
isi2 : vector with inter-spike intervals
cv1 : coefficient of variation for isi1
cv2 : coefficient of variation for isi2
Returns:
figure with two histograms, isi1, isi2
"""
plt.figure(figsize=(11, 4))
my_bins = np.linspace(10, 30, 20)
plt.subplot(121)
plt.hist(isi1, bins=my_bins, color='b', alpha=0.5)
plt.xlabel('ISI (ms)')
plt.ylabel('count')
plt.title(r'$\sigma_{GWN}=$%.1f, CV$_{\mathrm{isi}}$=%.3f' % (sigma1, cv1))
plt.subplot(122)
plt.hist(isi2, bins=my_bins, color='b', alpha=0.5)
plt.xlabel('ISI (ms)')
plt.ylabel('count')
plt.title(r'$\sigma_{GWN}=$%.1f, CV$_{\mathrm{isi}}$=%.3f' % (sigma2, cv2))
plt.tight_layout()
plt.show()
# this function plots the raster of the Poisson spike train
def my_raster_Poisson(range_t, spike_train, n):
"""
Generates poisson trains
Args:
range_t : time sequence
spike_train : binary spike trains, with shape (N, Lt)
n : number of Poisson trains plot
Returns:
Raster plot of the spike train
"""
# find the number of all the spike trains
N = spike_train.shape[0]
# n should smaller than N:
if n > N:
print('The number n exceeds the size of spike trains')
print('The number n is set to be the size of spike trains')
n = N
# plot raster
i = 0
while i < n:
if spike_train[i, :].sum() > 0.:
t_sp = range_t[spike_train[i, :] > 0.5] # spike times
plt.plot(t_sp, i * np.ones(len(t_sp)), 'k|', ms=10, markeredgewidth=2)
i += 1
plt.xlim([range_t[0], range_t[-1]])
plt.ylim([-0.5, n + 0.5])
plt.xlabel('Time (ms)', fontsize=12)
plt.ylabel('Neuron ID', fontsize=12)
```
---
# Section 1: The Leaky Integrate-and-Fire (LIF) model
```python
#@title Video 1: Reduced Neuron Models
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='rSExvwCVRYg', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
```
Video available at https://youtube.com/watch?v=rSExvwCVRYg
## Implementation of an LIF neuron model
Now, it's your turn to implement one of the simplest mathematical models of a neuron: the leaky integrate-and-fire (LIF) model. The basic idea of the LIF neuron was proposed in 1907 by Louis Édouard Lapicque, long before we understood the electrophysiology of a neuron (see a translation of [Lapicque's paper](https://pubmed.ncbi.nlm.nih.gov/17968583/)). More details of the model can be found in the book [**Theoretical Neuroscience**](http://www.gatsby.ucl.ac.uk/~dayan/book/) by Peter Dayan and Laurence F. Abbott.
The subthreshold membrane potential dynamics of a LIF neuron is described by
\begin{eqnarray}
C_m\frac{dV}{dt} = -g_L(V-E_L) + I,\quad (1)
\end{eqnarray}
where $C_m$ is the membrane capacitance, $V$ is the membrane potential, $g_L$ is the leak conductance ($g_L = 1/R$, the inverse of the leak resistance $R$ mentioned in previous tutorials), $E_L$ is the resting potential, and $I$ is the external input current.
Dividing both sides of the above equation by $g_L$ gives
\begin{align}
\tau_m\frac{dV}{dt} = -(V-E_L) + \frac{I}{g_L}\,,\quad (2)
\end{align}
where the $\tau_m$ is membrane time constant and is defined as $\tau_m=C_m/g_L$.
You might wonder why dividing capacitance by conductance gives units of time! Find out yourself why this is true.
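If you want to check your reasoning, one way to see it (added here for reference) is a quick dimensional analysis, using farad = coulomb/volt and siemens = ampere/volt:

\begin{align}
[\tau_m] = \frac{[C_m]}{[g_L]} = \frac{\mathrm{F}}{\mathrm{S}} = \frac{\mathrm{A \cdot s / V}}{\mathrm{A / V}} = \mathrm{s}.
\end{align}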
Below, we will use Eqn.(2) to simulate LIF neuron dynamics.
If $I$ is sufficiently strong such that $V$ reaches a certain threshold value $V_{\rm th}$, $V$ is reset to a reset potential $V_{\rm reset}< V_{\rm th}$, and voltage is clamped to $V_{\rm reset}$ for $\tau_{\rm ref}$ ms, mimicking the refractoriness of the neuron during an action potential:
\begin{eqnarray}
\mathrm{if}\quad V(t_{\text{sp}})\geq V_{\rm th}&:& V(t)=V_{\rm reset} \text{ for } t\in(t_{\text{sp}}, t_{\text{sp}} + \tau_{\text{ref}}]
\end{eqnarray}
where $t_{\rm sp}$ is the spike time when $V(t)$ just exceeded $V_{\rm th}$.
(__Note__: in the lecture slides, $\theta$ corresponds to the threshold voltage $V_{th}$, and $\Delta$ corresponds to the refractory time $\tau_{\rm ref}$.)
Thus, the LIF model captures the facts that a neuron:
- performs spatial and temporal integration of synaptic inputs
- generates a spike when the voltage reaches a certain threshold
- goes refractory during the action potential
- has a leaky membrane
The LIF model assumes that the spatial and temporal integration of inputs is linear. Also, membrane potential dynamics close to the spike threshold are much slower in LIF neurons than in real neurons.
## Exercise 1: Python code to simulate the LIF neuron
We now write Python code to calculate Eqn. (2) and simulate the LIF neuron dynamics. We will use the Euler method, which you saw in the linear systems case last week, to numerically integrate Eq 2.
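Concretely, with simulation time step $\Delta t$ (`dt` in the code), the forward-Euler update implemented below is

\begin{align}
V(t+\Delta t) = V(t) + \frac{\Delta t}{\tau_m}\left[-\big(V(t)-E_L\big) + \frac{I(t)}{g_L}\right],
\end{align}

applied at every step, except during the refractory period when $V$ is held at $V_{\rm reset}$.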
The cell below initializes a dictionary that stores parameters of the LIF neuron model and the simulation scheme. You can use `pars=default_pars(T=simulation_time, dt=time_step)` to get the parameters (you can try to print the dictionary `pars`). Note that, `simulation_time` and `time_step` have the unit `ms`. In addition, you can add the value to a new parameter by `pars['New_param'] = value`.
```python
# @title
# @markdown Execute this code to initialize the default parameters
def default_pars(**kwargs):
pars = {}
# typical neuron parameters#
pars['V_th'] = -55. # spike threshold [mV]
pars['V_reset'] = -75. # reset potential [mV]
pars['tau_m'] = 10. # membrane time constant [ms]
pars['g_L'] = 10. # leak conductance [nS]
pars['V_init'] = -75. # initial potential [mV]
pars['E_L'] = -75. # leak reversal potential [mV]
pars['tref'] = 2. # refractory time (ms)
# simulation parameters #
pars['T'] = 400. # Total duration of simulation [ms]
pars['dt'] = .1 # Simulation time step [ms]
# external parameters if any #
for k in kwargs:
pars[k] = kwargs[k]
pars['range_t'] = np.arange(0, pars['T'], pars['dt']) # Vector of discretized time points [ms]
return pars
pars = default_pars()
```
The cell below defines the function to simulate the LIF neuron when receiving external current inputs. You can use `v, sp = run_LIF(pars, Iinj)` to get the membrane potential (`v`) and spike train (`sp`) given the dictionary `pars` and input current `Iinj`.
```python
def run_LIF(pars, Iinj, stop=False):
"""
Simulate the LIF dynamics with external input current
Args:
pars : parameter dictionary
Iinj : input current [pA]. The injected current here can be a value
or an array
stop : boolean. If True, use a current pulse
Returns:
rec_v : membrane potential
rec_sp : spike times
"""
# Set parameters
V_th, V_reset = pars['V_th'], pars['V_reset']
tau_m, g_L = pars['tau_m'], pars['g_L']
V_init, E_L = pars['V_init'], pars['E_L']
dt, range_t = pars['dt'], pars['range_t']
Lt = range_t.size
tref = pars['tref']
# Initialize voltage and current
v = np.zeros(Lt)
v[0] = V_init
Iinj = Iinj * np.ones(Lt)
if stop: # set end of current to 0 if current pulse
Iinj[:int(len(Iinj) / 2) - 1000] = 0
Iinj[int(len(Iinj) / 2) + 1000:] = 0
tr = 0. # the count for refractory duration
# Simulate the LIF dynamics
rec_spikes = [] # record spike times
for it in range(Lt - 1):
if tr > 0: # check for refractoriness
v[it+1] = V_reset
tr = tr - 1
else:
# forward euler
dv = (-(v[it] - E_L) + Iinj[it] / g_L) * (dt / tau_m)
v[it+1] = v[it] + dv
if v[it+1] >= V_th: # reset voltage and record spike event
rec_spikes.append(it+1)
v[it+1] = V_reset
tr = tref / dt
rec_spikes = np.array(rec_spikes) * dt
return v, rec_spikes
pars = default_pars(T=500)
pars['tref'] = 10
# Uncomment below to test your function
v, sp = run_LIF(pars, Iinj=220, stop=True)
plot_volt_trace(pars, v, sp)
# plt.ylim([-80, -60])
plt.show()
```
---
# Section 2: Response of an LIF model to different types of input currents
In the following section, we will learn how to inject direct current and white noise to study the response of an LIF neuron.
```python
#@title Video 2: Response of the LIF neuron to different inputs
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='preNGdab7Kk', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
```
Video available at https://youtube.com/watch?v=preNGdab7Kk
## Section 2.1: Direct current (DC)
### LIF neuron driven by constant current
Investigate the voltage response of the LIF neuron when receiving a DC input of 300 pA, using the `run_LIF` function.
```python
pars = default_pars(T=100) # get the parameters
# Run the model to obtain v and sp
v, sp = run_LIF(pars, Iinj=300)
plot_volt_trace(pars, v, sp)
plt.show()
```
In the plot above, you see the membrane potential of an LIF neuron. You may notice that the neuron generates a spike. But this is just a cosmetic spike, added only for illustration purposes. In an LIF neuron, we only need to keep track of the times when the neuron hits the threshold so that the postsynaptic neurons can be informed of the spikes.
### Interactive Demo: Parameter exploration of DC input amplitude
Here's an interactive demo that shows how the LIF neuron behavior changes for DC input with different amplitudes.
How much DC is needed to reach the threshold (rheobase current)? How does the membrane time constant affect the frequency of the neuron?
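As a worked reference point (added here, using the default parameters defined above): with a constant input $I$, Eqn. (2) relaxes to the steady state $V_{\infty} = E_L + I/g_L$, so the threshold can only be crossed if

\begin{align}
I > g_L\,(V_{\rm th}-E_L) = 10\,\mathrm{nS} \times 20\,\mathrm{mV} = 200\,\mathrm{pA},
\end{align}

which is consistent with the $\approx 210$ pA you should find with the slider's 10 pA steps.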
```python
# @title
# @markdown Make sure you execute this cell to enable the widget!
my_layout.width = '450px'
@widgets.interact(
I_dc=widgets.FloatSlider(50., min=0., max=300., step=10.,
layout=my_layout),
tau_m=widgets.FloatSlider(10., min=2., max=20., step=2.,
layout=my_layout)
)
def diff_DC(I_dc=200., tau_m=10.):
pars = default_pars(T=100.)
pars['tau_m'] = tau_m
v, sp = run_LIF(pars, Iinj=I_dc)
plot_volt_trace(pars, v, sp)
plt.show()
```
interactive(children=(FloatSlider(value=50.0, description='I_dc', layout=Layout(width='450px'), max=300.0, ste…
```python
"""
1. As we increase the current, we observe that at 210 pA we cross the threshold.
2. As we increase the membrane time constant (slower membrane), the firing rate
is decreased because the membrane needs more time to reach the threshold after
the reset.
""";
```
## Section 2.2: Gaussian white noise (GWN) current
Given the noisy nature of neuronal activity _in vivo_, neurons usually receive complex, time-varying inputs.
To mimic this, we will now investigate the neuronal response when the LIF neuron receives Gaussian white noise $\xi(t)$ with mean
\begin{eqnarray}
E[\xi(t)]=\mu=0,
\end{eqnarray}
and autocovariance
\begin{eqnarray}
E[\xi(t)\xi(t+\tau)]=\sigma_\xi^2 \delta(\tau)
\end{eqnarray}
Note that the GWN has zero mean, that is, it describes only the fluctuations of the input received by a neuron. We can thus modify our definition of GWN to have a nonzero mean value $\mu$ that equals the DC input, since this is the average input into the cell. The cell below defines the modified Gaussian white noise current with nonzero mean $\mu$.
```python
#@title
#@markdown Execute this cell to get function to generate GWN
def my_GWN(pars, mu, sig, myseed=False):
"""
Function that generates Gaussian white noise input
Args:
pars : parameter dictionary
mu : noise baseline (mean)
sig : noise amplitude (standard deviation)
myseed : random seed. int or boolean
the same seed will give the same
random number sequence
Returns:
I : Gaussian white noise input
"""
# Retrieve simulation parameters
dt, range_t = pars['dt'], pars['range_t']
Lt = range_t.size
# set random seed
# you can fix the seed of the random number generator so that the results
# are reliable however, when you want to generate multiple realization
# make sure that you change the seed for each new realization.
if myseed:
np.random.seed(seed=myseed)
else:
np.random.seed()
# generate GWN
# we divide here by 1000 to convert units to sec.
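  # scaling by 1/sqrt(dt) makes the statistics of the fluctuating input
  # (approximately) independent of the chosen simulation time step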
I_gwn = mu + sig * np.random.randn(Lt) / np.sqrt(dt / 1000.)
return I_gwn
```
### Exercise 2: LIF neuron driven by GWN
You can generate a noisy input with `my_GWN(pars, mu, sig, myseed=False)`. Here, $\mu=250$ and $\sigma=5$. Note that fixing the value of the random seed (e.g., `myseed=2020`) will allow you to obtain the same result every time you run this.
```python
pars = default_pars(T=100.)
sig_gwn = 5.
mu_gwn = 250.
# Calculate the GWN current
I_GWN = my_GWN(pars, mu=mu_gwn, sig=sig_gwn, myseed=2020)
# Run the model and calculate the v and the sp
v, sp = run_LIF(pars, Iinj=I_GWN)
plot_GWN(pars, I_GWN)
plt.show()
```
### Interactive Demo: LIF neuron Explorer for noisy input
The mean of the GWN is the amplitude of DC. Indeed, when $\sigma = 0$, GWN is just a DC.
So the question arises: how does the $\sigma$ of the GWN affect the spiking behavior of the neuron? For instance, we may want to know
- how does the minimum input (i.e. $\mu$) needed to make a neuron spike change with increase in $\sigma$
- how does the spike regularity change with increase in $\sigma$
To get an intuition about these questions you can use the following interactive demo that shows how the LIF neuron behavior changes for noisy input with different amplitudes (the mean $\mu$) and fluctuation sizes ($\sigma$).
```python
# @title
# @markdown Make sure you execute this cell to enable the widget!
my_layout.width = '450px'
@widgets.interact(
mu_gwn=widgets.FloatSlider(200., min=100., max=300., step=5.,
layout=my_layout),
sig_gwn=widgets.FloatSlider(2.5, min=0., max=5., step=.5,
layout=my_layout)
)
def diff_GWN_to_LIF(mu_gwn, sig_gwn):
pars = default_pars(T=100.)
I_GWN = my_GWN(pars, mu=mu_gwn, sig=sig_gwn)
v, sp = run_LIF(pars, Iinj=I_GWN)
plt.figure(figsize=(12, 4))
plt.subplot(121)
plt.plot(pars['range_t'][::3], I_GWN[::3], 'b')
plt.xlabel('Time (ms)')
plt.ylabel(r'$I_{GWN}$ (pA)')
plt.subplot(122)
plot_volt_trace(pars, v, sp)
plt.tight_layout()
plt.show()
```
interactive(children=(FloatSlider(value=200.0, description='mu_gwn', layout=Layout(width='450px'), max=300.0, …
```python
"""
If we have bigger current fluctuations (increased sigma), the minimum input needed
to make a neuron spike is smaller as the fluctuations can help push the voltage above
threshold.
The standard deviation (or size of current fluctuations) dictates the level of
irregularity of the spikes; the higher the sigma the more irregular the observed
spikes.
""";
```
## Think!
- As we increase the input average ($\mu$) or the input fluctuation ($\sigma$), the spike count changes. How much can we increase the spike count, and what might be the relationship between GWN mean/std or DC value and spike count?
- We have seen above that when we inject DC, the neuron spikes in a regular manner (clock like), and this regularity is reduced when GWN is injected. The question is, how irregular can we make the neurons spiking by changing the parameters of the GWN?
---
# Section 3: Firing rate and spike time irregularity
When we plot the output firing rate as a function of the GWN mean or DC value, it is called the input-output transfer function of the neuron (or simply the F-I curve).
Spike regularity can be quantified as the **coefficient of variation (CV) of the inter-spike intervals (ISI)**:
\begin{align}
\text{CV}_{\text{ISI}} = \frac{std(\text{ISI})}{mean(\text{ISI})}
\end{align}
A Poisson train is an example of high irregularity, in which $\textbf{CV}_{\textbf{ISI}} \textbf{= 1}$. And for a clocklike (regular) process we have $\textbf{CV}_{\textbf{ISI}} \textbf{= 0}$ because of **std(ISI)=0**.
## Interactive Demo: F-I Explorer for different `sig_gwn`
How does the F-I curve of the LIF neuron change as we increase the $\sigma$ of the GWN? We can already expect that the F-I curve will be stochastic and the results will vary from one trial to another. But will there be any other change compared to the F-I curve measured using DC?
Here's an interactive demo that shows how the F-I curve of a LIF neuron changes for different levels of fluctuation $\sigma$.
```python
# @title
# @markdown Make sure you execute this cell to enable the widget!
my_layout.width = '450px'
@widgets.interact(
sig_gwn=widgets.FloatSlider(3.0, min=0., max=6., step=0.5,
layout=my_layout)
)
def diff_std_affect_fI(sig_gwn):
pars = default_pars(T=1000.)
I_mean = np.arange(100., 400., 10.)
spk_count = np.zeros(len(I_mean))
spk_count_dc = np.zeros(len(I_mean))
for idx in range(len(I_mean)):
I_GWN = my_GWN(pars, mu=I_mean[idx], sig=sig_gwn, myseed=2020)
v, rec_spikes = run_LIF(pars, Iinj=I_GWN)
v_dc, rec_sp_dc = run_LIF(pars, Iinj=I_mean[idx])
spk_count[idx] = len(rec_spikes)
spk_count_dc[idx] = len(rec_sp_dc)
# Plot the F-I curve i.e. Output firing rate as a function of input mean.
plt.figure()
plt.plot(I_mean, spk_count, 'k',
label=r'$\sigma_{\mathrm{GWN}}=%.2f$' % sig_gwn)
plt.plot(I_mean, spk_count_dc, 'k--', alpha=0.5, lw=4, dashes=(2, 2),
label='DC input')
plt.ylabel('Spike count')
plt.xlabel('Average injected current (pA)')
plt.legend(loc='best')
plt.show()
```
interactive(children=(FloatSlider(value=3.0, description='sig_gwn', layout=Layout(width='450px'), max=6.0, ste…
```python
"""
Discussion: If we use a DC input, the F-I curve is deterministic, and we can
find its shape by solving the membrane equation of the neuron. If we have GWN,
as we increase the sigma, the F-I curve has a more linear shape, and the neuron
reaches its threshold using less average injected current.
"""
```
'\nDiscussion: If we use a DC input, the F-I curve is deterministic, and we can \nfind its shape by solving the membrane equation of the neuron. If we have GWN, \nas we increase the sigma, the F-I curve has a more linear shape, and the neuron \nreaches its threshold using less average injected current.\n'
### Exercise 3: Compute $CV_{ISI}$ values
As shown above, the F-I curve becomes smoother while increasing the amplitude of the fluctuation ($\sigma$). In addition, the fluctuation can also change the irregularity of the spikes. Let's investigate the effect of $\mu=250$ with $\sigma=0.5$ vs $\sigma=3$.
Fill in the code below to compute ISI, then plot the histogram of the ISI and compute the $CV_{ISI}$. Note that, you can use `np.diff` to calculate ISI.
```python
def isi_cv_LIF(spike_times):
"""
Calculates the inter-spike intervals (isi) and
the coefficient of variation (cv) for a given spike_train
Args:
spike_times : (n, ) vector with the spike times (ndarray)
Returns:
isi : (n-1,) vector with the inter-spike intervals (ms)
cv : coefficient of variation of isi (float)
"""
if len(spike_times) >= 2:
# Compute isi
isi = np.diff(spike_times)
# Compute cv
cv = isi.std()/isi.mean()
else:
isi = np.nan
cv = np.nan
return isi, cv
pars = default_pars(T=1000.)
mu_gwn = 250
sig_gwn1 = 0.5
sig_gwn2 = 3.0
I_GWN1 = my_GWN(pars, mu=mu_gwn, sig=sig_gwn1, myseed=2020)
_, sp1 = run_LIF(pars, Iinj=I_GWN1)
I_GWN2 = my_GWN(pars, mu=mu_gwn, sig=sig_gwn2, myseed=2020)
_, sp2 = run_LIF(pars, Iinj=I_GWN2)
# Uncomment to check your function
isi1, cv1 = isi_cv_LIF(sp1)
isi2, cv2 = isi_cv_LIF(sp2)
my_hists(isi1, isi2, cv1, cv2, sig_gwn1, sig_gwn2)
```
## Interactive Demo: Spike irregularity explorer for different `sig_gwn`
In the above illustration, we see that the CV of inter-spike-interval (ISI) distribution depends on $\sigma$ of GWN. What about the mean of GWN, should that also affect the CV$_{\rm ISI}$? If yes, how? Does the efficacy of $\sigma$ in increasing the CV$_{\rm ISI}$ depend on $\mu$?
In the following interactive demo, you will examine how different levels of fluctuation $\sigma$ affect the CVs for different average injected currents ($\mu$).
```python
#@title
#@markdown Make sure you execute this cell to enable the widget!
my_layout.width = '450px'
@widgets.interact(
sig_gwn=widgets.FloatSlider(0.0, min=0., max=10.,
step=0.5, layout=my_layout)
)
def diff_std_affect_fI(sig_gwn):
pars = default_pars(T=1000.)
I_mean = np.arange(100., 400., 20)
spk_count = np.zeros(len(I_mean))
cv_isi = np.empty(len(I_mean))
isi_mean = np.zeros(I_mean.size)
isi_var = np.zeros(I_mean.size)
for idx in range(len(I_mean)):
I_GWN = my_GWN(pars, mu=I_mean[idx], sig=sig_gwn)
v, rec_spikes = run_LIF(pars, Iinj=I_GWN)
spk_count[idx] = len(rec_spikes)
if len(rec_spikes) > 3:
isi = np.diff(rec_spikes)
cv_isi[idx] = np.std(isi) / np.mean(isi)
isi_mean[idx] = np.mean(isi)
isi_var[idx] = np.var(isi)
# Plot the F-I curve i.e. Output firing rate as a function of input mean.
plt.figure()
plt.plot(I_mean[spk_count > 5], cv_isi[spk_count > 5], 'bo', alpha=0.5)
plt.xlabel('Average injected current (pA)')
plt.ylabel(r'Spike irregularity ($\mathrm{CV}_\mathrm{ISI}$)')
plt.ylim(-0.1, 1.5)
plt.grid(True)
plt.show()
# plt.figure()
# plt.scatter(isi_mean, isi_var)
# plt.xlabel('isi_mean')
# plt.ylabel('isi_var')
```
interactive(children=(FloatSlider(value=0.0, description='sig_gwn', layout=Layout(width='450px'), max=10.0, st…
### Try to answer the following:
- Does the standard deviation of the injected current affect the F-I curve in any qualitative manner?
- Why does increasing the mean of GWN reduce the $CV_{ISI}$?
- If you plot spike count (or rate) vs. $CV_{ISI}$, should there be a relationship between the two? Try out yourself.
```python
"""
1. Yes, it does. With DC input the F-I curve has a strong non-linearity but when
a neuron is driven with GWN, as we increase the $\sigma$ the non-linearity is
smoothed out. Essentially, in this case noise is acting to suppress the
non-linearities and render a neuron as a linear system.
2. (here is a short answer) When we increase the mean of the GWN, at some point
effective input mean is above the spike threshold and then the neuron operates
in the so called mean-driven regime -- as the input is so high all the neuron
does is charge up to the spike threshold and reset. This essentially gives
almost regular spiking.
3. In an LIF, high firing rates are achieved for high GWN mean. Higher the mean,
higher the firing rate and lower the CV_ISI. So you will expect that as firing rate
increases, spike irregularity decreases. This is because of the spike threshold.
For a Poisson process there is no relationship between spike rate and spike
irregularity.
""";
```
---
# Section 4: Generation of Poisson type spike trains
*In the next tutorials, we will often use Poisson-type spike trains to explore properties of neurons and synapses. Therefore, it is good to know how to generate Poisson-type spike trains.*
Mathematically, a spike train is a point process. One of the simplest models of a sequence of presynaptic pulse inputs is the Poisson process. We know that, given temporal integration and refractoriness, neurons cannot behave as a Poisson process, and a Gamma process gives a better approximation (*find out what the difference between the two processes might be*).
Here, however, we will assume that the incoming spikes follow Poisson statistics. A question arises about how to simulate a Poisson process. A Poisson process can be generated in at least the following two ways:
- By definition, for a Poisson process with rate $\lambda$, the probability of finding one event in the time window with a sufficiently small length $\Delta t$ is $P(N = 1) = \lambda \Delta t$. Therefore, in each time window, we generate a uniformly distributed random variable $r \in [0,1]$ and generate a Poisson event when $r <\lambda \Delta t$. This method allows us to generate Poisson distributed spikes in an online manner.
- The interval $t_{k+1}-t_{k}$ between two Poisson events with rate $\lambda$ follows the exponential distribution, i.e., $P(t_{k+1}-t_{k}<t) = 1 - e^{-\lambda t}$. Therefore, we only need to generate a set of exponentially distributed variables $\{s_k\}$ to obtain the timing of Poisson events $t_{k+1}=t_{k}+s_{k}$. In this method, we need to generate all future spikes at once. A sketch of this interval-based approach is shown after the raster plot below.
Below, we use the first method in a function `Poisson_generator`, which takes arguments `(pars, rate, n, myseed)`.
```python
# @title
# @markdown Execute this cell to get a Poisson_generator function
def Poisson_generator(pars, rate, n, myseed=False):
"""
Generates poisson trains
Args:
pars : parameter dictionary
rate : Poisson rate [Hz]
n : number of Poisson trains
myseed : random seed. int or boolean
Returns:
pre_spike_train : spike train matrix, ith row represents whether
there is a spike in ith spike train over time
(1 if spike, 0 otherwise)
"""
# Retrieve simulation parameters
dt, range_t = pars['dt'], pars['range_t']
Lt = range_t.size
# set random seed
if myseed:
np.random.seed(seed=myseed)
else:
np.random.seed()
# generate uniformly distributed random variables
u_rand = np.random.rand(n, Lt)
# generate Poisson train
poisson_train = 1. * (u_rand < rate * (dt / 1000.))
return poisson_train
```
```python
# we can use Poisson_generator to mimic presynaptic spike trains
pars = default_pars()
pre_spike_train = Poisson_generator(pars, rate=10, n=100, myseed=2020)
my_raster_Poisson(pars['range_t'], pre_spike_train, 100)
```
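For comparison, here is a hedged sketch (added; not part of the original tutorial code) of the second, interval-based method mentioned above, which draws exponentially distributed inter-spike intervals and accumulates them into spike times:
```python
# Sketch of the interval-based Poisson generator (illustration only).
def Poisson_generator_isi(pars, rate, n, myseed=False):
  """Same interface as Poisson_generator, but built from exponential ISIs."""
  dt, range_t = pars['dt'], pars['range_t']
  if myseed:
    np.random.seed(seed=myseed)
  spike_train = np.zeros((n, range_t.size))
  T = range_t[-1]
  for i in range(n):
    t = np.random.exponential(1000. / rate)     # rate in Hz, time in ms
    while t < T:
      spike_train[i, int(t / dt)] = 1.          # rare same-bin collisions just overwrite
      t += np.random.exponential(1000. / rate)
  return spike_train

isi_based_train = Poisson_generator_isi(pars, rate=10, n=100, myseed=2020)
my_raster_Poisson(pars['range_t'], isi_based_train, 100)
```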
How do we make sure that the above spike trains are following Poisson statistics?
A Poisson process must have the following properties:
- The ratio of the variance to the mean of the spike count (the Fano factor) is 1
- Inter-spike-intervals are exponentially distributed
- Spike times are irregular i.e. $CV_{\rm ISI} = 1$
- Adjacent spike intervals are independent of each other.
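The first three properties can be checked empirically on the spike trains generated above (`pre_spike_train` from the previous cell); this quick check is an addition to the tutorial:
```python
# Added empirical checks of the Poisson properties (rough estimates, since we
# only have 100 short trains here).
spike_counts = pre_spike_train.sum(axis=1)   # spike count of each train
print('Fano factor (variance/mean of the count): %.3f'
      % (spike_counts.var() / spike_counts.mean()))       # ~1 for a Poisson process

all_isi = []
for train in pre_spike_train:
  t_sp = pars['range_t'][train > 0.5]        # spike times of this train
  if t_sp.size > 1:
    all_isi.append(np.diff(t_sp))
all_isi = np.concatenate(all_isi)
print('CV_ISI: %.3f' % (all_isi.std() / all_isi.mean()))  # ~1 for a Poisson process

plt.hist(all_isi, bins=50)                   # ISI histogram: roughly exponential
plt.xlabel('ISI (ms)')
plt.ylabel('count')
plt.show()
```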
---
# Summary
Congratulations! You've just built a leaky integrate-and-fire (LIF) neuron model from scratch, and studied its dynamics in response to various types of inputs, having:
- simulated the LIF neuron model
- driven the LIF neuron with external inputs, such as direct current, Gaussian white noise, and Poisson spike trains, etc.
- studied how different inputs affect the LIF neuron's output (firing rate and spike time irregularity),
with a special focus on the low-rate and irregular firing regime, to mimic real cortical neurons. The next tutorial will look at how spiking statistics may be influenced by a neuron's input statistics.
However, if you have extra time, follow the section below to explore a different type of noise input.
---
# Bonus 1: Ornstein-Uhlenbeck Process
When a neuron receives spiking input, the synaptic current is shot noise, which is a kind of colored noise whose spectrum is determined by the synaptic kernel time constant. That is, a neuron is driven by **colored noise** and not GWN.
We can model colored noise using the Ornstein-Uhlenbeck process, i.e., low-pass filtered white noise.
## Ornstein-Uhlenbeck (OU) current
We next study the case in which the input current is temporally correlated and is modeled as an Ornstein-Uhlenbeck process $\eta(t)$, i.e., low-pass filtered GWN with a time constant $\tau_{\eta}$:
$$\tau_\eta \frac{d}{dt}\eta(t) = \mu-\eta(t) + \sigma_\eta\sqrt{2\tau_\eta}\xi(t).$$
**Hint:** An OU process as defined above has
$$E[\eta(t)]=\mu$$
and autocovariance
$$E[(\eta(t)-\mu)(\eta(t+\tau)-\mu)]=\sigma_\eta^2e^{-|\tau|/\tau_\eta},$$
which can be used to check your code.
```python
# @title `my_OU(pars, mu, sig, myseed)`
# @markdown Execute this cell to enable the OU process
def my_OU(pars, mu, sig, myseed=False):
"""
Function that produces Ornstein-Uhlenbeck input
Args:
pars : parameter dictionary
mu : noise baseline (mean)
sig : noise amplitude (standard deviation)
myseed : random seed. int or boolean
Returns:
I_ou : Ornstein-Uhlenbeck input current
"""
# Retrieve simulation parameters
dt, range_t = pars['dt'], pars['range_t']
Lt = range_t.size
tau_ou = pars['tau_ou'] # [ms]
# set random seed
if myseed:
np.random.seed(seed=myseed)
else:
np.random.seed()
# Initialize
noise = np.random.randn(Lt)
I_ou = np.zeros(Lt)
I_ou[0] = noise[0] * sig
# generate OU
for it in range(Lt-1):
I_ou[it+1] = I_ou[it] + (dt / tau_ou) * (mu - I_ou[it]) + np.sqrt(2 * dt / tau_ou) * sig * noise[it + 1]
return I_ou
```
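The hint above gives the mean and autocovariance of the process; one possible empirical check (added here) on a long realization is:
```python
# Added sanity check of the OU statistics: mean and autocovariance at a lag of
# one time constant, estimated from a 10-second realization.
pars_ou = default_pars(T=10000.)
pars_ou['tau_ou'] = 10.                       # [ms]
mu_check, sig_check = 200., 10.
I_check = my_OU(pars_ou, mu_check, sig_check, myseed=2020)

print('empirical mean: %.1f (expected ~ %.1f)' % (I_check.mean(), mu_check))
lag = int(pars_ou['tau_ou'] / pars_ou['dt'])  # lag of one tau_ou, in time bins
x = I_check - I_check.mean()
autocov = np.mean(x[:-lag] * x[lag:])
print('autocovariance at lag tau_ou: %.1f (expected ~ sig^2 / e = %.1f)'
      % (autocov, sig_check**2 * np.exp(-1)))
```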
### Interactive Demo: LIF Explorer with OU input
In the following, we will check how a neuron responds to a noisy current that follows the statistics of an OU process.
```python
# @title
# @markdown Remember to enable the widget by running the cell!
my_layout.width = '450px'
@widgets.interact(
tau_ou=widgets.FloatSlider(10.0, min=5., max=20.,
step=2.5, layout=my_layout),
sig_ou=widgets.FloatSlider(10.0, min=5., max=40.,
step=2.5, layout=my_layout),
mu_ou=widgets.FloatSlider(190.0, min=180., max=220.,
step=2.5, layout=my_layout)
)
def LIF_with_OU(tau_ou=10., sig_ou=40., mu_ou=200.):
pars = default_pars(T=1000.)
pars['tau_ou'] = tau_ou # [ms]
I_ou = my_OU(pars, mu_ou, sig_ou)
v, sp = run_LIF(pars, Iinj=I_ou)
plt.figure(figsize=(12, 4))
plt.subplot(121)
plt.plot(pars['range_t'], I_ou, 'b', lw=1.0)
plt.xlabel('Time (ms)')
plt.ylabel(r'$I_{\mathrm{OU}}$ (pA)')
plt.subplot(122)
plot_volt_trace(pars, v, sp)
plt.tight_layout()
plt.show()
```
interactive(children=(FloatSlider(value=10.0, description='tau_ou', layout=Layout(width='450px'), max=20.0, mi…
## Think!
- How does the OU type input change neuron responsiveness?
- What do you think will happen to the spike pattern and rate if you increased or decreased the time constant of the OU process?
```python
"""
Discussion:
In a limiting case, when the time constant of the OU process is very long and the
input current is almost flat, we expect the firing rate to decrease and the neuron
will spike more regularly. So as the OU process time constant increases, we expect the firing rate and
CV_ISI to decrease, if all other parameters are kept constant. We can also relate
the OU process time constant to the membrane time constant as the neuron membrane
does the same operation. This way we can link to the very first interactive demo.
""";
```
---
# Bonus 2: Generalized Integrate-and-Fire models
LIF model is not the only abstraction of real neurons. If you want to learn about more realistic types of neuronal models, watch the Bonus Video!
```python
#@title Video 3 (Bonus): Extensions to Integrate-and-Fire models
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='G0b6wLhuQxE', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
```
Video available at https://youtube.com/watch?v=G0b6wLhuQxE
| 79e4a3508ff176d70e3142163a32dc185c66e3fc | 840,977 | ipynb | Jupyter Notebook | tutorials/W3D1_RealNeurons/W3D1_Tutorial1.ipynb | lingqiz/course-content | 8a2f355ee78c9439440e192f5a0a0508e3cc8f66 | ["CC-BY-4.0"] | null | null | null | tutorials/W3D1_RealNeurons/W3D1_Tutorial1.ipynb | lingqiz/course-content | 8a2f355ee78c9439440e192f5a0a0508e3cc8f66 | ["CC-BY-4.0"] | null | null | null | tutorials/W3D1_RealNeurons/W3D1_Tutorial1.ipynb | lingqiz/course-content | 8a2f355ee78c9439440e192f5a0a0508e3cc8f66 | ["CC-BY-4.0"] | null | null | null | 840,977 | 840,977 | 0.940857 | true | 9,530 | Qwen/Qwen-72B | 1. YES 2. YES | 0.749087 | 0.749087 | 0.561132 | __label__eng_Latn | 0.976083 | 0.142027 |
<h1 align=center> Computer Vision: Assignment 2 </h1>
| [Amr M. Kayid](https://github.com/AmrMKayid) | [Abdullah ELkady](https://github.com/AbdullahKady) |
| :---: | :---: |
| **37-15594** | **37-16401** |
| **T10** | **T10** |
```python
%reload_ext autoreload
%autoreload 2
%matplotlib inline
```
```python
import math
import sys
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import pylab
from matplotlib.widgets import Slider
from PIL import Image
```
```python
cameraman_img = Image.open("Cameraman.tif")
plt.imshow(cameraman_img, cmap="gray")
```
# Problem 1
Implement a function to compute the **Laplacian of Gaussian (LoG) kernel** given the value of sigma (𝜎).
Deliverables:
- Your code.
- The output edge image for 𝜎 = 2, 𝜎 = 3 and 𝜎 = 4. Name the edge images “LoG_2.jpg”, “LoG_3.jpg” and “LoG_4.jpg”. The threshold for all cases should be set to 0.1.
First, you are asked to compute the size of the kernel as per the following equation:
\begin{equation}
s=2 \times\lceil 3 \times \sigma\rceil+ 1
\end{equation}
```python
def compute_kernel_size(sigma: float) -> int:
return int((2 * np.ceil(3 * sigma)) + 1)
```
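For example, $\sigma = 2$ gives $s = 2\lceil 6 \rceil + 1 = 13$, i.e. a $13 \times 13$ kernel, while $\sigma = 4$ gives $s = 25$.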
Given the size of the kernel (𝑠 × 𝑠), implement a function to compute the values inside as per the following function ([0,0] is the middle cell):
\begin{equation}
\operatorname{LoG}(x, y)=\frac{-1}{\pi \sigma^{4}}\left(1-\frac{x^{2}+y^{2}}{2 \sigma^{2}}\right) e^{-\frac{x^{2}+y^{2}}{2 \sigma^{2}}}
\end{equation}
```python
def log_(x: float, y: float, sigma: float) -> float:
n1 = (-1. / (math.pi * sigma**4))
common = (x**2 + y**2) / (2 * sigma**2)
n2 = (1. - common)
n3 = math.exp(-common)
return n1 * n2 * n3
```
```python
range_ = lambda start, end: range(start, end+1)
```
```python
def compute_log_mask(sigma: float) -> np.ndarray:
mask_width = compute_kernel_size(sigma)
log_mask = []
w_range = int(math.floor(mask_width / 2.))
print('Going from {} to range {}'.format(-w_range, w_range))
for x in range_(-w_range, w_range):
for y in range_(-w_range, w_range):
log_mask.append(log_(x, y, sigma))
log_mask = np.array(log_mask)
log_mask = log_mask.reshape(mask_width, mask_width)
return log_mask
```
```python
def convolve(image, mask):
image = np.array(image)
width = image.shape[1]
height = image.shape[0]
w_range = int(math.floor(mask.shape[0] / 2.))
new_image = np.zeros((height, width))
for i in range(w_range, width-w_range):
for j in range(w_range, height-w_range):
for k in range_(-w_range, w_range):
for h in range_(-w_range,w_range):
new_image[j, i] += mask[w_range + h,w_range+k] * image[j + h, i + k]
return new_image
```
```python
def prewitt(image):
vertical = np.array([[-1, 0, 1],
[-1, 0, 1],
[-1 ,0, 1]])
horizontal = np.array([[-1, -1, -1],
[0, 0, 0],
[1 ,1, 1]])
vertical_image = convolve(image, vertical)
horizontal_image = convolve(image, horizontal)
gradient_magnitude = np.sqrt(vertical_image**2 + horizontal_image**2)
return gradient_magnitude
```
```python
def normalize(image, min_val, max_val):
min_ = np.min(image)
max_ = np.max(image)
normalized = ((image - min_) / (max_ - min_)) * (max_val - min_val) + min_val
return normalized
```
```python
def run_log_edge_detection(image, sigma, threshold):
image = np.array(image)
print("creating mask")
log_mask = compute_log_mask(sigma)
print("smoothing the image by convolving with the log_ mask")
log_image = convolve(image, log_mask)
output_image = np.zeros_like(log_image)
prewitt_image = normalize(prewitt(image), 0, 1)
for i in range(1, log_image.shape[0] - 1):
for j in range(1, log_image.shape[1] - 1):
value = log_image[i][j]
kernel = log_image[i-1:i+2, j-1:j+2]
k_min = kernel.min()
k_max = kernel.max()
is_zero_crossing = False
if (value > 0 and k_min < 0) or (value < 0 and k_max > 0) or (value == 0):
is_zero_crossing = True
if (prewitt_image[i][j] > threshold) and is_zero_crossing:
output_image[i][j] = 1
plt.imshow(prewitt_image, cmap ='gray')
plt.imshow(output_image, cmap='gray')
```
```python
cameraman_img = Image.open("Cameraman.tif")
plt.imshow(cameraman_img, cmap="gray")
```
```python
run_log_edge_detection(cameraman_img, 2, 0.1)
plt.savefig('LoG_2.jpg')
```
```python
run_log_edge_detection(cameraman_img, 3, 0.1)
plt.savefig('LoG_3.jpg')
```
```python
run_log_edge_detection(cameraman_img, 4, 0.1)
plt.savefig('LoG_4.jpg')
```
# Problem 2
Implement a function to sharpen a gray-scale image as per the discussion provided in the tutorial. As a possible kernel for edge detection, consider the kernel provided below. Apply your function to the image “cameraman.tif”.
\begin{equation}
M=\left[\begin{array}{ccc}{-1} & {-1} & {-1} \\ {-1} & {8} & {-1} \\ {-1} & {-1} & {-1}\end{array}\right]
\end{equation}
Deliverables:
- Your code.
- The sharpened output image. Name the image “Sharpened.jpg”
```python
original_image = np.array(cameraman_img)
sharpening_layer = np.zeros_like(original_image)
sharpening_layer = convolve(original_image, np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]]))
output_sharpened = 0.1 * sharpening_layer + original_image
# Apply threshold so that the ranges stay within 0-255
output_sharpened[output_sharpened > 255] = 255
output_sharpened[output_sharpened < 0] = 0
plt.imshow(output_sharpened, cmap=cm.gray)
plt.savefig('Sharpened.jpg')
```
# AlexNet model for the MNIST Dataset
AlexNet is a convolutional neural network that has had a large impact on the field of machine learning, specifically on the application of deep learning to machine vision.
AlexNet architecture: the network has an architecture very similar to LeNet by Yann LeCun et al., but is deeper, with more filters per layer and with stacked convolutional layers. It consists of 11×11, 5×5 and 3×3 convolutions, max pooling, dropout, data augmentation, ReLU activations, and SGD with momentum, with a ReLU activation attached after every convolutional and fully-connected layer.
```python
pip install -q tensorflow==2.4.1
```
```python
pip install -q tensorflow-quantum
```
```python
import tensorflow as tf
import tensorflow_quantum as tfq
from tensorflow.keras import datasets, layers, models, losses
import cirq
import sympy
import numpy as np
import seaborn as sns
import collections
%matplotlib inline
import matplotlib.pyplot as plt
from cirq.contrib.svg import SVGCircuit
```
```python
(x_train,y_train),(x_test,y_test) = datasets.mnist.load_data()
x_train = tf.pad(x_train, [[0, 0], [2,2], [2,2]])/255
x_test = tf.pad(x_test, [[0, 0], [2,2], [2,2]])/255
x_train = tf.expand_dims(x_train, axis=3, name=None)
x_test = tf.expand_dims(x_test, axis=3, name=None)
x_train = tf.repeat(x_train, 3, axis=3)
x_test = tf.repeat(x_test, 3, axis=3)
x_val = x_train[-2000:,:,:,:]
y_val = y_train[-2000:]
x_train = x_train[:-2000,:,:,:]
y_train = y_train[:-2000]
```
```python
model = models.Sequential()
model.add(layers.experimental.preprocessing.Resizing(224, 224, interpolation="bilinear", input_shape=x_train.shape[1:]))
model.add(layers.Conv2D(96, 11, strides=4, padding='same'))
model.add(layers.Lambda(tf.nn.local_response_normalization))
model.add(layers.Activation('relu'))
model.add(layers.MaxPooling2D(3, strides=2))
model.add(layers.Conv2D(256, 5, strides=4, padding='same'))
model.add(layers.Lambda(tf.nn.local_response_normalization))
model.add(layers.Activation('relu'))
model.add(layers.MaxPooling2D(3, strides=2))
model.add(layers.Conv2D(384, 3, strides=4, padding='same'))
model.add(layers.Activation('relu'))
model.add(layers.Conv2D(384, 3, strides=4, padding='same'))
model.add(layers.Activation('relu'))
model.add(layers.Conv2D(256, 3, strides=4, padding='same'))
model.add(layers.Activation('relu'))
model.add(layers.Flatten())
model.add(layers.Dense(4096, activation='relu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(4096, activation='relu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(10, activation='softmax'))
model.summary()
```
```python
model.compile(optimizer='adam', loss=losses.sparse_categorical_crossentropy, metrics=['accuracy'])
history = model.fit(x_train, y_train, batch_size=64, epochs=10, validation_data= (x_val, y_val))
```
```python
fig, axs = plt.subplots(2, 1, figsize=(15,15))
axs[0].plot(history.history['loss'])
axs[0].plot(history.history['val_loss'])
axs[0].title.set_text('Training Loss vs Validation Loss')
axs[0].set_xlabel('Epochs')
axs[0].set_ylabel('Loss')
axs[0].legend(['Train', 'Val'])
axs[1].plot(history.history['accuracy'])
axs[1].plot(history.history['val_accuracy'])
axs[1].title.set_text('Training Accuracy vs Validation Accuracy')
axs[1].set_xlabel('Epochs')
axs[1].set_ylabel('Accuracy')
axs[1].legend(['Train', 'Val'])
```
```python
model.evaluate(x_test, y_test)
```
Result:
Test accuracy: 98.63%
Test loss: ≈ 0.077
2. YES | 0.833325 | 0.644225 | 0.536849 | __label__eng_Latn | 0.529662 | 0.085609 |
<center>
<h1><b>Lab 1</b></h1>
<h1>PHYS 580 - Computational Physics</h1>
<h2>Professor Molnar</h2>
</br>
<h3><b>Ethan Knox</b></h3>
<h3><b>September 4, 2020</b></h3>
</center>
## Imports
```python
import numpy as np
from matplotlib import pyplot as plt
```
## Differential Equation
$$\frac{dN}{dt}=-\frac{N}{\tau}$$
```python
def f(y, x, tau = 1):
return - y / tau
```
## Numerical Methods
Denoting: $\Delta x = x_{i+1}-x_i$
### Euler Method
$$y_{i+1} = y_i + f\left(y_i, x_i\right)\Delta x$$
```python
def euler(f, y0, x, *args):
y = y0 * np.ones_like(x)
for i in range(len(x) - 1):
dx = x[i + 1] - x[i]
k1 = f(y[i], x[i], *args) * dx
y[i + 1] = y[i] + k1
return y
```
### RK2 Method
$$
\begin{align}k_1 &= f\left(y_i, x_i\right)\Delta x\\
k_2 &= f\left(y_i + k_1, x_i + \Delta x\right)\Delta x\\
\end{align}
$$
$$y_{i+1} = y_i + \frac{1}{2}\left(k_1 + k_2\right)$$
```python
def rk2(f, y0, x, *args):
y = y0 * np.ones_like(x)
for i in range(len(x) - 1):
dx = x[i + 1] - x[i]
k1 = f(y[i], x[i], *args) * dx
k2 = f(y[i] + k1, x[i] + dx, *args) * dx
y[i + 1] = y[i] + 0.5 * (k1 + k2)
return y
```
### RK4 Method
$$
\begin{align}k_1 &= \Delta x f\left(y_i, x_i\right)\\
k_2 &= \Delta x f\left(y_i + \frac{k_1}{2}, x_i + \frac{\Delta x}{2}\right)\\
k_3 &= \Delta x f\left(y_i + \frac{k_2}{2}, x_i + \frac{\Delta x}{2}\right)\\
k_4 &= \Delta x f\left(y_i + k_3, x_i + \Delta x\right)\\
\end{align}
$$
$$y_{i + 1} = y_i + \frac{1}{6}\left(k_1 + 2\left(k_2 + k_3\right) + k_4\right)$$
```python
def rk4(f, y0, x, *args):
y = y0 * np.ones_like(x)
for i in range(len(x) - 1):
dx = x[i + 1] - x[i]
k1 = f(y[i], x[i], *args) * dx
k2 = f(y[i] + 0.5 * k1, x[i] + 0.5 * dx, *args) * dx
k3 = f(y[i] + 0.5 * k2, x[i] + 0.5 * dx, *args) * dx
k4 = f(y[i] + k3, x[i] + dx, *args) * dx
y[i + 1] = y[i] + (k1 + 2 * (k2 + k3) + k4) / 6
return y
```
## Analytical Solution
$$N\left(t\right)=N_0e^{-t/\tau}$$
```python
def nexact(x, tau = 1, N0 = 1000):
return N0 * np.exp(-x / tau)
```
## Error
### Global
```python
def global_error(calculated, exact):
return np.cumsum(calculated - exact)
```
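In formula form, this helper returns the cumulative signed error up to each step, $E^{\mathrm{glob}}_k = \sum_{i=0}^{k} \left(N_i - N_{\mathrm{exact}}(t_i)\right)$.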
### Local
```python
def local_error(y_exact, y_approx, x):
error = np.zeros_like(x)
for i in np.arange(1, len(error)):
error[i-1] = y_exact(x[i]) - y_exact(x[i-1]) - (y_approx[i] - y_approx[i-1])
return error
```
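The local error, by contrast, compares the exact change over a single step with the numerically computed change: $E^{\mathrm{loc}}_i = \left[N_{\mathrm{exact}}(t_{i+1}) - N_{\mathrm{exact}}(t_i)\right] - \left[N_{i+1} - N_i\right]$.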
## Code
#### Parameters
```python
N0 = 1000 # Initial Condition
tau = 1 # Rate Parameter
t_i = 0 # Initial Time
t_f = 5 * tau # Final Time
ratios = 1.0e-1*np.asarray([0.05, 0.2, 0.5, 1.0, 1.5])
labels = [rf'$\Delta t/\tau = {ratio:0.4f}$' for ratio in ratios]
```
#### Accumulate Data
```python
t = [np.arange(t_i, t_f, ratio * tau) for ratio in ratios]
N_euler = [euler(f, N0, t_, tau) for t_ in t]
N_rk2 = [rk2(f, N0, t_, tau) for t_ in t]
N_rk4 = [rk4(f, N0, t_, tau) for t_ in t]
```
#### Plotting
```python
fig1, (ax1, ax2, ax3) = plt.subplots(3, 1, figsize=(16,16))
for i, label in enumerate(labels):
ax1.plot(t[i] / tau, N_euler[i], label=label)
ax2.plot(t[i] / tau, N_rk2[i], label=label)
ax3.plot(t[i] / tau, N_rk4[i], label=label)
for ax in (ax1, ax2, ax3):
ax.plot(t[0] / tau, nexact(t[0], tau, N0), c='k',ls='--', label="Exact")
ax.set_xlabel(r'$t/\tau$')
ax.legend()
ax.grid()
ax1.set_ylabel(r'Euler $N(t)$')
ax2.set_ylabel(r'RK2 $N(t)$')
ax3.set_ylabel(r'RK4 $N(t)$')
plt.tight_layout()
plt.savefig("Lab1_results.png")
```
```python
plt.cla()
plt.clf()
```
<Figure size 432x288 with 0 Axes>
```python
fig2, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(16,16))
for i, label in enumerate(labels):
ax1.plot(t[i] / tau, np.absolute(local_error(nexact, N_euler[i], t[i]/tau)), label=label)
ax2.plot(t[i] / tau, np.absolute(local_error(nexact, N_rk2[i], t[i]/tau)), label=label)
ax3.plot(t[i] / tau, np.absolute(local_error(nexact, N_rk4[i], t[i]/tau)), label=label)
for ax in (ax1, ax2, ax3):
ax.set_xlabel(r'$t/\tau$')
ax.legend()
ax.grid()
ax1.set_ylabel("Euler Local Error")
ax2.set_ylabel("RK2 Local Error")
ax3.set_ylabel("RK4 Local Error")
plt.tight_layout()
plt.savefig("Lab1_local_error.png")
```
```python
plt.cla()
plt.clf()
```
```python
fig3, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(16,16))
for i, label in enumerate(labels):
ax1.plot(t[i] / tau, np.absolute(global_error(N_euler[i], nexact(t[i], tau, N0))), label=label)
ax2.plot(t[i] / tau, np.absolute(global_error(N_rk2[i], nexact(t[i], tau, N0))), label=label)
ax3.plot(t[i] / tau, np.absolute(global_error(N_rk4[i], nexact(t[i], tau, N0))), label=label)
for ax in (ax1, ax2, ax3):
ax.set_xlabel(r'$t/\tau$')
ax.legend()
ax.grid()
ax1.set_ylabel("Euler Global Error")
ax2.set_ylabel("RK2 Global Error")
ax3.set_ylabel("RK4 Global Error")
plt.tight_layout()
plt.savefig("Lab1_global_error.png")
```
## Conclusion
It's very apparent that in both local and global errors, both Runge-Kutta methods outperformed Euler's method for a given timestep. Further, RK4 outperformed RK2 in the same manner.
# Linear Regression
When studying machine learning problems it is convenient to start from real data and real business problems. As a running example for discussing linear regression we will use a heavily reduced dataset from the kaggle competition [House Sales in King County, USA](https://www.kaggle.com/harlfoxem/housesalesprediction).
Our task is to predict the price of a house from its various parameters. For simplicity, in this dataset we deliberately keep only one of the parameters: the living area in square feet.
## Loading the data
We will use the pandas library to load the data from a CSV file.
```python
import pandas as pd
housesales = pd.read_csv('../datasets/kc_house_data_reduced.csv')
housesales['price'] = housesales['price']/100000
housesales['sqft_living'] = housesales['sqft_living']/1000
```
## Exploring the data
The next step is to explore the data we will be working with. The most powerful tool for this is visualization. Let us see how the price of a house depends on its living area.
```python
%pylab inline
import seaborn as sns # для красивого стиля графиков
#from matplotlib.ticker import FuncFormatter # для форматирования разметки по оси y
ax = housesales.plot.scatter(x='sqft_living', y='price')
ax.set_xlabel('Жилая площадь, 1k футы$^2$')
ax.set_ylabel('Цена, 100k $')
```
We can see that there are not that many points and that there is certainly no simple dependence between living area and price. At the same time there is, quite naturally, a characteristic pattern: the larger the living area, the higher the price. As a stronger assumption we can propose the following: if the area doubles, the price also roughly doubles. This means that we can try to describe the observed *dependence* as *directly proportional* and, more importantly, as *linear*.
## Problem statement
A crucial element of *any* data analysis is the formulation of the formal problem we are going to solve. In this example it suggests itself naturally: *given the area, estimate the price of the house*.
For example, for an area of 2000 square feet the price could be roughly 500,000$.
```python
ax = housesales.plot.scatter(x='sqft_living', y='price')
ax.set_xlabel('Жилая площадь, 1k футы$^2$')
ax.set_ylabel('Цена, 100k $')
ax.axvline(x=2, ymax=0.45, color='green', linestyle='--')
ax.axhline(y=5, xmax=0.5, color='red', linestyle='--')
scatter(2, 5, color='red', marker='x', s=40, linewidth=3)
```
## Features and the target variable
Let us present our data in tabular form.
```python
housesales.head(10)
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>sqft_living</th>
<th>price</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>1.180</td>
<td>2.2190</td>
</tr>
<tr>
<th>1</th>
<td>2.570</td>
<td>5.3800</td>
</tr>
<tr>
<th>2</th>
<td>0.770</td>
<td>1.8000</td>
</tr>
<tr>
<th>3</th>
<td>1.960</td>
<td>6.0400</td>
</tr>
<tr>
<th>4</th>
<td>1.680</td>
<td>5.1000</td>
</tr>
<tr>
<th>5</th>
<td>1.715</td>
<td>2.5750</td>
</tr>
<tr>
<th>6</th>
<td>1.060</td>
<td>2.9185</td>
</tr>
<tr>
<th>7</th>
<td>1.780</td>
<td>2.2950</td>
</tr>
<tr>
<th>8</th>
<td>1.890</td>
<td>3.2300</td>
</tr>
<tr>
<th>9</th>
<td>1.160</td>
<td>4.6800</td>
</tr>
</tbody>
</table>
</div>
Each row consists of two numbers: the area of the house and its price. Let us denote the number of rows by $n$.
```python
n = len(housesales)
n
```
30
We will call features the parameters that are given to us in the problem, and target variables the ones we need to determine. In this case we have one feature, the area of the house, and one target variable, the price. Each row of the table will be called an object.
Let $x^{(i)}$ denote the value of the feature for the $i$-th object, and $y^{(i)}$ the value of the target variable for the $i$-th object. The index $i$ runs from $1$ to $n$.
The pair $(x^{(i)}, y^{(i)})$ will be called an element of the training set, and the whole collection of such pairs the training set, denoted $(X, Y)$.
The next important element is the function $h_\theta(x)$, called the hypothesis or the model. In this problem, if we feed the area of a house into this function, the output should be an estimate of its price.
Roughly speaking, all machine learning methods are aimed at answering one question: how do we find the function $h$?
The essence of any machine learning problem boils down to the diagram below.
**TODO**: insert the diagram
## Linear regression
In the linear regression method the hypothesis is represented as
$$
h_\theta(x) = \theta_0 + \theta_1 x.
$$
Here $\theta_0$ and $\theta_1$ are unknown real-valued parameters that we have to determine (learn).
Let us look at a few examples with different values of the parameters $\theta_i$. For simplicity we set $\theta_0 = 0$.
```python
ax = housesales.plot.scatter(x='sqft_living', y='price')
ax.set_xlabel('Жилая площадь, 1k футы$^2$')
ax.set_ylabel('Цена, 100k $')
xx = np.linspace(1, 3, 2)
plot(xx, 1 * xx, label='$\\theta_1 = 1$')
plot(xx, 2 * xx, label='$\\theta_1 = 2$')
plot(xx, 3 * xx, label='$\\theta_1 = 3$')
ax.legend(loc=4)
```
## Loss function
To choose the parameters $\theta_i$ we need a numerical criterion $L(\theta_0, \theta_1)$. We would like a criterion such that the smaller its value, the better the hypothesis parameters we have chosen, and such that it never takes negative values. This means that if the criterion equals zero, we consider the chosen parameter values to be the best possible.
In linear regression the proposed criterion is the mean squared error:
$$
L(\theta_0, \theta_1) = \frac{1}{2n} \sum\limits_{i=1}^{n}\left(h_\theta(x^{(i)}) - y^{(i)}\right)^2.
$$
$L$ is also called the loss function.
## Intuition behind the loss function
To shed light on why exactly this form of $L$ is chosen, consider a simplified regression example. The training set is shown in the figure, and the hypothesis $h$ has the simplified form
$$
h_\theta(x) = \theta_1 x
$$
```python
# Training set
x = np.array([1, 2, 3])
y = np.array([1, 2, 3])
# We will plot h(x) and L(theta)
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5))
# Mark the training set on the left plot with green crosses
ax1.scatter(x, y, s=100, linewidth=3, marker='x', color='green')
# And plot the hypothesis for three different values of theta
theta_1 = 0.75
theta_2 = 1
theta_3 = 1.3
xx = np.linspace(0.5, 3.5, 10)
ax1.plot(xx, theta_1*xx, color='red', label='$\\theta_1={}$'.format(theta_1))
ax1.plot(xx, theta_2*xx, color='blue', label='$\\theta_1={}$'.format(theta_2))
ax1.plot(xx, theta_3*xx, color='green', label='$\\theta_1={}$'.format(theta_3))
ax1.legend(loc=4)
ax1.set_xlim((0, 3.5))
ax1.set_ylim((0, 3.5))
ax1.set_xlabel('$x$')
ax1.set_ylabel('$y$')
# Define the function L that computes the loss
def L(x, y, theta_1):
n = len(x)
return 1/(2*n)*np.sum((theta_1*x - y)**2)
# And plot the loss as a function of theta_1
ax2.scatter(theta_1, L(x, y, theta_1), color='red', s=100)
ax2.scatter(theta_2, L(x, y, theta_2), color='blue', s=100)
ax2.scatter(theta_3, L(x, y, theta_3), color='green', s=100)
theta = np.linspace(0.6, 1.4, 20)
L_theta = [L(x, y, theta_1) for theta_1 in theta]
_ = ax2.plot(theta, L_theta)
ax2.set_xlabel('$\\theta$')
ax2.set_ylabel('$L(\\theta)$')
```
As the right-hand plot shows, because the function $L$ is quadratic it has exactly one minimum, which is automatically the global one. In this example the minimum is attained at $\theta_1 = 1$.
## Gradient descent
Being confident that the function $L$ has exactly one minimum, we can formulate the search for the best hypothesis as an optimization problem: minimize the loss function $L$ over the hypothesis parameters $\theta_0, \theta_1$.
\begin{equation}
\theta_0^*, \theta_1^* = \underset{\theta_0, \theta_1 \in \mathbb{R}}{\mathrm{argmin}} L(\theta_0, \theta_1).
\end{equation}
In this formulation we have to minimize a function of two variables at once. Let us build the surface $L(\theta_0, \theta_1)$ from our training set.
```python
def L(x, y, theta_0, theta_1):
n = len(x)
h = theta_0 + theta_1*x
return 1/(2*n)*np.sum((h - y)**2)
X = housesales['sqft_living']
y = housesales['price']
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
theta_0 = np.linspace(-30, 30, 100)
theta_1 = np.linspace(-10, 10, 100)
L_theta = np.array([[L(X, y, theta_0i, theta_1j) for theta_0i in theta_0] for theta_1j in theta_1])
theta_0v, theta_1v = np.meshgrid(theta_0, theta_1)
ax.plot_surface(theta_0v, theta_1v, L_theta)
plt.xlabel('$\\theta_0$')
plt.ylabel('$\\theta_1$')
plt.title('Поверхность функции потерь $L(\\theta_0, \\theta_1)$')
```
### Level curves
For visual analysis a surface plot is not always a convenient representation of the loss function. For a function of two variables it is convenient to look at its level curves: curves along which the function takes the same value.
```python
levels = [2, 5, 10, 50, 100]
CS = plt.contour(theta_0v, theta_1v, L_theta, levels, colors='k')
plt.clabel(CS, inline=1, fontsize=10, colors='k')
plt.xlabel('$\\theta_0$')
plt.ylabel('$\\theta_1$')
plt.title('Линии уровня функции потерь $L(\\theta_0, \\theta_1)$')
```
The plot above clearly shows that the function $L$ has only one extremum, which corresponds to the global minimum. The important question we have to address is how to find this minimum point, i.e. how to solve the optimization problem or, in machine learning terms, how to train the linear regression model.
As the main solution method we will use gradient descent. Recall that the gradient of a function at a point is the vector pointing in the direction of the fastest growth of the function. The components of the gradient vector are the partial derivatives:
$$
grad~ L = \left(\frac{\partial L}{\partial \theta_0}, \frac{\partial L}{\partial \theta_1}\right).
$$
To explain how gradient descent finds the minimum of a function, let us return to the one-dimensional case: fix $\theta_0 = 0$ and consider the dependence of the loss $L$ on $\theta_1$ only. The plot of this dependence for the house price problem is shown below.
The essence of gradient descent is the following. At each step we update the estimate of the minimizer $\theta_1^{(k)}$, starting from some initial value $\theta_1^{(0)}$, by the rule
$$
\theta_1^{(k + 1)} = \theta_1^{(k)} - \alpha \frac{\partial L}{\partial \theta_1}(\theta_1^{(k)}).
$$
Here $\alpha$ is a parameter called the learning rate. A good choice of $\alpha$ can substantially reduce the number of steps the algorithm needs to approach the minimum.
Let us use visualization to explain how the algorithm works.
Let $\theta_1^{(0)} = 8$. Since the derivative at a point gives the slope of the tangent to the function at that point, the figure shows that the derivative is positive:
$$
\frac{\partial L}{\partial \theta_1}(\theta_1^{(0)}) > 0.
$$
Consequently, the derivative taken with a negative sign points in the direction of decreasing $L$, and that is indeed the direction in which to look for the minimum.
```python
theta_1 = np.linspace(-5, 9, 100)
L_theta_1 = [L(X, y, 0, theta_1i) for theta_1i in theta_1]
plot(theta_1, L_theta_1)
theta1_0 = 8
scatter(theta1_0, L(X, y, 0, theta1_0), s=50)
def gradL(X, y, theta_0, theta_1):
n = len(X)
h = theta_0 + theta_1*X.values
return 1/(n)*np.sum(X.transpose()@(h - y.values))
arrow(theta1_0, L(X, y, 0, theta1_0), -1, -gradL(X, y, 0, theta1_0), linewidth=2, head_width=0.5, head_length=2, fc='k', ec='k')
vlines(x=theta1_0, ymin=-20, ymax=L(X, y, 0, theta1_0), color='green', alpha=0.3, linestyles='--')
vlines(x=theta1_0 - 1, ymin=-20, ymax=L(X, y, 0, theta1_0 -1), color='green', alpha=0.3, linestyles='--')
ylim(-20, 100)
xlabel('$\\theta_1$')
ylabel('$L(0, \\theta_1)$')
text(x=theta1_0-0.45, y = -35, text='$\\theta_1^{(0)}$', s=50)
text(x=theta1_0-1.45, y = -35, text='$\\theta_1^{(1)}$', s=50)
```
Now consider another situation. Let $\theta_1^{(0)} = -2$; then the derivative at this point is negative:
$$
\frac{\partial L}{\partial \theta_1}(\theta_1^{(0)}) < 0.
$$
Consequently, taken with the opposite sign, it pushes the value of $\theta_1^{(i)}$ to the right, which is exactly what is needed in this case.
```python
plot(theta_1, L_theta_1)
theta1_0 = -2
scatter(theta1_0, L(X, y, 0, theta1_0), s=50)
def gradL(X, y, theta_0, theta_1):
n = len(X)
h = theta_0 + theta_1*X.values
return 1/(n)*np.sum(X.transpose()@(h - y.values))
arrow(theta1_0, L(X, y, 0, theta1_0), 1, gradL(X, y, 0, theta1_0), linewidth=2, head_width=0.5, head_length=2, fc='k', ec='k')
vlines(x=theta1_0, ymin=-20, ymax=L(X, y, 0, theta1_0), color='green', alpha=0.3, linestyles='--')
vlines(x=theta1_0 + 1, ymin=-20, ymax=L(X, y, 0, theta1_0 +1), color='green', alpha=0.3, linestyles='--')
ylim(-20, 100)
xlabel('$\\theta_1$')
ylabel('$L(0, \\theta_1)$')
text(x=theta1_0-0.45, y = -35, text='$\\theta_1^{(0)}$', s=50)
text(x=theta1_0+0.55, y = -35, text='$\\theta_1^{(1)}$', s=50)
```
Repeating the step of the algorithm many times, we gradually approach the minimum of the function, as shown in the figure below.
```python
plot(theta_1, L_theta_1)
theta1_0 = -2
scatter(theta1_0, L(X, y, 0, theta1_0), s=50)
alpha = 0.1
for i in range(8):
arrow(theta1_0, L(X, y, 0, theta1_0),
-alpha*gradL(X, y, 0, theta1_0), -alpha*gradL(X, y, 0, theta1_0)**2,
linewidth=2, head_width=0.5, head_length=2, fc='k', ec='k')
vlines(x=theta1_0, ymin=-20, ymax=L(X, y, 0, theta1_0), color='green', alpha=0.3, linestyles='--')
text(x=theta1_0-0.45, y = -35, text='$\\theta_1^{(%d)}$' % i, s=50)
theta1_0 += -alpha*gradL(X, y, 0, theta1_0)
vlines(x=theta1_0, ymin=-20, ymax=L(X, y, 0, theta1_0), color='green', alpha=0.3, linestyles='--')
ylim(-20, 100)
xlabel('$\\theta_1$')
ylabel('$L(0, \\theta_1)$')
title('$\\alpha = {}$'.format(alpha))
```
Note the following observation, which is clearly visible in the plot above. The closer we are to the minimum, the less the parameter $\theta_1^{(i)}$ changes, even though the learning rate is constant, $\alpha = const$. This effect can be explained as follows. At the extremum the tangent to the function is horizontal and the derivative is zero. Consequently, the closer we are to the optimum, the smaller the absolute value of the derivative and the smaller the change in $\theta_1^{(i)}$. In some cases this can create problems, requiring a very large number of iterations to reach the result. In that case the learning rate $\alpha$ should be increased.
```python
plot(theta_1, L_theta_1)
theta1_0 = -2
scatter(theta1_0, L(X, y, 0, theta1_0), s=50)
alpha = 0.2
for i in range(3):
arrow(theta1_0, L(X, y, 0, theta1_0),
-alpha*gradL(X, y, 0, theta1_0), L(X, y, 0, theta1_0 - alpha*gradL(X, y, 0, theta1_0)) - L(X, y, 0, theta1_0),
linewidth=2, head_width=0.5, head_length=2, fc='k', ec='k')
vlines(x=theta1_0, ymin=-20, ymax=L(X, y, 0, theta1_0), color='green', alpha=0.3, linestyles='--')
text(x=theta1_0-0.45, y = -35, text='$\\theta_1^{(%d)}$' % i, s=50)
theta1_0 += -alpha*gradL(X, y, 0, theta1_0)
vlines(x=theta1_0, ymin=-20, ymax=L(X, y, 0, theta1_0), color='green', alpha=0.3, linestyles='--')
ylim(-20, 100)
xlabel('$\\theta_1$')
ylabel('$L(0, \\theta_1)$')
title('$\\alpha = {}$'.format(alpha))
```
At the same time, keep in mind that too large a value of $\alpha$ can cause us to constantly jump over the local minimum, and the algorithm may fail to converge to the desired point.
```python
plot(theta_1, L_theta_1)
theta1_0 = -2
scatter(theta1_0, L(X, y, 0, theta1_0), s=50)
alpha = 0.63
for i in range(6):
arrow(theta1_0, L(X, y, 0, theta1_0),
-alpha*gradL(X, y, 0, theta1_0), L(X, y, 0, theta1_0 - alpha*gradL(X, y, 0, theta1_0)) - L(X, y, 0, theta1_0),
linewidth=2, head_width=1, head_length=0.1, fc='k', ec='k')
vlines(x=theta1_0, ymin=-20, ymax=L(X, y, 0, theta1_0), color='green', alpha=0.3, linestyles='--')
text(x=theta1_0-0.45, y = -35, text='$\\theta_1^{(%d)}$' % i, s=50)
theta1_0 += -alpha*gradL(X, y, 0, theta1_0)
vlines(x=theta1_0, ymin=-20, ymax=L(X, y, 0, theta1_0), color='green', alpha=0.3, linestyles='--')
ylim(-20, 100)
xlabel('$\\theta_1$')
ylabel('$L(0, \\theta_1)$')
title('$\\alpha = {}$'.format(alpha))
```
It is recommended to try the following values of the learning rate, in increasing order:
$$
\alpha = 0.01, 0.03, 0.1, 0.3, 1.
$$
Let us return to the two-dimensional case and illustrate gradient descent using level curves.
```python
CS = plt.contour(theta_0v, theta_1v, L_theta, levels, colors='k')
plt.clabel(CS, inline=1, fontsize=10, colors='k')
plt.xlabel('$\\theta_0$')
plt.ylabel('$\\theta_1$')
plt.title('Линии уровня функции потерь $L(\\theta_0, \\theta_1)$')
def gradL(X, y, theta_0, theta_1):
n = len(X)
h = theta_0 + theta_1*X.values
return (1/(n)*np.sum(h - y.values),
1/(n)*np.sum(X.transpose()@(h - y.values)))
theta0_i = 20
theta1_i = 5
alpha = 0.1
for i in range(1000):
upd_theta0_i, upd_theta1_i = gradL(X, y, theta0_i, theta1_i)
if i < 10 or i % 50 == 0:
scatter(theta0_i, theta1_i, s=20)
theta0_i -= alpha*upd_theta0_i
theta1_i -= alpha*upd_theta1_i
```
### Expressions for the gradients
To implement the method we need the partial derivatives of the loss function in explicit form.
\begin{eqnarray}
\frac{\partial L}{\partial \theta_0} & = & \frac{1}{n}\sum\limits_{i=1}^n (h_\theta(x^{(i)}) - y^{(i)}) \\
\frac{\partial L}{\partial \theta_1} & = & \frac{1}{n}\sum\limits_{i=1}^n (h_\theta(x^{(i)}) - y^{(i)})x^{(i)} \\
\end{eqnarray}
Here the chain rule for differentiating a composite function was used.
Using this representation we have already found above an estimate of the minimum of $L$. The coordinates of this point are
```python
print("theta_0* = {:g}, theta_1* = {:g}".format(theta0_i, theta1_i))
```
theta_0* = 0.431547, theta_1* = 2.16905
The loss function at these parameter values equals
```python
L(X, y, theta0_i, theta1_i)
```
1.048716528027162
Let us plot the resulting hypothesis.
```python
ax = housesales.plot.scatter(x='sqft_living', y='price')
ax.set_xlabel('Жилая площадь, 1k футы$^2$')
ax.set_ylabel('Цена, 100k $')
xx = np.linspace(1, 3, 2)
plot(xx, theta0_i + theta1_i * xx)
```
And let us return to the question we can ask this model: what is the approximate price of a house with a living area of 2000 square feet?
```python
print("Примерная стоимость: {:.2g} * 100k$".format((theta0_i + theta1_i * 2)))
```
Примерная стоимость: 4.8 * 100k$
### Key takeaways
Congratulations! Using linear regression we built a machine learning system that estimates house prices from the provided data. To reinforce the material, let us list the main concepts once more:
* regression problem - a machine learning problem in which, given an input, we need to predict a real-valued quantity
* objects - the entities for which the algorithm makes predictions
* features - the numeric representation of objects
* object-feature matrix - a matrix whose rows are objects and whose columns are features
* target - the value that needs to be predicted
* training set - the set of (object, target value) pairs used to select the model
* linear regression - a method for solving the regression problem
* hypothesis - the form of the function used to compute the target variable
* hypothesis parameters - the unknown quantities that must be found during training
* loss function - the penalty for a wrong result
* gradient descent - an optimization method based on using the gradient of a function
* learning rate - the main parameter of gradient descent, responsible for the speed of convergence
Next we will see how linear regression can be generalized to arbitrary dimensionality and what can be done with clearly nonlinear dependencies.
## Multivariate linear regression
If the target variable depends on several features, for example the area of the house and the number of bathrooms, linear regression can be generalized to a multidimensional feature vector.
Consider the problem of predicting the price of a house from the following parameters:
* number of bedrooms
* number of bathrooms
* living area
* total area
* number of floors
* year built
```python
housesales = pd.read_csv('../datasets/house_price/train.csv.gz')[['SalePrice',
'LotArea',
'LotFrontage',
'1stFlrSF',
'OverallQual',
'GarageArea',
'YearBuilt',
'TotalBsmtSF',
'2ndFlrSF',]].dropna()
housesales['SalePrice'] = housesales['SalePrice']/100000
housesales.head()
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>SalePrice</th>
<th>LotArea</th>
<th>LotFrontage</th>
<th>1stFlrSF</th>
<th>OverallQual</th>
<th>GarageArea</th>
<th>YearBuilt</th>
<th>TotalBsmtSF</th>
<th>2ndFlrSF</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>2.085</td>
<td>8450</td>
<td>65.0</td>
<td>856</td>
<td>7</td>
<td>548</td>
<td>2003</td>
<td>856</td>
<td>854</td>
</tr>
<tr>
<th>1</th>
<td>1.815</td>
<td>9600</td>
<td>80.0</td>
<td>1262</td>
<td>6</td>
<td>460</td>
<td>1976</td>
<td>1262</td>
<td>0</td>
</tr>
<tr>
<th>2</th>
<td>2.235</td>
<td>11250</td>
<td>68.0</td>
<td>920</td>
<td>7</td>
<td>608</td>
<td>2001</td>
<td>920</td>
<td>866</td>
</tr>
<tr>
<th>3</th>
<td>1.400</td>
<td>9550</td>
<td>60.0</td>
<td>961</td>
<td>7</td>
<td>642</td>
<td>1915</td>
<td>756</td>
<td>756</td>
</tr>
<tr>
<th>4</th>
<td>2.500</td>
<td>14260</td>
<td>84.0</td>
<td>1145</td>
<td>8</td>
<td>836</td>
<td>2000</td>
<td>1145</td>
<td>1053</td>
</tr>
</tbody>
</table>
</div>
Let us represent the object-feature matrix as
$$
X = \begin{pmatrix}
x_{11} & x_{12} & \cdots & x_{1m} \\
x_{21} & x_{22} & \cdots & x_{2m} \\
\vdots & \vdots & \ddots & \vdots \\
x_{n1} & x_{n2} & \cdots & x_{nm} \\
\end{pmatrix}
$$
Here $n$ is the number of objects and $m$ is the number of features.
The vector of target variables $y$ has the same form as in the single-feature case:
$$
y = \begin{pmatrix}
y_1 \\
y_2 \\
\vdots \\
y_n
\end{pmatrix}
$$
In the multidimensional case the hypothesis $h_\theta(x)$ takes the following form:
$$
h_\theta(x) = \theta_0 + \theta_1x_1 + \theta_2x_2 + \cdots + \theta_mx_m.
$$
From the expression above it is clear that the term $\theta_0$ differs from all the others, which are products of components of the parameter vector $\theta$ and the feature vector $x = (x_1, x_2, \dots, x_m)$. To bring the hypothesis to a simpler form we apply the following trick: extend the object-feature matrix with an extra column of ones on the left
$$
X = \begin{pmatrix}
1 & x_{11} & x_{12} & \cdots & x_{1m} \\
1 & x_{21} & x_{22} & \cdots & x_{2m} \\
\vdots & \vdots & & \ddots & \vdots \\
1 & x_{n1} & x_{n2} & \cdots & x_{nm} \\
\end{pmatrix}
$$
and use the zero index to denote this column ($x_{i0} = 1, \quad i = \overline{1, n}$).
Then the expression for the hypothesis $h_\theta(x)$ can be rewritten as
$$
h_\theta(x) = \theta_0 + \theta_1x_1 + \theta_2x_2 + \cdots + \theta_mx_m = \theta_0x_0 + \theta_1x_1 + \theta_2x_2 + \cdots + \theta_mx_m = \sum\limits_{i = 0}^{m}\theta_ix_i = \theta^Tx.
$$
That is, as the dot product of the parameter vector $\theta$ and the feature vector $x$. From here on, by the object-feature matrix $X$ we mean the extended matrix with the first column of ones.
As the loss function $L$ we again take the mean squared error:
$$
L(\theta) = \frac{1}{2n}\sum\limits_{i=1}^{n}\left(h_\theta(x^{(i)}) - y^{(i)}\right)^2.
$$
Here $x^{(i)} = (x_{i0}, x_{i1}, \dots, x_{im})^T$ denotes the vector formed from the $i$-th row of the matrix $X$.
### Vector form of gradient descent
Let us see how gradient descent generalizes to the multidimensional case. To do this, we compute the gradient of the loss function $L$ with respect to all components of the parameter vector $\theta$.
\begin{eqnarray}
\frac{\partial L}{\partial \theta_0} & = & \frac{1}{n}\sum\limits_{i=1}^n (h_\theta(x^{(i)}) - y^{(i)}) \\
\frac{\partial L}{\partial \theta_j} & = & \frac{1}{n}\sum\limits_{i=1}^n (h_\theta(x^{(i)}) - y^{(i)})x_{ij}, \quad j = \overline{1, m} \\
\end{eqnarray}
Or, using the trick with the extra column of ones ($x_{i0} = 1$), the components of the gradient vector can be written in a uniform way:
\begin{eqnarray}
\frac{\partial L}{\partial \theta_j} & = & \frac{1}{n}\sum\limits_{i=1}^n (h_\theta(x^{(i)}) - y^{(i)})x_{ij}, \quad j = \overline{0, m} \\
\end{eqnarray}
Finally, in vector form the gradient is
$$
grad~ L(\theta) = \frac{1}{n} X^T(X\theta - y).
$$
The update equation for the parameters $\theta$ in one gradient descent step can also be written in vector form:
$$
\theta^{(k+1)} = \theta^{(k)} - \alpha ~grad ~ L(\theta^{(k)}).
$$
### Learning curve
In the multidimensional case it is not possible to visualize gradient descent with level curves, because the parameter space generally has dimension $m+1$. Another representation comes to the rescue: a plot of the loss value at $\theta^{(k)}$ versus the step number $k$. Such a plot is called a learning curve.
#### Data preprocessing
```python
n = len(housesales)
m = len(housesales.drop('SalePrice', axis=1).columns)
y = housesales['SalePrice'].values.reshape((n, 1))
X = housesales.drop('SalePrice', axis=1).values.reshape((n, m))
X = np.hstack((np.ones((n, 1)), X))
```
#### Gradient descent
```python
def grad(y, X, theta):
n = y.shape[0]
return 1/n * X.transpose() @ (X @ theta - y)
def L(y, X, theta):
n = y.shape[0]
return 1/(2*n)*np.sum(np.power(X @ theta - y, 2))
def fit(y, X, theta_0, alpha=0.001, nsteps = 100):
theta = np.copy(theta_0)
loss = [L(y, X, theta)]
for i in range(nsteps):
theta -= alpha*grad(y, X, theta)
loss.append(L(y, X, theta))
return loss, theta
```
```python
theta_0 = np.zeros((m + 1, 1))
loss_history, theta_star = fit(y, X, theta_0, alpha=1e-10, nsteps=10000)
```
```python
plt.plot(loss_history)
plt.xlabel('$k$')
plt.ylabel('$L(\\theta^{(k)})$')
_ = plt.title('Кривая обучения')
```
```python
L(y, X, theta_star)
```
0.19708429653845569
Note that gradient descent converges very slowly for this formulation of the problem. Let us try to solve this issue with data preprocessing, specifically feature normalization.
### Feature normalization
Since different features can have different scales, it is recommended to normalize them for the numerical stability of the gradient descent procedure. To do this, each feature can be normalized individually by subtracting its mean and dividing by its standard deviation:
$$
U^{(j)} = \frac{X^{(j)} - M[X^{(j)}]}{\sqrt{D[X^{(j)}]}}, \quad j = \overline{1, m}.
$$
The first column of ones remains unchanged:
$$
U^{(0)} = \begin{pmatrix} 1 \\ 1 \\ \vdots \\ 1\end{pmatrix}
$$
```python
U = np.ones((n, m + 1))
for j in range(1, m + 1):
U[:, j] = (X[:, j] - np.mean(X[:, j]))/np.std(X[:, j])
```
```python
loss_history, theta_star = fit(y, U, theta_0, alpha=1e-1, nsteps=50)
```
```python
plt.plot(loss_history)
plt.xlabel('$k$')
plt.ylabel('$L(\\theta^{(k)})$')
_ = plt.title('Кривая обучения')
```
The plot shows that with feature normalization we reached the same loss value in 10 times fewer steps.
The loss value for our problem is
```python
L(y, U, theta_star)
```
0.07986216026716797
How can we improve this result while staying within the linear regression framework?
### Nonlinear dependencies
Let us analyze our problem by plotting the target variable against each feature separately.
```python
sns.pairplot(housesales, y_vars="SalePrice",
x_vars=["LotArea", "LotFrontage", "1stFlrSF", "OverallQual"],
)
sns.pairplot(housesales, y_vars="SalePrice",
x_vars=["GarageArea", "YearBuilt", "TotalBsmtSF", "2ndFlrSF"],
)
```
The characteristic shape of some of these dependencies suggests that the target variable depends nonlinearly on some features. To account for such dependencies we can enrich the numeric features with their squares and pairwise products, obtaining a new object-feature matrix.
$$
X = \begin{pmatrix}
1 & x_{11} & x_{12} & \cdots & x_{1m} & x_{11}^2 & x_{11}x_{12} & \cdots & x_{1m}^2\\
1 & x_{21} & x_{22} & \cdots & x_{2m} & x_{21}^2 & x_{21}x_{22} & \cdots & x_{2m}^2 \\
\vdots & \vdots & & \ddots & \vdots \\
1 & x_{n1} & x_{n2} & \cdots & x_{nm} & x_{n1}^2 & x_{n1}x_{n2} & \cdots & x_{nm}^2\\
\end{pmatrix}
$$
After that we apply the same linear regression method.
```python
n = len(housesales)
m = len(housesales.drop('SalePrice', axis=1).columns)
y = housesales['SalePrice'].values.reshape((n, 1))
tmpX = np.ones((n, m + 1))
X = np.zeros((n, (m + 1)**2))
tmpX[:, 1:] = housesales.drop('SalePrice', axis=1).values.reshape((n, m))
for i in range(n):
X_i = tmpX[i, :] .reshape(1, -1)
X[i, :] = (X_i.T @ X_i).reshape(-1)
```
```python
U = np.ones(X.shape)
for j in range(1, 2*m + 1):
U[:, j] = (X[:, j] - np.mean(X[:, j]))/np.std(X[:, j])
```
```python
theta_0 = np.zeros((U.shape[1], 1))
loss_history, theta_star = fit(y, U, theta_0, alpha=1e-2, nsteps=500)
```
```python
plt.plot(loss_history)
plt.ylim((0, loss_history[10]))
plt.xlabel('$k$')
plt.ylabel('$L(\\theta^{(k)})$')
_ = plt.title('Кривая обучения')
```
The loss value after the gradient descent procedure is
```python
L(y, U, theta_star)
```
0.067764053087824239
which slightly improves on the result obtained without quadratic features.
### Normal equation
The optimal parameters of linear regression can be obtained analytically in matrix form, without using gradient descent:
\begin{equation}
\theta = (X^TX)^{-1}X^T y.
\end{equation}
We leave the derivation of this equation as an exercise.
This expression is called the **normal equation** of linear regression.
Its main computational difficulty is the need to invert a matrix, which may be ill-conditioned; this can affect numerical accuracy. Gradient descent, on the other hand, has the property that, with a well-chosen learning rate, it converges toward the optimal parameters $\theta$ with every step.
For a stable inversion of an ill-conditioned matrix we can use the Moore-Penrose pseudo-inverse (numpy.linalg.pinv).
```python
theta_n = np.linalg.pinv(X.T @ X) @ X.T @ y
```
The loss value at the parameters computed with the normal equation is
```python
L(y, X, theta_n)
```
0.04344541186281841
We can see that the exact analytical solution gave a slightly better result than gradient descent. In this case this is explained by the small dimensionality of the problem: even with quadratic features their number is m = 81.
An empirical rule for choosing the solution method for linear regression can be stated as follows: with up to 1000 features use the normal equation, otherwise use gradient descent.
## Summary
1. Linear regression can be extended to the multidimensional case
2. To improve the convergence of gradient descent, use feature normalization
3. Linear regression can be generalized to the nonlinear case by adding polynomial features
4. The analytical solution of the linear regression problem is called the **normal equation**
5. The analytical solution should be used when the number of features is at most 1000
## Sources
1. **Andrew Ng**. Machine Learning - [Linear regression with one variable](https://www.coursera.org/learn/machine-learning/home/week/1), [Linear regression with multiple variables](https://www.coursera.org/learn/machine-learning/home/week/2)
2. **К. В. Воронцов**. Introduction to Machine Learning - [Linear regression](https://www.coursera.org/learn/vvedenie-mashinnoe-obuchenie/home/week/4)
3. **Christopher M. Bishop**. Pattern Recognition And Machine Learning - [Linear models for regression]
4. Open Machine Learning Course - [Linear models for classification and regression](https://habrahabr.ru/company/ods/blog/323890/)
```python
%pylab inline
```
Populating the interactive namespace from numpy and matplotlib
Fourier Analysis of a Plucked String
------------------------------------
Let's assume we're studying a plucked string with a shape like this:
\begin{eqnarray}
f(x) & = & 2 A \frac{x}{L} & (x<L/2) \\
\\
f(x) & = & 2 A \left(\frac{L-x}{L}\right) & (x >= L/2)
\end{eqnarray}
Let's graph that and see what it looks like:
```python
L=1.0
N=500 # make sure N is even for simpson's rule
A=1.0
def fLeft(x):
return 2*A*x/L
def fRight(x):
return 2*A*(L-x)/L
def fa_vec(x):
"""
vector version
'where(cond, A, B)', returns A when cond is true and B when cond is false.
"""
return where(x<L/2, fLeft(x), fRight(x))
x=linspace(0,L,N) # define the 'x' array
h=x[1]-x[0] # get x spacing
y=fa_vec(x)
title("Vertical Displacement of a Plucked String at t=0")
xlabel("x")
ylabel("y")
plot(x,y)
```
Let's define the basis functions:
\begin{equation}
|n\rangle = b_n(x) = \sqrt{\frac{2}{L}} \sin(n \pi x/L)
\end{equation}
Let's look at a few of those. In Python we'll use the function `basis(x,n)` for $|n\rangle$:
```python
def basis(x, n):
return sqrt(2/L)*sin(n*pi*x/L)
for n in range(1,5):
plot(x,basis(x,n),label="n=%d"%n)
legend(loc=3)
```
If we guess that we can express $f(x)$ as a superposition of $b_n(x)$ then we have:
\begin{equation}
f(x) = c_1 b_1(x) + c_2 b_2(x) + c_3 b_3(x) + \cdots = \sum_{n=1}^\infty c_n b_n(x)
\end{equation}
What happens if we multiply $f(x)$ with $b_m(x)$ and then integrate from $x=0$ to $x=L$?
\begin{equation}
\int_0^L f(x) b_m(x) dx = c_1 \int_0^L b_m(x) b_1(x)dx + c_2 \int_0^L b_m(x) b_2(x)dx + c_3 \int_0^L b_m(x) b_3(x)dx + \cdots
\end{equation}
or more compactly in dirac notation:
\begin{equation}
\langle m|f\rangle = c_1 \langle m|1\rangle + c_2 \langle m|2\rangle + c_3 \langle m|3\rangle + \cdots
\end{equation}
or equivalently:
\begin{equation}
\langle m|f\rangle = \sum_{n=1}^\infty c_n \langle m|n\rangle
\end{equation}
Remember that the $b_n(x)$ are *orthonormal* so that means:
$$\langle n|m \rangle = \int_0^L b_n(x) b_m(x)dx = \delta_{nm}$$
where $\delta_{nm}$ is defined as 0 if $n \neq m$ and 1 if $n = m$.
So:
$$\sum_{n=1}^\infty c_n \langle m|n\rangle = c_m$$
or in other words:
$$c_m = \langle m|f\rangle = \int_{0}^{L} b_m(x) f(x) dx $$
Yeah! Let's do that integral for this case. Note that the function $f(x)$ is symmetric about the midpoint, just like $b_m$ when $m$ is odd. When $m$ is even, the integral is zero. So, for *odd* $m$ we can write:
$$ c_m = \int_{0}^{L} b_m(x) f(x) dx = 2 \int_{0}^{L/2} b_m(x) f(x) dx $$
(when $m$ is odd) or:
$$ c_m = 2 \int_0^{L/2} \sqrt{\frac{2}{L}} \sin(\frac{m\pi x}{L}) 2A \frac{x}{L} dx $$
$$ c_m = \frac{4A}{L} \sqrt{\frac{2}{L}} \int_0^{L/2} x\sin(\frac{m\pi x}{L}) dx $$
$$ c_m = \frac{4A}{L} \sqrt{\frac{2}{L}} \frac{L^2}{\pi^2 m^2} (-1)^{\frac{m-1}{2}}$$
Or simplifying:
$$ c_m = \frac{4A \sqrt{2L}}{\pi^2 m^2} (-1)^{\frac{m-1}{2}}$$
```python
def simpson_array(f, h):
"""
Use Simpson's Rule to estimate an integral of an array of
function samples
f: function samples (already in an array format)
h: spacing in "x" between sample points
The array is assumed to have an even number of elements.
"""
if len(f)%2 != 0:
raise ValueError("Sorry, f must be an array with an even number of elements.")
evens = f[2:-2:2]
odds = f[1:-1:2]
return (f[0] + f[-1] + 2*odds.sum() + 4*evens.sum())*h/3.0
def braket(n):
"""
Evaluate <n|f>
"""
return simpson_array(basis(x,n)*fa_vec(x),h)
M=20
coefs = [0]
coefs_th = [0]
ys = [[]]
sup = zeros(N)
for n in range(1,M):
coefs.append(braket(n)) # do numerical integral
if n%2==0:
coefs_th.append(0.0)
else:
coefs_th.append(4*A*sqrt(2*L)*(-1)**((n-1)/2.0)/(pi**2*n**2)) # compare theory
ys.append(coefs[n]*basis(x,n))
sup += ys[n]
plot(x,sup)
print("%10s\t%10s\t%10s" % ('n', 'coef','coef(theory)'))
print("%10s\t%10s\t%10s" % ('---','-----','------------'))
for n in range(1,M):
print("%10d\t%10.5f\t%10.5f" % (n, coefs[n],coefs_th[n]))
```
Project 11
============
Pick your own function and compute its Fourier coefficients analytically. Then, check your answer both graphically and numerically using Simpson's rule.
```python
```
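Not a solution to the project — just a sketch of how the helpers above (`basis`, `simpson_array`, and the grid `x`, `h`) can be reused to check the coefficients of whatever function you pick; `f_new` below is only a placeholder:
```python
def f_new(x):
    return A*x*(L - x)/L**2      # example placeholder; substitute your own function

def braket_new(n):
    # numerical estimate of <n|f_new> on the existing grid
    return simpson_array(basis(x, n)*f_new(x), h)

print([round(braket_new(n), 5) for n in range(1, 6)])  # compare with your analytic result
```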
<a href="https://colab.research.google.com/github/Cal-0/Cal-0-dashboard.github.io/blob/master/DS-Unit-1-Sprint-3-Linear-Algebra/module2-intermediate-linear-algebra/Copy_of_LS_DS_132_Intermediate_Linear_Algebra_Assignment.ipynb" target="_parent"></a>
# Statistics
## 1.1 Sales for the past week were the following amounts: [3505, 2400, 3027, 2798, 3700, 3250, 2689]. Without using library functions, what is the mean, variance, and standard deviation of sales from last week? (for extra bonus points, write your own function that can calculate these values for any sized list)
```
def sales_mean (lst):
return sum(lst)/len(lst)
```
```
lst = [3505,2400,3027,2798,3700,3250,2689]
```
```
sales_mean(lst)
```
3052.714285714286
```
m = sales_mean(lst)
```
```
var = sum((xi-m)**2 for xi in lst)/len(lst)
```
```
var
```
183761.06122448976
```
std = var**(1/2)
```
```
std
```
428.67360686714756
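For the bonus, a minimal sketch of a single helper that returns all three statistics for any list (the name `describe_sales` and the population-variance convention follow the cells above):
```
def describe_sales(lst):
    m = sum(lst)/len(lst)                          # mean
    var = sum((xi - m)**2 for xi in lst)/len(lst)  # population variance
    return m, var, var**(1/2)                      # mean, variance, std

describe_sales(lst)
```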
## 1.2 Find the covariance between last week's sales numbers and the number of customers that entered the store last week: [127, 80, 105, 92, 120, 115, 93] (you may use library functions for calculating the covariance since we didn't specifically talk about its formula)
```
import numpy as np
```
```
lst2 = [127,80,105,92,120,115,93]
```
```
# np.cov(lst) alone would only return the sample variance of sales;
# passing both lists gives the 2x2 covariance matrix, whose off-diagonal
# entry is the covariance between sales and customers
np.cov(lst, lst2)
```
## 1.3 Find the standard deviation of customers who entered the store last week. Then, use the standard deviations of both sales and customers to standardize the covariance to find the correlation coefficient that summarizes the relationship between sales and customers. (You may use library functions to check your work.)
```
from statistics import stdev
```
```
# the question asks for the standard deviation of the *customers* list
stdev(lst2)
```
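Combining the covariance with the two standard deviations gives the correlation coefficient; a sketch using sample statistics throughout (`np.corrcoef` is only a cross-check):
```
cov_sales_customers = np.cov(lst, lst2)[0][1]
corr = cov_sales_customers / (stdev(lst) * stdev(lst2))
corr  # should agree with np.corrcoef(lst, lst2)[0][1]
```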
## 1.4 Use pandas to import a cleaned version of the titanic dataset from the following link: [Titanic Dataset](https://raw.githubusercontent.com/Geoyi/Cleaning-Titanic-Data/master/titanic_clean.csv)
## Calculate the variance-covariance matrix and correlation matrix for the titanic dataset's numeric columns. (you can encode some of the categorical variables and include them as a stretch goal if you finish early)
```
import pandas as pd
```
```
df = pd.read_csv('https://raw.githubusercontent.com/Geoyi/Cleaning-Titanic-Data/master/titanic_clean.csv')
```
```
df.cov()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Unnamed: 0</th>
<th>pclass</th>
<th>survived</th>
<th>age</th>
<th>sibsp</th>
<th>parch</th>
<th>fare</th>
<th>body</th>
<th>has_cabin_number</th>
</tr>
</thead>
<tbody>
<tr>
<th>Unnamed: 0</th>
<td>143117.500000</td>
<td>284.357034</td>
<td>-53.967125</td>
<td>-1442.939812</td>
<td>25.828746</td>
<td>1.172783</td>
<td>-9410.735123</td>
<td>591.579132</td>
<td>-95.438885</td>
</tr>
<tr>
<th>pclass</th>
<td>284.357034</td>
<td>0.701969</td>
<td>-0.127248</td>
<td>-3.954605</td>
<td>0.053090</td>
<td>0.013287</td>
<td>-24.227788</td>
<td>-2.876653</td>
<td>-0.249992</td>
</tr>
<tr>
<th>survived</th>
<td>-53.967125</td>
<td>-0.127248</td>
<td>0.236250</td>
<td>-0.314343</td>
<td>-0.014088</td>
<td>0.034776</td>
<td>6.146023</td>
<td>0.000000</td>
<td>0.061406</td>
</tr>
<tr>
<th>age</th>
<td>-1442.939812</td>
<td>-3.954605</td>
<td>-0.314343</td>
<td>165.850021</td>
<td>-2.559806</td>
<td>-1.459378</td>
<td>114.416613</td>
<td>81.622922</td>
<td>1.463138</td>
</tr>
<tr>
<th>sibsp</th>
<td>25.828746</td>
<td>0.053090</td>
<td>-0.014088</td>
<td>-2.559806</td>
<td>1.085052</td>
<td>0.336833</td>
<td>8.641768</td>
<td>-8.708471</td>
<td>-0.003946</td>
</tr>
<tr>
<th>parch</th>
<td>1.172783</td>
<td>0.013287</td>
<td>0.034776</td>
<td>-1.459378</td>
<td>0.336833</td>
<td>0.749195</td>
<td>9.928031</td>
<td>4.237190</td>
<td>0.013316</td>
</tr>
<tr>
<th>fare</th>
<td>-9410.735123</td>
<td>-24.227788</td>
<td>6.146023</td>
<td>114.416613</td>
<td>8.641768</td>
<td>9.928031</td>
<td>2678.959738</td>
<td>-179.164684</td>
<td>10.976961</td>
</tr>
<tr>
<th>body</th>
<td>591.579132</td>
<td>-2.876653</td>
<td>0.000000</td>
<td>81.622922</td>
<td>-8.708471</td>
<td>4.237190</td>
<td>-179.164684</td>
<td>9544.688567</td>
<td>3.625689</td>
</tr>
<tr>
<th>has_cabin_number</th>
<td>-95.438885</td>
<td>-0.249992</td>
<td>0.061406</td>
<td>1.463138</td>
<td>-0.003946</td>
<td>0.013316</td>
<td>10.976961</td>
<td>3.625689</td>
<td>0.174613</td>
</tr>
</tbody>
</table>
</div>
# Orthogonality
## 2.1 Plot two vectors that are orthogonal to each other. What is a synonym for orthogonal?
```
import matplotlib.pyplot as plt
```
```
red = [0,3]
blue = [3,0]
```
```
plt.arrow(0,0, blue[0], blue[1], head_width=0.05, head_length=0.05, color='blue')
plt.arrow(0,0, red[0], red[1], head_width=0.05, head_length=0.05, color='red')
plt.xlim(-1,4)
plt.ylim(-1,4)
```
```
# Perpendicular
```
## 2.2 Are the following vectors orthogonal? Why or why not?
\begin{align}
a = \begin{bmatrix} -5 \\ 3 \\ 7 \end{bmatrix}
\qquad
b = \begin{bmatrix} 6 \\ -8 \\ 2 \end{bmatrix}
\end{align}
```
a = np.array([
[-5],
[3],
[7]
])
b = np.array([
[6],
[-8],
[2]
])
```
```
np.vdot(a,b)
# No, because the dot product is not zero
```
-40
```
plt.arrow(0,0, a[0], a[1], head_width=0.05, head_length=0.05, color='blue')
plt.arrow(0,0, b[0], b[1], head_width=0.05, head_length=0.05, color='red')
```
## 2.3 Compute the following values: What do these quantities have in common?
## What is $||c||^2$?
## What is $c \cdot c$?
## What is $c^{T}c$?
\begin{align}
c = \begin{bmatrix} 2 & -15 & 6 & 20 \end{bmatrix}
\end{align}
```
c = np.array([2,-15,6,20])
```
```
square_c =([
[2,-15],
[6,20]
])
```
```
square_c2 = np.linalg.det(square_c)
```
```
c**2
```
array([ 4, 225, 36, 400])
```
np.dot(c,c)
```
665
```
np.dot(c.T,c)
# These two are equal because transposing c doesn't change anything
# since c is a 1-D array
```
665
# Unit Vectors
## 3.1 Using Latex, write the following vectors as a linear combination of scalars and unit vectors:
\begin{align}
d = \begin{bmatrix} 7 \\ 12 \end{bmatrix}
\qquad
e = \begin{bmatrix} 2 \\ 11 \\ -8 \end{bmatrix}
\end{align}
\begin{align}
d = 7\hat{i} + 12\hat{j}
\end{align}
\begin{align}
e = 2\hat{i} + 11\hat{j} - 8\hat{k}
\end{align}
## 3.2 Turn vector $f$ into a unit vector:
\begin{align}
f = \begin{bmatrix} 4 & 12 & 11 & 9 & 2 \end{bmatrix}
\end{align}
```
```
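One possible sketch (assuming numpy is acceptable here; this is not the original submission): divide $f$ by its Euclidean norm.
```
# Sketch: normalize f by its Euclidean norm
f = np.array([4, 12, 11, 9, 2])
f_hat = f / np.linalg.norm(f)
print(f_hat)
print(np.linalg.norm(f_hat))  # should print 1.0
```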
# Linear Independence / Dependence
## 4.1 Plot two vectors that are linearly dependent and two vectors that are linearly independent (bonus points if done in $\mathbb{R}^3$).
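A minimal 2-D sketch (hypothetical example vectors, not the original submission): $[1,2]$ and $[2,4]$ are linearly dependent because one is a scalar multiple of the other, while $[3,0]$ and $[0,3]$ are independent.
```
# Sketch: dependent pair in red (collinear), independent pair in blue
plt.arrow(0, 0, 1, 2, head_width=0.05, color='red')
plt.arrow(0, 0, 2, 4, head_width=0.05, color='red')    # [2,4] = 2*[1,2] -> dependent
plt.arrow(0, 0, 3, 0, head_width=0.05, color='blue')
plt.arrow(0, 0, 0, 3, head_width=0.05, color='blue')   # neither is a multiple of the other -> independent
plt.xlim(-1, 5)
plt.ylim(-1, 5)
```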
# Span
## 5.1 What is the span of the following vectors?
\begin{align}
g = \begin{bmatrix} 1 & 2 \end{bmatrix}
\qquad
h = \begin{bmatrix} 4 & 8 \end{bmatrix}
\end{align}
```
```
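A quick numeric check (a sketch, not the original submission): $h = 4g$, so the two vectors are collinear and span only the line through the origin in the direction $[1, 2]$; the rank of the stacked matrix confirms this.
```
# Sketch: rank 1 -> g and h span only a line in R^2
g = np.array([1, 2])
h = np.array([4, 8])
print(np.linalg.matrix_rank(np.vstack([g, h])))  # 1
```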
## 5.2 What is the span of $\{l, m, n\}$?
\begin{align}
l = \begin{bmatrix} 1 & 2 & 3 \end{bmatrix}
\qquad
m = \begin{bmatrix} -1 & 0 & 7 \end{bmatrix}
\qquad
n = \begin{bmatrix} 4 & 8 & 2\end{bmatrix}
\end{align}
```
```
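A quick numeric check (a sketch, not the original submission): stacking $l$, $m$, $n$ as rows gives a rank-3 matrix, so they span all of $\mathbb{R}^3$.
```
# Sketch: rank 3 -> l, m, n span R^3
lmn = np.array([[1, 2, 3],
                [-1, 0, 7],
                [4, 8, 2]])
print(np.linalg.matrix_rank(lmn))  # 3
```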
# Basis
## 6.1 Graph two vectors that form a basis for $\mathbb{R}^2$
```
```
## 6.2 What does it mean to form a basis?
# Rank
## 7.1 What is the Rank of P?
\begin{align}
P = \begin{bmatrix}
1 & 2 & 3 \\
-1 & 0 & 7 \\
4 & 8 & 2
\end{bmatrix}
\end{align}
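A quick check with numpy (a sketch, not the original submission): the determinant of $P$ is non-zero, so `matrix_rank` should report full rank, 3.
```
# Sketch: rank of P
P = np.array([[1, 2, 3],
              [-1, 0, 7],
              [4, 8, 2]])
print(np.linalg.matrix_rank(P))  # 3
print(np.linalg.det(P))          # approximately -20, non-zero, so full rank
```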
## 7.2 What does the rank of a matrix tell us?
# Linear Projections
## 8.1 Line $L$ is formed by all of the vectors that can be created by scaling vector $v$
\begin{align}
v = \begin{bmatrix} 1 & 3 \end{bmatrix}
\end{align}
\begin{align}
w = \begin{bmatrix} -1 & 2 \end{bmatrix}
\end{align}
## find $proj_{L}(w)$
## graph your projected vector to check your work (make sure your axis are square/even)
```
```
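One way to sketch this (hypothetical code, not the original submission): $proj_{L}(w) = \frac{w \cdot v}{v \cdot v} v$, which for $v = [1, 3]$ and $w = [-1, 2]$ gives $[0.5, 1.5]$.
```
# Sketch: projection of w onto the line L spanned by v
v = np.array([1, 3])
w = np.array([-1, 2])
proj = (np.dot(w, v) / np.dot(v, v)) * v   # (5/10)*[1,3] = [0.5, 1.5]
print(proj)
plt.arrow(0, 0, v[0], v[1], head_width=0.05, color='blue')
plt.arrow(0, 0, w[0], w[1], head_width=0.05, color='green')
plt.arrow(0, 0, proj[0], proj[1], head_width=0.05, color='red')
plt.xlim(-2, 4)
plt.ylim(-2, 4)
plt.gca().set_aspect('equal')
```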
# Stretch Goal
## For vectors that begin at the origin, the coordinates of where the vector ends can be interpreted as regular data points. (See 3Blue1Brown videos about Spans, Basis, etc.)
## Write a function that can calculate the linear projection of each point (x,y) (vector) onto the line y=x. run the function and plot the original points in blue and the new projected points on the line y=x in red.
## For extra points plot the orthogonal vectors as a dashed line from the original blue points to the projected red points.
```
import pandas as pd
import matplotlib.pyplot as plt
# Creating a dataframe for you to work with -Feel free to not use the dataframe if you don't want to.
x_values = [1, 4, 7, 3, 9, 4, 5 ]
y_values = [4, 2, 5, 0, 8, 2, 8]
data = {"x": x_values, "y": y_values}
df = pd.DataFrame(data)
df.head()
plt.scatter(df.x, df.y)
plt.show()
```
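A possible sketch for the stretch goal (the function name is hypothetical; it reuses the dataframe defined above): the projection of a point $(x, y)$ onto the line $y = x$ is $\left(\frac{x+y}{2}, \frac{x+y}{2}\right)$, and the connector from each point to its projection is orthogonal to the line.
```
# Sketch: project each (x, y) onto the line y = x and plot the orthogonal connectors
def project_onto_y_equals_x(x, y):
    m = (x + y) / 2
    return m, m

px, py = project_onto_y_equals_x(df.x, df.y)
plt.scatter(df.x, df.y, color='blue', label='original')
plt.scatter(px, py, color='red', label='projected onto y=x')
for x0, y0, x1, y1 in zip(df.x, df.y, px, py):
    plt.plot([x0, x1], [y0, y1], 'k--', linewidth=0.8)   # orthogonal connectors
plt.gca().set_aspect('equal')
plt.legend()
plt.show()
```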
```
```
| 88bc62a14c31f36848b7dda6ec230ad9e4a968f7 | 61,629 | ipynb | Jupyter Notebook | DS-Unit-1-Sprint-3-Linear-Algebra/module2-intermediate-linear-algebra/Copy_of_LS_DS_132_Intermediate_Linear_Algebra_Assignment.ipynb | Cal-0/Cal-0-dashboard.github.io | 520ba74b593b8b27e7adbca37326f1dd23ba2d40 | [
"MIT"
]
| null | null | null | DS-Unit-1-Sprint-3-Linear-Algebra/module2-intermediate-linear-algebra/Copy_of_LS_DS_132_Intermediate_Linear_Algebra_Assignment.ipynb | Cal-0/Cal-0-dashboard.github.io | 520ba74b593b8b27e7adbca37326f1dd23ba2d40 | [
"MIT"
]
| null | null | null | DS-Unit-1-Sprint-3-Linear-Algebra/module2-intermediate-linear-algebra/Copy_of_LS_DS_132_Intermediate_Linear_Algebra_Assignment.ipynb | Cal-0/Cal-0-dashboard.github.io | 520ba74b593b8b27e7adbca37326f1dd23ba2d40 | [
"MIT"
]
| null | null | null | 47.774419 | 8,696 | 0.606289 | true | 3,381 | Qwen/Qwen-72B | 1. YES
2. YES | 0.831143 | 0.872347 | 0.725045 | __label__eng_Latn | 0.762405 | 0.522855 |
## Trailing losses ##
Trailing losses occur for moving objects when their motion during an exposure (or visit) makes them cover an area larger than the PSF. This notebook investigates the SNR losses due to trailing, as expected for LSST (with its 2-snap per visit observations) and the effect of DM source detection on the limiting magnitude expected for a visit.
There are also some visualizations of what moving objects might look like in LSST visits (including the dip or gap that may occur in the trail due to 2 snaps per visit observations), and an estimate of what fraction of NEOs or PHAs may be affected by these trailing losses.
Note that for LSST, each visit is composed of 2 shorter exposures. The pair is used to reject cosmic rays, but will be simply added together for a single visit limiting magnitude equivalent to the combination of the two exposures (LSST is sky-noise limited in all bands except u band, which also has a non-negligible read-noise contribution). The spacing between the two exposures is nominally 2 seconds (the readout time), however the shutter requires 1 second to move across the field of view. The shutter is composed of two blades which travel across the fov one after another (e.g. one blade 'opens' the camera, the second blade 'closes' it; for the next exposure, the second blade 'opens', followed by the first blade 'closing' the camera and returning to the starting positions). Thus, the midpoint of any particular exposure varies by 1 second across the fov but the total exposure time is constant. The 'gap' between a pair of exposures in a visit varies from 2 seconds to 4 seconds, depending on location in the fov relative to the shutter movement.
If the shutter is opening L-R then R-L, then a point on the L side of the camera will have an exposure midpoint 1 second earlier than a point on the R side of the camera, will have a 4 second gap between exposures instead of a 2 second gap, and have the midpoint of the second exposure 1 second later than the R side of the camera. This may complicate trailing calculations.
Each object will have the same trail length in the individual exposures, and the same overall 'central' location, but a slightly different combined trail length due to the variation in the length of the gap in the middle of the visit.
---
Simple example of motion (just to show gap between exposures in a single visit).
```python
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
```
```python
# simple motion - no PSF, just showing the 'gap' between snaps
velocity = 0.5 #deg/day
velocity = velocity /24.0 #arcseconds/second
exposuretime = np.arange(0, 15.01, 0.2) #seconds
timesL = np.concatenate([exposuretime, exposuretime+exposuretime.max()+4])
timesR = np.concatenate([exposuretime+1., exposuretime+exposuretime.max()+2.+1])
#print timesL.mean(), timesL.min(), timesL.max()
#print timesR.mean(), timesR.min(), timesR.max()
positionL = velocity * timesL
positionR = velocity * timesR
plt.figure()
plt.plot(positionL, np.zeros(len(positionL))+0.1, 'k.', label='RHS - 2s gap')
plt.plot(positionR, np.zeros(len(positionR))+0.2, 'r.', label='LHS - 4s gap')
plt.ylim(0, 0.3)
plt.xlim(-0.05, None)
plt.xlabel('Arcseconds')
plt.legend(loc='lower right', fontsize='smaller')
plt.axvline(positionL.mean(), color='g', linestyle=':')
print positionL.mean(), positionR.mean()
```
## PSF ##
Now let's add in the seeing distribution, to look at the flux profile of the sources.
Assume the PSF (for a stationary source) is consistent with Kolmogorov turbulence -- for LSST, this means it will actually be a von Kármán profile, but we can approximate this well enough here with a double gaussian. For moving objects, we can create many 'stationary' PSFs, at a series of locations along the trail of the object.
A good description of the PSF is a double-gaussian:
\begin{equation}
p_K(r | \alpha) \, = \, 0.909 \left( p(r | \alpha) + 0.1 p(r | 2\alpha) \right) \\
p(r | \alpha) \, = \frac{1}{2 \pi \alpha^2} exp( \frac{-r^2}{2 \alpha^2})
\end{equation}
```python
from scipy import integrate
def sumPSF(x, y, flux):
dx = np.min(np.diff(x))
dy = np.min(np.diff(y))
sumVal = integrate.simps(integrate.simps(flux, dx=dx), dx=dy)
#sumVal = np.trapz(np.trapz(flux, dx=dx),dx=dy)
return sumVal
```
```python
from scipy import interpolate
def zoomImage(x, y, flux, zoom=[-1, 1, -1, 1], zmax=None, nbins=200.0, pixelize=False, pixelscale=0.2):
"""Zoom in and show the image in region 'zoom'.
'pixelize' translates x/y into pixels and displays the image as would-be-seen with pixels."""
if zmax is None:
zmax = flux.max()
if pixelize:
x_pix = x / pixelscale
y_pix = y / pixelscale
xg = np.arange(zoom[0], zoom[1]+0.5, 1)
yg = np.arange(zoom[2], zoom[3]+0.5, 1)
xgrid, ygrid = np.meshgrid(xg, yg)
showflux = interpolate.interpn((y_pix, x_pix), flux, (ygrid, xgrid),
method='splinef2d', bounds_error=False, fill_value=0)
plt.imshow(showflux, extent=zoom, vmin=0, vmax=zmax, origin='lower', interpolation='none',
cmap='gray')
plt.colorbar()
plt.xlabel('Pixels')
plt.ylabel('Pixels')
else:
nbins = float(nbins)
binsize = (zoom[1]-zoom[0])/nbins
xg = np.arange(zoom[0], zoom[1]+binsize, binsize)
binsize = (zoom[3] - zoom[2])/nbins
yg = np.arange(zoom[2], zoom[3]+binsize, binsize)
xgrid, ygrid = np.meshgrid(xg, yg)
showflux = interpolate.interpn((y, x), flux, (ygrid, xgrid),
method='splinef2d', bounds_error=False, fill_value=0)
plt.imshow(showflux, extent=zoom, vmin=0, vmax=zmax, origin='lower', interpolation='none', cmap='gray')
plt.colorbar()
plt.xlabel('Arcseconds')
plt.ylabel('Arcseconds')
```
```python
def singleGaussianStationaryPSF(seeing, totalflux=1, xcen=0, ycen=0, stepsize=0.01, alpharad=10.0):
"Distribute flux across PSF. seeing in arcseconds"
# Translate 'seeing' FWHM to gaussian 'sigma' (alpha in the equations above)
alpha = seeing / np.sqrt(8 * np.log(2))
maxrad = alpha*alpharad
x = np.arange(0, 2*maxrad, stepsize)
x = x - x.mean() + xcen
y = np.arange(0, 2*maxrad, stepsize)
y = y - y.mean() + ycen
xi, yi = np.meshgrid(x, y)
radius = np.sqrt((xi-xcen)**2 + (yi-ycen)**2)
p = 1.0/(2.0*np.pi*alpha**2) * np.exp(-radius**2/(2.0*alpha**2))
flux = p / sumPSF(x, y, p) * totalflux
# Flux = flux[y][x], although here it doesn't matter because len(x) = len(y)
return x, y, flux
```
```python
def stationaryPSF(seeing, totalflux=1, xcen=0, ycen=0, stepsize=0.01, alpharad=10.0):
"Distribute flux across PSF. seeing in arcseconds"
# Translate 'seeing' FWHM to gaussian 'sigma' (alpha in the equations above)
alpha = seeing / np.sqrt(8 * np.log(2))
maxrad = alpha*alpharad
x = np.arange(0, 2*maxrad, stepsize)
x = x - x.mean() + xcen
y = np.arange(0, 2*maxrad, stepsize)
y = y - y.mean() + ycen
xi, yi = np.meshgrid(x, y)
radius = np.sqrt((xi-xcen)**2 + (yi-ycen)**2)
p1 = 1.0/(2.0*np.pi*alpha**2) * np.exp(-radius**2/(2.0*alpha**2))
p2 = 1.0/(2.0*np.pi*(2*alpha)**2) * np.exp(-radius**2/(2.0*(2*alpha)**2))
p = 0.909*(p1 + 0.1*p2)
flux = p / sumPSF(x, y, p) * totalflux
# Flux = flux[y][x], although here it doesn't matter because len(x) = len(y)
return x, y, flux
```
```python
def crossSection(x, y, flux, xi=None, yi=None):
"""Take a cross-section at xi/yi and return the flux values."""
if xi is None:
xi = np.mean(x)
if yi is None:
yi = np.mean(y)
# Find closest point in x/y arrays.
xindx = np.argmin(np.abs(x-xi))
yindx = np.argmin(np.abs(y-yi))
fluxx = flux[yindx][:]
fluxy = np.swapaxes(flux, 0, 1)[xindx][:]
return fluxx, fluxy
```
Check these tools out with stationary PSF.
```python
x, y, flux = stationaryPSF(0.7, totalflux=1)
sumPSF(x, y, flux)
```
1.0000000000000002
```python
print sumPSF(x, y, flux)
plt.figure()
zoomImage(x, y, flux)
plt.title('0.7" seeing')
plt.figure()
fluxx, fluxy = crossSection(x, y, flux)
x1 = x
fx1 = fluxx
x, y, flux = stationaryPSF(0.7, xcen=1)
print sumPSF(x, y, flux)
plt.figure()
zoomImage(x, y, flux)
plt.title('0.7" seeing')
plt.figure()
fluxx, fluxy = crossSection(x, y, flux, xi=1)
x2 = x
fx2 = fluxx
y2 = y
fy2 = fluxy
x, y, flux = stationaryPSF(1.0, totalflux=1)
print sumPSF(x, y, flux)
plt.figure()
zoomImage(x, y, flux, zmax=1.6)
plt.title('1.0" seeing')
plt.figure()
fluxx, fluxy = crossSection(x, y, flux)
plt.plot(x1, fx1, 'r')
plt.plot(x2, fx2, 'r:')
plt.plot(y2, fy2, 'g:')
plt.plot(x, fluxx, 'b')
plt.xlim(-2, 2)
plt.xlabel('Arcseconds')
```
### Moving object PSF ###
Now simulate a moving object as a series of stationary PSFs, summing all the flux values contributed by each stationary PSF.
```python
def movingPSF(velocity=1.0, seeing=0.7, totalflux=1., side='L'):
"Simulate a moving object; velocity (deg/day), seeing(arcsecond), side='L' or 'R' (L=4sec gap)"""
velocity = velocity / 24.0 #arcsecond/second
exposureTimeSteps = seeing/velocity/20.0
exposuretime = np.arange(0, 15+exposureTimeSteps/2.0, exposureTimeSteps) #seconds
timesL = np.concatenate([exposuretime, exposuretime+exposuretime.max()+4])
timesR = np.concatenate([exposuretime+1., exposuretime+exposuretime.max()+2.+1])
positionL = velocity * timesL
positionR = velocity * timesR
xlist = []
ylist = []
fluxlist = []
if side=='L':
positions = positionL
else:
positions = positionR
for p in (positions):
xcen = p
x, y, flux = stationaryPSF(seeing, xcen=xcen, ycen=0)
xlist.append(x)
ylist.append(y)
fluxlist.append(flux)
xmin = np.array([x.min() for x in xlist]).min()
xmax = np.array([x.max() for x in xlist]).max()
ymin = np.array([y.min() for y in ylist]).min()
ymax = np.array([y.max() for y in ylist]).max()
stepsize = 0.01 #arcseconds
x = np.arange(xmin, xmax+stepsize, stepsize)
y = np.arange(ymin, ymax+stepsize, stepsize)
xgrid, ygrid = np.meshgrid(x, y)
flux = np.zeros(np.shape(xgrid), float)
for xi, yi, fi in zip(xlist, ylist, fluxlist):
f = interpolate.interpn((yi, xi), fi, (ygrid, xgrid), bounds_error=False, fill_value=0)
flux += f
fluxSum = sumPSF(x, y, flux)
flux = flux / fluxSum * totalflux
return x, y, flux
```
```python
velocity = 1.0 #deg/day
seeing = 0.7 #arcseconds
x, y, flux = movingPSF(velocity=velocity, seeing=seeing, totalflux=1000.0)
print sumPSF(x, y, flux)
```
1000.0
```python
zoomImage(x, y, flux, zoom=[-1, 3, -1, 1])
plt.title('Velocity of %.2f deg/day with %.2f" seeing' %(velocity, seeing))
plt.ylabel('4 second gap')
plt.figure()
fluxx, fluxy = crossSection(x, y, flux)
plt.figure()
plt.plot(x, fluxx, 'r', label='Lengthwise')
plt.plot(y, fluxy, 'b', label='Crossection')
plt.legend(loc='upper right', fontsize='smaller', fancybox=True)
plt.xlabel('Arcseconds')
plt.title('Velocity of %.2f deg/day with %.2f" seeing' %(velocity, seeing))
```
Above is with a smooth sub-pixel sampling. At 1 deg/day, on the 4 second gap (L) hand side of the chip, there is a dip in the smoothly sampled flux. However, it's small in width - the tracks are only separated by about 0.1", which is less than a pixel (0.2"/pixel for LSST). We can look at how the flux would appear if it was only sampled at the center of each pixel.
```python
# try pixelizing the flux
pixelscale = 0.2
# 1 deg/day not nyquist sampled; 2 deg/day is definitely visible!
zoom=[-1, 3, -1, 1]
zoompix = [int(z/pixelscale) for z in zoom]
zoomImage(x, y, flux, zoom=zoompix, pixelize=True)
```
```python
x, y, flux = movingPSF(side='R', velocity=velocity, seeing=seeing)
zoomImage(x, y, flux, zoom=[-1, 3, -1, 1])
plt.title('Velocity of %.2f deg/day with %.2f" seeing' %(velocity, seeing))
plt.ylabel('2 second gap')
plt.figure()
fluxx, fluxy = crossSection(x, y, flux)
plt.figure()
plt.plot(x, fluxx, 'r', label='Lengthwise')
plt.plot(y, fluxy, 'b', label='Crossection')
plt.legend(loc='upper right', fontsize='smaller', fancybox=True)
plt.xlabel('Arcseconds')
plt.title('Velocity of %.2f deg/day with %.2f" seeing' %(velocity, seeing))
```
```python
velocity=0.5
seeing=1.0
x, y, flux = movingPSF(side='L', velocity=velocity, seeing=seeing)
zoomImage(x, y, flux, zoom=[-1, 2, -1, 1])
plt.title('Velocity of %.2f deg/day with %.2f" seeing' %(velocity, seeing))
plt.ylabel('4 second gap')
plt.figure()
fluxx, fluxy = crossSection(x, y, flux)
print fluxx.max(), fluxy.max()
plt.figure()
plt.plot(x, fluxx, 'r', label='Lengthwise')
plt.plot(y, fluxy, 'b', label='Center Crossection')
plt.legend(loc='upper right', fontsize='smaller', fancybox=True)
plt.xlabel('Arcseconds')
plt.title('Velocity of %.2f deg/day with %.2f" seeing' %(velocity, seeing))
```
---
## SNR ##
Calculating the SNR primarily comes down to calculating $n_{eff}$.
We can calculate SNR (eqn 41 in SNR doc) (assuming gain=1):
\begin{equation}
SNR = \frac {C_b} {(C_b + n_{eff} (B_b + \sigma_I^2))^{1/2}}
\end{equation}
And equation 45-46 of the SNR doc invert this to:
\begin{equation}
C_b(SNR) = \frac{SNR^2}{2} + \left( \frac{SNR^4}{4}+ SNR^2 V_n \right) ^{1/2} \\
V_n = n_{eff} (B_b + \sigma_I^2) \\
\end{equation}
And then, as shown in eqn 26 of the SNR doc, $n_{eff}$ can be calculated as follows:
\begin{equation}
n_{eff} = \sum_i w_i = \frac{1}{\sum_i p_i^2} \\
\end{equation}
For a double gaussian, this is approximately equivalent to
\begin{equation}
n_{eff} = 2.436 \, (FWHM / pixelscale)^2 \\
\end{equation}
and for the simplification of a single gaussian PSF, it is exactly equivalent to
\begin{equation}
n_{eff} = 2.266 \, (FWHM / pixelscale)^2 \\
\end{equation}
```python
from scipy import integrate
def calcNeff(x, y, psfprofile, pixelscale=0.2):
# Find dx/dy intervals for integration. They should be uniform based on methods here.
# numpy says they are, but somehow the diff returns multiple versions ?? (is this the bug?)
dx = np.max(np.diff(x))
dy = np.max(np.diff(y))
# Make sure psfprofile normalizes to 1.
psfSum = integrate.simps(integrate.simps(psfprofile, dx=dx), dx=dy)
psfprofile /= psfSum
# Calculate neff (area), in 'numerical steps'
neff = 1.0 / integrate.simps(integrate.simps(psfprofile**2, dx=dx), dx=dy)
# Convert to pixels (the 'neff' above is in arcseconds^2)
neff = neff / (pixelscale**2)
return neff
```
```python
# Calculate Neff for stationary sources.
pixelscale = 0.2 #arcseconds/pixel
FWHM = 0.7 #arcseconds ('seeing')
# For a single gaussian PSF
x, y, flux = singleGaussianStationaryPSF(seeing=FWHM, totalflux=1.0, alpharad=20.0, stepsize=0.01)
neff = calcNeff(x, y, flux, pixelscale=pixelscale)
neff_analytic = 2.266 * (FWHM/pixelscale)**2
print 'Single Gaussian:'
print 'Analytic Neff', neff_analytic
print 'Calculated neff from PSF', neff
print '% difference:', (neff-neff_analytic)/neff_analytic*100.0
# For a double gaussian PSF
# See note after equation 33 in SNR doc -
# suggests seeing = 1.035 * FWHM for a double-gaussian.
seeing = FWHM * 1.035
neff_analytic = 2.436 * (seeing/pixelscale)**2
# Calculate Neff from sum(1/p) for each pixel.
x, y, flux = stationaryPSF(seeing=FWHM, totalflux=1.0, alpharad=20.0, stepsize=0.01)
neff = calcNeff(x, y, flux, pixelscale=pixelscale)
print 'Double Gaussian (adjusted FWHM/seeing value):'
print 'Analytic Neff', neff_analytic
print 'Calculated neff from PSF', neff
print '% difference:', (neff-neff_analytic)/neff_analytic*100.0
```
Single Gaussian:
Analytic Neff 27.7585
Calculated neff from PSF 27.7607058687
% difference: 0.00794664229063
Double Gaussian (adjusted FWHM/seeing value):
Analytic Neff 31.966425225
Calculated neff from PSF 31.0304425876
% difference: -2.92801785237
```python
# So calculate Neff for moving sources - example:
velocity = 0.5
seeing = 0.7
x, y, flux = movingPSF(side='L', velocity=velocity, seeing=seeing, totalflux=1.0)
neff = calcNeff(x, y, flux)
print 'Calculated neff from PSF (LHS), velocity %.2f seeing %.1f: %f' %(velocity, seeing, neff)
x, y, flux = movingPSF(side='R', velocity=velocity, seeing=seeing, totalflux=1.0)
neff = calcNeff(x, y, flux)
print 'Calculated neff from PSF (RHS), velocity %.2f seeing %.1f: %f' %(velocity, seeing, neff)
```
Calculated neff from PSF (LHS), velocity 0.50 seeing 0.7: 38.864290
Calculated neff from PSF (RHS), velocity 0.50 seeing 0.7: 37.669208
```python
# Calculate totalflux equivalent to (optimally extracted) SNR=5.0 for this range of velocities.
SNR = 5.0
sky = 2000.
inst_noise = 10.0
Vn = neff*(sky + inst_noise)
counts = SNR**2/2. + np.sqrt(SNR**4/4. + SNR**2 * Vn)
mags = 2.5*np.log10(counts)
# and for a stationary source.
x, y, flux = stationaryPSF(seeing=seeing, totalflux=1.)
neff_stat = calcNeff(x, y, flux)
Vn = neff_stat*(sky + inst_noise)
counts = SNR**2/2. + np.sqrt(SNR**4/4. + SNR**2 * Vn)
mag_stat = 2.5*np.log10(counts)
# Subtract the two to find the magnitude increase required to stay at SNR=5.0 (optimal extraction) as objects trail.
mag_diff = mags - mag_stat
print mag_stat, "=m5 for a stationary source (corresponds to SNR=5)"
print mags, "= m5 for for an object moving", velocity, "deg/day in", seeing, "arcsec seeing"
print "difference: ", mag_diff
```
7.75202331664 =m5 for a stationary source (corresponds to SNR=5)
7.85626751764 = m5 for for an object moving 0.5 deg/day in 0.7 arcsec seeing
difference: 0.104244200996
This result matches closely with the trailing loss formula used in the LSST Overview paper, so it looks about right. This is the inevitable loss in SNR due to the fact that the moving object trails across more sky pixels, thus adding more sky noise into the measurement of its (optimally extracted) source.
---
We also need to look at the effect that results from DM's source detection algorithms. DM will only detect sources which are brighter than some threshold (currently 5$\sigma$) in a PSF-convolved image. Because our sources are moving, their peak fluxes (convolved with the stationary PSF) will be lower than those of a similar stationary source.
```python
# Find counts at threshold for stationary source.
seeing = 0.7
SNR = 5.0
sky = 2000.
inst_noise = 10.0
x, y, flux = stationaryPSF(seeing=seeing, totalflux=1)
neff_stat = calcNeff(x, y, flux)
Vn = neff_stat*(sky + inst_noise)
counts_stat = SNR**2/2. + np.sqrt(SNR**4/4. + SNR**2 * Vn)
x_stat, y_stat, flux_stat = stationaryPSF(seeing=seeing, totalflux=counts_stat)
zoomImage(x_stat, y_stat, flux_stat)
print counts_stat, sumPSF(x_stat, y_stat, flux_stat), flux_stat.sum()
```
```python
# Distribute same counts in moving object.
velocity = 1.0
x_mo, y_mo, flux_mo = movingPSF(seeing=seeing, velocity=velocity, totalflux=counts_stat)
zoomImage(x_mo, y_mo, flux_mo)
print counts_stat, sumPSF(x_mo, y_mo, flux_mo), flux_mo.sum()
```
```python
# Compare the peak brightness of the two (without correlation with PSF)
fx_stat, fy_stat = crossSection(x_stat, y_stat, flux_stat)
fx_mo, fy_mo = crossSection(x_mo, y_mo, flux_mo)
plt.plot(x_stat, fx_stat, 'g')
plt.plot(x_mo, fx_mo, 'r')
plt.xlabel("Arcseconds")
plt.title('Lengthwise cross-section')
print 'max counts for stationary / moving objects:', flux_stat.max(), '/', flux_mo.max()
print 'total flux across stationary object', sumPSF(x_stat, y_stat, flux_stat)
print 'total flux across moving object', sumPSF(x_mo, y_mo, flux_mo)
```
```python
# Generate a PSF profile that we will correlate with the stationary and moving object sources
# (this is the LSST detection filter)
x_psf, y_psf, psfprofile = stationaryPSF(seeing=seeing, totalflux=1.0, stepsize=0.01, alpharad=10.0)
```
```python
from scipy import signal
filtered_stat = signal.fftconvolve(flux_stat, psfprofile)
plt.imshow(filtered_stat, origin='lower')
plt.colorbar()
print 'Max counts in filtered image (~ sum of total flux)', filtered_stat.max()
print 'total flux in original image, simple sum without account for pixel size', flux_stat.sum()
print 'Max counts in original image', flux_stat.max()
```
```python
filtered_mo = signal.fftconvolve(flux_mo, psfprofile)
plt.imshow(filtered_mo, origin='lower')
plt.colorbar()
print 'Max counts in filtered image', filtered_mo.max()
print 'total counts in original image (sum without account for pixel size)', flux_mo.sum()
print 'Max counts in original image', flux_mo.max()
```
```python
# So how much brighter do we have to get as a moving object in order to hit the
# filtered_stat.max() value, which is the detection threshold?
ratio = filtered_stat.max() / filtered_mo.max()
print "increasing counts in moving object by", ratio
dmag = 2.5*np.log10(ratio)
print "equivalent to change in magnitude of", dmag
flux_mo2 = flux_mo * ratio
```
increasing counts in moving object by 1.55539911197
equivalent to change in magnitude of 0.479604616668
```python
# Just look at the cross-sections,
# see that even with the increase of 1.55 in flux that we're still below the pix level of the stationary PSF.
# this is because along the 'line' of the velocity, the flux doesn't fall as fast as a stationary PSF
fx_stat, fy_stat = crossSection(x_stat, y_stat, flux_stat)
fx_mo, fy_mo = crossSection(x_mo, y_mo, flux_mo)
fx_mo2, fy_mo2 = crossSection(x_mo, y_mo, flux_mo2)
plt.plot(x_stat, fx_stat, 'g', label='Stationary F=F0')
plt.plot(x_mo, fx_mo, 'r', label='Moving, F=F0')
plt.plot(x_mo, fx_mo2, 'k', label='Moving, F=F0*%.3f' %(ratio))
plt.legend(loc='upper right', fancybox=True, fontsize='smaller')
plt.xlabel("Arcseconds")
plt.title('Lengthwise cross-section')
```
Set up and repeat the experiment for a variety of different seeings, and both sides of the chip.
(warning - this is pretty slow).
```python
det_loss = {}
trail_loss = {}
seeings = [0.7, 1.0, 1.2]
velocities = np.concatenate([np.arange(0.02, 2.5, 0.2), np.arange(2.8, 4.0, 0.3), np.arange(4.5, 10, 0.5)])
sides = ['L', 'R']
```
```python
# Find counts at threshold for stationary source.
for side in sides:
det_loss[side] = {}
trail_loss[side] = {}
for seeing in seeings:
SNR = 5.0
sky = 2000. #these values should cancel
inst_noise = 10.0
x, y, flux = stationaryPSF(seeing=seeing, totalflux=1)
neff_stat = calcNeff(x, y, flux)
Vn = neff_stat*(sky + inst_noise)
counts_stat = SNR**2/2. + np.sqrt(SNR**4/4. + SNR**2 * Vn)
x_stat, y_stat, flux_stat = stationaryPSF(seeing=seeing, totalflux=counts_stat)
# Determine the PSF Profile for convolution (correlation, but we're symmetric)
x_psf, y_psf, psfprofile = stationaryPSF(seeing=seeing, totalflux=1.0, stepsize=0.01, alpharad=10.0)
# Calculated the filtered peak value for stationary sources - this is what we have to match.
filtered_stat = signal.fftconvolve(flux_stat, psfprofile)
# Calculate how much brighter (than a stationary obj) a moving object has to be to match the
# peak level above in PSF filter (rather than moving object filter)
# And calculate how much brighter (than stationary obj) a moving object has to be to hit SNR=5
# even with optimal extraction
det_loss[side][seeing] = np.zeros(len(velocities), float)
trail_loss[side][seeing] = np.zeros(len(velocities), float)
for i, v in enumerate(velocities):
x, y, flux = movingPSF(seeing=seeing, velocity=v, totalflux=counts_stat, side=side)
filtered_mo = signal.fftconvolve(flux, psfprofile)
det_loss[side][seeing][i] = filtered_stat.max() / filtered_mo.max()
neff = calcNeff(x, y, flux)
Vn = neff*(sky + inst_noise)
counts_mo = SNR**2/2. + np.sqrt(SNR**4/4. + SNR**2 * Vn)
trail_loss[side][seeing][i] = counts_mo / counts_stat
```
```python
# We have the 'blue curve'= minimum SNR losses due to increased area == 'trail_loss'
# We have the 'red curve' = max detection loss due to detection on PSF-filtered image instead of trailed PSF
# 'diff_loss' == the difference between them (potentially recoverable with increased work by DM)
diff_loss = {}
for side in sides:
diff_loss[side] = {}
for seeing in seeings:
diff_loss[side][seeing] = det_loss[side][seeing]/trail_loss[side][seeing]
```
```python
# red = detection losses due to detection on point-like PSF
# blue = snr losses due to increased area under moving object
# green = ratio between the two (red/blue)
# solid = LHS of focal plane (4s gap), dashed = RHS of focal plane (2s gap)
for side in sides:
if side == 'L':
linestyle = '-'
if side == 'R':
linestyle = ':'
plt.figure(1)
for seeing in seeings:
plt.plot(velocities, det_loss[side][seeing], color='r', linestyle=linestyle)
plt.plot(velocities, trail_loss[side][seeing], color='b', linestyle=linestyle)
plt.plot(velocities, diff_loss[side][seeing], color='g', linestyle=linestyle)
plt.xlabel('Velocity (deg/day)')
plt.ylabel('Flux loss (ratio) - SNR loss')
plt.figure(2)
for seeing in seeings:
plt.plot(velocities*30.0/seeing/24.0, det_loss[side][seeing], color='r', linestyle=linestyle)
plt.plot(velocities*30.0/seeing/24.0, trail_loss[side][seeing], color='b', linestyle=linestyle)
plt.plot(velocities*30.0/seeing/24.0, diff_loss[side][seeing], color='g', linestyle=linestyle)
plt.xlabel('x')
plt.ylabel('Flux loss (ratio) - SNR loss')
plt.figure(3)
for seeing in seeings:
plt.plot(velocities*30.0/seeing/24.0, -2.5*np.log10(det_loss[side][seeing]), color='r', linestyle=linestyle)
plt.plot(velocities*30.0/seeing/24.0, -2.5*np.log10(trail_loss[side][seeing]), color='b', linestyle=linestyle)
plt.plot(velocities*30.0/seeing/24.0, -2.5*np.log10(diff_loss[side][seeing]), color='g', linestyle=linestyle)
plt.xlabel('x')
plt.ylabel('Delta mag')
```
Fit functions to the detection and trailing losses. We're looking for something probably like:
\begin{equation}
x = \frac{v T_{exp}} {\theta} \\
\text{flux ratio} = \sqrt{1 + a x^2 / (1 + b x)}\\
\end{equation}
```python
from scipy.optimize import curve_fit
from scipy.special import erf, erfc
```
```python
def vToX(v, t, seeing):
return v * t / seeing / 24.0
def fitfunc(x, c1, c2):
# x = velocities * t / seeing (/24.0)
func = np.sqrt(1. + c1*x**2 / (1. + c2 *x))
return func
def fitfunc2(x, c1, c2):
func = 1 + c1*x**2 / (1.+c2*x)
return func
print vToX(1.0, 30, 0.7)
```
1.78571428571
```python
tExp = 30.0
xall = {}
trailall = {}
detall = {}
diffall = {}
for side in sides:
# combine the data so that we can fit it all at once.
xall[side] = []
detall[side] = []
trailall[side] = []
diffall[side] = []
for s in seeings:
x = vToX(velocities, tExp, s)
xall[side].append(x)
detall[side].append(det_loss[side][s])
trailall[side].append(trail_loss[side][s])
diffall[side].append(diff_loss[side][s])
xall[side] = np.array(xall[side]).flatten()
detall[side] = np.array(detall[side]).flatten()
trailall[side] = np.array(trailall[side]).flatten()
diffall[side] = np.array(diffall[side]).flatten()
xarg = np.argsort(xall[side])
detall[side] = detall[side][xarg]
trailall[side] = trailall[side][xarg]
xall[side] = xall[side][xarg]
diffall[side] = diffall[side][xarg]
# Fit the data.
trailab = {}
detab = {}
diffab = {}
for side in sides:
trailab[side] = {}
detab[side] = {}
diffab[side] = {}
popt, pcov = curve_fit(fitfunc, xall[side], trailall[side])
trailab[side]['a'] = popt[0]
trailab[side]['b'] = popt[1]
popt, pcov = curve_fit(fitfunc, xall[side], detall[side])
detab[side]['a'] = popt[0]
detab[side]['b'] = popt[1]
popt, pcov = curve_fit(fitfunc, xall[side], diffall[side])
diffab[side]['a'] = popt[0]
diffab[side]['b'] = popt[1]
# Residuals?
dl = {}
tl = {}
dd = {}
for side in sides:
dl[side] = fitfunc(xall[side], detab[side]['a'], detab[side]['b'])
tl[side] = fitfunc(xall[side], trailab[side]['a'], trailab[side]['b'])
dd[side] = fitfunc(xall[side], diffab[side]['a'], diffab[side]['b'])
# Plot data
for side in sides:
plt.plot(xall[side], dl[side], 'r-')
plt.plot(xall[side], tl[side], 'b-')
plt.plot(xall[side], detall[side], 'r.')
plt.plot(xall[side], trailall[side], 'b.')
plt.xlabel('x')
plt.ylabel('flux loss')
plt.figure()
for side in sides:
plt.plot(xall[side], dd[side], 'g-')
plt.plot(xall[side], diffall[side], 'g.')
plt.xlabel('x')
plt.ylabel('ratio (det / trail) flux')
# plot diffs.
plt.figure()
for side in sides:
diff_dl = 2.5*np.log10(detall[side] / dl[side])
diff_tl = 2.5*np.log10(trailall[side] / tl[side])
eps = 1e-20
diff_dd = 2.5*np.log10(diffall[side] / dd[side])
plt.plot(xall[side], diff_dl, 'r-')
plt.plot(xall[side], diff_tl, 'b-')
plt.xlabel('x')
plt.ylabel('$\Delta$ (mag_calc - mag_fit)')
plt.plot(xall[side], diff_dd, 'g-')
plt.xlabel('x')
plt.ylabel('$\Delta$ (mag_calc - mag_fit)')
```
Fitting the trailing losses with a simple polynomial seems to do slightly better, but gives a more complicated expression. A third order polynomial is not enough to capture the wiggles near x=0, so deg=4 it is. Actually, deg=5 does better at not adding additional losses near velocity=0, which is an important area.
```python
trailp = {}
detp = {}
diffp = {}
deg = 5
for side in sides:
trailp[side] = np.polyfit(xall[side], trailall[side], deg=deg)
detp[side] = np.polyfit(xall[side], detall[side], deg=deg)
diffp[side] = np.polyfit(xall[side], diffall[side], deg=deg)
# Residuals?
dl = {}
tl = {}
dd = {}
for side in sides:
dl[side] = np.polyval(detp[side], xall[side])
tl[side] = np.polyval(trailp[side], xall[side])
dd[side] = np.polyval(diffp[side], xall[side])
# plot
for side in sides:
plt.plot(xall[side], dl[side], 'r-')
plt.plot(xall[side], tl[side], 'b-')
plt.plot(xall[side], detall[side], 'r.')
plt.plot(xall[side], trailall[side], 'b.')
plt.xlabel('x')
plt.ylabel('flux loss')
plt.figure()
for side in sides:
plt.plot(xall[side], dd[side], 'g-')
plt.plot(xall[side], diffall[side], 'g.')
plt.xlabel('x')
plt.ylabel('ratio (det / trail) flux')
plt.figure()
for side in sides:
plt.plot(xall[side], 2.5*np.log10(detall[side]/dl[side]), 'r-')
plt.plot(xall[side], 2.5*np.log10(trailall[side]/tl[side]), 'b-')
plt.xlabel('x')
plt.ylabel('$\Delta$ (mag_calc - mag_fit)')
eps = 1e-20
diff_dd = 2.5*np.log10(diffall[side] / dd[side])
plt.plot(xall[side], diff_dd, 'g-')
```
```python
for side in sides:
print 'side', side
print 'a/b'
print 'trailing loss params : a,b', trailab[side]['a'], trailab[side]['b']
print 'detection loss params: a,b', detab[side]['a'], detab[side]['b']
print 'ratio params: a,b', diffab[side]['a'], diffab[side]['b']
print 'p'
print 'trailing loss params: p', trailp[side]
print 'detection loss params: p, ', detp[side]
print 'ratio params: p', diffp[side]
print 'Average a/b"s:'
print 'trailing loss:', (trailab['L']['a'] + trailab['R']['a'])/2.0, (trailab['L']['b'] + trailab['R']['b'])/2.0
print 'detection loss:', (detab['L']['a'] + detab['R']['a'])/2.0, (detab['L']['b'] + detab['R']['b'])/2.0
print 'difference det-trail', (diffab['L']['a'] + diffab['R']['a'])/2.0, (diffab['L']['b'] + diffab['R']['b'])/2.0
```
side L
a/b
trailing loss params : a,b 0.920734046087 1.4169297694
detection loss params: a,b 0.431955259706 0.00535711080589
ratio params: a,b 0.217994245071 0.346470879895
p
trailing loss params: p [ -5.29500760e-06 2.19397497e-04 -2.95124468e-03 8.65400208e-03
2.11945061e-01 9.51532325e-01]
detection loss params: p, [ -1.78979546e-05 7.56237969e-04 -1.16129979e-02 8.36873676e-02
2.97513833e-01 8.81788708e-01]
ratio params: p [ -8.89600663e-06 3.65178479e-04 -5.22005436e-03 2.82739025e-02
1.08896878e-01 9.39462156e-01]
side R
a/b
trailing loss params : a,b 0.601723287645 0.907305298359
detection loss params: a,b 0.405893156869 0.000259020721574
ratio params: a,b 0.179239610511 0.263157781724
p
trailing loss params: p [ -6.51485613e-06 2.82673664e-04 -4.19447694e-03 1.98842197e-02
1.70661764e-01 9.58934388e-01]
detection loss params: p, [ -2.73651404e-05 1.22260780e-03 -2.00065226e-02 1.49305252e-01
1.03333241e-01 9.42108957e-01]
ratio params: p [ -1.54787202e-05 6.75787994e-04 -1.05047458e-02 6.64178860e-02
9.35379777e-03 9.71380681e-01]
Average a/b"s:
trailing loss: 0.761228666866 1.16211753388
detection loss: 0.418924208288 0.00280806576373
difference det-trail 0.198616927791 0.30481433081
```python
# Summary:
def dmag(velocity, seeing, texp=30.):
a_trail = 0.761
b_trail = 1.162
a_det = 0.420
b_det = 0.003
x = velocity * texp / seeing / 24.0
dmag = {}
dmag['trail'] = 1.25 * np.log10(1 + a_trail*x**2/(1+b_trail*x))
dmag['detect'] = 1.25 * np.log10(1 + a_det*x**2 / (1+b_det*x))
return dmag
```
```python
velocities = np.arange(0, 8, 0.1)
plt.figure(figsize=(8,6))
seeing = 0.7
dmags = dmag(velocities, seeing)
plt.plot(velocities, dmags['trail'], 'b:', label='SNR loss')
plt.plot(velocities, dmags['detect'], 'r-', label='Detection loss')
#plt.plot(velocities, 1.25*(np.log10(1+0.67*(velocities*30.0/seeing/24.0))), 'k:')
plt.legend(loc='upper left', fancybox=True, numpoints=1, fontsize='smaller')
plt.grid(True)
plt.ylim(0, None)
plt.xlim(0, None)
plt.xlabel(r'Velocity (deg/day)', fontsize='x-large')
plt.ylabel(r'$\Delta$ Mag', fontsize='x-large')
plt.title(r'Trailing Losses for $\Theta$ = %.1f"' % seeing, fontsize='x-large')
plt.savefig('_static/trailing_losses.png', format='png', dpi=600)
```
```python
velocities = np.arange(0, 30, 0.1)
plt.figure(figsize=(8,6))
seeing = 0.7
dmags = dmag(velocities, seeing)
plt.plot(velocities, dmags['trail'], 'b:', label='SNR loss')
plt.plot(velocities, dmags['detect'], 'r-', label='Detection loss')
#plt.plot(velocities, 1.25*(np.log10(1+0.67*(velocities*30.0/seeing/24.0))), 'k:')
plt.legend(loc='upper left', fancybox=True, numpoints=1, fontsize='smaller')
plt.grid(True)
plt.ylim(0, None)
plt.xlim(0, None)
plt.xlabel(r'Velocity (deg/day)', fontsize='x-large')
plt.ylabel(r'$\Delta$ Mag', fontsize='x-large')
plt.title(r'Trailing Losses for $\Theta$ = %.1f"' % seeing, fontsize='x-large')
plt.savefig('_static/trailing_losses_fast.png', format='png', dpi=600)
```
```python
velocities = np.arange(0, 20, 0.1)
plt.figure(figsize=(8,6))
seeing = 1.0
dmags = dmag(velocities, seeing)
plt.plot(velocities, dmags['trail'], 'b:', label='SNR loss')
plt.plot(velocities, dmags['detect'], 'r-', label='Detection loss')
plt.legend(loc='upper left', fancybox=True, numpoints=1, fontsize='smaller')
plt.grid(True)
plt.ylim(0, None)
plt.xlim(0, None)
plt.xlabel(r'Velocity (deg/day)', fontsize='x-large')
plt.ylabel(r'$\Delta$ Mag', fontsize='x-large')
plt.title(r'Trailing Losses for $\Theta$ = %.1f"' % seeing, fontsize='x-large')
plt.savefig('_static/trailing_losses_%.1f.png' % seeing, format='png', dpi=600)
```
```python
```
| 7816d8bd597d63e744a58f1c6bfc97d2faa52c6e | 759,925 | ipynb | Jupyter Notebook | Trailing Losses.ipynb | lsst-sims/smtn-003 | 2310c968ce15f939824f96660d027cbf677ae755 | [
"CC-BY-4.0"
]
| null | null | null | Trailing Losses.ipynb | lsst-sims/smtn-003 | 2310c968ce15f939824f96660d027cbf677ae755 | [
"CC-BY-4.0"
]
| null | null | null | Trailing Losses.ipynb | lsst-sims/smtn-003 | 2310c968ce15f939824f96660d027cbf677ae755 | [
"CC-BY-4.0"
]
| null | null | null | 392.118163 | 38,140 | 0.920754 | true | 11,373 | Qwen/Qwen-72B | 1. YES
2. YES | 0.757794 | 0.70253 | 0.532373 | __label__eng_Latn | 0.850643 | 0.075211 |
```python
"""
================================
Data pre-processing
Angle correction
================================
(See the project documentation for more info)
The goal is to process data before using it to train ML algorithms :
1. Extraction of accelerations for activity 1 (rest activity)
2. Transitory regime suppression on activity 1
3. Calculation of theta angle between Z' and Z (ground's normal axis)
4. System rotation towards Z earth axis
5. Offset removal
Note that the solver fails to find a solution at step 3
"""
print(__doc__)
```
================================
Data pre-processing
Angle correction
================================
(See the project documentation for more info)
The goal is to process data before using it to train ML algorithms :
1. Extraction of accelerations for activity 1 (rest activity)
2. Transitory regime suppression on activity 1
3. Calculation of theta angle between Z' and Z (ground's normal axis)
4. System rotation towards Z earth axis
5. Offset removal
Note that the solver fails to find a solution at step 3
```python
# Imports statements
import pandas as pd
import numpy as np
# from math import cos, sin
from utils.colorpalette import black, red, blue, green, yellow, pink, brown, violet
from utils.activities import activities_labels
import matplotlib.pyplot as plt
from matplotlib.lines import Line2D
```
```python
# Import data into memory
raw_data = pd.read_csv('../data/1.csv',header=None,delimiter=',').astype(int)
raw_data.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.0</td>
<td>1502</td>
<td>2215</td>
<td>2153</td>
<td>1</td>
</tr>
<tr>
<th>1</th>
<td>1.0</td>
<td>1667</td>
<td>2072</td>
<td>2047</td>
<td>1</td>
</tr>
<tr>
<th>2</th>
<td>2.0</td>
<td>1611</td>
<td>1957</td>
<td>1906</td>
<td>1</td>
</tr>
<tr>
<th>3</th>
<td>3.0</td>
<td>1601</td>
<td>1939</td>
<td>1831</td>
<td>1</td>
</tr>
<tr>
<th>4</th>
<td>4.0</td>
<td>1643</td>
<td>1965</td>
<td>1879</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
```python
# Prepare further plotting activities
color_map = np.array([black, red, blue, green, yellow, pink, brown, violet])
axe_name = ['X', 'Y', 'Z']
activities = np.array(raw_data[4]-1) # -1 is here to shift the table correctly to work with the color_map
x_min, x_max = raw_data[0].min() - 1, raw_data[0].max() + 1
```
```python
# Show data before processing
y_min, y_max, xx, yy, fig, subplot = [], [], [], [], [], []
legend = []
for activity, color in zip(activities_labels, color_map):
legend.append(Line2D([0], [0], marker='o', label=activity, ls='None', markerfacecolor=color, markeredgecolor='k'))
for k in range(0,3):
y_min.append(raw_data[k+1].min() - 1)
y_max.append(raw_data[k+1].max() + 1)
xx_tmp, yy_tmp = np.meshgrid(np.arange(x_min, x_max, 1000),np.arange(y_min[k], y_max[k], 100))
xx.append(xx_tmp)
yy.append(yy_tmp)
fig.append(plt.figure())
subplot.append(fig[k].add_subplot(111))
subplot[k].scatter(raw_data[0], raw_data[k+1], s=1,c=color_map[activities])
subplot[k].set_title('Acceleration on ' + axe_name[k])
legend = plt.legend(handles=legend, loc='upper center', bbox_to_anchor=(1, 2), title='Activities')
plt.show()
```
```python
#Prepare for processing
clean_data = []
clean_data.append(raw_data[0])
```
```python
# Transitory regime suppression on activity 1
np_raw_data = np.array(raw_data, dtype=object)
bool_mask_on_act_1 = np_raw_data[:, 4] == 1 # Boolean mask to only select rows concerning activity 1
bool_mask_on_permanent_regime = (np_raw_data[:, 0] >= 3200) & (np_raw_data[:, 0] <= 16000)
act_1_data_permanent_regime = np_raw_data[bool_mask_on_act_1 & bool_mask_on_permanent_regime]
```
```python
# Show activity 1 data after transitory regime suppression on activity 1
activities = np.array(act_1_data_permanent_regime[:,4]-1, dtype=int) # -1 is here to shift the table correctly to work with the color_map
x_min, x_max = act_1_data_permanent_regime[0].min() - 1, act_1_data_permanent_regime[0].max() + 1
y_min, y_max, xx, yy, fig, subplot = [], [], [], [], [], []
legend = []
for activity, color in zip(activities_labels, color_map):
legend.append(Line2D([0], [0], marker='o', label=activity, ls='None', markerfacecolor=color, markeredgecolor='k'))
for k in range(0,3):
y_min.append(act_1_data_permanent_regime[k+1].min() - 1)
y_max.append(act_1_data_permanent_regime[k+1].max() + 1)
xx_tmp, yy_tmp = np.meshgrid(np.arange(x_min, x_max, 1000),np.arange(y_min[k], y_max[k], 100))
xx.append(xx_tmp)
yy.append(yy_tmp)
fig.append(plt.figure())
subplot.append(fig[k].add_subplot(111))
subplot[k].scatter(act_1_data_permanent_regime[:,0], act_1_data_permanent_regime[:,k+1], s=1,c=color_map[activities])
subplot[k].set_title('Acceleration on ' + axe_name[k])
legend = plt.legend(handles=legend, loc='upper center', bbox_to_anchor=(1, 2), title='Activities')
plt.show()
```
```python
index_mean, xp_mean, yp_mean, zp_mean, activity_mean = act_1_data_permanent_regime.mean(axis=0)
```
```python
# Look for theta value :
from sympy.solvers import solve
from sympy import Symbol, sin, cos
from math import sqrt
index_mean, xp_mean, yp_mean, zp_mean, activity_mean = act_1_data_permanent_regime.mean(axis=0)
abs_gamma_mean = sqrt(xp_mean**2+yp_mean**2+zp_mean**2)
theta = Symbol('theta')
index_mean, xp_mean, yp_mean, zp_mean, activity_mean = act_1_data_permanent_regime.mean(axis=0)
theta = solve(sin(theta)*yp_mean+cos(theta)*zp_mean, abs_gamma_mean, dict=True)
# TODO : Find a way that this equation returns results !
```
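A possible explanation and work-around (a sketch based on the assumption that the goal is to align the mean rest acceleration with the Z axis; this is not the author's solution): the rotation matrix used below only mixes the Y and Z components, so $\sin(\theta)\,\bar{y}' + \cos(\theta)\,\bar{z}'$ can never exceed $\sqrt{\bar{y}'^2 + \bar{z}'^2}$, which is strictly smaller than $|\bar{\Gamma}|$ whenever $\bar{x}' \neq 0$. That is why the solver finds no solution. Aligning only the Y/Z part has the closed form $\theta = \operatorname{atan2}(\bar{y}', \bar{z}')$; removing the remaining X component would require a second rotation (e.g. about Y).
```python
# Sketch (assumption: we only want the angle about X that zeroes the mean Y component)
import math
theta_x = math.atan2(yp_mean, zp_mean)   # closed form, no symbolic solver needed
y_rot = math.cos(theta_x)*yp_mean - math.sin(theta_x)*zp_mean   # ~0 after the rotation
z_rot = math.sin(theta_x)*yp_mean + math.cos(theta_x)*zp_mean   # = sqrt(yp_mean**2 + zp_mean**2)
print(theta_x, y_rot, z_rot)
```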
```python
# System rotation towards Z earth axis - see the report for documentation
rotation_matrix = np.array([[1, 0, 0],
                            [0, cos(theta), -sin(theta)],
                            [0, sin(theta), cos(theta)]])
# Rotate the raw accelerations in place (requires a scalar theta from the previous step);
# clean_data is rebuilt from raw_data in the offset-suppression cell below.
for row_index in raw_data.index:
    Gamma_xp_yp_zp = raw_data.loc[row_index, [1, 2, 3]]
    Gamma_x_y_z = np.matmul(rotation_matrix, Gamma_xp_yp_zp)
    raw_data.loc[row_index, [1, 2, 3]] = Gamma_x_y_z
```
```python
# Offset suppression
# TODO :
# Should we really delete the offset though? Maybe it just corresponds to gravity, so change of system first !
# At rest, Gamma is expected to be 1g, but is calculated to be around 3,7g.
# So there might be offsets, indeed, but in which direction?
mean_acc_by_act = raw_data[[1, 2, 3, 4]].groupby([4], as_index=False).mean().sort_values(by=4, ascending=True)
mean_acc_at_act_1 = mean_acc_by_act.iloc[0] # Offset is calculated at rest (activity 1)
for k in range(1,4):
clean_data.append(raw_data[k] - mean_acc_at_act_1[k])
```
```python
# Show changes after offset suppression
activities = np.array(raw_data[4] - 1)  # recompute for the full dataset (the earlier cell overwrote it with the activity-1 subset)
legend = []
for activity, color in zip(activities_labels, color_map):
legend.append(Line2D([0], [0], marker='o', label=activity, ls='None', markerfacecolor=color, markeredgecolor='k'))
y_min, y_max, xx, yy, fig, subplot = [], [], [], [], [], []
for k in range(0,3):
y_min.append(clean_data[k+1].min() - 1)
y_max.append(clean_data[k+1].max() + 1)
xx_tmp, yy_tmp = np.meshgrid(np.arange(x_min, x_max, 1000),np.arange(y_min[k], y_max[k], 100))
xx.append(xx_tmp)
yy.append(yy_tmp)
fig.append(plt.figure())
subplot.append(fig[k].add_subplot(111))
subplot[k].scatter(clean_data[0], clean_data[k+1], s=1,c=color_map[activities])
subplot[k].set_title('Acceleration on ' + axe_name[k])
legend = plt.legend(handles=legend, loc='upper center', bbox_to_anchor=(1, 2), title='Activities')
plt.show()
```
```python
# Push data changes into new csv file
df = pd.DataFrame(clean_data)
df.to_csv("../../data/cleaned_data/projected_on_z_axis_1.csv",index=False)
```
| da2bc3f52dc10cbee10d3d2f1c781e58b4cdb7c3 | 308,686 | ipynb | Jupyter Notebook | src/angle_correction.ipynb | will-afs/IML | 9c76535274e183c37395af20ea35d1544fbc6c43 | [
"BSD-3-Clause"
]
| null | null | null | src/angle_correction.ipynb | will-afs/IML | 9c76535274e183c37395af20ea35d1544fbc6c43 | [
"BSD-3-Clause"
]
| null | null | null | src/angle_correction.ipynb | will-afs/IML | 9c76535274e183c37395af20ea35d1544fbc6c43 | [
"BSD-3-Clause"
]
| null | null | null | 668.151515 | 85,414 | 0.943668 | true | 2,466 | Qwen/Qwen-72B | 1. YES
2. YES | 0.805632 | 0.746139 | 0.601114 | __label__eng_Latn | 0.634378 | 0.234919 |
```python
from sympy import integrate, init_printing
from sympy.abc import x
init_printing(use_latex="mathjax")
```
```python
f = x**2 - 3*x + 2
integrate(f)
```
$\displaystyle \frac{x^{3}}{3} - \frac{3 x^{2}}{2} + 2 x$
```python
from sympy.abc import a,b,c
f = a*x**2+b*x+c
integrate(f, x)
```
$\displaystyle \frac{a x^{3}}{3} + \frac{b x^{2}}{2} + c x$
```python
from sympy import cos,pi
integrate(cos(x), (x,0,pi/2.0)) # from 0 to pi/2
```
$\displaystyle 1$
```python
integrate(x, (x,0,5))
```
$\displaystyle \frac{25}{2}$
```python
from sympy.abc import x,y,z,a,b,c,d
from sympy import simplify
```
```python
I1 = integrate(1, (y,c,d))
simplify( integrate(I1, (x,a,b) ) )
```
$\displaystyle \left(a - b\right) \left(c - d\right)$
[Reference](https://numython.github.io/posts/integrales-con-sympy/)
```python
from __future__ import division
from sympy import *
x, y, z, t = symbols('x y z t')
k, m, n = symbols('k m n', integer=True)
f, g, h = symbols('f g h', cls=Function)
integrate(x**2 * exp(x) * cos(x), x)
```
$\displaystyle \frac{x^{2} e^{x} \sin{\left(x \right)}}{2} + \frac{x^{2} e^{x} \cos{\left(x \right)}}{2} - x e^{x} \sin{\left(x \right)} + \frac{e^{x} \sin{\left(x \right)}}{2} - \frac{e^{x} \cos{\left(x \right)}}{2}$
```python
integrate(5/(1+x**2), (x,-oo,oo))
```
$\displaystyle 5 \pi$
[Reference](https://docs.sympy.org/latest/modules/integrals/integrals.html) [Verification](https://www.youtube.com/watch?v=6uIeKpA2dHw)
```python
f = sin(k*x)*cos(m*x)
integrate(f, x)
```
$\displaystyle \begin{cases} 0 & \text{for}\: k = 0 \wedge m = 0 \\\frac{\cos^{2}{\left(m x \right)}}{2 m} & \text{for}\: k = - m \\- \frac{\cos^{2}{\left(m x \right)}}{2 m} & \text{for}\: k = m \\- \frac{k \cos{\left(k x \right)} \cos{\left(m x \right)}}{k^{2} - m^{2}} - \frac{m \sin{\left(k x \right)} \sin{\left(m x \right)}}{k^{2} - m^{2}} & \text{otherwise} \end{cases}$
```python
f = sin(2*x)*cos(4*x)
integrate(f)
```
$\displaystyle \frac{\sin{\left(2 x \right)} \sin{\left(4 x \right)}}{3} + \frac{\cos{\left(2 x \right)} \cos{\left(4 x \right)}}{6}$
```python
limit(((x - 1)/(x + 1))**x, x, oo)
```
$\displaystyle e^{-2}$
```python
exp(-2)
```
$\displaystyle e^{-2}$
```python
limit(sin(x)/x, x, 2)
```
$\displaystyle \frac{\sin{\left(2 \right)}}{2}$
```python
limit((1 - cos(x))/x**2, x, 0)
```
$\displaystyle \frac{1}{2}$
```python
S.Half
```
$\displaystyle \frac{1}{2}$
```python
limit((1 + k/x)**x, x, oo)
```
$\displaystyle e^{k}$
```python
limit((x + 1)*(x + 2)*(x + 3)/x**3, x, oo)
```
$\displaystyle 1$
[Reference](https://github.com/sympy/sympy/blob/master/sympy/series/tests/test_demidovich.py)
```python
diff( cos(x) * (1 + x))
```
$\displaystyle - \left(x + 1\right) \sin{\left(x \right)} + \cos{\left(x \right)}$
```python
diff( cos(x) * (1 + x),x)
```
$\displaystyle - \left(x + 1\right) \sin{\left(x \right)} + \cos{\left(x \right)}$
```python
diff( cos(x) * (1 + x),x,x)
```
$\displaystyle - (\left(x + 1\right) \cos{\left(x \right)} + 2 \sin{\left(x \right)})$
```python
diff(log(x * y), y)
```
$\displaystyle \frac{1}{y}$
[Reference](https://pybonacci.org/2012/04/30/como-calcular-limites-derivadas-series-e-integrales-en-python-con-sympy/)
| 02afebbaddcb7ecf3ebaf5fa6abfd8be4abcd33a | 12,882 | ipynb | Jupyter Notebook | LimitesDerivadasIntegrales/DerivadasLimitesIntegrales.ipynb | rulgamer03/Python-Projects | 89a2418fadce0fd4674d3f7d3fa682a9aaa4b14d | [
"Apache-2.0"
]
| 1 | 2021-06-18T16:29:46.000Z | 2021-06-18T16:29:46.000Z | LimitesDerivadasIntegrales/DerivadasLimitesIntegrales.ipynb | rulgamer03/Python-Projects | 89a2418fadce0fd4674d3f7d3fa682a9aaa4b14d | [
"Apache-2.0"
]
| null | null | null | LimitesDerivadasIntegrales/DerivadasLimitesIntegrales.ipynb | rulgamer03/Python-Projects | 89a2418fadce0fd4674d3f7d3fa682a9aaa4b14d | [
"Apache-2.0"
]
| null | null | null | 22.058219 | 424 | 0.382316 | true | 1,282 | Qwen/Qwen-72B | 1. YES
2. YES | 0.907312 | 0.899121 | 0.815784 | __label__yue_Hant | 0.454445 | 0.733672 |
# Bayesian Model Example - The Deregulation Act
Basic idea: have taxi- and non-taxi-related crime (i.e. what Jacob is doing) diverged after the deregulation act?
Method:
- Calculate the difference in proportions between the two datasets.
- Model the change in proportion following the text messaging example from Probabilistic Programming book. I.e.:
    - Model the difference with a simple parametric distribution and a switchpoint (the book's example uses a Poisson distribution defined by a $\lambda$ parameter; here the difference is continuous and can be negative, so a Normal distribution is used instead - see Model Conception below).
    - Assume that two distributions exist: one before the Act, and one after.
    - Define a switchpoint as the moment at which they switch (i.e. when the Act came into force).
    - Create two sets of distribution parameters - one set before the switchpoint and one after.
    - See what the posterior distributions of the parameters and the switchpoint are. Do they correspond with the introduction of the Act?
```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import scipy.stats as stats
from IPython.core.pylabtools import figsize
import pymc3 as pm
import theano.tensor as tt
```
/Users/nick/anaconda3/envs/py36/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
## Data
Prepare the data
```python
data = pd.read_csv("./data/taxi_crime.csv")
date = np.array(data.loc[:,"Month"])
taxi_crime = np.array(data.loc[:,"Taxi Crime"])
all_crime = np.array(data.loc[:,"All Crime"])
```
Plot them
```python
figsize(12.5, 4)
plt.plot(np.arange(len(date)), taxi_crime*100, color="#348ABD", label="Taxi crime (*100)")
plt.plot(np.arange(len(date)), all_crime, color="#00AA00", label="All crime")
plt.xlabel("Time (Months)")
plt.ylabel("Count")
plt.title("Crime volumes")
plt.legend()
```
As _all crime_ has much larger volumes than _taxi crime_, index to the first point and then calculate the difference. This will give them comparable sizes.
```python
taxi_i = np.array( [ x / taxi_crime[0] for x in taxi_crime ] )
all_i = np.array( [ x / all_crime[0] for x in all_crime ] )
diff = all_i - taxi_i
```
```python
figsize(12.5, 4)
plt.plot(np.arange(len(date)), taxi_i, color="#00AA00", label="Taxi indexed")
plt.plot(np.arange(len(date)), all_i, color="#348ABD", label="All indexed")
plt.plot(np.arange(len(date)), diff, color="#A60628", label="Difference")
plt.xlabel("Time (Months)")
plt.ylabel("Count")
plt.legend()
```
The raw difference is too noisy to interpret directly, so try calculating the difference between the two indexed series as a 12-month rolling average
```python
avg_diff = []
avg_taxi = []
avg_all = []
for i in range(12, len(taxi_i)):
    taxi_mean = np.mean(taxi_i[i-12:i])  # 12-month window ending at month i
    all_mean = np.mean(all_i[i-12:i])
avg_taxi.append(taxi_mean)
avg_all.append(all_mean)
avg_diff.append(all_mean - taxi_mean)
avg_diff = np.array(avg_diff)
avg_taxi = np.array(avg_taxi)
avg_all = np.array(avg_all)
```
```python
figsize(12.5, 4)
plt.plot(np.arange(len(avg_diff)), avg_taxi, color="#00AA00", label="Taxi indexed 12-month average")
plt.plot(np.arange(len(avg_diff)), avg_all, color="#348ABD", label="All indexed 12-month average")
plt.plot(np.arange(len(avg_diff)), avg_diff, color="#A60628", label="Difference 12-month average")
plt.xlabel("Time (Months)")
plt.ylabel("Count")
plt.title("12-month rolling average differences")
plt.legend()
```
## Model Conception
We want to model the difference between the two indexed crime volumes. The difference is continuous and could be positive or negative. Therefore let's assume that the difference can be modelled using a normal distribution:
$$X \sim N(\mu, 1/\tau)$$
The values of $\mu$ and $\tau$ determine the shape of the distribution. The $\tau$ will give us information about the spread of the difference. $\mu$ is especially useful: a _larger_ value for $\mu$ gives a higher probability of _larger_ variable values being drawn. Therefore if the model suggests that $\mu$ increases after some point (i.e. the introduction of the Deregulation Act) then it would appear that the difference has increased.
```python
nor = stats.norm
x = np.linspace(-8, 7, 150)
mu = (-2, 0, 3)
tau = (.7, 1, 2.8)
colors = ["#348ABD", "#A60628", "#7A68A6"]
parameters = zip(mu, tau, colors)
for _mu, _tau, _color in parameters:
plt.plot(x, nor.pdf(x, _mu, scale=1./_tau),
label="$\mu = %d,\;\\tau = %.1f$" % (_mu, _tau), color=_color)
plt.fill_between(x, nor.pdf(x, _mu, scale=1./_tau), color=_color,
alpha=.33)
plt.legend(loc="upper right")
plt.xlabel("$x$")
plt.ylabel("density function at $x$")
plt.title("Probability distribution of three different Normal random \
variables");
```
We want to test whether at some point (e.g. at the introduction of the Deregulation Act) the difference in crime volumes changed. Call this point $z$. After $z$, the difference suddenly becomes larger; $\mu$ increases. Therefore there are two $\mu$ parameters, one before $z$ and one after (and each will have its own $\tau$ parameter):
$$
X \sim N
\begin{cases}
(\mu_1, 1/\tau_1 ) & \text{if } t \lt z \cr
(\mu_2, 1/\tau_2 ) & \text{if } t \ge z
\end{cases}
$$
If, in reality, no sudden change occurred and indeed $\mu_1 = \mu_2$, then the posterior distributions of the two $\mu$s should look about equal.
We are interested in inferring the unknown $\mu$s and $\tau$s. To use Bayesian inference, we need to assign prior probabilities to their different possible values. $\mu$ is continuous and might be negative (this would happen if taxi crime were more abundant than all other crime), so use a normal distribution again. $\tau$ is also continuous, but cannot be negative, so use an exponential distribution. Each of those distributions has its own parameters, so we now have six _hyper-parameters_ (two for each of the $\mu$s and one for each $\tau$). Name them $\alpha$, $\beta$ and $\gamma$:
$$
\begin{align}
& \mu_1 \sim \text{N} (\alpha_1, 1/\beta_1) \text{ , } \tau_1 \sim \text{Exp} (\gamma_1) \\\
& \mu_2 \sim \text{N} (\alpha_2, 1/\beta_2) \text{ , } \tau_2 \sim \text{Exp} (\gamma_2)
\end{align}
$$
The $\alpha$, $\beta$ and $\gamma$ hyper-parameters only affect the model indirectly, through the priors on the other parameters, so our initial guesses do not influence the results too strongly and we have some flexibility in our choice.
We also need a distribution for the point at which the difference changes ($z$). Assign a *uniform prior belief* to every possible time point. This will reduce any personal bias.
\begin{align}
& z \sim \text{DiscreteUniform(1,N) }\\\\
& \Rightarrow P( z = k ) = \frac{1}{N}
\end{align}
where $N$ is the number of time points in the data.
## Model Definition
Now define the model using PYMC. (_Note that we use the same values for the hyperparameters, although above I talked about them as separate variables for each of the two Normal distributions_).
```python
with pm.Model() as model:
# The hyperparameters
alpha = 1.0
beta = 1.0
gamma = 1.0/avg_diff.mean()
# The two mean and sd variables parameters.
# (We want to see if their posterior distributions change)
mu_1 = pm.Normal("mu_1", mu = alpha, tau = 1.0/beta)
mu_2 = pm.Normal("mu_2", mu = alpha, tau = 1.0/beta)
tau_1 = pm.Exponential("tau_1", gamma)
tau_2 = pm.Exponential("tau_2", gamma)
#taus = pm.Exponential("taus", gamma, shape=2)
# z is the point at which the difference between the two crime series changed.
# We don't know anything about this, so assign a uniform prior belief
z = pm.DiscreteUniform("z", lower=0, upper=len(avg_diff) - 1)
#z = pm.Normal("z", mu = 20, sd=5)
# Creates new functions using switch() to assign the correct mu or tau variables
# These can be thought of as the random variables mu and tau defined initially.
idx = np.arange(len(avg_diff)) # Index
mu_ = pm.math.switch(z > idx, mu_1, mu_2)
tau_ = pm.math.switch(z > idx, tau_1, tau_2)
# Combine the observation data (we assume that it has been generated by
# a normal distribution with our mu and tau parameters).
obs = pm.Normal("obs", mu=mu_, tau=tau_, observed=avg_diff)
```
The model has been defined, now we can perform MCMC to sample from the posterior distribution
```python
N = 10000 # Number of samples
with model:
step = pm.Metropolis()
#start = pm.find_MAP() # Help it to start from a good place
#trace = pm.sample(N, tune=int(N*2), step=step, start=start, njobs=4)
trace = pm.sample(N, tune=int(N/2), step=step, njobs=1)  # `start` is not defined above, so let pm.sample pick its own starting point
```
67%|██████▋ | 10044/15000 [00:10<00:05, 946.91it/s]
The sampling has finished. Now examine the samples
```python
mu_1_samples = trace['mu_1']
mu_2_samples = trace['mu_2']
tau_1_samples = trace['tau_1']
tau_2_samples = trace['tau_2']
z_samples = trace['z']
```
```python
figsize(12.5, 10)
BINS = 50
#histogram of the samples:
# MUs
ax = plt.subplot(321) # rows, columns, index
#ax.set_autoscaley_on(False)
plt.hist(mu_1_samples, histtype='stepfilled', bins=BINS, alpha=0.85,
label="posterior of $\mu_1$", color="#A60628", normed=True)
plt.vlines(np.mean(mu_1_samples), 0, 3, linestyle="--", label="mean $mu_1$")
plt.legend(loc="upper left")
plt.title(r"""Posterior distributions of the variables""")
plt.xlim([-2,4])
#plt.xlabel("$\lambda_1$ value")
ax = plt.subplot(322) # rows, columns, index
#ax.set_autoscaley_on(False)
plt.hist(mu_2_samples, histtype='stepfilled', bins=BINS, alpha=0.85,
label="posterior of $\mu_2$", color="#A60628", normed=True)
plt.vlines(np.mean(mu_2_samples), 0, 3, linestyle="--", label="mean $mu_2$")
plt.legend(loc="upper left")
plt.xlim([-2,4])
#plt.xlabel("$\lambda_1$ value")
# TAUs
ax = plt.subplot(323)
#ax.set_autoscaley_on(False)
plt.hist(tau_1_samples, histtype='stepfilled', bins=BINS, alpha=0.85,
label="posterior of $\tau_1$", color="#7A68A6", normed=True)
plt.legend(loc="upper left")
plt.xlim([0, 7])
plt.xlabel("$\tau_1$ value")
ax = plt.subplot(324)
#ax.set_autoscaley_on(False)
plt.xlim([0, 7])
plt.hist(tau_2_samples, histtype='stepfilled', bins=BINS, alpha=0.85,
label="posterior of $\tau_2$", color="#7A68A6", normed=True)
plt.legend(loc="upper left")
plt.xlim([0, 7])
plt.xlabel("$\tau_2$ value")
plt.subplot(325)
plt.hist(z_samples, bins=len(diff), alpha=1,
label=r"posterior of $z$",
color="#467821", rwidth=2.)
plt.xticks(np.arange(len(diff)))
plt.legend(loc="upper left")
#plt.ylim([0, .75])
plt.xlim([0,len(avg_diff)])
plt.xlabel(r"$z$ (in months)")
plt.ylabel("probability");
```
Some more diagnostic visualisations
```python
pm.plots.traceplot(trace=trace, varnames=["mu_1", "mu_2", "tau_1", "tau_2", "z"])
#pm.plots.traceplot(trace=trace, varnames=["mu_1"])
```
```python
pm.plots.plot_posterior(trace=trace, varnames=["mu_1", "mu_2", "tau_1", "tau_2", "z"])
#pm.plots.plot_posterior(trace=trace["centers"][:,1])
#pm.plots.autocorrplot(trace=trace, varnames=["centers"]);
```
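The original question (did taxi-related and other crime diverge after the Act?) can be answered directly from these samples. The sketch below estimates the posterior probability that $\mu_2$ exceeds $\mu_1$, and the most probable switch month, using the sample arrays extracted above.
```python
# Posterior probability that the difference increased after the switch point,
# estimated directly from the MCMC samples drawn above.
p_increase = np.mean(mu_2_samples > mu_1_samples)
print("P(mu_2 > mu_1 | data) = %.3f" % p_increase)

# Most probable switch month (mode of the posterior samples of z)
z_counts = np.bincount(z_samples.astype(int))
print("Most probable switch month: %d" % np.argmax(z_counts))
```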
```python
```
```python
```
## Next steps:
Think about how to detect structural change in a time series.
E.g.: [https://stats.stackexchange.com/questions/16953/how-to-detect-structural-change-in-a-timeseries]
Can involve Jose too.
```python
```
| 0b99991ede27457064a46c19af4fea48ebdccae8 | 252,238 | ipynb | Jupyter Notebook | ExampleModels/Example-TaxiDeregulationTEMP.ipynb | nickmalleson/ProbabilisticProgramming | c2e83f8ce9e38f9850014aeebaab7d9658fa214e | [
"MIT"
]
| null | null | null | ExampleModels/Example-TaxiDeregulationTEMP.ipynb | nickmalleson/ProbabilisticProgramming | c2e83f8ce9e38f9850014aeebaab7d9658fa214e | [
"MIT"
]
| null | null | null | ExampleModels/Example-TaxiDeregulationTEMP.ipynb | nickmalleson/ProbabilisticProgramming | c2e83f8ce9e38f9850014aeebaab7d9658fa214e | [
"MIT"
]
| null | null | null | 423.218121 | 67,736 | 0.932532 | true | 3,237 | Qwen/Qwen-72B | 1. YES
2. YES | 0.855851 | 0.7773 | 0.665253 | __label__eng_Latn | 0.92553 | 0.383937 |
```python
import sympy as sp
import numpy as np
sp.init_printing()
```
```python
xi, L, A, E = sp.symbols("xi L A E")
Nue = sp.Matrix([(1-xi)/2, (1+xi)/2])
Nue
```
```python
Bue=sp.diff(Nue,xi)*2/L
Bue
```
```python
KeTRUSS1D = sp.lambdify((A,E,L), A*E*sp.integrate(Bue*(Bue.T),(xi,-1,1))*L/2)
print(KeTRUSS1D(A,E,L))
```
[[A*E/L -A*E/L]
[-A*E/L A*E/L]]
Specifying the input data
```python
A1=60
A2=20
A3=30
E1=100e3
E2=200e3
E3=50e3
L1=1e3
L2=2e3
L3=3e3
FT=-15e3
```
The element-node connectivity is stored in the `en` matrix
```python
en=np.array([
[1,2],
[2,3],
[2,4]]) - 1
```
Element stiffness matrices
```python
Ke1 = KeTRUSS1D(A1,E1,L1)
Ke2 = KeTRUSS1D(A2,E2,L2)
Ke3 = KeTRUSS1D(A3,E3,L3)
```
To assemble the global stiffness matrix, we first create a matrix filled with zeros, and then add the entries of each element stiffness matrix at the appropriate positions.
```python
KG=np.zeros((4,4))
KG
```
array([[ 0., 0., 0., 0.],
[ 0., 0., 0., 0.],
[ 0., 0., 0., 0.],
[ 0., 0., 0., 0.]])
One possible implementation of this is shown below, using the element-node connectivity matrix:
[np.ix_](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ix_.html)
Placing the stiffness matrix of element 1 into the global stiffness matrix:
```python
elemSzam=1
KG[np.ix_(en[elemSzam-1],en[elemSzam-1])] += Ke1
KG
```
array([[ 6000., -6000., 0., 0.],
[-6000., 6000., 0., 0.],
[ 0., 0., 0., 0.],
[ 0., 0., 0., 0.]])
Placing the stiffness matrix of element 2 into the global stiffness matrix:
```python
elemSzam=2
KG[np.ix_(en[elemSzam-1],en[elemSzam-1])] += Ke2
KG
```
array([[ 6000., -6000., 0., 0.],
[-6000., 8000., -2000., 0.],
[ 0., -2000., 2000., 0.],
[ 0., 0., 0., 0.]])
Placing the stiffness matrix of element 3 into the global stiffness matrix:
```python
elemSzam=3
KG[np.ix_(en[elemSzam-1],en[elemSzam-1])] += Ke3
```
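The three placements above follow the same pattern, so the assembly can also be written as a single loop over all elements. A minimal sketch (collecting the element matrices in a list) that should reproduce the matrix assembled step by step above:
```python
# Equivalent assembly in a single loop over the element-node connectivity
KG_loop = np.zeros((4, 4))
for e, Ke in enumerate([Ke1, Ke2, Ke3]):
    KG_loop[np.ix_(en[e], en[e])] += Ke

# Check that it matches the step-by-step assembly
print(np.allclose(KG_loop, KG))
```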
Thus the global stiffness matrix is:
```python
KG
```
array([[ 6000., -6000., 0., 0.],
[-6000., 8500., -2000., -500.],
[ 0., -2000., 2000., 0.],
[ 0., -500., 0., 500.]])
The global load vector:
```python
FG = np.zeros(4)
FG[1 - 1] += FT
```
The condensed stiffness matrix and the condensed load vector are obtained by deleting the rows and columns that belong to the constrained degrees of freedom
(we keep the remaining part).
```python
szabad = [0,1]
```
The condensed stiffness matrix is obtained by keeping the **first** and **second** rows/columns:
```python
print(np.ix_(szabad,szabad))
```
(array([[0],
[1]]), array([[0, 1]]))
```python
KGkond = KG[np.ix_(szabad,szabad)]
KGkond
```
array([[ 6000., -6000.],
[-6000., 8500.]])
The condensed load vector is obtained by keeping the **first** and **second** rows:
```python
FGkond = FG[np.ix_(szabad)]
FGkond
```
array([-15000., 0.])
```python
# the full (unconstrained) linear system is under-determined
np.linalg.solve(KG,FG)
```
array([ 1.12589991e+17, 1.12589991e+17, 1.12589991e+17,
1.12589991e+17])
Solving for the unknown displacements
```python
Umego = np.linalg.solve(KGkond,FGkond)
Umego
```
array([-8.5, -6. ])
Thus the full global nodal displacement vector is:
```python
UG = np.zeros(4)
UG[np.ix_(szabad)] += Umego
UG
```
array([-8.5, -6. , 0. , 0. ])
Visualising the results
[matplotlib.patches](https://matplotlib.org/api/patches_api.html)
```python
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
```
```python
nagy = 100
R = 200
plt.figure()
ax = plt.gca()
ax.set_xticklabels([])
ax.set_yticklabels([])
## original configuration
# bars
ax.add_patch(mpatches.Rectangle((-R/2,-L1),R,L1,
fc = (1,1,1,1),ec = (0.5,0.5,0.5,1), ls = "--", lw = 2))
ax.add_patch(mpatches.Rectangle((-1.5*L1-R/2,-L1-L2),R,L2,
fc = (1,1,1,1),ec = (0.5,0.5,0.5,1), ls = "--", lw = 2))
ax.add_patch(mpatches.Rectangle((1.5*L1-R/2,-L1-L3),R,L3,
fc = (1,1,1,1),ec = (0.5,0.5,0.5,1), ls = "--", lw = 2))
# nodes
ax.add_patch(mpatches.Circle((0,0),R,
fc = (1,1,1,1),ec = (0.5,0.5,0.5,1),ls = "--", lw = 2))
ax.add_patch(mpatches.Rectangle((-2*L1,-L1 - R/2),4*L1,R,
fc = (1,1,1,1),ec = (0.5,0.5,0.5,1),ls = "--", lw = 2))
# load
ax.add_patch(mpatches.Arrow(0,R+5*R,0,-5*R,
fc = (1,0,0,1),ec = (1,0,0,1), lw = 3, width = 2*R))
## deformed configuration
# bars
ax.add_patch(mpatches.Rectangle((-R/2,-L1 + nagy*UG[1]),R,L1 + nagy*(UG[0] - UG[1]),
fc = (1,1,1,1),ec = (0,0,0,1), lw = 2))
ax.add_patch(mpatches.Rectangle((-1.5*L1-R/2,-L1-L2 + nagy*UG[2]),R,L2 + nagy*(UG[1] - UG[2]),
fc = (1,1,1,1),ec = (0,0,0,1), lw = 2))
ax.add_patch(mpatches.Rectangle((1.5*L1-R/2,-L1-L3 + nagy*UG[3]),R,L3 + nagy*(UG[1] - UG[3]),
fc = (1,1,1,1),ec = (0,0,0,1), lw = 2))
# nodes
ax.add_patch(mpatches.Circle((0,0+nagy*UG[0]),R,
fc = (1,1,1,1),ec = (0,0,0,1), lw = 2))
ax.add_patch(mpatches.Rectangle((-2*L1,-L1 + nagy*UG[1] - R/2),4*L1,R,
fc = (1,1,1,1),ec = (0,0,0,1), lw = 2))
ax.add_patch(mpatches.Circle((-1.5*L1,-L1-L2 + nagy*UG[2]),R,
fc = (1,1,1,1),ec = (0,0,0,1), lw = 2))
ax.add_patch(mpatches.Circle((1.5*L1,-L1-L3 + nagy*UG[3]),R,
fc = (1,1,1,1),ec = (0,0,0,1), lw = 2))
plt.axis("equal")
plt.show()
```
The total nodal load vector (reaction forces plus external loads)
$\mathbf{F}_\mathrm{TOT} = \mathbf{F}_\mathrm{REAK} + \mathbf{F}_\mathrm{KÜL}$:
```python
FTOT = np.dot(KG,UG)
FTOT
```
array([-15000., 0., 12000., 3000.])
```python
FREAK = FTOT - FG
FREAK
```
array([ 0., 0., 12000., 3000.])
The element-level displacement vectors belonging to each element:
```python
Ue1 = UG[en[0]]
Ue1
```
array([-8.5, -6. ])
```python
Ue2 = UG[en[1]]
Ue2
```
array([-6., 0.])
```python
Ue3 = UG[en[2]]
Ue3
```
array([-6., 0.])
The element-level load vectors belonging to each element:
```python
Fe1 = np.dot(Ke1,Ue1)
Fe1
```
array([-15000., 15000.])
```python
Fe2 = np.dot(Ke2,Ue2)
Fe2
```
array([-12000., 12000.])
```python
Fe3 = np.dot(Ke3,Ue3)
Fe3
```
array([-3000., 3000.])
Thus the normal (axial) forces are:
```python
N1 = Fe1[0]
N1
```
```python
N2 = Fe2[0]
N2
```
```python
N3 = Fe3[0]
N3
```
That is, every bar is in compression.
The stresses arising in the bars:
```python
sig1 = N1/A1
sig1
```
```python
sig2 = N2/A2
sig2
```
```python
sig3 = N3/A3
sig3
```
The strains of the bars:
```python
eps1 = sig1/E1
eps1
```
```python
eps2 = sig2/E2
eps2
```
```python
eps3 = sig3/E3
eps3
```
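As a quick sanity check on the results above, the reaction forces must balance the applied load, and the axial force in bar 1 must equal the sum of the forces carried by bars 2 and 3:
```python
# Global equilibrium: reactions plus the applied load must sum to zero
print(np.isclose(np.sum(FREAK) + FT, 0.0))

# Force balance at node 2: bar 1 carries the load transferred to bars 2 and 3
print(np.isclose(N1, N2 + N3))
```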
| cc8390c3e4304d800b5743848d4d961c686e434f | 34,364 | ipynb | Jupyter Notebook | 03/03_01.ipynb | TamasPoloskei/BME-VEMA | 542725bf78e9ad0962018c1cf9ff40c860f8e1f0 | [
"MIT"
]
| null | null | null | 03/03_01.ipynb | TamasPoloskei/BME-VEMA | 542725bf78e9ad0962018c1cf9ff40c860f8e1f0 | [
"MIT"
]
| 1 | 2018-11-20T14:17:52.000Z | 2018-11-20T14:17:52.000Z | 03/03_01.ipynb | TamasPoloskei/BME-VEMA | 542725bf78e9ad0962018c1cf9ff40c860f8e1f0 | [
"MIT"
]
| null | null | null | 31.818519 | 6,580 | 0.634327 | true | 3,057 | Qwen/Qwen-72B | 1. YES
2. YES | 0.909907 | 0.760651 | 0.692121 | __label__hun_Latn | 0.954953 | 0.446361 |
# Gefitinib PK Analysis
Run a population analysis of the Gefitinib PK data. First we reproduce the steps performed in the publication by Eigenmann et al.; the next step is to challenge the modelling assumptions.
## 1. Reproduce PK Analysis
### 1.1 Import data
```python
import os
import pandas as pd
# Import data
path = os.getcwd()
data_raw = pd.read_csv(path + '/data/PK_LXF_gefi.csv', sep=';')
# Filter relevant information
data = data_raw[['#ID', 'TIME', 'Y', 'DOSE GROUP', 'DOSE', 'BW']]
# Convert TIME and Y to numeric values (currently strings)
data['TIME'] = pd.to_numeric(data['TIME'], errors='coerce')
data['Y'] = pd.to_numeric(data['Y'], errors='coerce')
# Sort TIME values (for plotting convenience)
data.sort_values(by='TIME', inplace=True)
# Filter NaNs
data = data[data['Y'].notnull()]
# Show data
data
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>#ID</th>
<th>TIME</th>
<th>Y</th>
<th>DOSE GROUP</th>
<th>DOSE</th>
<th>BW</th>
</tr>
</thead>
<tbody>
<tr>
<th>31</th>
<td>25</td>
<td>9.991667</td>
<td>87.600</td>
<td>100.00</td>
<td>.</td>
<td>24.1</td>
</tr>
<tr>
<th>130</th>
<td>149</td>
<td>9.992361</td>
<td>3.960</td>
<td>25.00</td>
<td>.</td>
<td>23.4</td>
</tr>
<tr>
<th>175</th>
<td>99</td>
<td>9.996528</td>
<td>0.488</td>
<td>6.25</td>
<td>.</td>
<td>23.0</td>
</tr>
<tr>
<th>49</th>
<td>124</td>
<td>10.010417</td>
<td>2000.000</td>
<td>100.00</td>
<td>.</td>
<td>25.2</td>
</tr>
<tr>
<th>202</th>
<td>156</td>
<td>10.010417</td>
<td>78.000</td>
<td>6.25</td>
<td>.</td>
<td>30.4</td>
</tr>
<tr>
<th>76</th>
<td>41</td>
<td>10.010417</td>
<td>468.000</td>
<td>25.00</td>
<td>.</td>
<td>24.1</td>
</tr>
<tr>
<th>13</th>
<td>22</td>
<td>10.020833</td>
<td>3190.000</td>
<td>100.00</td>
<td>.</td>
<td>24.4</td>
</tr>
<tr>
<th>139</th>
<td>164</td>
<td>10.020833</td>
<td>327.000</td>
<td>25.00</td>
<td>.</td>
<td>24.6</td>
</tr>
<tr>
<th>166</th>
<td>75</td>
<td>10.022222</td>
<td>53.500</td>
<td>6.25</td>
<td>.</td>
<td>24.6</td>
</tr>
<tr>
<th>193</th>
<td>138</td>
<td>10.040278</td>
<td>97.500</td>
<td>6.25</td>
<td>.</td>
<td>27.5</td>
</tr>
<tr>
<th>94</th>
<td>68</td>
<td>10.040278</td>
<td>461.000</td>
<td>25.00</td>
<td>.</td>
<td>30.4</td>
</tr>
<tr>
<th>67</th>
<td>157</td>
<td>10.044444</td>
<td>2900.000</td>
<td>100.00</td>
<td>.</td>
<td>24.0</td>
</tr>
<tr>
<th>121</th>
<td>133</td>
<td>10.102778</td>
<td>514.000</td>
<td>25.00</td>
<td>.</td>
<td>22.8</td>
</tr>
<tr>
<th>211</th>
<td>165</td>
<td>10.103472</td>
<td>142.000</td>
<td>6.25</td>
<td>.</td>
<td>24.6</td>
</tr>
<tr>
<th>4</th>
<td>15</td>
<td>10.104861</td>
<td>2180.000</td>
<td>100.00</td>
<td>.</td>
<td>25.9</td>
</tr>
<tr>
<th>148</th>
<td>4</td>
<td>10.205556</td>
<td>83.900</td>
<td>6.25</td>
<td>.</td>
<td>29.3</td>
</tr>
<tr>
<th>85</th>
<td>62</td>
<td>10.206250</td>
<td>272.000</td>
<td>25.00</td>
<td>.</td>
<td>24.9</td>
</tr>
<tr>
<th>22</th>
<td>24</td>
<td>10.206944</td>
<td>1870.000</td>
<td>100.00</td>
<td>.</td>
<td>26.9</td>
</tr>
<tr>
<th>184</th>
<td>125</td>
<td>10.409722</td>
<td>8.030</td>
<td>6.25</td>
<td>.</td>
<td>31.5</td>
</tr>
<tr>
<th>58</th>
<td>132</td>
<td>10.413889</td>
<td>1090.000</td>
<td>100.00</td>
<td>.</td>
<td>21.9</td>
</tr>
<tr>
<th>103</th>
<td>117</td>
<td>10.413889</td>
<td>193.000</td>
<td>25.00</td>
<td>.</td>
<td>27.9</td>
</tr>
<tr>
<th>112</th>
<td>130</td>
<td>10.997222</td>
<td>9.980</td>
<td>25.00</td>
<td>.</td>
<td>19.5</td>
</tr>
<tr>
<th>157</th>
<td>42</td>
<td>10.997222</td>
<td>0.488</td>
<td>6.25</td>
<td>.</td>
<td>24.7</td>
</tr>
<tr>
<th>41</th>
<td>78</td>
<td>11.000000</td>
<td>31.400</td>
<td>100.00</td>
<td>.</td>
<td>20.2</td>
</tr>
<tr>
<th>25</th>
<td>24</td>
<td>15.994444</td>
<td>412.000</td>
<td>100.00</td>
<td>.</td>
<td>25.2</td>
</tr>
<tr>
<th>88</th>
<td>62</td>
<td>15.994444</td>
<td>2.100</td>
<td>25.00</td>
<td>.</td>
<td>25.1</td>
</tr>
<tr>
<th>160</th>
<td>42</td>
<td>15.995139</td>
<td>2.830</td>
<td>6.25</td>
<td>.</td>
<td>26.2</td>
</tr>
<tr>
<th>206</th>
<td>156</td>
<td>16.009722</td>
<td>45.400</td>
<td>6.25</td>
<td>.</td>
<td>29.3</td>
</tr>
<tr>
<th>98</th>
<td>68</td>
<td>16.010417</td>
<td>273.000</td>
<td>25.00</td>
<td>.</td>
<td>30.3</td>
</tr>
<tr>
<th>71</th>
<td>157</td>
<td>16.011111</td>
<td>2450.000</td>
<td>100.00</td>
<td>.</td>
<td>23.0</td>
</tr>
<tr>
<th>197</th>
<td>138</td>
<td>16.021528</td>
<td>27.900</td>
<td>6.25</td>
<td>.</td>
<td>26.9</td>
</tr>
<tr>
<th>107</th>
<td>117</td>
<td>16.022222</td>
<td>437.000</td>
<td>25.00</td>
<td>.</td>
<td>28.3</td>
</tr>
<tr>
<th>53</th>
<td>124</td>
<td>16.022222</td>
<td>2610.000</td>
<td>100.00</td>
<td>.</td>
<td>25.0</td>
</tr>
<tr>
<th>125</th>
<td>133</td>
<td>16.040972</td>
<td>420.000</td>
<td>25.00</td>
<td>.</td>
<td>23.0</td>
</tr>
<tr>
<th>215</th>
<td>165</td>
<td>16.040972</td>
<td>37.400</td>
<td>6.25</td>
<td>.</td>
<td>24.4</td>
</tr>
<tr>
<th>8</th>
<td>15</td>
<td>16.041667</td>
<td>2280.000</td>
<td>100.00</td>
<td>.</td>
<td>21.1</td>
</tr>
<tr>
<th>170</th>
<td>75</td>
<td>16.100694</td>
<td>81.800</td>
<td>6.25</td>
<td>.</td>
<td>24.2</td>
</tr>
<tr>
<th>116</th>
<td>130</td>
<td>16.102083</td>
<td>440.000</td>
<td>25.00</td>
<td>.</td>
<td>20.4</td>
</tr>
<tr>
<th>62</th>
<td>132</td>
<td>16.102778</td>
<td>2390.000</td>
<td>100.00</td>
<td>.</td>
<td>24.0</td>
</tr>
<tr>
<th>44</th>
<td>78</td>
<td>16.210417</td>
<td>1510.000</td>
<td>100.00</td>
<td>.</td>
<td>19.2</td>
</tr>
<tr>
<th>143</th>
<td>164</td>
<td>16.212500</td>
<td>209.000</td>
<td>25.00</td>
<td>.</td>
<td>24.5</td>
</tr>
<tr>
<th>188</th>
<td>125</td>
<td>16.214583</td>
<td>35.000</td>
<td>6.25</td>
<td>.</td>
<td>30.4</td>
</tr>
<tr>
<th>152</th>
<td>4</td>
<td>16.415278</td>
<td>24.000</td>
<td>6.25</td>
<td>.</td>
<td>27.6</td>
</tr>
<tr>
<th>80</th>
<td>41</td>
<td>16.415278</td>
<td>239.000</td>
<td>25.00</td>
<td>.</td>
<td>23.3</td>
</tr>
<tr>
<th>17</th>
<td>22</td>
<td>16.418056</td>
<td>2140.000</td>
<td>100.00</td>
<td>.</td>
<td>21.6</td>
</tr>
<tr>
<th>35</th>
<td>25</td>
<td>16.993056</td>
<td>100.000</td>
<td>100.00</td>
<td>.</td>
<td>23.4</td>
</tr>
<tr>
<th>179</th>
<td>99</td>
<td>16.994444</td>
<td>0.488</td>
<td>6.25</td>
<td>.</td>
<td>23.7</td>
</tr>
<tr>
<th>134</th>
<td>149</td>
<td>16.994444</td>
<td>8.560</td>
<td>25.00</td>
<td>.</td>
<td>23.6</td>
</tr>
</tbody>
</table>
</div>
### 1.2 Sort data into dosing groups
Each dose group has been dosed daily from day 2 to 16. PK samples have been taken on day 10 and 16.
```python
# Get group identifiers
groups = data['DOSE GROUP'].unique()
# Sort into groups
data_one = data[data['DOSE GROUP'] == groups[0]]
data_two = data[data['DOSE GROUP'] == groups[1]]
data_three = data[data['DOSE GROUP'] == groups[2]]
# Show different dose groups
groups
```
array([100. , 25. , 6.25])
### 1.3 Visualise dose group one
```python
import matplotlib.pyplot as plt
# Display dose group
print(groups[0])
# Get unique animal IDs
ids = data_one['#ID'].unique()
# Plot measurements
fig = plt.figure(figsize=(12, 6))
for i in ids:
# Mask for individual
mask = data_one['#ID'] == i
time = data_one[mask]['TIME']
volume = data_one[mask]['Y']
# Filter out Nan values
mask = volume.notnull()
time = time[mask]
volume = volume[mask]
# Create semi log plot
plt.scatter(time, volume, edgecolor='black')
# Set y axis to logscale
plt.yscale('log')
# Label axes
plt.xlabel('Time in [day]')
plt.ylabel('Plasma concentration in [mg/L]')
plt.show()
```
### 1.4 Visualise dose group two
```python
import matplotlib.pyplot as plt
# Display dose group
print(groups[1])
# Get unique animal IDs
ids = data_two['#ID'].unique()
# Plot measurements
fig = plt.figure(figsize=(12, 6))
for i in ids:
# Mask for individual
mask = data_two['#ID'] == i
time = data_two[mask]['TIME']
volume = data_two[mask]['Y']
# Filter out Nan values
mask = volume.notnull()
time = time[mask]
volume = volume[mask]
# Create semi log plot
plt.scatter(time, volume, edgecolor='black')
# Set y axis to logscale
plt.yscale('log')
# Label axes
plt.xlabel('Time in [day]')
plt.ylabel('Plasma concentration in [mg/L]')
plt.show()
```
### 1.5 Visualise dose group three
```python
import matplotlib.pyplot as plt
# Display dose group
print(groups[2])
# Get unique animal IDs
ids = data_three['#ID'].unique()
# Plot measurements
fig = plt.figure(figsize=(12, 6))
for i in ids:
# Mask for individual
mask = data_three['#ID'] == i
time = data_three[mask]['TIME']
volume = data_three[mask]['Y']
# Filter out Nan values
mask = volume.notnull()
time = time[mask]
volume = volume[mask]
# Create semi log plot
plt.scatter(time, volume, edgecolor='black')
# Set y axis to logscale
plt.yscale('log')
# Label axes
plt.xlabel('Time in [day]')
plt.ylabel('Plasma concentration in [mg/L]')
plt.show()
```
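The plotting code above is repeated verbatim for each dose group. A small helper function (hypothetical, not part of the original analysis) avoids the duplication:
```python
def plot_dose_group(df, group):
    """Scatter plot of plasma concentration over time for one dose group."""
    fig = plt.figure(figsize=(12, 6))
    for i in df['#ID'].unique():
        individual = df[df['#ID'] == i]
        plt.scatter(individual['TIME'], individual['Y'], edgecolor='black')
    plt.yscale('log')
    plt.xlabel('Time in [day]')
    plt.ylabel('Plasma concentration in [mg/L]')
    plt.title('Dose group %s' % group)
    plt.show()

# Example usage:
# plot_dose_group(data_three, groups[2])
```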
### 1.6 Build Structural Model (pints.ForwardModel)
#### 1.6.1 Build Myokit Model
```python
import myokit
from pkpd import model as m
# Build 1 compartmental PK model with default parameters
model = m.create_one_comp_pk_model()
# Validate model
model.validate()
# Check units
model.check_units(mode=myokit.UNIT_TOLERANT)
# Print model
print(model.code())
```
[[model]]
# Initial values
central.amount = 0
[central]
dot(amount) = -k_e * amount
in [mg]
conc = amount / volume
in [mg/L]
k_e = 0
in [1/day]
time = 0 bind time
in [day]
volume = 1
in [L]
#### 1.6.2 Set oral administration
```python
import myokit


def create_dosing_regimen(model, compartment, administration, amount,
                          duration=None, periodicity=0, multiplier=0):
    """
    Returns the altered myokit.Model (a dosing compartment is added) and a
    myokit.Protocol with the specified dosing schedule.

    model          -- myokit.Model
    compartment    -- compartment that receives the dose
    administration -- type of administration; only 'dosing compartment' is
                      implemented so far (direct bolus injection is not)
    amount         -- applied dose
    duration       -- how long it takes to apply the dose; if None, a small
                      numerically stable default is used
    periodicity    -- interval between doses; 0 means the dose is applied once
    multiplier     -- how often the dose is applied; 0 means once for
                      non-periodic and indefinitely for periodic schedules
    """
    if not isinstance(model, myokit.Model):
        raise ValueError('`model` must be a myokit.Model.')
    if not model.has_component(compartment):
        raise ValueError('Model has no compartment `%s`.' % compartment)
    if administration != 'dosing compartment':
        raise NotImplementedError(
            'Only administration via a dosing compartment is implemented.')

    # Add a dosing compartment holding the dose rate and the regimen variable
    # (the regimen variable still needs to be bound to the protocol, e.g. via
    # myokit's pacing mechanism, before simulation).
    dose_comp = model.add_component('dose')
    dose_rate = dose_comp.add_variable('dose_rate')
    regimen = dose_comp.add_variable('regimen')

    # Define the dosing regimen
    if duration is None:
        duration = 0.001  # [day] time for the dose to enter the compartment
    dose_rate.set_rhs(amount / duration)  # amount in [mg], e.g. mg/kg times body weight
    dosing_regimen = myokit.Protocol()
    dosing_regimen.schedule(level=1, start=0, duration=duration,
                            period=periodicity, multiplier=multiplier)
    return model, dosing_regimen


class DosingRegimen(object):
    def __init__(self):
        self._amount = None
        self._duration = None
        self._periodicity = 0
        self._multiplier = 0
        self._dosing_regimen = None
        self._indirect_admin = False
        self._dosing_compartment = 'central'

    def set_admin(self, indirect=False):
        self._indirect_admin = indirect

    def __call__(self, model, amount, duration=None, periodicity=0, multiplier=0):
        """
        Returns the myokit.Model with appropriate structural adjustments for
        dosing, and a myokit.Protocol with the specified schedule.
        """
        return create_dosing_regimen(
            model, self._dosing_compartment, 'dosing compartment', amount,
            duration, periodicity, multiplier)
```
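For the daily dosing used in the study (one dose per day from day 2 to day 16), a periodic protocol can be built with the `period` and `multiplier` arguments of `myokit.Protocol.schedule` (assuming the `schedule(level, start, duration, period, multiplier)` signature); the dose rate and infusion duration below are illustrative assumptions, not fitted values:
```python
# Illustrative daily dosing schedule: one dosing event per day for 15 days,
# starting on day 2 (assumed numbers, for demonstration only).
dose_duration = 0.001                 # [day] length of each dosing event
daily_protocol = myokit.Protocol()
daily_protocol.schedule(
    level=1,                          # switches the dose_rate term on
    start=2,                          # [day] time of the first dose
    duration=dose_duration,           # [day] duration of each dosing event
    period=1,                         # [day] repeat once per day
    multiplier=15)                    # total number of doses
```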
#### 1.6.3 Build pints model
```python
# Build pints model
pass
```
### 1.7 Build population model
This is the Hierarchical Log Prior of the model
\begin{align}
\text{p}(\psi | \theta )\text{p}(\theta )
\end{align}
Should take a number of means and variances to construct $\text{p}(\theta )$ (Gaussian priors for the means of $\text{p}(\psi | \theta )$ and half-Cauchy priors for the variances of $\text{p}(\psi | \theta )$).
Think about whether this can be generalised for any structure.
Potentially the best way: build a hierarchical log-posterior by combining a likelihood and a prior, similar to PosteriorLogLikelihood, but appending the input parameters of the prior to the model parameters (this is the $p(y|\psi)\,p(\psi|\theta)$ bit). You have to provide posteriors for the population-level parameters.
In that way a posterior can be constructed by passing any problem, any population distribution and any priors of the population distribution parameters! A minimal sketch of this idea follows below.
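A minimal sketch of the idea, written as a plain Python class rather than against the pints API (all names here are placeholders rather than existing library classes): the hierarchical log-posterior sums the data log-likelihood $\log p(y|\psi)$, the population density $\log p(\psi|\theta)$ and the hyper-prior $\log p(\theta)$, with the individual parameters $\psi$ and population parameters $\theta$ concatenated into one vector.
```python
import numpy as np

class HierarchicalLogPosterior(object):
    """Sketch of log p(y|psi) + log p(psi|theta) + log p(theta).

    `log_likelihood(psi)`, `log_population(psi, theta)` and
    `log_hyperprior(theta)` are user-supplied callables; the parameter
    vector passed to __call__ is the concatenation [psi, theta].
    """

    def __init__(self, log_likelihood, log_population, log_hyperprior, n_psi):
        self._log_likelihood = log_likelihood
        self._log_population = log_population
        self._log_hyperprior = log_hyperprior
        self._n_psi = n_psi

    def __call__(self, parameters):
        psi = np.asarray(parameters[:self._n_psi])
        theta = np.asarray(parameters[self._n_psi:])
        return (self._log_likelihood(psi)
                + self._log_population(psi, theta)
                + self._log_hyperprior(theta))
```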
```python
# Build hierarchical model
pass
```
### 1.8 Build error model
This can be done by just using Loglikelihood from pints
```python
pass
```
### 1.9 Run inference
```python
pass
```
### 1.10 Visualise prediction
```python
```
| cc5dd1b512b7e942c1a94198bb9192a068ece6aa | 178,075 | ipynb | Jupyter Notebook | gefitinib_pk_analysis.ipynb | DavAug/ErlotinibGefinitib | f0f2a3918dfaeb360bd5c27e8502d070dbe87160 | [
"BSD-3-Clause"
]
| 1 | 2020-06-25T11:19:46.000Z | 2020-06-25T11:19:46.000Z | gefitinib_pk_analysis.ipynb | DavAug/ErlotinibGefinitib | f0f2a3918dfaeb360bd5c27e8502d070dbe87160 | [
"BSD-3-Clause"
]
| 16 | 2020-08-25T09:14:03.000Z | 2020-09-10T09:02:41.000Z | gefitinib_pk_analysis.ipynb | DavAug/ErlotinibGefinitib | f0f2a3918dfaeb360bd5c27e8502d070dbe87160 | [
"BSD-3-Clause"
]
| null | null | null | 329.158965 | 38,326 | 0.681488 | true | 5,590 | Qwen/Qwen-72B | 1. YES
2. YES | 0.718594 | 0.705785 | 0.507173 | __label__eng_Latn | 0.307876 | 0.016662 |
# Direct Inversion of the Iterative Subspace
When solving systems of linear (or nonlinear) equations, iterative methods are often employed. Unfortunately, such methods often suffer from convergence issues such as numerical instability, slow convergence, and significant computational expense when applied to difficult problems. In these cases, convergence acceleration methods may be applied to speed up, stabilize, and/or reduce the cost of convergence, so that solving such problems becomes computationally tractable. One such method is known as the direct inversion of the iterative subspace (DIIS) method, which is commonly applied to address convergence issues within self-consistent field computations in Hartree-Fock theory (and other iterative electronic structure methods). In this tutorial, we'll introduce the theory of DIIS for a general iterative procedure, before integrating DIIS into our previous implementation of RHF.
## I. Theory
DIIS is a widely applicable convergence acceleration method, which is applicable to numerous problems in linear algebra and the computational sciences, as well as quantum chemistry in particular. Therefore, we will introduce the theory of this method in the general sense, before seeking to apply it to SCF.
Suppose that for a given problem, there exist a set of trial vectors $\{\mid{\bf p}_i\,\rangle\}$ which have been generated iteratively, converging toward the true solution, $\mid{\bf p}^f\,\rangle$. Then the true solution can be approximately constructed as a linear combination of the trial vectors,
$$\mid{\bf p}\,\rangle = \sum_ic_i\mid{\bf p}_i\,\rangle,$$
where we require that the residual vector
$$\mid{\bf r}\,\rangle = \sum_ic_i\mid{\bf r}_i\,\rangle\,;\;\;\; \mid{\bf r}_i\,\rangle
=\, \mid{\bf p}_{i+1}\,\rangle - \mid{\bf p}_i\,\rangle$$
is a least-squares approximate to the zero vector, according to the constraint
$$\sum_i c_i = 1.$$
This constraint on the expansion coefficients can be seen by noting that each trial function ${\bf p}_i$ may be represented as an error vector applied to the true solution, $\mid{\bf p}^f\,\rangle + \mid{\bf e}_i\,\rangle$. Then
\begin{align}
\mid{\bf p}\,\rangle &= \sum_ic_i\mid{\bf p}_i\,\rangle\\
&= \sum_i c_i(\mid{\bf p}^f\,\rangle + \mid{\bf e}_i\,\rangle)\\
&= \mid{\bf p}^f\,\rangle\sum_i c_i + \sum_i c_i\mid{\bf e}_i\,\rangle
\end{align}
Convergence results in a minimization of the error (causing the second term to vanish); for the DIIS solution vector $\mid{\bf p}\,\rangle$ and the true solution vector $\mid{\bf p}^f\,\rangle$ to be equal, it must be that $\sum_i c_i = 1$. We satisfy our condition for the residual vector by minimizing its norm,
$$\langle\,{\bf r}\mid{\bf r}\,\rangle = \sum_{ij} c_i^* c_j \langle\,{\bf r}_i\mid{\bf r}_j\,\rangle,$$
using Lagrange's method of undetermined coefficients subject to the constraint on $\{c_i\}$:
$${\cal L} = {\bf c}^{\dagger}{\bf Bc} - \lambda\left(1 - \sum_i c_i\right)$$
where $B_{ij} = \langle {\bf r}_i\mid {\bf r}_j\rangle$ is the matrix of residual vector overlaps. Minimization of the Lagrangian with respect to the coefficient $c_k$ yields (for real values)
\begin{align}
\frac{\partial{\cal L}}{\partial c_k} = 0 &= \sum_j c_jB_{jk} + \sum_i c_iB_{ik} - \lambda\\
&= 2\sum_ic_iB_{ik} - \lambda
\end{align}
which has matrix representation
\begin{equation}
\begin{pmatrix}
B_{11} & B_{12} & \cdots & B_{1n} & -1 \\
B_{21} & B_{22} & \cdots & B_{2n} & -1 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
B_{n1} & B_{n2} & \cdots & B_{nn} & -1 \\
-1 & -1 & \cdots & -1 & 0
\end{pmatrix}
\begin{pmatrix}
c_1\\
c_2\\
\vdots \\
c_n\\
\lambda
\end{pmatrix}
=
\begin{pmatrix}
0\\
0\\
\vdots\\
0\\
-1
\end{pmatrix},
\end{equation}
which we will refer to as the Pulay equation, named after the inventor of DIIS. It is worth noting at this point that our trial vectors, residual vectors, and solution vector may in fact be tensors of arbitrary rank; it is for this reason that we have used the generic notation of Dirac in the above discussion to denote the inner product between such objects.
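To make this concrete, the helper below builds the bordered matrix $\tilde{\bf B}$ from a list of residual arrays and solves the Pulay equation for the extrapolation coefficients; it is a generic sketch of the same construction used inside the SCF loop in Section III.
```python
import numpy as np

def diis_coefficients(residuals):
    """Solve the Pulay equation for a list of residual arrays.

    Returns the extrapolation coefficients c_i; the Lagrange multiplier
    in the last entry of the solution vector is discarded."""
    n = len(residuals)
    B = np.empty((n + 1, n + 1))
    B[-1, :] = -1
    B[:, -1] = -1
    B[-1, -1] = 0
    for i in range(n):
        for j in range(n):
            B[i, j] = np.vdot(residuals[i], residuals[j])
    rhs = np.zeros(n + 1)
    rhs[-1] = -1
    return np.linalg.solve(B, rhs)[:-1]
```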
## II. Algorithms for DIIS
The general DIIS procedure, as described above, has the following structure during each iteration:
#### Algorithm 1: Generic DIIS procedure
1. Compute new trial vector, $\mid{\bf p}_{i+1}\,\rangle$, append to list of trial vectors
2. Compute new residual vector, $\mid{\bf r}_{i+1}\,\rangle$, append to list of residual vectors
3. Check convergence criteria
- If RMSD of $\mid{\bf r}_{i+1}\,\rangle$ sufficiently small, and
- If change in DIIS solution vector $\mid{\bf p}\,\rangle$ sufficiently small, break
4. Build **B** matrix from previous residual vectors
5. Solve Pulay equation for coefficients $\{c_i\}$
6. Compute DIIS solution vector $\mid{\bf p}\,\rangle$
For SCF iteration, the most common choice of trial vector is the Fock matrix **F**; this choice has the advantage over other potential choices (e.g., the density matrix **D**) of **F** not being idempotent, so that it may benefit from extrapolation. The residual vector is commonly chosen to be the orbital gradient in the AO basis,
$$g_{\mu\nu} = ({\bf FDS} - {\bf SDF})_{\mu\nu},$$
however the better choice (which we will make in our implementation!) is to orthonormalize the basis of the gradient with the inverse overlap metric ${\bf A} = {\bf S}^{-1/2}$:
$$r_{\mu\nu} = ({\bf A}^{\rm T}({\bf FDS} - {\bf SDF}){\bf A})_{\mu\nu}.$$
Therefore, the SCF-specific DIIS procedure (integrated into the SCF iteration algorithm) will be:
#### Algorithm 2: DIIS within an SCF Iteration
1. Compute **F**, append to list of previous trial vectors
2. Compute AO orbital gradient **r**, append to list of previous residual vectors
3. Compute RHF energy
4. Check convergence criteria
- If RMSD of **r** sufficiently small, and
- If change in SCF energy sufficiently small, break
5. Build **B** matrix from previous AO gradient vectors
6. Solve Pulay equation for coefficients $\{c_i\}$
7. Compute DIIS solution vector **F_DIIS** from $\{c_i\}$ and previous trial vectors
8. Compute new orbital guess with **F_DIIS**
## III. Implementation
In order to implement DIIS, we're going to integrate it into an existing RHF program. Since we just so happened to write such a program in the last tutorial, let's re-use the part of the code before the SCF iterations which won't change when we include DIIS:
```python
# ==> Basic Setup <==
# Import statements
import psi4
import numpy as np
# Memory specification
psi4.set_memory(int(5e8))
numpy_memory = 2
# Set output file
psi4.core.set_output_file('output.dat', False)
# Define Physicist's water -- don't forget C1 symmetry!
mol = psi4.geometry("""
O
H 1 1.1
H 1 1.1 2 104
symmetry c1
""")
# Set computation options
psi4.set_options({'basis': 'cc-pvdz',
'scf_type': 'pk',
'e_convergence': 1e-8})
# Maximum SCF iterations
MAXITER = 40
# Energy convergence criterion
E_conv = 1.0e-6
D_conv = 1.0e-3
```
```python
# ==> Static 1e- & 2e- Properties <==
# Class instantiation
wfn = psi4.core.Wavefunction.build(mol, psi4.core.get_global_option('basis'))
mints = psi4.core.MintsHelper(wfn.basisset())
# Overlap matrix
S = np.asarray(mints.ao_overlap())
# Number of basis Functions & doubly occupied orbitals
nbf = S.shape[0]
ndocc = wfn.nalpha()
print('Number of occupied orbitals: %d' % ndocc)
print('Number of basis functions: %d' % nbf)
# Memory check for ERI tensor
I_size = (nbf**4) * 8.e-9
print('\nSize of the ERI tensor will be %4.2f GB.' % I_size)
if I_size > numpy_memory:
psi4.core.clean()
raise Exception("Estimated memory utilization (%4.2f GB) exceeds allotted memory \
limit of %4.2f GB." % (I_size, numpy_memory))
# Build ERI Tensor
I = np.asarray(mints.ao_eri())
# Build core Hamiltonian
T = np.asarray(mints.ao_kinetic())
V = np.asarray(mints.ao_potential())
H = T + V
```
Number of occupied orbitals: 5
Number of basis functions: 24
Size of the ERI tensor will be 0.00 GB.
```python
# ==> CORE Guess <==
# AO Orthogonalization Matrix
A = mints.ao_overlap()
A.power(-0.5, 1.e-16)
A = np.asarray(A)
# Transformed Fock matrix
F_p = A.dot(H).dot(A)
# Diagonalize F_p for eigenvalues & eigenvectors with NumPy
e, C_p = np.linalg.eigh(F_p)
# Transform C_p back into AO basis
C = A.dot(C_p)
# Grab occupied orbitals
C_occ = C[:, :ndocc]
# Build density matrix from occupied orbitals
D = np.einsum('pi,qi->pq', C_occ, C_occ, optimize=True)
# Nuclear Repulsion Energy
E_nuc = mol.nuclear_repulsion_energy()
```
Now let's put DIIS into action. Before our iterations begin, we'll need to create empty lists to hold our previous residual vectors (AO orbital gradients) and trial vectors (previous Fock matrices), along with setting starting values for our SCF energy and previous energy:
```python
# ==> Pre-Iteration Setup <==
# SCF & Previous Energy
SCF_E = 0.0
E_old = 0.0
```
Now we're ready to write our SCF iterations according to Algorithm 2. Here are some hints which may help you along the way:
#### Starting DIIS
Since DIIS builds the approximate solution vector $\mid{\bf p}\,\rangle$ as a linear combination of the previous trial vectors $\{\mid{\bf p}_i\,\rangle\}$, there's no need to perform DIIS on the first SCF iteration, since there's only one trial vector for DIIS to use!
#### Building **B**
1. The **B** matrix in the Lagrange equation is really $\tilde{\bf B} = \begin{pmatrix} {\bf B} & -1\\ -1 & 0\end{pmatrix}$.
2. Since **B** is the matrix of residual overlaps, it will be a square matrix of dimension equal to the number of residual vectors. If **B** is an $N\times N$ matrix, how big is $\tilde{\bf B}$?
3. Since our residuals are real, **B** will be a symmetric matrix.
4. To build $\tilde{\bf B}$, make an empty array of the appropriate dimension, then use array indexing to set the values of the elements.
#### Solving the Pulay equation
1. Use built-in NumPy functionality to make your life easier.
2. The solution vector for the Pulay equation is $\tilde{\bf c} = \begin{pmatrix} {\bf c}\\ \lambda\end{pmatrix}$, where $\lambda$ is the Lagrange multiplier, and the right hand side is $\begin{pmatrix} {\bf 0}\\ -1\end{pmatrix}$.
```python
# Start from fresh orbitals
F_p = A.dot(H).dot(A)
e, C_p = np.linalg.eigh(F_p)
C = A.dot(C_p)
C_occ = C[:, :ndocc]
D = np.einsum('pi,qi->pq', C_occ, C_occ, optimize=True)
# Trial & Residual Vector Lists
F_list = []
DIIS_RESID = []
# ==> SCF Iterations w/ DIIS <==
print('==> Starting SCF Iterations <==\n')
# Begin Iterations
for scf_iter in range(1, MAXITER + 1):
# Build Fock matrix
J = np.einsum('pqrs,rs->pq', I, D, optimize=True)
K = np.einsum('prqs,rs->pq', I, D, optimize=True)
F = H + 2*J - K
# Build DIIS Residual
diis_r = A.dot(F.dot(D).dot(S) - S.dot(D).dot(F)).dot(A)
# Append trial & residual vectors to lists
F_list.append(F)
DIIS_RESID.append(diis_r)
# Compute RHF energy
SCF_E = np.einsum('pq,pq->', (H + F), D, optimize=True) + E_nuc
dE = SCF_E - E_old
dRMS = np.mean(diis_r**2)**0.5
print('SCF Iteration %3d: Energy = %4.16f dE = % 1.5E dRMS = %1.5E' % (scf_iter, SCF_E, dE, dRMS))
# SCF Converged?
if (abs(dE) < E_conv) and (dRMS < D_conv):
break
E_old = SCF_E
if scf_iter >= 2:
# Build B matrix
B_dim = len(F_list) + 1
B = np.empty((B_dim, B_dim))
B[-1, :] = -1
B[:, -1] = -1
B[-1, -1] = 0
for i in range(len(F_list)):
for j in range(len(F_list)):
B[i, j] = np.einsum('ij,ij->', DIIS_RESID[i], DIIS_RESID[j], optimize=True)
# Build RHS of Pulay equation
rhs = np.zeros((B_dim))
rhs[-1] = -1
# Solve Pulay equation for c_i's with NumPy
coeff = np.linalg.solve(B, rhs)
# Build DIIS Fock matrix
F = np.zeros_like(F)
for x in range(coeff.shape[0] - 1):
F += coeff[x] * F_list[x]
# Compute new orbital guess with DIIS Fock matrix
F_p = A.dot(F).dot(A)
e, C_p = np.linalg.eigh(F_p)
C = A.dot(C_p)
C_occ = C[:, :ndocc]
D = np.einsum('pi,qi->pq', C_occ, C_occ, optimize=True)
# MAXITER exceeded?
if (scf_iter == MAXITER):
psi4.core.clean()
raise Exception("Maximum number of SCF iterations exceeded.")
# Post iterations
print('\nSCF converged.')
print('Final RHF Energy: %.8f [Eh]' % SCF_E)
```
==> Starting SCF Iterations <==
SCF Iteration 1: Energy = -68.9800327333871337 dE = -6.89800E+01 dRMS = 1.16551E-01
SCF Iteration 2: Energy = -69.6472544393141675 dE = -6.67222E-01 dRMS = 1.07430E-01
SCF Iteration 3: Energy = -75.7919291462249021 dE = -6.14467E+00 dRMS = 2.89274E-02
SCF Iteration 4: Energy = -75.9721892296711019 dE = -1.80260E-01 dRMS = 7.56446E-03
SCF Iteration 5: Energy = -75.9893690602363563 dE = -1.71798E-02 dRMS = 8.74982E-04
SCF Iteration 6: Energy = -75.9897163367029691 dE = -3.47276E-04 dRMS = 5.35606E-04
SCF Iteration 7: Energy = -75.9897932415930200 dE = -7.69049E-05 dRMS = 6.21200E-05
SCF Iteration 8: Energy = -75.9897956274068633 dE = -2.38581E-06 dRMS = 2.57879E-05
SCF Iteration 9: Energy = -75.9897957845313670 dE = -1.57125E-07 dRMS = 1.72817E-06
SCF converged.
Final RHF Energy: -75.98979578 [Eh]
Congratulations! You've written your very own Restricted Hartree-Fock program with DIIS convergence acceleration! Finally, let's check your final RHF energy against <span style='font-variant: small-caps'> Psi4</span>:
```python
# Compare to Psi4
SCF_E_psi = psi4.energy('SCF')
psi4.compare_values(SCF_E_psi, SCF_E, 6, 'SCF Energy')
```
SCF Energy........................................................PASSED
True
## References
1. P. Pulay. *Chem. Phys. Lett.* **73**, 393-398 (1980)
2. C. David Sherrill. *"Some comments on accelerating convergence of iterative sequences using direct inversion of the iterative subspace (DIIS)".* Available at: vergil.chemistry.gatech.edu/notes/diis/diis.pdf. (1998)
| 03bcb8fffae1af959427f5b5fa9d42500276db0c | 19,268 | ipynb | Jupyter Notebook | Tutorials/03_Hartree-Fock/3b_rhf-diis.ipynb | andyj10224/psi4numpy | cbef6ddcb32ccfbf773befea6dc4aaae2b428776 | [
"BSD-3-Clause"
]
| 214 | 2017-03-01T08:04:48.000Z | 2022-03-23T08:52:04.000Z | Tutorials/03_Hartree-Fock/3b_rhf-diis.ipynb | andyj10224/psi4numpy | cbef6ddcb32ccfbf773befea6dc4aaae2b428776 | [
"BSD-3-Clause"
]
| 100 | 2017-03-03T13:20:20.000Z | 2022-03-05T18:20:27.000Z | Tutorials/03_Hartree-Fock/3b_rhf-diis.ipynb | andyj10224/psi4numpy | cbef6ddcb32ccfbf773befea6dc4aaae2b428776 | [
"BSD-3-Clause"
]
| 150 | 2017-02-17T19:44:47.000Z | 2022-03-22T05:52:43.000Z | 43.008929 | 938 | 0.571621 | true | 4,343 | Qwen/Qwen-72B | 1. YES
2. YES | 0.859664 | 0.835484 | 0.718235 | __label__eng_Latn | 0.926026 | 0.507032 |
# Week 4 Lab Session (ctd..): Advertising Data
* In this lab session we are going to look at how to answer questions involving the use of simple and multiple linear regression in Python.
* The questions are based on the book "An introduction to Statistical Learning" by James et al.
```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import scipy.stats as stats
import statsmodels.api as sm
import statsmodels.formula.api as smf
import seaborn as sns
```
```python
data_advert = pd.read_csv('http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv', index_col=0)
data_advert.head()
```
|   | TV    | radio | newspaper | sales |
|---|-------|-------|-----------|-------|
| 1 | 230.1 | 37.8  | 69.2      | 22.1  |
| 2 | 44.5  | 39.3  | 45.1      | 10.4  |
| 3 | 17.2  | 45.9  | 69.3      | 9.3   |
| 4 | 151.5 | 41.3  | 58.5      | 18.5  |
| 5 | 180.8 | 10.8  | 58.4      | 12.9  |
```python
data_advert.describe()
```
|       | TV         | radio      | newspaper  | sales      |
|-------|------------|------------|------------|------------|
| count | 200.000000 | 200.000000 | 200.000000 | 200.000000 |
| mean  | 147.042500 | 23.264000  | 30.554000  | 14.022500  |
| std   | 85.854236  | 14.846809  | 21.778621  | 5.217457   |
| min   | 0.700000   | 0.000000   | 0.300000   | 1.600000   |
| 25%   | 74.375000  | 9.975000   | 12.750000  | 10.375000  |
| 50%   | 149.750000 | 22.900000  | 25.750000  | 12.900000  |
| 75%   | 218.825000 | 36.525000  | 45.100000  | 17.400000  |
| max   | 296.400000 | 49.600000  | 114.000000 | 27.000000  |
```python
plt.subplot(131)
plt.scatter(data_advert.TV,data_advert.sales)
plt.xlabel('TV')
plt.ylabel('Sales')
plt.subplot(132)
plt.scatter(data_advert.radio,data_advert.sales)
plt.xlabel('radio')
plt.subplot(133)
plt.scatter(data_advert.newspaper,data_advert.sales)
plt.xlabel('newspaper')
plt.subplots_adjust(top=0.8, bottom=0.08, left=0.0, right=1.3, hspace=5, wspace=0.5)
```
## QUESTION 1: Is there a relationship between advertising sales and budget?
* Test the null hypothesis
\begin{equation}
H_0: \beta_1=\ldots=\beta_p=0
\end{equation}
versus the alternative
\begin{equation}
H_a: \text{at least one $\beta_j$ is nonzero}
\end{equation}
* For that compute the F-statistic in Multiple Linear Regression 'sales ~ TV+radio+newspaper' using $\texttt{ols}$ from ${\bf Statsmodels}$
* If there is no relationship between the response and predictors, the F-statistic takes values close to 1. If $H_a$ is true, than F-statistic is expected to be significantly greater than 1. Check the associated p-values.
```python
results = smf.ols('sales ~ TV+radio+newspaper', data=data_advert).fit()
print(results.summary())
```
OLS Regression Results
==============================================================================
Dep. Variable: sales R-squared: 0.897
Model: OLS Adj. R-squared: 0.896
Method: Least Squares F-statistic: 570.3
Date: Sun, 28 Oct 2018 Prob (F-statistic): 1.58e-96
Time: 23:46:29 Log-Likelihood: -386.18
No. Observations: 200 AIC: 780.4
Df Residuals: 196 BIC: 793.6
Df Model: 3
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
Intercept 2.9389 0.312 9.422 0.000 2.324 3.554
TV 0.0458 0.001 32.809 0.000 0.043 0.049
radio 0.1885 0.009 21.893 0.000 0.172 0.206
newspaper -0.0010 0.006 -0.177 0.860 -0.013 0.011
==============================================================================
Omnibus: 60.414 Durbin-Watson: 2.084
Prob(Omnibus): 0.000 Jarque-Bera (JB): 151.241
Skew: -1.327 Prob(JB): 1.44e-33
Kurtosis: 6.332 Cond. No. 454.
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
## QUESTION 2: How strong is the relationship?
You should base your discussion on the following quantities:
* $RSE$ - computed using the $\texttt{scale}$ attribute: $\texttt{np.sqrt(results.scale)}$
* Compute the percentage error, i.e. $RSE$ divided by the mean of $\texttt{sales}$
* $R^2$ - computed using the $\texttt{rsquared}$ attribute: $\texttt{results.rsquared}$ (a sketch of these computations is given below)
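A possible solution sketch, using the fitted `results` object from Question 1:
```python
# Residual standard error, percentage error and R^2 for the full model
RSE = np.sqrt(results.scale)
print('RSE: %.3f' % RSE)
print('Percentage error: %.1f%%' % (100 * RSE / data_advert.sales.mean()))
print('R^2: %.3f' % results.rsquared)
```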
## QUESTION 3: Which media contribute to sales?
* Examine the p-values associated with each predictor's t-statistic (a sketch is given below)
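A possible solution sketch:
```python
# p-values of the individual t-statistics; TV and radio are significant,
# newspaper is not (p = 0.86 in the summary above)
print(results.pvalues)
```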
## QUESTION 4: How large is the effect of each medium on sales?
* Examine the 95% confidence intervals associated with each predictor
* Compare your results with three separate simple linear regressions (a sketch is given below)
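A possible solution sketch:
```python
# 95% confidence intervals from the multiple regression
print(results.conf_int(alpha=0.05))

# Separate simple linear regressions for comparison
for predictor in ['TV', 'radio', 'newspaper']:
    simple = smf.ols('sales ~ %s' % predictor, data=data_advert).fit()
    print(predictor, simple.params[predictor], simple.conf_int().loc[predictor].values)
```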
## QUESTION 5: Is the relationship linear?
* You can use a residual-versus-fitted-values plot to investigate this (a sketch follows below)
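A possible solution sketch:
```python
# Residuals versus fitted values for the full model
plt.scatter(results.fittedvalues, results.resid)
plt.axhline(0, color='grey', linestyle='--')
plt.xlabel('Fitted values')
plt.ylabel('Residuals')
plt.show()
```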
## QUESTION 6: Is there interaction among the advertising media?
* Consider model sales ~ TV + radio + TV:radio
* How much more variability are we able to explain with this model?
```python
results = smf.ols('sales ~ TV + radio + TV:radio', data=data_advert).fit()
print(results.summary())
```
OLS Regression Results
==============================================================================
Dep. Variable: sales R-squared: 0.968
Model: OLS Adj. R-squared: 0.967
Method: Least Squares F-statistic: 1466.
Date: Mon, 29 Oct 2018 Prob (F-statistic): 2.92e-144
Time: 00:09:17 Log-Likelihood: -270.04
No. Observations: 200 AIC: 550.1
Df Residuals: 195 BIC: 566.6
Df Model: 4
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
Intercept 6.7284 0.253 26.561 0.000 6.229 7.228
TV 0.0191 0.002 12.633 0.000 0.016 0.022
radio 0.0280 0.009 3.062 0.003 0.010 0.046
newspaper 0.0014 0.003 0.438 0.662 -0.005 0.008
TV:radio 0.0011 5.26e-05 20.686 0.000 0.001 0.001
==============================================================================
Omnibus: 126.161 Durbin-Watson: 2.216
Prob(Omnibus): 0.000 Jarque-Bera (JB): 1123.463
Skew: -2.291 Prob(JB): 1.10e-244
Kurtosis: 13.669 Cond. No. 1.84e+04
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The condition number is large, 1.84e+04. This might indicate that there are
strong multicollinearity or other numerical problems.
| 532f6847d07209663f127e044efdf0f02a6d9db3 | 42,783 | ipynb | Jupyter Notebook | Week_4_Advertising_Lab_AA.ipynb | abdulra4/Analytical-Statistics | 43e2679388a7aab5c64434e5222b213a9a70fa2e | [
"MIT"
]
| null | null | null | Week_4_Advertising_Lab_AA.ipynb | abdulra4/Analytical-Statistics | 43e2679388a7aab5c64434e5222b213a9a70fa2e | [
"MIT"
]
| null | null | null | Week_4_Advertising_Lab_AA.ipynb | abdulra4/Analytical-Statistics | 43e2679388a7aab5c64434e5222b213a9a70fa2e | [
"MIT"
]
| null | null | null | 95.285078 | 26,944 | 0.756726 | true | 2,792 | Qwen/Qwen-72B | 1. YES
2. YES | 0.867036 | 0.810479 | 0.702714 | __label__eng_Latn | 0.473427 | 0.470972 |
```python
from __future__ import print_function
import sympy
import sympy.physics.mechanics as mech
sympy.init_printing()
mech.init_vprinting()
```
```python
t = sympy.symbols('t')
rot_N, rot_E, rot_D, vel_N, vel_E, vel_D, \
gyro_bias_N, gyro_bias_E, gyro_bias_D, \
accel_bias_N, accel_bias_E, accel_bias_D, \
pos_N, pos_E, asl, terrain_asl, baro_bias, \
wind_N, wind_E, wind_D, d, agl, phi, theta, psi = mech.dynamicsymbols(
'rot_N, rot_E, rot_D, vel_N, vel_E, vel_D, ' \
'gyro_bias_N, gyro_bias_E, gyro_bias_D, ' \
'accel_bias_N, accel_bias_E, accel_bias_D, ' \
'pos_N, pos_E, asl, terrain_asl, baro_bias, ' \
'wind_N, wind_E, wind_D, d, agl, phi, theta, psi')
frame_i = mech.ReferenceFrame('i')
frame_n = frame_i.orientnew('n', 'Quaternion', (1, rot_N, rot_E, rot_D))
#frame_b = frame_n.orientnew('b', 'Quaternion', (q_0, q_1, q_2, q_3))
# easier to see where we get divide by zeros if we express dcm in euler angles
frame_b = frame_n.orientnew('b', 'Body', (psi, theta, phi), '321')
C_nb = frame_n.dcm(frame_b)
assert C_nb[0, 1] == frame_n.x.dot(frame_b.y)
sub_C_nb = {}
for i in range(3):
for j in range(3):
sub_C_nb[C_nb[i, j]] = sympy.Symbol('C_nb({:d}, {:d})'.format(i, j))(t)
sub_C_nb[-C_nb[i, j]] = -sympy.Symbol('C_nb({:d}, {:d})'.format(i, j))(t)
sub_C_nb_rev = { sub_C_nb[key]: key for key in sub_C_nb.keys() }
sub_lin = {
rot_N: 0,
rot_E: 0,
rot_D: 0,
gyro_bias_N: 0,
gyro_bias_E: 0,
gyro_bias_D: 0
}
sub_agl = {
asl - terrain_asl: agl
}
omega_bx, omega_by, omega_bz = mech.dynamicsymbols('omega_bx, omega_by, omega_bz')
flowX, flowY = mech.dynamicsymbols('flowX, flowY')
omega_ib_b = omega_bx * frame_b.x \
+ omega_by * frame_b.y \
+ omega_bz * frame_b.z
gyro_bias_i = gyro_bias_N * frame_i.x \
+ gyro_bias_E * frame_i.y \
+ gyro_bias_D * frame_i.z
omega_nx, omega_ny, omega_nz = mech.dynamicsymbols('omega_nx, omega_ny, omega_nz')
omega_in_n = -gyro_bias_N * frame_n.x \
- gyro_bias_E * frame_n.y \
- gyro_bias_D * frame_n.z
a_N, a_E, a_D = mech.dynamicsymbols('a_N, a_E, a_D')
a_n = a_N*frame_n.x + a_E*frame_n.y + a_D*frame_n.z
a_bias_n = accel_bias_N*frame_n.x + accel_bias_E*frame_n.y + accel_bias_D*frame_n.z
a_n_correct = a_n - a_bias_n
v_i = vel_N*frame_i.x + vel_E*frame_i.y + vel_D*frame_i.z
p_i = pos_N*frame_i.x + pos_E*frame_i.y - asl*frame_i.z
I_wx, I_wy, I_wz = mech.dynamicsymbols('I_wx, I_wy, I_wz')
I_w_n = I_wx*frame_n.x + I_wy*frame_n.y + I_wz*frame_n.z
```
```python
xe = sympy.Matrix([rot_N, rot_E, rot_D, vel_N, vel_E, vel_D, gyro_bias_N, gyro_bias_E, gyro_bias_D,
accel_bias_N, accel_bias_E, accel_bias_D,
pos_N, pos_E, asl, terrain_asl, baro_bias, wind_N, wind_E, wind_D])
xe.T
```
```python
def print_terms(terms):
for t in terms:
s = 'float {:s} = {:s};'.format(
str(t[0]), str(t[1]))
print(s.replace('(t)', ''))
def matrix_to_code(name, mat, i_name, i_syms, j_name, j_syms):
print('Matrix<float, {:s}n, {:s}n> {:s};'.format(i_name, j_name, name))
mat.simplify()
terms, mat = sympy.cse(mat)
print_terms(terms)
mat = mat[0]
for i in range(mat.shape[0]):
for j in range(mat.shape[1]):
if str(mat[i, j]) == "0":
continue
s = '{:s}({:s}{:s}, {:s}{:s}) = {:s};'.format(
str(name), i_name, str(i_syms[i]),
j_name, str(j_syms[j]), str(mat[i, j]))
print(s.replace('(t)', ''))
```
## Dynamics
This is just to check the other derivation in the IEKF Derivation notebook; it doesn't match yet and needs further work.
```python
trans_kin_eqs = list((a_n_correct.express(frame_i) - v_i.diff(t, frame_i)).to_matrix(frame_i))
trans_kin_eqs
```
```python
nav_eqs = list((p_i.diff(t, frame_i) - v_i).to_matrix(frame_i))
nav_eqs
```
```python
sub_q = {
(1 + rot_N**2 + rot_E**2 + rot_D**2): 1,
2*(1 + rot_N**2 + rot_E**2 + rot_D**2): 2
}
```
```python
rot_kin_eqs = list((frame_n.ang_vel_in(frame_i) - omega_in_n).to_matrix(frame_n))
rot_kin_eqs
```
```python
static_eqs = [
terrain_asl.diff(t),
baro_bias.diff(t),
wind_N.diff(t),
wind_E.diff(t),
wind_D.diff(t),
accel_bias_N.diff(t),
accel_bias_E.diff(t),
accel_bias_D.diff(t),
]
static_eqs
```
```python
static_eqs
```
```python
gyro_eqs = list((omega_in_n.diff(t, frame_n) - frame_i.ang_vel_in(frame_n).cross(I_w_n)).to_matrix(frame_n))
gyro_eqs
```
```python
sol = sympy.solve(rot_kin_eqs + trans_kin_eqs + static_eqs + nav_eqs + gyro_eqs, xe.diff(t))
sol = { key:sol[key].subs(sub_q) for key in sol.keys() }
xe_dot = sympy.Matrix([ sol[var] for var in xe.diff(t) ]).applyfunc(lambda x: x.subs(sub_q))
#xe_dot
```
```python
A = xe_dot.jacobian(xe).subs(sub_lin)
#A
```
```python
matrix_to_code('A', A, 'Xe::', xe, 'Xe::', xe)
```
Matrix<float, Xe::n, Xe::n> A;
float x0 = 2*a_D;
float x1 = 2*accel_bias_D;
float x2 = 2*a_E;
float x3 = 2*accel_bias_E;
float x4 = 2*a_N;
float x5 = 2*accel_bias_N;
float x6 = I_wz;
float x7 = I_wy;
float x8 = I_wx;
A(Xe::rot_N, Xe::gyro_bias_N) = -1/2;
A(Xe::rot_E, Xe::gyro_bias_E) = -1/2;
A(Xe::rot_D, Xe::gyro_bias_D) = -1/2;
A(Xe::vel_N, Xe::rot_E) = x0 - x1;
A(Xe::vel_N, Xe::rot_D) = -x2 + x3;
A(Xe::vel_N, Xe::accel_bias_N) = -1;
A(Xe::vel_E, Xe::rot_N) = -x0 + x1;
A(Xe::vel_E, Xe::rot_D) = x4 - x5;
A(Xe::vel_E, Xe::accel_bias_E) = -1;
A(Xe::vel_D, Xe::rot_N) = x2 - x3;
A(Xe::vel_D, Xe::rot_E) = -x4 + x5;
A(Xe::vel_D, Xe::accel_bias_D) = -1;
A(Xe::gyro_bias_N, Xe::gyro_bias_E) = -x6;
A(Xe::gyro_bias_N, Xe::gyro_bias_D) = x7;
A(Xe::gyro_bias_E, Xe::gyro_bias_N) = x6;
A(Xe::gyro_bias_E, Xe::gyro_bias_D) = -x8;
A(Xe::gyro_bias_D, Xe::gyro_bias_N) = -x7;
A(Xe::gyro_bias_D, Xe::gyro_bias_E) = x8;
A(Xe::pos_N, Xe::vel_N) = 1;
A(Xe::pos_E, Xe::vel_E) = 1;
A(Xe::asl, Xe::vel_D) = -1;
## Airspeed
```python
wind_i = wind_N*frame_i.x + wind_E*frame_i.y + wind_D*frame_i.z
vel_i = vel_N*frame_i.x + vel_E*frame_i.y + vel_D*frame_i.z
```
```python
rel_wind = wind_i - vel_i
y_airspeed = sympy.Matrix([rel_wind.dot(-frame_b.x)]).subs(sub_C_nb)
y_airspeed
```
```python
H_airspeed = y_airspeed.jacobian(xe).subs(sub_lin)
H_airspeed.T
```
```python
matrix_to_code('H', H_airspeed,
'Y_airspeed::', [sympy.Symbol('airspeed')],
'Xe::', xe)
```
Matrix<float, Y_airspeed::n, Xe::n> H;
float x0 = C_nb(1, 0);
float x1 = 2*vel_D - 2*wind_D;
float x2 = C_nb(2, 0);
float x3 = 2*vel_E - 2*wind_E;
float x4 = C_nb(0, 0);
float x5 = 2*vel_N - 2*wind_N;
H(Y_airspeed::airspeed, Xe::rot_N) = x0*x1 - x2*x3;
H(Y_airspeed::airspeed, Xe::rot_E) = -x1*x4 + x2*x5;
H(Y_airspeed::airspeed, Xe::rot_D) = -x0*x5 + x3*x4;
H(Y_airspeed::airspeed, Xe::vel_N) = x4;
H(Y_airspeed::airspeed, Xe::vel_E) = x0;
H(Y_airspeed::airspeed, Xe::vel_D) = x2;
H(Y_airspeed::airspeed, Xe::wind_N) = -x4;
H(Y_airspeed::airspeed, Xe::wind_E) = -x0;
H(Y_airspeed::airspeed, Xe::wind_D) = -x2;
## Distance
```python
d_eq = sympy.solve((d*frame_b.z).dot(frame_i.z).subs(sub_C_nb) - (asl - terrain_asl), d)[0]
d_eq.subs(sub_agl)
```
```python
y_dist = sympy.Matrix([d_eq]).subs(sub_C_nb)
y_dist[0].subs(sub_lin).subs(sub_agl)
```
```python
H_distance = y_dist.jacobian(xe).subs(sub_lin).subs(sub_agl)
H_distance.T
matrix_to_code('H', H_distance, 'Y_distance_down::',
[sympy.symbols('d')], 'Xe::', xe)
```
Matrix<float, Y_distance_down::n, Xe::n> H;
float x0 = C_nb(2, 2);
float x1 = 2*agl/x0**2;
float x2 = 1/x0;
H(Y_distance_down::d, Xe::rot_N) = -x1*C_nb(1, 2);
H(Y_distance_down::d, Xe::rot_E) = x1*C_nb(0, 2);
H(Y_distance_down::d, Xe::asl) = x2;
H(Y_distance_down::d, Xe::terrain_asl) = -x2;
## Optical Flow
```python
#omega_nx, omega_ny, omega_nz = sympy.symbols('\omega_{nx}, \omega_{ny}, \omega_{nz}')
#omega_ib_n = omega_nx*frame_i.x + omega_ny*frame_i.y + omega_nz*frame_i.z
#omega_ib_n
```
```python
y_flow_sym = [flowX, flowY]
omega_n = (omega_ib_b - gyro_bias_i)
vel_f_b = -vel_i - omega_n.cross(d_eq*frame_b.z)
vel_f_b.subs(sub_lin).subs(sub_agl)
```
```python
y_flow = sympy.Matrix([
-vel_f_b.dot(frame_b.x).subs(sub_C_nb),
-vel_f_b.dot(frame_b.y).subs(sub_C_nb)
]).subs(sub_C_nb)
```
```python
def sym2latex(s):
return sympy.latex(s).replace(r'{\left (t \right )}', '')
```
```python
y_flow_lin = y_flow.subs(sub_lin).subs(sub_agl).subs(sub_C_nb)
y_flow_lin.simplify()
matrix_to_code('y_flow_lin', y_flow_lin, 'Y_flow::', y_flow_sym, '', [0])
```
Matrix<float, Y_flow::n, n> y_flow_lin;
float x0 = vel_N;
float x1 = vel_E;
float x2 = vel_D;
float x3 = agl/C_nb(2, 2);
y_flow_lin(Y_flow::flowX, 0) = x0*C_nb(0, 0) + x1*C_nb(1, 0) + x2*C_nb(2, 0) + x3*omega_by;
y_flow_lin(Y_flow::flowY, 0) = x0*C_nb(0, 1) + x1*C_nb(1, 1) + x2*C_nb(2, 1) - x3*omega_bx;
```python
H_flow = y_flow.jacobian(xe).subs(sub_lin).subs(sub_agl)
H_flow
for i in range(H_flow.shape[0]):
for j in range(H_flow.shape[1]):
if H_flow[i, j] != 0:
s_mat = sym2latex(H_flow[i, j])
print('H[{:s}, {:s}] =& {:s} \\\\'.format(
sym2latex(y_flow_sym[i]),
sym2latex(xe[j]),
sym2latex(H_flow[i, j])))
```
H[\operatorname{flowX}, \operatorname{rot_{N}}] =& 2 \operatorname{C_{nb(1, 0)}} \operatorname{vel_{D}} - \frac{2 \operatorname{C_{nb(1, 2)}} \operatorname{agl}}{\operatorname{C_{nb(2, 2)}}^{2}} \omega_{by} - 2 \operatorname{C_{nb(2, 0)}} \operatorname{vel_{E}} \\
H[\operatorname{flowX}, \operatorname{rot_{E}}] =& - 2 \operatorname{C_{nb(0, 0)}} \operatorname{vel_{D}} + \frac{2 \operatorname{C_{nb(0, 2)}} \operatorname{agl}}{\operatorname{C_{nb(2, 2)}}^{2}} \omega_{by} + 2 \operatorname{C_{nb(2, 0)}} \operatorname{vel_{N}} \\
H[\operatorname{flowX}, \operatorname{rot_{D}}] =& 2 \operatorname{C_{nb(0, 0)}} \operatorname{vel_{E}} - 2 \operatorname{C_{nb(1, 0)}} \operatorname{vel_{N}} \\
H[\operatorname{flowX}, \operatorname{vel_{N}}] =& \operatorname{C_{nb(0, 0)}} \\
H[\operatorname{flowX}, \operatorname{vel_{E}}] =& \operatorname{C_{nb(1, 0)}} \\
H[\operatorname{flowX}, \operatorname{vel_{D}}] =& \operatorname{C_{nb(2, 0)}} \\
H[\operatorname{flowX}, \operatorname{gyro_{bias N}}] =& - \frac{\operatorname{C_{nb(0, 1)}} \operatorname{agl}}{\operatorname{C_{nb(2, 2)}}} \\
H[\operatorname{flowX}, \operatorname{gyro_{bias E}}] =& - \frac{\operatorname{C_{nb(1, 1)}} \operatorname{agl}}{\operatorname{C_{nb(2, 2)}}} \\
H[\operatorname{flowX}, \operatorname{gyro_{bias D}}] =& - \frac{\operatorname{C_{nb(2, 1)}} \operatorname{agl}}{\operatorname{C_{nb(2, 2)}}} \\
H[\operatorname{flowX}, \operatorname{asl}] =& \frac{\omega_{by}}{\operatorname{C_{nb(2, 2)}}} \\
H[\operatorname{flowX}, \operatorname{terrain_{asl}}] =& - \frac{\omega_{by}}{\operatorname{C_{nb(2, 2)}}} \\
H[\operatorname{flowY}, \operatorname{rot_{N}}] =& 2 \operatorname{C_{nb(1, 1)}} \operatorname{vel_{D}} + \frac{2 \operatorname{C_{nb(1, 2)}} \operatorname{agl}}{\operatorname{C_{nb(2, 2)}}^{2}} \omega_{bx} - 2 \operatorname{C_{nb(2, 1)}} \operatorname{vel_{E}} \\
H[\operatorname{flowY}, \operatorname{rot_{E}}] =& - 2 \operatorname{C_{nb(0, 1)}} \operatorname{vel_{D}} - \frac{2 \operatorname{C_{nb(0, 2)}} \operatorname{agl}}{\operatorname{C_{nb(2, 2)}}^{2}} \omega_{bx} + 2 \operatorname{C_{nb(2, 1)}} \operatorname{vel_{N}} \\
H[\operatorname{flowY}, \operatorname{rot_{D}}] =& 2 \operatorname{C_{nb(0, 1)}} \operatorname{vel_{E}} - 2 \operatorname{C_{nb(1, 1)}} \operatorname{vel_{N}} \\
H[\operatorname{flowY}, \operatorname{vel_{N}}] =& \operatorname{C_{nb(0, 1)}} \\
H[\operatorname{flowY}, \operatorname{vel_{E}}] =& \operatorname{C_{nb(1, 1)}} \\
H[\operatorname{flowY}, \operatorname{vel_{D}}] =& \operatorname{C_{nb(2, 1)}} \\
H[\operatorname{flowY}, \operatorname{gyro_{bias N}}] =& \frac{\operatorname{C_{nb(0, 0)}} \operatorname{agl}}{\operatorname{C_{nb(2, 2)}}} \\
H[\operatorname{flowY}, \operatorname{gyro_{bias E}}] =& \frac{\operatorname{C_{nb(1, 0)}} \operatorname{agl}}{\operatorname{C_{nb(2, 2)}}} \\
H[\operatorname{flowY}, \operatorname{gyro_{bias D}}] =& \frac{\operatorname{C_{nb(2, 0)}} \operatorname{agl}}{\operatorname{C_{nb(2, 2)}}} \\
H[\operatorname{flowY}, \operatorname{asl}] =& - \frac{\omega_{bx}}{\operatorname{C_{nb(2, 2)}}} \\
H[\operatorname{flowY}, \operatorname{terrain_{asl}}] =& \frac{\omega_{bx}}{\operatorname{C_{nb(2, 2)}}} \\
```python
H_flow.shape[0]
```
```python
H_flow.subs(sub_C_nb_rev).subs({phi: 0, theta:0, psi:0})
```
```python
P = sympy.diag(*[sympy.Symbol('var_' + str(xi)) for xi in xe])
R = sympy.diag(*[sympy.Symbol('var_flowY'), sympy.Symbol('var_flowX')])
#P = sympy.MatrixSymbol('P', len(xe), len(xe))
#R = sympy.MatrixSymbol('R', 2, 2)
S = H_flow * P * H_flow.T + R
S.simplify()
```
```python
S[0, 0].subs(sub_agl)
```
```python
S[1, 1].subs(sub_agl)
```
```python
S[0, 0].subs(sub_agl).subs(sub_C_nb_rev).subs({phi: 0, theta: 0})
```
```python
S[1, 1].subs(sub_agl).subs(sub_C_nb_rev).subs({phi: 0, theta: 0, psi:0})
```
```python
S.subs(sub_agl).subs(sub_C_nb_rev).subs({phi: 0, theta: 0, psi:0, omega_bx:0, omega_by: 0})
```
```python
H_flow.subs(sub_agl).subs(sub_C_nb_rev).subs({phi: 0, theta: 0, psi:0, omega_bx:0, omega_by: 0})
```
```python
matrix_to_code('S', sympy.diag(S[0,0]), 'Y_flow::', y_flow_sym, 'Y_flow::', y_flow_sym,)
```
Matrix<float, Y_flow::n, Y_flow::n> S;
float x0 = C_nb(2, 2);
float x1 = x0**4;
float x2 = agl;
float x3 = omega_by;
float x4 = x2*x3;
float x5 = x0**2;
float x6 = C_nb(0, 0);
float x7 = vel_D;
float x8 = C_nb(2, 0);
float x9 = vel_N;
float x10 = C_nb(1, 0);
float x11 = vel_E;
float x12 = x3**2;
float x13 = x2**2;
S(Y_flow::flowX, Y_flow::flowX) = (4*var_rot_E*(-x4*C_nb(0, 2) + x5*(x6*x7 - x8*x9))**2 + 4*var_rot_N*(-x4*C_nb(1, 2) + x5*(x10*x7 - x11*x8))**2 + x1*(var_flowY + 4*var_rot_D*(-x10*x9 + x11*x6)**2 + var_vel_D*x8**2 + var_vel_E*x10**2 + var_vel_N*x6**2) + x5*(var_asl*x12 + var_gyro_bias_D*x13*C_nb(2, 1)**2 + var_gyro_bias_E*x13*C_nb(1, 1)**2 + var_gyro_bias_N*x13*C_nb(0, 1)**2 + var_terrain_asl*x12))/x1;
```python
matrix_to_code('H', H_flow, 'Y_flow::', y_flow_sym, 'Xe::', xe)
```
Matrix<float, Y_flow::n, Xe::n> H;
float x0 = vel_D;
float x1 = C_nb(1, 0);
float x2 = 2*x1;
float x3 = vel_E;
float x4 = C_nb(2, 0);
float x5 = 2*x4;
float x6 = omega_by;
float x7 = agl;
float x8 = C_nb(2, 2);
float x9 = x8**(-2);
float x10 = 2*x7*x9*C_nb(1, 2);
float x11 = C_nb(0, 0);
float x12 = 2*x11;
float x13 = vel_N;
float x14 = 2*x7*x9*C_nb(0, 2);
float x15 = C_nb(0, 1);
float x16 = 1/x8;
float x17 = x16*x7;
float x18 = C_nb(1, 1);
float x19 = C_nb(2, 1);
float x20 = x16*x6;
float x21 = 2*x18;
float x22 = 2*x19;
float x23 = omega_bx;
float x24 = 2*x15;
float x25 = x16*x23;
H(Y_flow::flowX, Xe::rot_N) = x0*x2 - x10*x6 - x3*x5;
H(Y_flow::flowX, Xe::rot_E) = -x0*x12 + x13*x5 + x14*x6;
H(Y_flow::flowX, Xe::rot_D) = x12*x3 - x13*x2;
H(Y_flow::flowX, Xe::vel_N) = x11;
H(Y_flow::flowX, Xe::vel_E) = x1;
H(Y_flow::flowX, Xe::vel_D) = x4;
H(Y_flow::flowX, Xe::gyro_bias_N) = -x15*x17;
H(Y_flow::flowX, Xe::gyro_bias_E) = -x17*x18;
H(Y_flow::flowX, Xe::gyro_bias_D) = -x17*x19;
H(Y_flow::flowX, Xe::asl) = x20;
H(Y_flow::flowX, Xe::terrain_asl) = -x20;
H(Y_flow::flowY, Xe::rot_N) = x0*x21 + x10*x23 - x22*x3;
H(Y_flow::flowY, Xe::rot_E) = -x0*x24 + x13*x22 - x14*x23;
H(Y_flow::flowY, Xe::rot_D) = -x13*x21 + x24*x3;
H(Y_flow::flowY, Xe::vel_N) = x15;
H(Y_flow::flowY, Xe::vel_E) = x18;
H(Y_flow::flowY, Xe::vel_D) = x19;
H(Y_flow::flowY, Xe::gyro_bias_N) = x11*x17;
H(Y_flow::flowY, Xe::gyro_bias_E) = x1*x17;
H(Y_flow::flowY, Xe::gyro_bias_D) = x17*x4;
H(Y_flow::flowY, Xe::asl) = -x25;
H(Y_flow::flowY, Xe::terrain_asl) = x25;
## Attitude
```python
y_attitude = sympy.Matrix([
rot_N, rot_E, rot_D
])
H_attitude = y_attitude.jacobian(xe).subs(sub_lin).subs(sub_agl)
H_attitude
```
## Accelerometer
```python
g = sympy.symbols('g')
g_i = -g*frame_i.z + accel_bias_N*frame_i.x + accel_bias_E*frame_i.y + accel_bias_D*frame_i.z
y_accel = sympy.Matrix(g_i.express(frame_b).subs(sub_C_nb).to_matrix(frame_b))
H_accel = y_accel.jacobian(xe).subs(sub_lin)
H_accel
```
```python
H_accel.subs(sub_C_nb_rev).subs({phi: 0, theta: 0, psi: 0})
```
## Magnetometer
```python
B_N, B_E, B_D = sympy.symbols('B_N, B_E, B_D')
b_i = B_N*frame_i.x + B_E*frame_i.y + B_D*frame_i.z
y_mag = sympy.Matrix(b_i.express(frame_b).subs(sub_C_nb).to_matrix(frame_b))
H_mag = y_mag.jacobian(xe).subs(sub_lin).subs(sub_agl)
H_mag.simplify()
H_mag
```
## Observability Analysis
```python
def find_observable_states(H, x, n_max=3):
O = sympy.Matrix(H)
for n in range(n_max):
O = O.col_join(H*A**n)
return [x[i] for i in O.rref()[1]]
```
```python
find_observable_states(H_mag, xe)
```
```python
find_observable_states(H_accel, xe)
```
```python
find_observable_states(H_mag.col_join(H_accel), xe)
```
| 54e9ac39ea4d62a844546e2ff5130dbeb36b9e6d | 217,167 | ipynb | Jupyter Notebook | Measurement Jacobians.ipynb | jgoppert/iekf_analysis | d41ad34b37ef2636e20680accf399ea4a9332811 | [
"BSD-3-Clause"
]
| 5 | 2018-01-16T06:46:38.000Z | 2019-06-19T10:17:12.000Z | Measurement Jacobians.ipynb | jgoppert/iekf_analysis | d41ad34b37ef2636e20680accf399ea4a9332811 | [
"BSD-3-Clause"
]
| null | null | null | Measurement Jacobians.ipynb | jgoppert/iekf_analysis | d41ad34b37ef2636e20680accf399ea4a9332811 | [
"BSD-3-Clause"
]
| null | null | null | 135.306542 | 14,482 | 0.785626 | true | 6,835 | Qwen/Qwen-72B | 1. YES
2. YES | 0.7773 | 0.72487 | 0.563442 | __label__kor_Hang | 0.101034 | 0.147393 |
# Lecture 01 The Geometry of Linear Equations
Today's lecture contains:<br/>
1. 2D Linear Equations<br/>
2. 3D Linear Equations<br/>
3. Matrix Multiplying a Vector<br/>
For all equations, we focus on TWO big concepts: **row picture** and **column picture.**<br/>
<br/>
<br/>
## 1. 2D Linear Equations
Suppose we have two equations with two unknowns:
\begin{align}
2x-y&=0\\
-x+2y&=3
\end{align}<br/>
Such equations can be written to matrix format:
\begin{align}
\begin{bmatrix}2&-1\\-1&2\end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix}=\begin{bmatrix}0\\3\end{bmatrix}
\end{align}
Usually we call the first matrix **the coefficient matrix**, denoted $A$; the second part the **unknown** $x$; and the third part **the right-hand side** $b$. Therefore, the system of linear equations can be denoted as $Ax=b$.<br/>
### 1.1 Row Picture of 2D Linear Equations
To solve the above linear system, one method is the **row picture**. It simply applies the math from high school:<br/>
a. Find two points satisfying each equation<br/>
b. Connect the two points into a line<br/>
c. The intersection of the lines is the answer.<br/>
```python
%gui qt
from mayavi import mlab
import matplotlib.pyplot as plt
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
import scipy as sp
import scipy.linalg
import sympy as sy
sy.init_printing()
np.set_printoptions(precision=3)
np.set_printoptions(suppress=True)
```
Bad key "text.kerning_factor" on line 4 in
C:\Users\xingx\anaconda3\lib\site-packages\matplotlib\mpl-data\stylelib\_classic_test_patch.mplstyle.
You probably need to get an updated matplotlibrc file from
http://github.com/matplotlib/matplotlib/blob/master/matplotlibrc.template
or from the matplotlib source distribution
```python
mlab.init_notebook(backend='x3d')
```
Notebook initialized with x3d backend.
For the first equation,\begin{align}
2x-y=0
\end{align}<br/>
two points on its line are:
\begin{align}
x=0,\; y=0\\
x=2,\; y=4
\end{align}<br/>
For the second equation,\begin{align}
-x+2y=3
\end{align}<br/>
two points on its line are:
\begin{align}
x=-3,\; y=0\\
x=1,\; y=2
\end{align}<br/>
Hence, the intersection of the two lines is:\begin{align}
(x, y)^T = (1, 2)^T
\end{align}<br/>
```python
x = np.linspace(-5, 5, 100)
y1 = 2*x
y2 = (1/2)*x + 3/2
fig, ax = plt.subplots(figsize = (12, 7))
#plot point
ax.scatter(1, 2, s = 200, zorder=5, color = 'r', alpha = .8)
ax.scatter(0, 0, s = 100, zorder=5, color = 'k', alpha = .8)
ax.scatter(2, 4, s = 100, zorder=5, color = 'k', alpha = .8)
ax.scatter(-3, 0, s = 100, zorder=5, color = 'k', alpha = .8)
#plot line
ax.plot(x, y1, x, y2, lw = 3)
ax.plot([1, 1], [0, 2], ls = '--', color = 'b', alpha = .5)
ax.plot([0, 1], [2, 2], ls = '--', color = 'b', alpha = .5)
ax.plot([-5, 5], [0, 0], ls = '-', color = 'k', linewidth=5, alpha = .8)
ax.plot([0, 0], [-5, 5], ls = '-', color = 'k', linewidth=5, alpha = .8)
#set plot range
ax.set_xlim([-5, 5])
ax.set_ylim([-5, 5])
#plot text
s = '$(1,2)$'
pt1 = '$(-3,0)$'
pt2 = '$(0,0)$'
pt3 = '$(2,4)$'
ax.text(1, 2, s, fontsize = 20)
ax.text(-3, .3, pt1, fontsize = 20)
ax.text(0, .3, pt2, fontsize = 20)
ax.text(2, 4, pt3, fontsize = 20)
#
ax.set_title('Row Picture: Solution of $2x-y=0$, $-x+2y=3$', size = 22)
ax.grid()
```
### 1.2 Column Picture of 2D Linear Equations
Another perspective on this system is the **column picture**. The above matrix
\begin{align}
\begin{bmatrix}2&-1\\-1&2\end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix}=\begin{bmatrix}0\\3\end{bmatrix}
\end{align}
can be converted to:
\begin{align}
x\begin{bmatrix}2\\-1\end{bmatrix}+y\begin{bmatrix}-1\\2\end{bmatrix}=\begin{bmatrix}0\\3\end{bmatrix}
\end{align}
We call the first vector $\begin{bmatrix}2\\-1\end{bmatrix}$ $col_1$ and the second vector $\begin{bmatrix}-1\\2\end{bmatrix}$ $col_2$.
To produce the right-hand side $\begin{bmatrix}0\\3\end{bmatrix}$, we need $1\times col_1 + 2\times col_2$:
\begin{align}
1\begin{bmatrix}2\\-1\end{bmatrix}+2\begin{bmatrix}-1\\2\end{bmatrix}=\begin{bmatrix}0\\3\end{bmatrix}
\end{align}
Hence, the column picture can be illustrated as follows:
```python
from functools import partial
fig, ax = plt.subplots(figsize = (12, 7))
plt.axhline(y=0, c='black', linewidth=5)
plt.axvline(x=0, c='black', linewidth=5)
ax = plt.gca()
#plot vector
arrow_vector = partial(plt.arrow, width=0.01, head_width=0.1, head_length=0.2, length_includes_head=True)
arrow_vector(0, 0, 2, -1, color='r')
arrow_vector(0, 0, -1, 2, color='g')
arrow_vector(2, -1, -2, 4, color='k')
arrow_vector(0, 0, 0, 3, width=0.05, color='b')
#plot point
ax.scatter(2, -1, s = 200, zorder=5, color = 'k', alpha = .8)
ax.scatter(0, 0, s = 200, zorder=5, color = 'k', alpha = .8)
ax.scatter(0, 3, s = 200, zorder=5, color = 'k', alpha = .8)
ax.scatter(-1, 2, s = 200, zorder=5, color = 'k', alpha = .8)
#set plot range
ax.set_xlim([-2, 3])
ax.set_ylim([-2, 4])
#plot text
col_1 = '$col_1$'
col_2 = '$col_2$'
col_3 = '1·$col_1$+2·$col_2$'
ax.text(1, -.5, col_1, fontsize = 20)
ax.text(-.5, 1, col_2, fontsize = 20)
ax.text(1, 1.5, col_3, fontsize = 20)
#
ax.set_title('Column Picture: Solution of $2x-y=0$, $-x+2y=3$', size = 22)
ax.grid()
```
So we have this kind of linear combination:
\begin{align}
x\begin{bmatrix}2\\-1\end{bmatrix}+y\begin{bmatrix}-1\\2\end{bmatrix}=\begin{bmatrix}0\\3\end{bmatrix}
\end{align}
That said, a linear combination of $col_1, col_2$ gives the vector $b$. Then what are **all** the linear combinations of $col_1, col_2$?<br/>
**The whole plane!**
```python
x, y = np.arange(-10, 11, 5), np.arange(-10, 11, 5)
X, Y = np.meshgrid(x, y)
fig, ax = plt.subplots(figsize = (8, 8))
plt.axhline(y=0, c='black', linewidth=5)
plt.axvline(x=0, c='black', linewidth=5)
ax.scatter(X, Y, s = 200, color = 'red', zorder = 3)
ax.axis([-15, 15, -15, 15])
ax.grid()
```
## 2. 3D Linear Equations
Then we move from 2D to 3D. Suppose we have 3 equations as follows:
\begin{align}
\begin{cases}2x&-y&&=0\\-x&+2y&-z&=-1\\&-3y&+4z&=4\end{cases}
\end{align}
Such equations can be written to matrix format:
\begin{align}
A=\begin{bmatrix}2&-1&0\\-1&2&-1\\0&-3&4\end{bmatrix},\ b=\begin{bmatrix}0\\-1\\4\end{bmatrix}
\end{align}
### 2.1 Row Picture of 3D Linear Equations
The above linear system can be written as followed in row picture:
\begin{align}
\begin{bmatrix}2&-1&0\\-1&2&-1\\0&-3&4\end{bmatrix}\begin{bmatrix}x\\y\\z\end{bmatrix}=\begin{bmatrix}0\\-1\\4\end{bmatrix}
\end{align}
First, we try to figure out this linear system from the row picture.<br/>
In 2D space, each equation can be illustrated as a **line** in the plane.<br/>
In 3D space, each equation can be illustrated as a **plane** in space.
In the end, the 3 planes intersect in a single point **if** the three equations are independent.
---
TO-DO: Visualize 3D plane intersection. (matplotlib is fake 3d)
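While the 3D visualization is left as the TO-DO above, we can at least verify the intersection point numerically. A minimal sketch (not part of the original lecture code):
```python
import numpy as np

# coefficient matrix and right-hand side of the 3D system above
A3 = np.array([[2, -1, 0],
               [-1, 2, -1],
               [0, -3, 4]])
b3 = np.array([0, -1, 4])

# the intersection point of the three planes (row picture)
print(np.linalg.solve(A3, b3))  # expected: [0. 0. 1.]
```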
### 2.2 Column Picture of 3D Linear Equations
The column picture is Prof. Strang's favorite and is much easier to understand. The above linear system can be written in the column picture as follows:
\begin{align}
x\begin{bmatrix}2\\-1\\0\end{bmatrix}+y\begin{bmatrix}-1\\2\\-3\end{bmatrix}+z\begin{bmatrix}0\\-1\\4\end{bmatrix}=\begin{bmatrix}0\\-1\\4\end{bmatrix}
\end{align}
```python
from mpl_toolkits.mplot3d import Axes3D # noqa: F401 unused import
fig = plt.figure(figsize=(5,5))
ax = fig.gca(projection='3d')
# Make the grid
x, y, z = np.zeros((3,3))
# Make the direction data for the arrows: the three column vectors of A
cols = np.array([[2, -1, 0],    # column 1
                 [-1, 2, -3],   # column 2
                 [0, -1, 4]])   # column 3
# quiver expects the x, y and z components of the arrows as separate arrays
u, v, w = cols[:, 0], cols[:, 1], cols[:, 2]
ax.quiver(x, y, z, u, v, w)
ax.set_xlim([-5,5])
ax.set_ylim([-5,5])
ax.set_zlim([-5,5])
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
plt.show()
```
## 3. Matrix Multiplying a Vector
There are two ways of multiplication between matrix and vector. Assume we have:
\begin{align}
\begin{bmatrix}2&5\\1&3\end{bmatrix}\begin{bmatrix}1\\2\end{bmatrix}=?
\end{align}
### 3.1 Solving in row picture
\begin{align}
\begin{bmatrix}2&5\\1&3\end{bmatrix}\begin{bmatrix}1\\2\end{bmatrix}=\begin{bmatrix}2\times 1+5\times 2\\1\times 1+3\times 2\end{bmatrix}=\begin{bmatrix}12\\7\end{bmatrix}
\end{align}
### 3.2 Solving in column picture
This is favored by Prof. Strang.
\begin{align}
\begin{bmatrix}2&5\\1&3\end{bmatrix}\begin{bmatrix}1\\2\end{bmatrix}=1\begin{bmatrix}2\\1\end{bmatrix}+2\begin{bmatrix}5\\3\end{bmatrix}=\begin{bmatrix}12\\7\end{bmatrix}
\end{align}
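We can verify both pictures numerically. A small sketch (not part of the original lecture code):
```python
import numpy as np

A = np.array([[2, 5],
              [1, 3]])
x = np.array([1, 2])

# row picture: each output entry is the dot product of a row of A with x
row_result = np.array([A[0, :] @ x, A[1, :] @ x])

# column picture: a linear combination of the columns of A
col_result = 1 * A[:, 0] + 2 * A[:, 1]

print(row_result, col_result)  # both give [12 7]
```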
| 355392c81b675d5ca944a20e60df5bb3e01d26ae | 223,864 | ipynb | Jupyter Notebook | Lecture 01 The Geometry of Linear Equations.ipynb | XingxinHE/Linear_Algebra | 7d6b78699f8653ece60e07765fd485dd36b26194 | [
"MIT"
]
| 3 | 2021-04-24T17:23:50.000Z | 2021-11-27T11:00:04.000Z | Lecture 01 The Geometry of Linear Equations.ipynb | XingxinHE/Linear_Algebra | 7d6b78699f8653ece60e07765fd485dd36b26194 | [
"MIT"
]
| null | null | null | Lecture 01 The Geometry of Linear Equations.ipynb | XingxinHE/Linear_Algebra | 7d6b78699f8653ece60e07765fd485dd36b26194 | [
"MIT"
]
| null | null | null | 182.448248 | 91,763 | 0.863564 | true | 3,114 | Qwen/Qwen-72B | 1. YES
2. YES | 0.882428 | 0.824462 | 0.727528 | __label__eng_Latn | 0.750344 | 0.528623 |
P1 continuous FEM, stationary linear elliptic ESV2007 problem
================================
This example is about approximating the solution $u$ of the elliptic problem
$$\begin{align}
-\nabla\cdot( \kappa \nabla u ) &= f &&\text{in } \Omega\\
u &= g_D &&\text{on }\partial\Omega
\end{align}$$
with data functions as defined in `dune/gdt/test/linearelliptic/problems/ESV2007.hh` (see below) using piecewise linear continuous finite elements, as in `dune/gdt/test/linearelliptic/discretizers/cg.hh`.
Note that the discretization below contains handling of arbitrary Dirichlet and Neumann boundary data, although the problem at hand contains only trivial Dirichlet data.
```python
from dune.xt import common, grid, functions, la
from dune import gdt
common.init_mpi()
```
$$\begin{align}
\Omega &= [-1, 1]^2\\
\Gamma_D &= \partial\Omega\\
\Gamma_N &= \emptyset
\end{align}$$
```python
g = grid.make_cube_grid__2d_simplex_aluconform(lower_left=[-1, -1],
upper_right=[1, 1],
num_elements=[4, 4],
num_refinements=2,
overlap_size=[0, 0])
#g.visualize('../cgfem_esv2007_grid')
boundary_info = grid.make_boundary_info_on_leaf_layer(g, 'xt.grid.boundaryinfo.alldirichlet')
apply_on_neumann_boundary = grid.make_apply_on_neumann_intersections_leaf_part(boundary_info)
apply_on_dirichlet_boundary = grid.make_apply_on_dirichlet_intersections_leaf_part(boundary_info)
```
$$\begin{align}\kappa(x) &:= 1\\
f(x) &:= \tfrac{1}{2} \pi^2 \cos(\tfrac{1}{2} \pi x_0) \cos(\tfrac{1}{2} \pi x_1)\\
g_D(x) &:= 0\end{align}$$
Note that the grid `g` is only provided to select the correct _type_ of function. These functions do not rely on the actual grid, which is why we need to provide the grid again later on, e.g., for `visualize(g)`.
```python
kappa = functions.make_constant_function_1x1(g, 1.0, name='diffusion')
f = functions.make_expression_function_1x1(g,
'x',
'0.5*pi*pi*cos(0.5*pi*x[0])*cos(0.5*pi*x[1])',
order=3,
name='force')
g_D = functions.make_constant_function_1x1(g, 0.0, name='dirichlet')
g_N = functions.make_constant_function_1x1(g, 0.0, name='neumann')
#kappa.visualize(g, '../cgfem_esv2007_diffusion')
#f.visualize(g, '../cgfem_esv2007_force')
```
```python
space = gdt.make_cg_leaf_part_to_1x1_fem_p1_space(g)
#space.visualize('../cgfem_esv2007_cg_space')
# There are two ways to create containers:
# * manually create them and given them to the operators/functionals
# * let those create appropriate ones
# in the SWIPDG example we chose the former, so here we do the latter for the lhs by not providing a matrix, only the type of container
elliptic_operator = gdt.make_elliptic_matrix_operator_istl_row_major_sparse_matrix_double(kappa, space)
# for the rhs, we manually create a vector which is provided to all functionals to assemble into
rhs_vector = la.IstlDenseVectorDouble(space.size(), 0.0)
l2_force_functional = gdt.make_l2_volume_vector_functional(f, rhs_vector, space)
# there are two equivalent ways to restrict the integration domain of the face functional:
# * provide an apply_on_... tag on construction (as done here)
# * provide an apply_on_... tag when appending the functional to the system assembler (bindings not yet present)
l2_neumann_functional = gdt.make_l2_face_vector_functional(g_N, rhs_vector, space, apply_on_neumann_boundary)
# to handle the Dirichlet boundary we require two ingredients
# * a projection of the boundary values onto the space
# * a collection of the degrees of freedom associated with the boundary, to constrain the resulting linear system
g_D_h = gdt.make_discrete_function_istl_dense_vector_double(space, 'dirichlet_projection')
dirichlet_projection_operator = gdt.make_localizable_dirichlet_projection_operator(boundary_info, g_D, g_D_h)
dirichlet_constraints = gdt.make_dirichlet_constraints(boundary_info, space.size(), True)
# compute everything in one grid walk
system_assembler = gdt.make_system_assembler(space)
system_assembler.append(elliptic_operator)
system_assembler.append(l2_force_functional)
system_assembler.append(l2_neumann_functional)
system_assembler.append(dirichlet_projection_operator)
system_assembler.append(dirichlet_constraints)
system_assembler.assemble()
# to form the linear system
# * substract the Dirichlet shift
system_matrix = elliptic_operator.matrix()
rhs_vector -= system_matrix*g_D_h.vector()
# * apply the Dirichlet constraints
dirichlet_constraints.apply(system_matrix, rhs_vector)
# solve the linear system
u_h = la.IstlDenseVectorDouble(space.size(), 0.0)
solver = la.make_solver(system_matrix)
# there are three ways to solve the linear system, given a solver
# (i) use the black box variant
solver.apply(rhs_vector, u_h)
# (ii) select the type of solver
#print('available linear solvers:')
#for tp in solver.types():
# print(' {}'.format(tp))
#solver.apply(rhs_vector, u_h, 'superlu')
# (iii) select the type of solver and its options
#print('options for bicgstab.amg.ssor solver:')
#amg_opts = solver.options('bicgstab.amg.ssor')
#for kk, vv in amg_opts.items():
# print(' {}: {}'.format(kk, vv))
#amg_opts['precision'] = '1e-8'
#solver.apply(rhs_vector, u_h, amg_opts)
# add the Dirichlet shift
u_h += g_D_h.vector()
# visualize (this will write cgfem_esv2007_solution.vtu)
gdt.make_discrete_function(space, u_h, 'solution').visualize('../cgfem_esv2007_solution')
```
| d84d1bbf3feef5c8580541b0b93cc270e2e397fa | 7,910 | ipynb | Jupyter Notebook | notebooks/linearelliptic_cg.ipynb | dune-community/dune-gdt-pymor-interaction | ea77bba70130588aa070a23a23e23cd6a5e3ca25 | [
"BSD-2-Clause"
]
| 1 | 2020-02-08T04:12:19.000Z | 2020-02-08T04:12:19.000Z | notebooks/linearelliptic_cg.ipynb | dune-community/dune-gdt-pymor-interaction | ea77bba70130588aa070a23a23e23cd6a5e3ca25 | [
"BSD-2-Clause"
]
| 1 | 2019-05-09T08:09:07.000Z | 2019-06-29T12:18:51.000Z | notebooks/linearelliptic_cg.ipynb | dune-community/dune-gdt-pymor-interaction | ea77bba70130588aa070a23a23e23cd6a5e3ca25 | [
"BSD-2-Clause"
]
| 2 | 2017-04-05T18:45:47.000Z | 2020-02-08T04:12:22.000Z | 40.357143 | 217 | 0.601517 | true | 1,491 | Qwen/Qwen-72B | 1. YES
2. YES | 0.824462 | 0.743168 | 0.612714 | __label__eng_Latn | 0.87973 | 0.26187 |
Copyright (c) 2015, 2016
[Sebastian Raschka](http://sebastianraschka.com/)
[Li-Yi Wei](http://liyiwei.org/)
https://github.com/1iyiwei/pyml
[MIT License](https://github.com/1iyiwei/pyml/blob/master/LICENSE.txt)
# Python Machine Learning - Code Examples
# Chapter 3 - A Tour of Machine Learning Classifiers
* Logistic regression
* Binary and multiple classes
* Support vector machine
* Kernel trick
* Decision tree
* Random forest for ensemble learning
* K nearest neighbors
Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
```python
%load_ext watermark
%watermark -a '' -u -d -v -p numpy,pandas,matplotlib,sklearn
```
last updated: 2016-11-07
CPython 3.5.2
IPython 4.2.0
numpy 1.11.1
pandas 0.18.1
matplotlib 1.5.1
sklearn 0.18
*The use of `watermark` is optional. You can install this IPython extension via "`pip install watermark`". For more information, please see: https://github.com/rasbt/watermark.*
### Overview
- [Choosing a classification algorithm](#Choosing-a-classification-algorithm)
- [First steps with scikit-learn](#First-steps-with-scikit-learn)
- [Training a perceptron via scikit-learn](#Training-a-perceptron-via-scikit-learn)
- [Modeling class probabilities via logistic regression](#Modeling-class-probabilities-via-logistic-regression)
- [Logistic regression intuition and conditional probabilities](#Logistic-regression-intuition-and-conditional-probabilities)
- [Learning the weights of the logistic cost function](#Learning-the-weights-of-the-logistic-cost-function)
- [Handling multiple classes](#Handling-multiple-classes)
- [Training a logistic regression model with scikit-learn](#Training-a-logistic-regression-model-with-scikit-learn)
- [Tackling overfitting via regularization](#Tackling-overfitting-via-regularization)
- [Maximum margin classification with support vector machines](#Maximum-margin-classification-with-support-vector-machines)
- [Maximum margin intuition](#Maximum-margin-intuition)
- [Dealing with the nonlinearly separable case using slack variables](#Dealing-with-the-nonlinearly-separable-case-using-slack-variables)
- [Alternative implementations in scikit-learn](#Alternative-implementations-in-scikit-learn)
- [Solving nonlinear problems using a kernel SVM](#Solving-nonlinear-problems-using-a-kernel-SVM)
- [Using the kernel trick to find separating hyperplanes in higher dimensional space](#Using-the-kernel-trick-to-find-separating-hyperplanes-in-higher-dimensional-space)
- [Decision tree learning](#Decision-tree-learning)
- [Maximizing information gain – getting the most bang for the buck](#Maximizing-information-gain-–-getting-the-most-bang-for-the-buck)
- [Building a decision tree](#Building-a-decision-tree)
- [Combining weak to strong learners via random forests](#Combining-weak-to-strong-learners-via-random-forests)
- [K-nearest neighbors – a lazy learning algorithm](#K-nearest-neighbors-–-a-lazy-learning-algorithm)
- [Summary](#Summary)
```python
from IPython.display import Image
%matplotlib inline
```
```python
# Added version check for recent scikit-learn 0.18 checks
from distutils.version import LooseVersion as Version
from sklearn import __version__ as sklearn_version
```
# Choosing a classification algorithm
There is no free lunch; different algorithms are suitable for different data and applications.
# First steps with scikit-learn
In the linear perceptron part, we wrote the models from the ground up.
* too much coding
Existing machine learning libraries
* scikit-learn
* torch7, caffe, tensor-flow, theano, etc.
Scikit-learn
* will use for this course
* not as powerful as other deep learning libraries
* easier to use/install
* many library routines and data-sets to use, as exemplified below for the main steps of a machine learning pipeline.
## <a href="https://en.wikipedia.org/wiki/Iris_flower_data_set">Iris dataset</a>
Let's use this dataset for comparing machine learning methods
<table style="width:100% border=0">
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
</tr>
<tr style="text-align=center">
<td>
Setosa
</td>
<td>
Versicolor
</td>
<td>
Virginica
</td>
</tr>
</table>
Loading the Iris dataset from scikit-learn.
```python
from sklearn import datasets
import numpy as np
iris = datasets.load_iris()
print('Data set size: ' + str(iris.data.shape))
X = iris.data[:, [2, 3]]
y = iris.target
print('Class labels:', np.unique(y))
```
Data set size: (150, 4)
Class labels: [0 1 2]
```python
import pandas as pd
df = pd.DataFrame(iris.data)
df.tail()
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
<th>1</th>
<th>2</th>
<th>3</th>
</tr>
</thead>
<tbody>
<tr>
<th>145</th>
<td>6.7</td>
<td>3.0</td>
<td>5.2</td>
<td>2.3</td>
</tr>
<tr>
<th>146</th>
<td>6.3</td>
<td>2.5</td>
<td>5.0</td>
<td>1.9</td>
</tr>
<tr>
<th>147</th>
<td>6.5</td>
<td>3.0</td>
<td>5.2</td>
<td>2.0</td>
</tr>
<tr>
<th>148</th>
<td>6.2</td>
<td>3.4</td>
<td>5.4</td>
<td>2.3</td>
</tr>
<tr>
<th>149</th>
<td>5.9</td>
<td>3.0</td>
<td>5.1</td>
<td>1.8</td>
</tr>
</tbody>
</table>
</div>
Here, the third column represents the petal length, and the fourth column the petal width of the flower samples.
The classes are already converted to integer labels where 0=Iris-Setosa, 1=Iris-Versicolor, 2=Iris-Virginica.
## Data sets: training versus test
Use different data sets for training and testing a model (generalization)
```python
if Version(sklearn_version) < '0.18':
from sklearn.cross_validation import train_test_split
else:
from sklearn.model_selection import train_test_split
# splitting data into 70% training and 30% test data:
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=0)
```
```python
num_training = y_train.shape[0]
num_test = y_test.shape[0]
print('training: ' + str(num_training) + ', test: ' + str(num_test))
```
training: 105, test: 45
## Data scaling
It is better to scale the data so that different features/channels have similar mean/std.
```python
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
sc.fit(X_train)
X_train_std = sc.transform(X_train)
X_test_std = sc.transform(X_test)
```
## Training a perceptron via scikit-learn
We learned and coded the perceptron in chapter 2.
Here we use the scikit-learn library version.
The perceptron only handles 2 classes for now.
We will discuss how to handle $N > 2$ classes.
```python
from sklearn.linear_model import Perceptron
ppn = Perceptron(n_iter=40, eta0=0.1, random_state=0)
_ = ppn.fit(X_train_std, y_train)
```
```python
y_pred = ppn.predict(X_test_std)
print('Misclassified samples: %d out of %d' % ((y_test != y_pred).sum(), y_test.shape[0]))
```
Misclassified samples: 4 out of 45
```python
from sklearn.metrics import accuracy_score
print('Accuracy: %.2f' % accuracy_score(y_test, y_pred))
```
Accuracy: 0.91
```python
from matplotlib.colors import ListedColormap
import matplotlib.pyplot as plt
import warnings
def versiontuple(v):
return tuple(map(int, (v.split("."))))
def plot_decision_regions(X, y, classifier, test_idx=None,
resolution=0.02, xlabel='', ylabel='', title=''):
# setup marker generator and color map
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
# plot the decision surface
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0], y=X[y == cl, 1],
alpha=0.8, c=cmap(idx),
marker=markers[idx], label=cl)
# highlight test samples
if test_idx:
# plot all samples
if not versiontuple(np.__version__) >= versiontuple('1.9.0'):
X_test, y_test = X[list(test_idx), :], y[list(test_idx)]
warnings.warn('Please update to NumPy 1.9.0 or newer')
else:
X_test, y_test = X[test_idx, :], y[test_idx]
plt.scatter(X_test[:, 0],
X_test[:, 1],
c='',
alpha=1.0,
linewidths=1,
marker='o',
s=55, label='test set')
plt.title(title)
plt.xlabel(xlabel)
plt.ylabel(ylabel)
plt.legend(loc='upper left')
plt.tight_layout()
plt.show()
```
Training a perceptron model using the standardized training data:
```python
X_combined_std = np.vstack((X_train_std, X_test_std))
y_combined = np.hstack((y_train, y_test))
test_idx = range(X_train_std.shape[0], X_combined_std.shape[0])
plot_decision_regions(X=X_combined_std, y=y_combined,
classifier=ppn, test_idx=test_idx,
xlabel='petal length [standardized]',
ylabel='petal width [standardized]')
```
# Modeling class probabilities via logistic regression
* $\mathbf{x}$: input
* $\mathbf{w}$: weights
* $z = \mathbf{w}^T \mathbf{x}$
* $\phi(z)$: transfer function
* $y$: predicted class
## Perceptron
$
y = \phi(z) =
\begin{cases}
1 \; z \geq 0 \\
-1 \; z < 0
\end{cases}
$
## Adaline
$
\begin{align}
\phi(z) &= z \\
y &=
\begin{cases}
1 \; \phi(z) \geq 0 \\
-1 \; \phi(z) < 0
\end{cases}
\end{align}
$
## Logistic regression
$
\begin{align}
\phi(z) &= \frac{1}{1 + e^{-z}} \\
y &=
\begin{cases}
1 \; \phi(z) \geq 0.5 \\
0 \; \phi(z) < 0.5
\end{cases}
\end{align}
$
Note: this is actually classification (discrete output) not regression (continuous output); the naming is historical.
```python
import matplotlib.pyplot as plt
import numpy as np
def sigmoid(z):
return 1.0 / (1.0 + np.exp(-z))
z = np.arange(-7, 7, 0.1)
phi_z = sigmoid(z)
plt.plot(z, phi_z)
plt.axvline(0.0, color='k')
plt.ylim(-0.1, 1.1)
plt.xlabel('z')
plt.ylabel('$\phi (z)$')
# y axis ticks and gridline
plt.yticks([0.0, 0.5, 1.0])
ax = plt.gca()
ax.yaxis.grid(True)
plt.tight_layout()
# plt.savefig('./figures/sigmoid.png', dpi=300)
plt.show()
```
### Logistic regression intuition and conditional probabilities
$\phi(z) = \frac{1}{1 + e^{-z}}$: sigmoid function
$\phi(z) \in [0, 1]$, so can be interpreted as probability: $P(y = 1 \; | \; \mathbf{x} ; \mathbf{w}) = \phi(\mathbf{w}^T \mathbf{x})$
We can then choose class by interpreting the probability:
$
\begin{align}
y &=
\begin{cases}
1 \; \phi(z) \geq 0.5 \\
0 \; \phi(z) < 0.5
\end{cases}
\end{align}
$
The probability information can be very useful for many applications
* knowing the confidence of a prediction in addition to the prediction itself
* e.g. weather forecast: tomorrow might rain versus tomorrow might rain with 70% chance
Perceptron:
Adaline:
Logistic regression:
### Learning the weights of the logistic cost function
$J(\mathbf{w})$: cost function to minimize with parameters $\mathbf{w}$
$z = \mathbf{w}^T \mathbf{x}$
For Adaline, we minimize sum-of-squared-error:
$$
J(\mathbf{w}) = \frac{1}{2} \sum_i \left( y^{(i)} - t^{(i)}\right)^2
= \frac{1}{2} \sum_i \left( \phi\left(z^{(i)}\right) - t^{(i)}\right)^2
$$
#### Maximum likelihood estimation (MLE)
For logistic regression, we take advantage of the probability interpretation to maximize the likelihood:
$$
L(\mathbf{w}) = P(t \; | \; \mathbf{x}; \mathbf{w}) = \prod_i P\left( t^{(i)} \; | \; \mathbf{x}^{(i)} ; \mathbf{w} \right) = \prod_i \phi\left(z^{(i)}\right)^{t^{(i)}} \left(1 - \phi\left(z^{(i)}\right)\right)^{1-t^{(i)}}
$$
Why?
$$
\begin{align}
\phi\left(z^{(i)}\right)^{t^{(i)}} \left(1 - \phi\left(z^{(i)}\right)\right)^{1-t^{(i)}} =
\begin{cases}
\phi\left(z^{(i)} \right) & \; if \; t^{(i)} = 1 \\
1 - \phi\left(z^{(i)}\right) & \; if \; t^{(i)} = 0
\end{cases}
\end{align}
$$
This is equivalent to minimize the negative log likelihood:
$$
J(\mathbf{w})
= -\log L(\mathbf{w})
= \sum_i -t^{(i)}\log\left(\phi\left(z^{(i)}\right)\right) - \left(1 - t^{(i)}\right) \log\left(1 - \phi\left(z^{(i)}\right) \right)
$$
Converting prod to sum via log() is a common math trick for easier computation.
```python
def cost_1(z):
return - np.log(sigmoid(z))
def cost_0(z):
return - np.log(1 - sigmoid(z))
z = np.arange(-10, 10, 0.1)
phi_z = sigmoid(z)
c1 = [cost_1(x) for x in z]
plt.plot(phi_z, c1, label='J(w) if t=1')
c0 = [cost_0(x) for x in z]
plt.plot(phi_z, c0, linestyle='--', label='J(w) if t=0')
plt.ylim(0.0, 5.1)
plt.xlim([0, 1])
plt.xlabel('$\phi$(z)')
plt.ylabel('J(w)')
plt.legend(loc='best')
plt.tight_layout()
# plt.savefig('./figures/log_cost.png', dpi=300)
plt.show()
```
### Relationship to cross entropy
The cost $J(\mathbf{w})$ above is exactly the (binary) cross entropy between the target labels $t^{(i)}$ and the predicted probabilities $\phi\left(z^{(i)}\right)$, summed over the training samples:
$$
H(t, \phi) = \sum_i -t^{(i)}\log\left(\phi\left(z^{(i)}\right)\right) - \left(1 - t^{(i)}\right) \log\left(1 - \phi\left(z^{(i)}\right)\right)
$$
So maximizing the likelihood is equivalent to minimizing the cross entropy between targets and predictions.
### Optimizing for logistic regression
From
$$
\frac{\partial \log\left(x\right)}{\partial x} = \frac{1}{x}
$$
and chain rule:
$$
\frac{\partial f\left(y\left(x\right)\right)}{\partial x} = \frac{\partial f(y)}{\partial y} \frac{\partial y}{\partial x}
$$
We know
$$
J(\mathbf{w})
= \sum_i -t^{(i)}\log\left(\phi\left(z^{(i)}\right)\right) - \left(1 - t^{(i)}\right) \log\left(1 - \phi\left(z^{(i)}\right) \right)
$$
$$
\frac{\partial J(\mathbf{w})}{\partial \mathbf{w}} =
\sum_i \left(\frac{-t^{(i)}}{\phi\left(z^{(i)}\right)} + \frac{1- t^{(i)}}{1 - \phi\left(z^{(i)}\right)} \right) \frac{\partial \phi \left(z^{(i)}\right)}{\partial \mathbf{w}}
$$
For sigmoid
$
\frac{\partial \phi(z)}{\partial z} = \phi(z)\left(1-\phi(z)\right)
$
Thus
$$
\begin{align}
\delta J =
\frac{\partial J(\mathbf{w})}{\partial \mathbf{w}} &=
\sum_i \left(\frac{-t^{(i)}}{\phi\left(z^{(i)}\right)} + \frac{1- t^{(i)}}{1 - \phi\left(z^{(i)}\right)} \right)
\phi\left(z^{(i)}\right)\left(1 - \phi\left(z^{(i)}\right) \right)
\frac{\partial z^{(i)}}{\partial \mathbf{w}} \\
&=
\sum_i \left( -t^{(i)}\left(1 - \phi\left(z^{(i)}\right)\right) + \left(1-t^{(i)}\right)\phi\left(z^{(i)}\right) \right) \mathbf{x}^{(i)} \\
&=
\sum_i \left( -t^{(i)} + \phi\left( z^{(i)} \right) \right) \mathbf{x}^{(i)}
\end{align}
$$
For gradient descent
$$
\begin{align}
\delta \mathbf{w} &= -\eta \delta J = \eta \sum_i \left( t^{(i)} - \phi\left( z^{(i)} \right) \right) \mathbf{x}^{(i)} \\
\mathbf{w} & \leftarrow \mathbf{w} + \delta \mathbf{w}
\end{align}
$$
as related to what we did for optimizing in chapter 2.
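As a sanity check of this update rule, here is a minimal NumPy sketch of batch gradient descent for binary logistic regression. It is not the scikit-learn implementation used below; the function name and hyperparameters are just illustrative.
```python
import numpy as np

def fit_logistic_gd(X, t, eta=0.01, n_iter=1000):
    """Batch gradient descent for binary logistic regression.
    X: (n_samples, n_features) array, t: labels in {0, 1}."""
    w = np.zeros(X.shape[1])
    w0 = 0.0
    for _ in range(n_iter):
        phi = 1.0 / (1.0 + np.exp(-(X @ w + w0)))  # phi(z)
        err = t - phi                               # t - phi(z)
        w += eta * X.T @ err                        # delta w = eta * sum (t - phi) x
        w0 += eta * err.sum()
    return w, w0
```
For example, it could be applied to two of the Iris classes via boolean masking of `X_train_std` and `y_train`.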
### Handling multiple classes
So far we have discussed only binary classifiers for 2 classes.
How about $K > 2$ classes, e.g. $K=3$ for the Iris dataset?
#### Multiple binary classifiers
##### One versus one
Build $\frac{K(K-1)}{2}$ classifiers,
each separating a different pair of classes $C_i$ and $C_j$
<a href="https://www.microsoft.com/en-us/research/people/cmbishop/" title="Figure 4.2(b), PRML, Bishop">
</a>
The green region is ambiguous: $C_1$, $C_2$, $C_3$
##### One versus rest (aka one versus all)
Build $K$ binary classifiers,
each separating class $C_k$ from the rest
<a href="https://www.microsoft.com/en-us/research/people/cmbishop/" title="Figure 4.2(a), PRML, Bishop">
</a>
The green region is ambiguous: $C_1$, $C_2$
#### Ambiguity
Both one-versus-one and one-versus-all have ambiguous regions and may incur more complexity/computation.
Ambiguity can be resolved via tie breakers, e.g.:
* activation values
* majority voting
* more details: http://scikit-learn.org/stable/modules/multiclass.html
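As a concrete illustration, the scikit-learn multiclass module linked above provides meta-estimators that wrap any binary classifier. A minimal sketch, assuming the standardized Iris split from earlier in this notebook; the hyperparameters are illustrative only:
```python
from sklearn.multiclass import OneVsRestClassifier, OneVsOneClassifier
from sklearn.linear_model import LogisticRegression

# one-vs-rest: K binary classifiers, one per class
ovr = OneVsRestClassifier(LogisticRegression(C=100.0, random_state=0))
ovr.fit(X_train_std, y_train)
print('OvR accuracy: %.2f' % ovr.score(X_test_std, y_test))

# one-vs-one: K(K-1)/2 binary classifiers, one per pair of classes
ovo = OneVsOneClassifier(LogisticRegression(C=100.0, random_state=0))
ovo.fit(X_train_std, y_train)
print('OvO accuracy: %.2f' % ovo.score(X_test_std, y_test))
```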
#### One multi-class classifier
Multiple activation functions $\phi_k, k=1, 2, ... K$ each with different parameters
$$
\phi_k\left(\mathbf{x}\right) = \phi\left(\mathbf{x}, \mathbf{w_k}\right) = \phi\left(\mathbf{w}_k^T \mathbf{x} \right)
$$
We can then choose the class based on maximum activation:
$$
y = argmax_k \; \phi_k\left( \mathbf{x} \right)
$$
#### Can also apply the above for multiple binary classifiers
Caveat
* need to assume the individual classifiers have compatible activations
* https://en.wikipedia.org/wiki/Multiclass_classification
Binary logistic regression:
$$
J(\mathbf{w})
= \sum_{i \in samples} -t^{(i)}\log\left(\phi\left(z^{(i)}\right)\right) - \left(1 - t^{(i)}\right) \log\left(1 - \phi\left(z^{(i)}\right) \right)
$$
Multi-class logistic regression:
$$
J(\mathbf{w})
= \sum_{i \in samples} \sum_{j \in classes} -t^{(i, j)}\log\left(\phi\left(z^{(i, j)}\right)\right) - \left(1 - t^{(i, j)}\right) \log\left(1 - \phi\left(z^{(i, j)}\right) \right)
$$
For $\phi \geq 0$, we can normalize for probabilistic interpretation:
$$
P\left(k \; | \; \mathbf{x} ; \{\mathbf{w}_k\} \right) =
\frac{\phi_k\left(\mathbf{x}\right)}{\sum_{m=1}^K \phi_m\left(\mathbf{x}\right) }
$$
Or use softmax (normalized exponential) for any activation $\phi$:
$$
P\left(k \; | \; \mathbf{x} ; \{\mathbf{w}_k\} \right) =
\frac{e^{\phi_k\left(\mathbf{x}\right)}}{\sum_{m=1}^K e^{\phi_m\left(\mathbf{x}\right)} }
$$
For example, if $\phi(z) = z$:
$$
P\left(k \; | \; \mathbf{x} ; \{\mathbf{w}_k\} \right) =
\frac{e^{\mathbf{w}_k^T\mathbf{x}}}{\sum_{m=1}^K e^{\mathbf{w}_m^T \mathbf{x}} }
$$
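A small NumPy sketch of the softmax normalization; the max-subtraction is only the usual numerical-stability trick, not part of the formula above:
```python
import numpy as np

def softmax(z):
    # z: vector of activations w_k^T x, one entry per class
    z = z - np.max(z)   # for numerical stability
    e = np.exp(z)
    return e / e.sum()

print(softmax(np.array([2.0, 1.0, 0.1])))  # probabilities summing to 1
```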
For training, the model can be optimized via gradient descent.
The likelihood function (to maximize):
$$
L(\mathbf{w})
= P(t \; | \; \mathbf{x}; \mathbf{w})
= \prod_i P\left( t^{(i)} \; | \; \mathbf{x}^{(i)} ; \mathbf{w} \right)
$$
The loss function (to minimize):
$$
J(\mathbf{w})
= -\log{L(\mathbf{w})}
= -\sum_i \log{P\left( t^{(i)} \; | \; \mathbf{x}^{(i)} ; \mathbf{w} \right)}
$$
### Training a logistic regression model with scikit-learn
The code is quite simple.
```python
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression(C=1000.0, random_state=0)
lr.fit(X_train_std, y_train)
plot_decision_regions(X_combined_std, y_combined,
classifier=lr, test_idx=test_idx,
xlabel = 'petal length [standardized]',
ylabel='petal width [standardized]')
```
```python
if Version(sklearn_version) < '0.17':
print(lr.predict_proba(X_test_std[0, :]))
else:
print(lr.predict_proba(X_test_std[0, :].reshape(1, -1)))
```
[[ 2.05743774e-11 6.31620264e-02 9.36837974e-01]]
### Tackling overfitting via regularization
Recall our general representation of our modeling objective:
$$\Phi(\mathbf{X}, \mathbf{T}, \Theta) = L\left(\mathbf{X}, \mathbf{T}, \mathbf{Y}=f(\mathbf{X}, \Theta)\right) + P(\Theta)$$
* $L$ - loss/objective for data fitting
* $P$ - regularization to favor simple model
Need to balance between accuracy/bias (L) and complexity/variance (P)
* If the model is too simple, it might be inaccurate (high bias)
* If the model is too complex, it might over-fit and over-sensitive to training data (high variance)
A well-trained model should
* fit the training data well (low bias)
* remain stable with different training data for good generalization (to unseen future data; low variance)
The following illustrates bias and variance for a potentially non-linear model
$L_2$ norm is a common form for regularization, e.g.
$
P = \lambda ||\mathbf{w}||^2
$
for the linear weights $\mathbf{w}$
$\lambda$ is a parameter to weigh between bias and variance
$C = \frac{1}{\lambda}$ for scikit-learn
```python
weights, params = [], []
for c in np.arange(-5, 5):
lr = LogisticRegression(C=10**c, random_state=0)
lr.fit(X_train_std, y_train)
# coef_ has shape (n_classes, n_features)
# we visualize only class 1
weights.append(lr.coef_[1])
params.append(10**c)
weights = np.array(weights)
plt.plot(params, weights[:, 0],
label='petal length')
plt.plot(params, weights[:, 1], linestyle='--',
label='petal width')
plt.ylabel('weight coefficient')
plt.xlabel('C')
plt.legend(loc='upper left')
plt.xscale('log')
# plt.savefig('./figures/regression_path.png', dpi=300)
plt.show()
```
# Reading
* PML Chapter 3
# Maximum margin classification with support vector machines
Another popular type of machine learning algorithm
* basic version for linear classification
* kernel version for non-linear classification
Linear classification
* decision boundary
$
\mathbf{w}^T \mathbf{x}
\begin{cases}
\geq 0 \; class +1 \\
< 0 \; class -1
\end{cases}
$
* similar to perceptron
* based on different criteria
Perceptron
* minimize misclassification error
* more sensitive to outliers
* incremental learning (via SGD)
SVM
* maximize margins to nearest samples (called support vectors)
* more robust against outliers
* batch learning
## Maximum margin intuition
Maximize the margins of support vectors to the decision plane $\rightarrow$ more robust classification for future samples (that may lie close to the decision plane)
Let us start with the simple case of two classes with labels +1 and -1.
(We choose this particular combination of labeling for numerical simplicity, as follows.)
Let the training dataset be $\{\mathbf{x}^{(i)}, y^{(i)}\}$, $i=1$ to $N$.
The goal is to find hyper-plane parameters $\mathbf{w}$ and $w_0$ so that
$$y^{(i)} \left( \mathbf{w}^T\mathbf{x}^{(i)} + w_0\right) \geq 1, \; \forall i$$.
Note that $y^{(i)} = \pm1$ above.
<font color='blue'>
<ul>
<li> We use t or y for target labels depending on the context
<li> We separate out $w_0$ from the rest of
$
\mathbf{w} =
\begin{bmatrix}
w_1 \\
w_2 \\
\vdots \\
w_n
\end{bmatrix}
$ for math derivation below
</ul>
</font>
### Geometry perspective
For the purpose of optimization, we can cast the problem as maximize $\rho$ for:
$$\frac{y^{(i)} \left( \mathbf{w}^T\mathbf{x}^{(i)} + w_0\right)}{||\mathbf{w}||} \geq \rho, \; \forall i$$
; note that the left-hand side can be interpreted as the distance from $\mathbf{x}^{(i)}$ to the hyper-plane.
### Scaling
Note that the above equation remains invariant if we multiply $||\mathbf{w}||$ and $w_0$ by any non-zero scalar.
To eliminate this ambiguity, we can fix $\rho ||\mathbf{w}|| = 1$ and minimize $||\mathbf{w}||$, i.e.:
min $\frac{1}{2} ||\mathbf{w}||^2$ subject to $y^{(i)}\left( \mathbf{w}^T \mathbf{x}^{(i)} + w_0\right) \geq 1, \; \forall i$
### Optimization
We can use <a href="https://en.wikipedia.org/wiki/Lagrange_multiplier">Lagrangian multipliers</a> $\alpha^{(i)}$ for this constrained optimization problem:
$$
\begin{align}
L(\mathbf{w}, w_0, \alpha)
&=
\frac{1}{2} ||\mathbf{w}||^2 - \sum_i \alpha^{(i)} \left( y^{(i)} \left( \mathbf{w}^T \mathbf{x}^{(i)} + w_0\right) -1 \right)
\\
&=
\frac{1}{2} ||\mathbf{w}||^2 - \sum_i \alpha^{(i)} y^{(i)} \left( \mathbf{w}^T \mathbf{x}^{(i)} + w_0\right) + \sum_i \alpha^{(i)}
\end{align}
$$
<!--
(The last term above is for $\alpha^{(i)} \geq 0$.)
-->
With some calculus/algebraic manipulations:
$$\frac{\partial L}{\partial \mathbf{w}} = 0 \Rightarrow \mathbf{w} = \sum_i \alpha^{(i)} y^{(i)} \mathbf{x}^{(i)}$$
$$\frac{\partial L}{\partial w_0} = 0 \Rightarrow \sum_i \alpha^{(i)} y^{(i)} = 0$$
Plug the above two into $L$ above, we have:
$$
\begin{align}
L(\mathbf{w}, w_0, \alpha) &= \frac{1}{2} \mathbf{w}^T \mathbf{w} - \mathbf{w}^T \sum_i \alpha^{(i)}y^{(i)}\mathbf{x}^{(i)} - w_0 \sum_i \alpha^{(i)} y^{(i)} + \sum_i \alpha^{(i)} \\
&= -\frac{1}{2} \mathbf{w}^T \mathbf{w} + \sum_i \alpha^{(i)} \\
&= -\frac{1}{2} \sum_i \sum_j \alpha^{(i)} \alpha^{(j)} y^{(i)} y^{(j)} \left( \mathbf{x}^{(i)}\right)^T \mathbf{x}^{(j)} + \sum_i \alpha^{(i)}
\end{align}
$$
, which can be maximized, via quadratic optimization with $\alpha^{(i)}$ only, subject to the constraints: $\sum_i \alpha^{(i)} y^{(i)} = 0$ and $\alpha^{(i)} \geq 0, \; \forall i$
Note that $y^{(i)} = \pm 1$.
Once we solve $\{ \alpha^{(i)} \}$ we will see that most of them are $0$ with a few $> 0$.
The $>0$ ones correspond to samples that lie on the margin boundaries and are thus called support vectors:
$$y^{(i)} \left( \mathbf{w}^T \mathbf{x}^{(i)} + w_0\right) = 1$$
from which we can calculate $w_0$.
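In scikit-learn, the fitted `SVC` object (used just below) exposes the support vectors directly. A small sketch, assuming the standardized Iris split from earlier; the variable name `svm_lin` is just illustrative:
```python
from sklearn.svm import SVC

svm_lin = SVC(kernel='linear', C=1.0, random_state=0)
svm_lin.fit(X_train_std, y_train)

# per-class counts, dual coefficients and the support vectors themselves
print('number of support vectors per class:', svm_lin.n_support_)
print('dual coefficient array shape:', svm_lin.dual_coef_.shape)
print('first support vector:', svm_lin.support_vectors_[0])
```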
## Dealing with the nonlinearly separable case using slack variables
Soft margin classification
Some datasets are not linearly separable
Avoid thin margins for linearly separable cases
* bias variance tradeoff
For datasets that are not linearly separable, we can introduce slack variables $\{\xi^{(i)}\}$ as follows:
$$y^{(i)} \left( \mathbf{w}^T \mathbf{x}^{(i)} + w_0\right) \geq 1 - \xi^{(i)}, \; \forall i$$
* If $\xi^{(i)} = 0$, it is just like the original case without slack variables.
* If $0 < \xi^{(i)} <1$, $\mathbf{x}^{(i)}$ is correctly classified but lies within the margin.
* If $\xi^{(i)} \geq 1$, $\mathbf{x}^{(i)}$ is mis-classified.
For optimization, the goal is to minimize
$$\frac{1}{2} ||\mathbf{w}||^2 + C \sum_i \xi^{(i)}$$
, where $C$ is the strength of the penalty factor (like in regularization).
Using the Lagrangian multipliers $\{\alpha^{(i)}, \mu^{(i)} \}$ with constraints we have:
$$L = \frac{1}{2} ||\mathbf{w}||^2 + C \sum_i \xi^{(i)} - \sum_i \alpha^{(i)} \left( y^{(i)} \left( \mathbf{w}^T\mathbf{x}^{(i)} + w_0\right) - 1 + \xi^{(i)}\right) - \sum_i \mu^{(i)} \xi^{(i)}$$
, which can be solved via a similar process as in the original case without slack variables.
## Coding with SVM via scikit learn is simple
```python
from sklearn.svm import SVC
svm = SVC(kernel='linear', C=1.0, random_state=0)
svm.fit(X_train_std, y_train)
plot_decision_regions(X_combined_std, y_combined,
classifier=svm, test_idx=test_idx,
xlabel='petal length [standardized]', ylabel='petal width [standardized]')
```
## Alternative implementations in scikit-learn
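For example, scikit-learn also offers stochastic-gradient-descent based linear models via `SGDClassifier`, which is handy when the dataset is too large to fit in memory. A minimal sketch (note: in scikit-learn 0.18 the logistic loss is spelled `'log'`; newer releases renamed it `'log_loss'`):
```python
from sklearn.linear_model import SGDClassifier

# the same families of linear models, trained with stochastic gradient descent
ppn_sgd = SGDClassifier(loss='perceptron')   # perceptron-style
lr_sgd = SGDClassifier(loss='log')           # logistic regression
svm_sgd = SGDClassifier(loss='hinge')        # linear SVM
```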
# Solving non-linear problems using a kernel SVM
SVM can be extended for non-linear classification
This is called kernel SVM
* will explain what kernel means
* and introduce kernel tricks :-)
## Intuition
The following 2D circularly distributed data sets are not linearly separable.
However, we can elevate them to a higher dimensional space where they become linearly separable:
$
\phi(x_1, x_2) = (x_1, x_2, x_1^2 + x_2^2)
$
,
where $\phi$ is the mapping function.
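A small synthetic sketch of this idea; the toy data below is illustrative and not part of the original notebook:
```python
import numpy as np

rng = np.random.RandomState(1)
# toy data: an inner disc (class 0) and an outer ring (class 1)
r = np.r_[0.5 * rng.rand(100), 1.5 + 0.5 * rng.rand(100)]
angle = 2 * np.pi * rng.rand(200)
X_circ = np.c_[r * np.cos(angle), r * np.sin(angle)]
y_circ = np.r_[np.zeros(100, dtype=int), np.ones(100, dtype=int)]

# lift to 3D via phi(x1, x2) = (x1, x2, x1^2 + x2^2)
Z = np.c_[X_circ, (X_circ ** 2).sum(axis=1)]

# in the lifted space the classes are separated by a horizontal plane
print(Z[y_circ == 0, 2].max(), '<', Z[y_circ == 1, 2].min())
```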
## Animation visualization
https://youtu.be/3liCbRZPrZA
https://youtu.be/9NrALgHFwTo
```python
from IPython.display import YouTubeVideo
YouTubeVideo('3liCbRZPrZA')
```
```python
YouTubeVideo('9NrALgHFwTo')
```
## Using the kernel trick to find separating hyperplanes in higher dimensional space
For datasets that are not linearly separable, we can map them into a higher dimensional space and make them linearly separable.
Let $\phi$ be this mapping:
$\mathbf{z} = \phi(\mathbf{x})$
And we perform the linear decision in the $\mathbf{z}$ instead of the original $\mathbf{x}$ space:
$$y^{(i)} \left( \mathbf{w}^{(i)} \mathbf{z}^{(i)} + w_0\right) \geq 1 - \xi^{(i)}$$
Following similar Lagrangian multiplier optimization as above, we eventually want to optimize:
$$
\begin{align}
L &= -\frac{1}{2} \sum_i \sum_j \alpha^{(i)} \alpha^{(j)} y^{(i)} y^{(j)} \left(\mathbf{z}^{(i)}\right)^T \mathbf{z}^{(j)} + \sum_i \alpha^{(i)} \\
&= -\frac{1}{2} \sum_i \sum_j \alpha^{(i)} \alpha^{(j)} y^{(i)} y^{(j)} \phi\left(\mathbf{x}^{(i)}\right)^T \phi\left(\mathbf{x}^{(j)}\right) + \sum_i \alpha^{(i)}
\end{align}
$$
The key idea behind kernel trick, and kernel machines in general, is to represent the high dimensional dot product by a kernel function:
$$K\left(\mathbf{x}^{(i)}, \mathbf{x}^{(j)}\right) = \phi\left(\mathbf{x}^{(i)}\right)^T \phi\left(\mathbf{x}^{(j)}\right)$$
Intuitively, the data points become more likely to be linearly separable in a higher dimensional space.
## Kernel trick for evaluation
Recall from part of our derivation above:
$$
\frac{\partial L}{\partial \mathbf{w}} = 0 \Rightarrow \mathbf{w} = \sum_i \alpha^{(i)} y^{(i)} \mathbf{z}^{(i)}
$$
Which allows us to compute the discriminant via kernel trick as well:
$$
\begin{align}
\mathbf{w}^T \mathbf{z}
&=
\sum_i \alpha^{(i)} y^{(i)} \left(\mathbf{z}^{(i)}\right)^T \mathbf{z}
\\
&=
\sum_i \alpha^{(i)} y^{(i)} \phi\left(\mathbf{x}^{(i)}\right)^T \phi(\mathbf{x})
\\
&=
\sum_i \alpha^{(i)} y^{(i)} K\left(\mathbf{x}^{(i)}, \mathbf{x}\right)
\end{align}
$$
## Non-linear classification example
<table>
<tr> <td>x <td> y <td>xor(x, y) </tr>
<tr> <td> 0 <td> 0 <td> 0 </tr>
<tr> <td> 0 <td> 1 <td> 1 </tr>
<tr> <td> 1 <td> 0 <td> 1 </tr>
<tr> <td> 1 <td> 1 <td> 0 </tr>
</table>
Xor is not linearly separable
* math proof left as exercise
Random point sets classified via XOR based on the signs of 2D coordinates:
```python
import matplotlib.pyplot as plt
import numpy as np
np.random.seed(0)
X_xor = np.random.randn(200, 2)
y_xor = np.logical_xor(X_xor[:, 0] > 0,
X_xor[:, 1] > 0)
y_xor = np.where(y_xor, 1, -1)
plt.scatter(X_xor[y_xor == 1, 0],
X_xor[y_xor == 1, 1],
c='b', marker='x',
label='1')
plt.scatter(X_xor[y_xor == -1, 0],
X_xor[y_xor == -1, 1],
c='r',
marker='s',
label='-1')
plt.xlim([-3, 3])
plt.ylim([-3, 3])
plt.legend(loc='best')
plt.tight_layout()
# plt.savefig('./figures/xor.png', dpi=300)
plt.show()
```
This is the classification result using an RBF (radial basis function) kernel.
Notice the non-linear decision boundaries.
```python
svm = SVC(kernel='rbf', random_state=0, gamma=0.10, C=10.0)
svm.fit(X_xor, y_xor)
plot_decision_regions(X_xor, y_xor,
classifier=svm)
```
## Types of kernels
A variety of kernel functions can be used.
The only requirement is that the kernel function behaves like an inner product;
larger $K(\mathbf{x}, \mathbf{y})$ for more similar $\mathbf{x}$ and $\mathbf{y}$
### Linear
$
K\left(\mathbf{x}, \mathbf{y}\right) = \mathbf{x}^T \mathbf{y}
$
### Polynomials of degree $q$
$
K\left(\mathbf{x}, \mathbf{y}\right) =
(\mathbf{x}^T\mathbf{y} + 1)^q
$
Example for $d=2$ and $q=2$
$$
\begin{align}
K\left(\mathbf{x}, \mathbf{y}\right) &= \left( x_1y_1 + x_2y_2 + 1 \right)^2 \\
&= 1 + 2x_1y_1 + 2x_2y_2 + 2x_1x_2y_1y_2 + x_1^2y_1^2 + x_2^2y_2^2
\end{align}
$$
, which corresponds to the following mapping function (feature map):
$$
\phi(x, y) = \left[1, \sqrt{2}x, \sqrt{2}y, \sqrt{2}xy, x^2, y^2 \right]^T
$$
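We can verify this identity numerically for a pair of 2D points; a small sketch with illustrative values:
```python
import numpy as np

def phi(v):
    # the explicit degree-2 polynomial feature map from above
    return np.array([1.0,
                     np.sqrt(2) * v[0], np.sqrt(2) * v[1],
                     np.sqrt(2) * v[0] * v[1],
                     v[0] ** 2, v[1] ** 2])

a = np.array([1.0, 2.0])
b = np.array([0.5, -1.0])
print((a @ b + 1) ** 2)   # kernel evaluated directly
print(phi(a) @ phi(b))    # dot product in the mapped space; same value
```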
### Radial basis function (RBF)
Scalar variance:
$$
K\left(\mathbf{x}, \mathbf{y} \right) = e^{-\frac{\left|\mathbf{x} - \mathbf{y}\right|^2}{2s^2}}
$$
General co-variance matrix:
$$
K\left(\mathbf{x}, \mathbf{y} \right) = e^{-\frac{1}{2} \left(\mathbf{x}-\mathbf{y}\right)^T \mathbf{S}^{-1} \left(\mathbf{x} - \mathbf{y}\right)}
$$
General distance function $D\left(\mathbf{x}, \mathbf{y}\right)$:
$$
K\left(\mathbf{x}, \mathbf{y} \right) = e^{-\frac{D\left(\mathbf{x}, \mathbf{y} \right)}{2s^2}}
$$
RBF essentially projects to an infinite dimensional space.
### Sigmoid
$$
K\left(\mathbf{x}, \mathbf{y} \right) = \tanh\left(2\mathbf{x}^T\mathbf{y} + 1\right)
$$
## Kernel SVM for the Iris dataset
Let's apply RBF kernel
The kernel width is controlled by a gamma $\gamma$ parameter for kernel influence
$
K\left(\mathbf{x}, \mathbf{y} \right) =
e^{-\gamma D\left(\mathbf{x}, \mathbf{y} \right)}
$
and $C$ for regularization
```python
from sklearn.svm import SVC
svm = SVC(kernel='rbf', random_state=0, gamma=0.2, C=1.0)
svm.fit(X_train_std, y_train)
plot_decision_regions(X_combined_std, y_combined,
classifier=svm, test_idx=test_idx,
xlabel = 'petal length [standardized]',
ylabel = 'petal width [standardized]')
```
```python
svm = SVC(kernel='rbf', random_state=0, gamma=100.0, C=1.0)
svm.fit(X_train_std, y_train)
plot_decision_regions(X_combined_std, y_combined,
classifier=svm, test_idx=test_idx,
xlabel = 'petal length [standardized]',
ylabel='petal width [standardized]')
```
# Reading
* PML Chapter 3
* IML Chapter 13-1 to 13.7
* [The kernel trick](http://www.eric-kim.net/eric-kim-net/posts/1/kernel_trick.html)
# Decision tree learning
Machine learning can feel like a black box: the model works after tuning parameters, but how and why?
A decision tree shows you how it makes its decisions, e.g. for classification.
## Example decision tree
* analogous to flow charts for designing algorithms
* every internal node can be based on some if-statement
* automatically learned from data, not manually programmed by human
## Decision tree learning
1. Start with a single node that contains all data
2. Select a node and split it via some criterion to optimize some objective, usually an information/impurity measure $I$
3. Repeat until convergence:
   * the classification is good enough as measured by $I$, or
   * the model is complex enough that further splits risk overfitting
4. Each leaf node belongs to one class
* Multiple leaf nodes can be of the same class
* Each leaf node can have misclassified samples - majority voting
(See Figure 9.1 in [Alpaydin, *Introduction to Machine Learning*](http://www.cmpe.boun.edu.tr/~ethem/i2ml2e/).)
* usually split along one dimension/feature
* a finite number of choices from the boundaries of sample classes
## Maximizing information gain - getting the most bang for the buck
$I(D)$ information/impurity for a tree node with dataset $D$
Maximize information gain $IG$ for splitting each (parent) node $D_p$ into $m$ child nodes $j$:
$$
IG = I(D_p) - \sum_{j=1}^m \frac{N_j}{N_p} I(D_j)
$$
Usually $m=2$ for simplicity (binary split)
Commonly used impurity measures $I$
$p(i|t)$ - probability/proportion of the data in node $t$ that belongs to class $i$
### Entropy
$$
I_H(t) = - \sum_{i=1}^c p(i|t) \log_2 p(i|t)
$$
* $0$ if all samples belong to the same class
* $1$ if uniform distribution
$
0.5 = p(0|t) = p(1|t)
$
<a href="https://en.wikipedia.org/wiki/Entropy_(information_theory)">Entropy (information theory)</a>
Random variable $X$ with probability mass/density function $P(X)$
Information content
$
I(X) = -\log_b\left(P(X)\right)
$
Entropy is the expectation of information
$$
H(X) = E(I(X)) = E(-\log_b(P(X)))
$$
log base $b$ can be $2$, $e$, $10$
Continuous $X$:
$$
H(X) = \int P(x) I(x) \; dx = -\int P(x) \log_b P(x) \;dx
$$
Discrete $X$:
$$
H(X) = \sum_i P(x_i) I(x_i) = -\sum_i P(x_i) \log_b P(x_i)
$$
$-\log_b P(x)$ - number of bits (for $b=2$) needed to encode an event with probability $P(x)$
* the rarer the event $\rightarrow$ the smaller $P(x)$ $\rightarrow$ the more bits
### Gini index
Minimize expected value of misclassification
$$
I_G(t) = \sum_{i=1}^c p(i|t) \left( 1 - p(i|t) \right) = 1 - \sum_{i=1}^c p(i|t)^2
$$
* $p(i|t)$ - probability of class $i$
* $1-p(i|t)$ - probability of misclassification, i.e. $t$ is not class $i$
Similar to entropy
* expected value of information: $-\log_2 p(i|t)$
* information and mis-classification probability: both larger for lower $p(i|t)$
### Classification error
$$
I_e(t) = 1 - \max_i p(i|t)
$$
$
\arg\max_i \; p(i|t)
$
is taken as the class label for node $t$
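A small worked example (a sketch with made-up counts): a parent node with 40 samples of each class is split into children with class counts (30, 10) and (10, 30); the code below compares the information gain under the three impurity measures.
```python
import numpy as np

def entropy(probs):
    probs = np.asarray(probs)
    probs = probs[probs > 0]
    return -np.sum(probs * np.log2(probs))

def gini(probs):
    return 1.0 - np.sum(np.asarray(probs)**2)

def class_error(probs):
    return 1.0 - np.max(probs)

def info_gain(parent, children, impurity):
    n_parent = sum(parent)
    gain = impurity([c / n_parent for c in parent])
    for child in children:
        n_child = sum(child)
        gain -= n_child / n_parent * impurity([c / n_child for c in child])
    return gain

parent = (40, 40)                    # class counts in the parent node
children = [(30, 10), (10, 30)]      # class counts after the split
for name, impurity in [('entropy', entropy), ('gini', gini), ('error', class_error)]:
    print('%7s  IG = %.3f' % (name, info_gain(parent, children, impurity)))
```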
## Compare different information measures
Entropy and Gini are probabilistic
* not assuming the label of the node (decided later after more splitting)
Classification error is deterministic
* assumes the majority class would be the label
Entropy and Gini index are similar, and tend to behave better than classification error
* curves below via a 2-class case
* example in the PML textbook
```python
import matplotlib.pyplot as plt
import numpy as np
def gini(p):
return p * (1 - p) + (1 - p) * (1 - (1 - p))
def entropy(p):
return - p * np.log2(p) - (1 - p) * np.log2((1 - p))
def error(p):
return 1 - np.max([p, 1 - p])
x = np.arange(0.0, 1.0, 0.01)
ent = [entropy(p) if p != 0 else None for p in x]
sc_ent = [e * 0.5 if e else None for e in ent]
err = [error(i) for i in x]
fig = plt.figure()
ax = plt.subplot(111)
for i, lab, ls, c, in zip([ent, sc_ent, gini(x), err],
['Entropy', 'Entropy (scaled)',
'Gini Impurity', 'Misclassification Error'],
['-', '-', '--', '-.'],
['black', 'lightgray', 'red', 'green', 'cyan']):
line = ax.plot(x, i, label=lab, linestyle=ls, lw=2, color=c)
ax.legend(loc='upper center', bbox_to_anchor=(0.5, 1.15),
ncol=3, fancybox=True, shadow=False)
ax.axhline(y=0.5, linewidth=1, color='k', linestyle='--')
ax.axhline(y=1.0, linewidth=1, color='k', linestyle='--')
plt.ylim([0, 1.1])
plt.xlabel('p(i=1)')
plt.ylabel('Impurity Index')
plt.tight_layout()
#plt.savefig('./figures/impurity.png', dpi=300, bbox_inches='tight')
plt.show()
```
## Building a decision tree
A finite number of choices for each split
Split only along boundaries between different classes
Exactly where? Maximize margins
```python
from sklearn.tree import DecisionTreeClassifier
tree = DecisionTreeClassifier(criterion='entropy', max_depth=3, random_state=0)
tree.fit(X_train, y_train)
X_combined = np.vstack((X_train, X_test))
y_combined = np.hstack((y_train, y_test))
plot_decision_regions(X_combined, y_combined,
classifier=tree, test_idx=test_idx,
xlabel='petal length [cm]',ylabel='petal width [cm]')
```
## Visualize the decision tree
```python
from sklearn.tree import export_graphviz
export_graphviz(tree,
out_file='tree.dot',
feature_names=['petal length', 'petal width'])
```
Install [Graphviz](http://www.graphviz.org/)
and convert the dot file on the command line:
`dot -Tsvg tree.dot -o tree.svg`
**Note**
If you have scikit-learn 0.18 and pydotplus installed (e.g., you can install it via `pip install pydotplus`), you can also show the decision tree directly without creating a separate dot file as shown below. Also note that `sklearn 0.18` offers a few additional options to make the decision tree visually more appealing.
```python
#import pydotplus
from IPython.display import Image
from IPython.display import display
if False and Version(sklearn_version) >= '0.18':
try:
import pydotplus
dot_data = export_graphviz(
tree,
out_file=None,
# the parameters below are new in sklearn 0.18
feature_names=['petal length', 'petal width'],
class_names=['setosa', 'versicolor', 'virginica'],
filled=True,
rounded=True)
graph = pydotplus.graph_from_dot_data(dot_data)
display(Image(graph.create_png()))
except ImportError:
print('pydotplus is not installed.')
```
# Decision trees and SVM
* SVM considers only margins to nearest samples to the decision boundary
* Decision tree considers all samples
Case studies
## Pruning a decision tree
Split until all leaf nodes are pure?
* not always a good idea due to potential over-fitting
Simplify the tree via pruning
Pre-pruning
* stop splitting a node if the contained data size is below some threshold (e.g. 5% of all data)
Post-pruning
* build a tree first, and remove excessive branches
* reserve a pruning subset separate from the training data
* for each sub-tree (top-down or bottom-up), replace it with a leaf node labeled by the majority vote if doing so does not worsen performance on the pruning subset
Pre-pruning is simpler, post-pruning works better
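In scikit-learn, pre-pruning is exposed as constructor parameters of `DecisionTreeClassifier`. A minimal sketch, assuming the `X_train`/`y_train`/`X_test`/`y_test` split used earlier in this notebook; post-pruning via cost-complexity pruning is available in newer scikit-learn releases through the `ccp_alpha` parameter.
```python
from sklearn.tree import DecisionTreeClassifier

# Pre-pruning: stop splitting early via depth and minimum-size thresholds
pruned = DecisionTreeClassifier(criterion='entropy',
                                max_depth=3,           # limit tree depth
                                min_samples_split=20,  # need >= 20 samples to split a node
                                min_samples_leaf=10,   # every leaf keeps >= 10 samples
                                random_state=0)
pruned.fit(X_train, y_train)
print('train accuracy: %.3f' % pruned.score(X_train, y_train))
print('test  accuracy: %.3f' % pruned.score(X_test, y_test))
```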
## Combining weak to strong learners via random forests
Forest = collection of trees
An example of ensemble learning (more about this later)
* combine multiple weak learners to build a strong learner
* better generalization, less overfitting
Less interpretable than a single tree
### Random forest algorithm
Decide how many trees to build
To train each tree:
* Draw a random subset of samples (e.g. random sample with replacement of all samples)
* Split each node via a random subset of features (e.g. $d = \sqrt{m}$ of the original dimensionality)
(randomization is key)
Majority vote from all trees
### Code example
```python
from sklearn.ensemble import RandomForestClassifier
forest = RandomForestClassifier(criterion='entropy',
n_estimators=10,
random_state=1,
n_jobs=2)
forest.fit(X_train, y_train)
plot_decision_regions(X_combined, y_combined,
classifier=forest, test_idx=test_idx,
xlabel = 'petal length [cm]', ylabel = 'petal width [cm]')
```
# Reading
* PML Chapter 3
* IML Chapter 9
# Parametric versus non-parametric models
* (fixed) number of parameters trained and retained
* amount of data retained
* trade-off between training and evaluation time
## Example
Linear classifiers (SVM, perceptron)
* parameters: $\mathbf{w}$
* data thrown away after training
* extreme end of parametric
Kernel SVM
* depends on the type of kernel used (exercise)
Decision tree
* parameters: decision boundaries at all nodes
* number of parameters varies depending on the training data
* data thrown away after training
* less parametric than SVM
Personal take:
Parametric versus non-parametric is more of a continuous spectrum than a binary decision.
Many algorithms lie somewhere in between.
# K-nearest neighbors - a lazy learning algorithm
KNN keeps all data and has no trained parameters
* extreme end of non-parametric
How it works:
* Choose the number $k$ of neighbors and a distance measure
* For each sample to classify, find the $k$ nearest neighbors in the dataset
* Assign class label via majority vote
$k$ is a hyper-parameter (picked by a human), not an (ordinary) parameter (trained from data by the machine)
Pro:
* zero training time
* very simple
Con:
* need to keep all data
* evaluation time linearly proportional to data size (acceleration possible though, e.g. kd-tree)
* vulnerable to curse of dimensionality
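To make the procedure concrete, here is a minimal brute-force sketch (the function name `knn_predict` and its arguments are just illustrative):
```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, X_query, k=5):
    predictions = []
    for q in X_query:
        # Euclidean distance from the query point to every training sample
        dists = np.sqrt(((X_train - q)**2).sum(axis=1))
        nearest = np.argsort(dists)[:k]                      # indices of the k closest samples
        majority = Counter(y_train[nearest]).most_common(1)[0][0]
        predictions.append(majority)
    return np.array(predictions)

# usage (illustrative): preds = knn_predict(X_train_std, y_train, X_combined_std, k=5)
```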
## Practical usage
[Minkowski distance](https://en.wikipedia.org/wiki/Minkowski_distance) of order $p$:
$
d(\mathbf{x}, \mathbf{y}) = \sqrt[p]{\sum_k |\mathbf{x}_k - \mathbf{y}_k|^p}
$
* $p = 2$, Euclidean distance
* $p = 1$, Manhattan distance
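A quick numeric illustration (a sketch with made-up vectors):
```python
import numpy as np

def minkowski(x, y, p):
    return np.sum(np.abs(x - y)**p)**(1.0/p)

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 0.0, 3.0])
print(minkowski(x, y, p=1))   # Manhattan distance: 5.0
print(minkowski(x, y, p=2))   # Euclidean distance: ~3.61
```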
Number of neighbors $k$ trade-off between bias and variance
* too small $k$ - low bias, high variance
```python
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=5, p=2, metric='minkowski')
knn.fit(X_train_std, y_train)
plot_decision_regions(X_combined_std, y_combined,
classifier=knn, test_idx=test_idx,
xlabel='petal length [standardized]', ylabel='petal width [standardized]')
```
```python
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=1, p=2, metric='minkowski')
knn.fit(X_train_std, y_train)
plot_decision_regions(X_combined_std, y_combined,
classifier=knn, test_idx=test_idx,
xlabel='petal length [standardized]', ylabel='petal width [standardized]')
```
Too small k can cause overfitting (high variance).
```python
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=100, p=2, metric='minkowski')
knn.fit(X_train_std, y_train)
plot_decision_regions(X_combined_std, y_combined,
classifier=knn, test_idx=test_idx,
xlabel='petal length [standardized]', ylabel='petal width [standardized]')
```
Too large k can cause under-fitting (high bias).
How about using different $p$ values for Minkowski distance?
```python
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=5, p=1, metric='minkowski')
knn.fit(X_train_std, y_train)
plot_decision_regions(X_combined_std, y_combined,
classifier=knn, test_idx=test_idx,
xlabel='petal length [standardized]', ylabel='petal width [standardized]')
```
```python
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=5, p=10, metric='minkowski')
knn.fit(X_train_std, y_train)
plot_decision_regions(X_combined_std, y_combined,
classifier=knn, test_idx=test_idx,
xlabel='petal length [standardized]', ylabel='petal width [standardized]')
```
# Reading
* PML Chapter 3
| cd1222cdba3361559591ba4729041e2ef20b219f | 588,066 | ipynb | Jupyter Notebook | code/ch03/ch03.ipynb | 1iyiwei/pyml | 9bc0fa94abd8dcb5de92689c981fbd9de2ed1940 | [
"MIT"
]
| 27 | 2016-12-29T05:58:14.000Z | 2021-11-17T10:27:32.000Z | code/ch03/ch03.ipynb | 1iyiwei/pyml | 9bc0fa94abd8dcb5de92689c981fbd9de2ed1940 | [
"MIT"
]
| null | null | null | code/ch03/ch03.ipynb | 1iyiwei/pyml | 9bc0fa94abd8dcb5de92689c981fbd9de2ed1940 | [
"MIT"
]
| 34 | 2016-09-02T04:59:40.000Z | 2020-10-05T02:11:37.000Z | 181.110564 | 41,038 | 0.889247 | true | 13,585 | Qwen/Qwen-72B | 1. YES
2. YES | 0.699254 | 0.743168 | 0.519664 | __label__eng_Latn | 0.820745 | 0.045682 |
# Exercises to Lecture 2: Introduction to Python 3.6
By Dr. Anders S. Christensen
`anders.christensen @ unibas.ch`
## Exercise 2.1: define a function
One of the fundamental things we learned was how to define functions.
For example, the function defined below `square(x)` calculates the square, $x^2$:
```
def square(x):
    y = x**2
return y
print(square(2))
```
### Question 2.1.1:
Similarly to the above code, make a function called, for example, `poly(x)` that calculates the following polynomial:
\begin{equation}
p\left( x \right) = 5 x^2 - 4x + 1
\end{equation}
Lastly, print the value of $p\left( 5 \right)$
```
def poly(x):
# Fill out the rest of the function yourself!
# Print the value for x=5
print(poly(5))
```
## Exercise 2.2: Loop within function
The code below implements the product of all numbers up to n, i.e. the factorial function "$!$"
\begin{equation}
f(n) = n! = \prod_{i=1}^n i = 1 \cdot 2 \cdot \quad ... \quad \cdot (n -1) \cdot n
\end{equation}
As an example, the code to calculate $5!$ is shown here:
```
n = 5
f = 1
for i in range(1,n+1):
f = f * i
print("Factorial", n, "is", f)
```
### Question 2.2.1
Unfortunately the above code is not very practical and re-usable, and will only work for $n=5$. Instead we would like a function named `factorial(n)`. In this Exercise, write your own function which calculates the factorial.
As output print `factorial(10)`.
```
def factorial(n):
# Write the rest of the function yourself
print(factorial(10))
```
### Question 2.2.2:
Using the `factorial(n)` function you wrote in the previous question, print all $n!$ for all n from 1 to 20.
Hint: Use another for loop!
```
# Below, write the code to print all n! from n=1 to n=20
```
## Exercise 2.3: `if` / `else` statements
`if` and `else` statements are used to make decisions based on defined criteria.
```
n = 10
if n < 10:
print("n is less than 10")
else:
    print("n is greater than or equal to 10")
```
### Question 2.3.1:
One example of a mathematical function that contains such statements is the so-called "rectified linear unit" (ReLU) function, which is often used in neural networks.
An example is given here:
\begin{equation}
f\left(x\right) =
\begin{cases}
0 & x\leq 0 \\
x & x \gt 0
\end{cases}
\end{equation}
In this question, write the above ReLU function, which returns $0$ if $x \leq 0$ and returns $x$ otherwise.
Lastly, verify the correctness of your code using the 5 print statements in the code block below.
```
def relu(x):
# Implement the content of the ReLU function yourself!
print(relu(-10)) # should return 0
print(relu(-0.1)) # should return 0
print(relu(0)) # should return 0
print(relu(0.1)) # should return 0.1
print(relu(10)) # should return 10
```
## Exercise 2.4: Plotting
In the below example, the value of
$$y= x^2$$
is calculated for six values of $x$, and appended to a list.
Next, the points are plottet using the `plt.plot()` function from the `pyplot` library.
The function `plt.plot()` tells matplotlib to draw a lineplot that connects all the pairs of points in two lists.
```
import matplotlib.pyplot as plt
# Some x-values
x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
# Initialize an empty list for the y-values we wish to plot
y = []
for i in range(len(x)):
# Calculate the square of each x in the list
value = x[i]**2
# Add the calculated value to the list "y"
y.append(value)
# Print the two lists
print(x)
print(y)
# Make a line plot/figure
plt.plot(x, y)
# Actually draw the figure below
plt.show()
```
### Question 2.4.1:
Instead of plotting a line through all points, it is also possible to plot datapoints using a so-called [scatterplot](https://en.wikipedia.org/wiki/Scatter_plot).
For this behavior, you can replace the function `plt.plot()` in the above example with the function `plt.scatter()`.
In this question you are given two lists of numbers below, `a` and `b`.
Use pyplot to draw a *scatterplot* of the two lists.
```
a = [10.0, 8.0, 13.0, 9.0, 11.0, 14.0, 6.0, 4.0, 12.0, 7.0, 5.0]
b = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]
# Here, write the code to show a scatterplot for the lists a, b
```
### Question 2.4.2:
In order to give the plot a title and label the axes, insert the three functions in the code in Question 2.4.1:
* `plt.xlabel()`
* `plt.ylabel()`
* `plt.title()`
Make the plot title "Python 2020", label the x-axis "Apples" and the y-axis "Oranges"
```
```
## Exercise 2.5: More plotting (harder)
### Question 2.5.1:
In the example in Problem 2.4, the code for a line plot for $y = x^2$ is shown.
Write your own code to plot $y = \cos(x)$ from 0 to 10.
**Hints:**
* Import the `np.cos()` function using `import numpy as np`
* If the figure does not look smooth, how can you make the points on the x-axis closer than 1.0?
```
import numpy as np
## Create the numpy arrays or lists of x and y values
# Print the two lists to verify that you have the right numbers
# print(x)
# print(y)
# Make the figure
# plt.plot(x, y)
# Actually draw the figure below
# plt.show()
```
| e442b68ecc5c438638214741bab1c8541de015cf | 24,158 | ipynb | Jupyter Notebook | intro_exercises2_2020.ipynb | andersx/python-intro | 8409c89da7dd9cea21e3702a0f0f47aae816eb58 | [
"CC0-1.0"
]
| 11 | 2020-05-03T11:59:01.000Z | 2021-11-15T12:33:39.000Z | intro_exercises2_2020.ipynb | andersx/python-intro | 8409c89da7dd9cea21e3702a0f0f47aae816eb58 | [
"CC0-1.0"
]
| null | null | null | intro_exercises2_2020.ipynb | andersx/python-intro | 8409c89da7dd9cea21e3702a0f0f47aae816eb58 | [
"CC0-1.0"
]
| 7 | 2020-05-10T21:15:15.000Z | 2021-12-05T15:13:54.000Z | 44.737037 | 10,206 | 0.659657 | true | 1,602 | Qwen/Qwen-72B | 1. YES
2. YES | 0.904651 | 0.819893 | 0.741717 | __label__eng_Latn | 0.996456 | 0.561589 |
Copyright (c) 2015, 2016 [Sebastian Raschka](sebastianraschka.com)
<br>
[Li-Yi Wei](http://liyiwei.org/)
https://github.com/1iyiwei/pyml
[MIT License](https://github.com/1iyiwei/pyml/blob/master/LICENSE.txt)
# Python Machine Learning - Code Examples
# Chapter 10 - Predicting Continuous Target Variables with Regression Analysis
We talk only about classification so far
Regression is also important
## Classification versus regression
Classification: discrete output
Regression: continuous output
Both are supervised learning
* require target variables
More similar than they appear
* similar principles, e.g. optimization
* similar goals, e.g. linear decision boundary for classification versus line fitting for regression
Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
```python
%load_ext watermark
%watermark -a '' -u -d -v -p numpy,pandas,matplotlib,sklearn,seaborn
```
last updated: 2016-11-08
CPython 3.5.2
IPython 4.2.0
numpy 1.11.1
pandas 0.18.1
matplotlib 1.5.1
sklearn 0.18
seaborn 0.7.1
*The use of `watermark` is optional. You can install this IPython extension via "`pip install watermark`". For more information, please see: https://github.com/rasbt/watermark.*
### Overview
- [Introducing a simple linear regression model](#Introducing-a-simple-linear-regression-model)
- [Exploring the Housing Dataset](#Exploring-the-Housing-Dataset)
- [Visualizing the important characteristics of a dataset](#Visualizing-the-important-characteristics-of-a-dataset)
- [Implementing an ordinary least squares linear regression model](#Implementing-an-ordinary-least-squares-linear-regression-model)
- [Solving regression for regression parameters with gradient descent](#Solving-regression-for-regression-parameters-with-gradient-descent)
- [Estimating the coefficient of a regression model via scikit-learn](#Estimating-the-coefficient-of-a-regression-model-via-scikit-learn)
- [Fitting a robust regression model using RANSAC](#Fitting-a-robust-regression-model-using-RANSAC)
- [Evaluating the performance of linear regression models](#Evaluating-the-performance-of-linear-regression-models)
- [Using regularized methods for regression](#Using-regularized-methods-for-regression)
- [Turning a linear regression model into a curve - polynomial regression](#Turning-a-linear-regression-model-into-a-curve---polynomial-regression)
- [Modeling nonlinear relationships in the Housing Dataset](#Modeling-nonlinear-relationships-in-the-Housing-Dataset)
- [Dealing with nonlinear relationships using random forests](#Dealing-with-nonlinear-relationships-using-random-forests)
- [Decision tree regression](#Decision-tree-regression)
- [Random forest regression](#Random-forest-regression)
- [Summary](#Summary)
```python
from IPython.display import Image
%matplotlib inline
# Added version check for recent scikit-learn 0.18 checks
from distutils.version import LooseVersion as Version
from sklearn import __version__ as sklearn_version
```
# Introducing a simple linear regression model
Model:
$
y = \sum_{i=0}^n w_i x_i = \mathbf{w}^T \mathbf{x}
$
with $x_0 = 1$.
Given a collection of sample data $\{\mathbf{x^{(i)}}, y^{(i)} \}$, find the line $\mathbf{w}$ that minimizes the regression error:
$$
\begin{align}
L(X, Y, \mathbf{w})
= \sum_i \left( y^{(i)} - \hat{y}^{(i)} \right)^2
= \sum_i \left( y^{(i)} - \mathbf{w}^T \mathbf{x}^{(i)} \right)^2
\end{align}
$$
## 2D case
$
y = w_0 + w_1 x
$
# General regression models
We can fit different analytic models/functions (not just linear ones) to a given dataset.
Start with a linear model on a real dataset first
* easier to understand and interpret (e.g. positive/negative correlation)
* less prone to over-fitting
Followed by non-linear models
# Exploring the Housing dataset
Let's explore a realistic problem: predicting house prices based on their features.
This is a regression problem
* house prices are continuous variables not discrete categories
Source: [https://archive.ics.uci.edu/ml/datasets/Housing](https://archive.ics.uci.edu/ml/datasets/Housing)
Boston suburbs
Attributes (1-13) and target (14):
<pre>
1. CRIM per capita crime rate by town
2. ZN proportion of residential land zoned for lots over 25,000 sq.ft.
3. INDUS proportion of non-retail business acres per town
4. CHAS Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)
5. NOX nitric oxides concentration (parts per 10 million)
6. RM average number of rooms per dwelling
7. AGE proportion of owner-occupied units built prior to 1940
8. DIS weighted distances to five Boston employment centres
9. RAD index of accessibility to radial highways
10. TAX full-value property-tax rate per $10,000
11. PTRATIO pupil-teacher ratio by town
12. B 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town
13. LSTAT % lower status of the population
14. MEDV Median value of owner-occupied homes in $1000's
</pre>
## Read the dataset
```python
import pandas as pd
# online dataset
data_src = 'https://archive.ics.uci.edu/ml/machine-learning-databases/housing/housing.data'
# local dataset
data_src = '../datasets/housing/housing.data'
df = pd.read_csv(data_src,
header=None,
sep='\s+')
df.columns = ['CRIM', 'ZN', 'INDUS', 'CHAS',
'NOX', 'RM', 'AGE', 'DIS', 'RAD',
'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV']
df.head()
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>CRIM</th>
<th>ZN</th>
<th>INDUS</th>
<th>CHAS</th>
<th>NOX</th>
<th>RM</th>
<th>AGE</th>
<th>DIS</th>
<th>RAD</th>
<th>TAX</th>
<th>PTRATIO</th>
<th>B</th>
<th>LSTAT</th>
<th>MEDV</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.00632</td>
<td>18.0</td>
<td>2.31</td>
<td>0</td>
<td>0.538</td>
<td>6.575</td>
<td>65.2</td>
<td>4.0900</td>
<td>1</td>
<td>296.0</td>
<td>15.3</td>
<td>396.90</td>
<td>4.98</td>
<td>24.0</td>
</tr>
<tr>
<th>1</th>
<td>0.02731</td>
<td>0.0</td>
<td>7.07</td>
<td>0</td>
<td>0.469</td>
<td>6.421</td>
<td>78.9</td>
<td>4.9671</td>
<td>2</td>
<td>242.0</td>
<td>17.8</td>
<td>396.90</td>
<td>9.14</td>
<td>21.6</td>
</tr>
<tr>
<th>2</th>
<td>0.02729</td>
<td>0.0</td>
<td>7.07</td>
<td>0</td>
<td>0.469</td>
<td>7.185</td>
<td>61.1</td>
<td>4.9671</td>
<td>2</td>
<td>242.0</td>
<td>17.8</td>
<td>392.83</td>
<td>4.03</td>
<td>34.7</td>
</tr>
<tr>
<th>3</th>
<td>0.03237</td>
<td>0.0</td>
<td>2.18</td>
<td>0</td>
<td>0.458</td>
<td>6.998</td>
<td>45.8</td>
<td>6.0622</td>
<td>3</td>
<td>222.0</td>
<td>18.7</td>
<td>394.63</td>
<td>2.94</td>
<td>33.4</td>
</tr>
<tr>
<th>4</th>
<td>0.06905</td>
<td>0.0</td>
<td>2.18</td>
<td>0</td>
<td>0.458</td>
<td>7.147</td>
<td>54.2</td>
<td>6.0622</td>
<td>3</td>
<td>222.0</td>
<td>18.7</td>
<td>396.90</td>
<td>5.33</td>
<td>36.2</td>
</tr>
</tbody>
</table>
</div>
```python
print(df.shape)
```
(506, 14)
<hr>
### Note:
If the link to the Housing dataset provided above does not work for you, you can find a local copy in this repository at [./../datasets/housing/housing.data](./../datasets/housing/housing.data).
Or you could fetch it via
```python
if False:
df = pd.read_csv('https://raw.githubusercontent.com/1iyiwei/pyml/master/code/datasets/housing/housing.data',
header=None, sep='\s+')
df.columns = ['CRIM', 'ZN', 'INDUS', 'CHAS',
'NOX', 'RM', 'AGE', 'DIS', 'RAD',
'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV']
df.head()
```
## Visualizing the important characteristics of a dataset
Before applying analysis and machine learning, it can be good to observe the dataset
* interesting trends that can lead to questions for analysis/ML
* issues in the dataset, such as missing entries, outliers, noise, etc.
[Exploratory data analysis (EDA)](https://en.wikipedia.org/wiki/Exploratory_data_analysis)
Use scatter plots to visualize the correlations between pairs of features.
In the seaborn pair plot below, the diagonal panels are histograms of the individual features.
```python
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style='whitegrid', context='notebook')
cols = ['LSTAT', 'INDUS', 'NOX', 'RM', 'MEDV']
sns.pairplot(df[cols], size=2.5)
plt.tight_layout()
# plt.savefig('./figures/scatter.png', dpi=300)
plt.show()
```
Some observations:
* prices normally distributed with a spiky tail at high range
* number of rooms normally distributed
* prices positively correlated with number of rooms
* prices negatively correlated with low income status
* low income status distribution skewed towards the lower end
* number of rooms and low income status negatively correlated
* vertically aligned samples might be problematic (e.g. clamping values)
"You can observe a lot by just watching" - Yogi Berra
Scientific pipeline
* observation $\rightarrow$ question $\rightarrow$ assumption/model $\rightarrow$ verification $\hookleftarrow$ iteration
## Correlation
A single number to summarize the visual trends.
See [Correlation and dependence (Wikipedia)](https://en.wikipedia.org/wiki/Correlation_and_dependence).
Correlation $r$ between pairs of underlying variables $x$ and $y$ based on their samples $\{x^{(i)}, y^{(i)}\}, i=1 \; to \; n $.
$$
\begin{align}
r &= \frac{\rho_{xy}}{\rho_x \rho_y}
\\
\rho_{xy} &= \sum_{i=1}^n \left( x^{(i)} - \mu_x \right) \left( y^{(i)} - \mu_y \right)
\\
\rho_x &= \sqrt{\sum_{i=1}^{n} \left( x^{(i)} - \mu_x \right)^2}
\\
\rho_y &= \sqrt{\sum_{i=1}^{n} \left( y^{(i)} - \mu_y\right)^2}
\end{align}
$$
$\mu$: mean
$\rho_x$: std
$\rho_{xy}$: covariance
$r \in [-1, 1]$
* -1: perfect negative correlation
* +1: perfect positive correlation
* 0: no correlation
```python
import numpy as np
# compute correlation
cm = np.corrcoef(df[cols].values.T)
# visualize correlation matrix
sns.set(font_scale=1.5)
hm = sns.heatmap(cm,
cbar=True,
annot=True,
square=True,
fmt='.2f',
annot_kws={'size': 15},
yticklabels=cols,
xticklabels=cols)
# plt.tight_layout()
# plt.savefig('./figures/corr_mat.png', dpi=300)
plt.show()
```
Observations:
* high positive correlation between prices and number of rooms (RM)
* high negative correlation between prices and low-income status (LSTAT)
Thus RM or LSTAT can be good candidates for linear regression
```python
sns.reset_orig()
%matplotlib inline
```
# Implementing an ordinary least squares linear regression model
Model:
$
y = \sum_{i=0}^n w_i x_i = \mathbf{w}^T \mathbf{x}
$
with $x_0 = 1$.
Given a collection of sample data $\{\mathbf{x^{(i)}}, y^{(i)} \}$, find the line $\mathbf{w}$ that minimizes the regression error:
$$
\begin{align}
L(X, Y, \mathbf{w})
= \frac{1}{2} \sum_i \left( y^{(i)} - \hat{y}^{(i)} \right)^2
= \frac{1}{2} \sum_i \left( y^{(i)} - \mathbf{w}^T \mathbf{x}^{(i)} \right)^2
\end{align}
$$
As usual, the $\frac{1}{2}$ term is for convenience of differentiation, to cancel out the square terms:
$$
\begin{align}
\frac{1}{2} \frac{d x^2}{dx} = x
\end{align}
$$
This is called ordinary least squares (OLS).
## Gradient descent
$$
\begin{align}
\frac{\partial L}{\partial \mathbf{w}}
=
\sum_i \mathbf{x}^{(i)} \left(\mathbf{w}^T \mathbf{x}^{(i)} - y^{(i)}\right)
\end{align}
$$
$$
\mathbf{w} \leftarrow \mathbf{w} - \eta \frac{\partial L}{\partial \mathbf{w}}
$$
## Solving regression for regression parameters with gradient descent
Very similar to Adaline, without the output binary quantization.
Adaline:
## Implementation
```python
class LinearRegressionGD(object):
def __init__(self, eta=0.001, n_iter=20):
self.eta = eta
self.n_iter = n_iter
def fit(self, X, y):
self.w_ = np.zeros(1 + X.shape[1])
self.cost_ = []
for i in range(self.n_iter):
output = self.net_input(X)
errors = (y - output)
self.w_[1:] += self.eta * X.T.dot(errors)
self.w_[0] += self.eta * errors.sum()
cost = (errors**2).sum() / 2.0
self.cost_.append(cost)
return self
def net_input(self, X):
return np.dot(X, self.w_[1:]) + self.w_[0]
def predict(self, X):
return self.net_input(X)
```
## Apply LinearRegressionGD to the housing dataset
```python
X = df[['RM']].values
y = df['MEDV'].values
```
```python
from sklearn.preprocessing import StandardScaler
sc_x = StandardScaler()
sc_y = StandardScaler()
X_std = sc_x.fit_transform(X)
#y_std = sc_y.fit_transform(y) # deprecation warning
#y_std = sc_y.fit_transform(y.reshape(-1, 1)).ravel()
y_std = sc_y.fit_transform(y[:, np.newaxis]).flatten()
```
```python
lr = LinearRegressionGD()
_ = lr.fit(X_std, y_std)
```
```python
plt.plot(range(1, lr.n_iter+1), lr.cost_)
plt.ylabel('SSE')
plt.xlabel('Epoch')
plt.tight_layout()
# plt.savefig('./figures/cost.png', dpi=300)
plt.show()
```
The optimization converges after about 5 epochs.
```python
def lin_regplot(X, y, model):
plt.scatter(X, y, c='lightblue')
plt.plot(X, model.predict(X), color='red', linewidth=2)
return
```
```python
lin_regplot(X_std, y_std, lr)
plt.xlabel('Average number of rooms [RM] (standardized)')
plt.ylabel('Price in $1000\'s [MEDV] (standardized)')
plt.tight_layout()
# plt.savefig('./figures/gradient_fit.png', dpi=300)
plt.show()
```
The red line confirms the positive correlation between median prices and number of rooms.
But there are some odd features, such as a bunch of points clipped at the ceiling (MEDV is capped at 50, which appears here as standardized values around 3), which indicates clipping.
```python
print('Slope: %.3f' % lr.w_[1])
print('Intercept: %.3f' % lr.w_[0])
```
Slope: 0.695
Intercept: -0.000
The correlation computed earlier is 0.7, which fits the slope value.
The intercept should be 0 for standardized data.
```python
# use inverse transform to report back the original values
# num_rooms_std = sc_x.transform([[5.0]])
num_rooms_std = sc_x.transform(np.array([[5.0]]))
price_std = lr.predict(num_rooms_std)
print("Price in $1000's: %.3f" % sc_y.inverse_transform(price_std))
```
Price in $1000's: 10.840
## Estimating the coefficient of a regression model via scikit-learn
We don't have to write our own code for linear regression.
Scikit-learn provides various regression models.
* linear and non-linear
```python
from sklearn.linear_model import LinearRegression
```
```python
slr = LinearRegression()
slr.fit(X, y) # no need for standardization
y_pred = slr.predict(X)
print('Slope: %.3f' % slr.coef_[0])
print('Intercept: %.3f' % slr.intercept_)
```
Slope: 9.102
Intercept: -34.671
```python
lin_regplot(X, y, slr)
plt.xlabel('Average number of rooms [RM]')
plt.ylabel('Price in $1000\'s [MEDV]')
plt.tight_layout()
# plt.savefig('./figures/scikit_lr_fit.png', dpi=300)
plt.show()
```
**Normal equations**: an alternative that computes the solution analytically, without any iteration:
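In matrix form, with $\mathbf{X}$ the data matrix augmented by a leading column of ones (for the bias $w_0$) and $\mathbf{y}$ the target vector, the least-squares solution is
$$
\begin{align}
\mathbf{w} = \left(\mathbf{X}^T \mathbf{X}\right)^{-1} \mathbf{X}^T \mathbf{y}
\end{align}
$$
which is exactly what the code below computes.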
```python
# adding a column vector of "ones"
Xb = np.hstack((np.ones((X.shape[0], 1)), X))
w = np.zeros(X.shape[1])
z = np.linalg.inv(np.dot(Xb.T, Xb))
w = np.dot(z, np.dot(Xb.T, y))
print('Slope: %.3f' % w[1])
print('Intercept: %.3f' % w[0])
```
Slope: 9.102
Intercept: -34.671
# Fitting a robust regression model using RANSAC
Linear regression sensitive to outliers
Not always easy to decide which data samples are outliers
RANSAC (random sample consensus) can deal with this
Basic idea:
1. randomly decide which samples are inliers and outliers
2. fit the line to inliers only
3. add those in outliers close enough to the line (potential inliers)
4. refit using updated inliers
5. terminate if error small enough or iteration enough, otherwise go back to step 1 to find a better model
Can work with different base regressors
See this [animation of RANSAC line fitting (Wikimedia)](https://commons.wikimedia.org/wiki/File%3ARANSAC_LINIE_Animiert.gif).
## RANSAC in scikit-learn
```python
from sklearn.linear_model import RANSACRegressor
if Version(sklearn_version) < '0.18':
ransac = RANSACRegressor(LinearRegression(),
max_trials=100,
min_samples=50,
residual_metric=lambda x: np.sum(np.abs(x), axis=1),
residual_threshold=5.0,
random_state=0)
else:
ransac = RANSACRegressor(LinearRegression(),
max_trials=100,
min_samples=50,
loss='absolute_loss',
residual_threshold=5.0,
random_state=0)
ransac.fit(X, y)
inlier_mask = ransac.inlier_mask_
outlier_mask = np.logical_not(inlier_mask)
line_X = np.arange(3, 10, 1)
line_y_ransac = ransac.predict(line_X[:, np.newaxis])
plt.scatter(X[inlier_mask], y[inlier_mask],
c='blue', marker='o', label='Inliers')
plt.scatter(X[outlier_mask], y[outlier_mask],
c='lightgreen', marker='s', label='Outliers')
plt.plot(line_X, line_y_ransac, color='red')
plt.xlabel('Average number of rooms [RM]')
plt.ylabel('Price in $1000\'s [MEDV]')
plt.legend(loc='upper left')
plt.tight_layout()
# plt.savefig('./figures/ransac_fit.png', dpi=300)
plt.show()
```
```python
print('Slope: %.3f' % ransac.estimator_.coef_[0])
print('Intercept: %.3f' % ransac.estimator_.intercept_)
```
Slope: 9.621
Intercept: -37.137
# Evaluating the performance of linear regression models
We know how to evaluate classification models.
* training, test, validation datasets
* cross validation
* accuracy, precision, recall, etc.
* hyper-parameter tuning and selection
We can do similar for regression models.
```python
# trainig/test data split as usual
if Version(sklearn_version) < '0.18':
from sklearn.cross_validation import train_test_split
else:
from sklearn.model_selection import train_test_split
# use all features
X = df.iloc[:, :-1].values
y = df['MEDV'].values
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=0)
```
```python
slr = LinearRegression()
slr.fit(X_train, y_train)
y_train_pred = slr.predict(X_train)
y_test_pred = slr.predict(X_test)
```
```python
# plot the residuals: difference between prediction and ground truth
plt.scatter(y_train_pred, y_train_pred - y_train,
c='blue', marker='o', label='Training data')
plt.scatter(y_test_pred, y_test_pred - y_test,
c='lightgreen', marker='s', label='Test data')
plt.xlabel('Predicted values')
plt.ylabel('Residuals')
plt.legend(loc='upper left')
plt.hlines(y=0, xmin=-10, xmax=50, lw=2, color='red')
plt.xlim([-10, 50])
plt.tight_layout()
# plt.savefig('./figures/slr_residuals.png', dpi=300)
plt.show()
```
A perfect regression would have 0 residuals everywhere (the red line).
A good regression has residuals scattered randomly and uniformly around that red line.
Anything else indicates potential problems:
* outliers are far away from the 0 residual line
* patterns indicate information not captured by our model
```python
# plot residual against real values
plt.scatter(y_train, y_train_pred - y_train,
c='blue', marker='o', label='Training data')
plt.scatter(y_test, y_test_pred - y_test,
c='lightgreen', marker='s', label='Test data')
plt.xlabel('Real values')
plt.ylabel('Residuals')
plt.legend(loc='best')
plt.hlines(y=0, xmin=-5, xmax=55, lw=2, color='red')
plt.xlim([-5, 55])
plt.tight_layout()
# plt.savefig('./figures/slr_residuals.png', dpi=300)
plt.show()
```
## Statistics for regression
For n data samples with prediction $y$ and ground truth $t$:
### Mean squared error (MSE)
$$
\begin{align}
MSE = \frac{1}{n} \sum_{i=1}^n \left(y^{(i)} - t^{(i)}\right)^2
\end{align}
$$
### Coefficient of determination
Standardized version of MSE
$$
\begin{align}
R^2 &= 1 - \frac{SSE}{SST}
\\
SSE &= \sum_{i=1}^{n} \left( y^{(i)} - t^{(i) }\right)^2
\\
SST &= \sum_{i=1}^{n} \left( t^{(i)} - \mu_t \right)^2
\end{align}
$$
$$
R^2 = 1 - \frac{MSE}{Var(t)}
$$
$R^2 = 1$ for perfect fit
* for training data, $0 \leq R^2 \leq 1$
* for test data, $R^2$ can be $<0$
```python
from sklearn.metrics import r2_score
from sklearn.metrics import mean_squared_error
print('MSE train: %.3f, test: %.3f' % (
mean_squared_error(y_train, y_train_pred),
mean_squared_error(y_test, y_test_pred)))
print('R^2 train: %.3f, test: %.3f' % (
r2_score(y_train, y_train_pred),
r2_score(y_test, y_test_pred)))
```
MSE train: 19.958, test: 27.196
R^2 train: 0.765, test: 0.673
# Using regularized methods for regression
$$
\Phi(\mathbf{X}, \mathbf{T}, \Theta) = L\left(\mathbf{X}, \mathbf{T}, \mathbf{Y}=f(\mathbf{X}, \Theta)\right) + P(\Theta)
$$
* $\mathbf{X}$, $\mathbf{T}$: training data
* $f$: our model with parameters $\Theta$ ($\mathbf{w}$ for linear regression so far)
* $L$: accuracy
$$
\begin{align}
L(X, Y, \mathbf{w})
= \frac{1}{2} \sum_i \left( y^{(i)} - \hat{y}^{(i)} \right)^2
= \frac{1}{2} \sum_i \left( y^{(i)} - \mathbf{w}^T \mathbf{x}^{(i)} \right)^2
\end{align}
$$
* $P$: regularization
Regularization can help simplify models and reduce overfitting
* e.g. $L_2$ for classification
Popular methods for linear regression:
* ridge regression - $L_2$
* LASSO (least absolute shrinkage and selection operator) - $L_1$
* elastic net - $L_1$ + $L_2$
## Ridge regression
Essentially $L_2$ regularization:
$$
\begin{align}
P\left(\mathbf{w}\right) = \lambda \| \mathbf{w} \|^2
\end{align}
$$
$\lambda$ is the regularization strength as usual.
$$
\begin{align}
\| \mathbf{w} \|^2 = \sum_{j=1}^{m} w_j^2
\end{align}
$$
Do not regularize $w_0$, the bias term.
## LASSO
Essentially $L_1$ regularization:
$$
\begin{align}
P\left(\mathbf{w}\right) = \lambda \| \mathbf{w} \|
\end{align}
$$
$L_1$ tends to produce more $0$ entries than $L_2$, as discussed before.
## Elastic net
Combining $L_1$ and $L_2$ regularization:
$$
\begin{align}
P\left(\mathbf{w}\right) =
\lambda_1 \| \mathbf{w} \|^2
+
\lambda_2 \| \mathbf{w} \|
\end{align}
$$
# Regularization for regression in scikit learn
```python
from sklearn.linear_model import Ridge
ridge = Ridge(alpha=0.1) # alpha is like lambda above
ridge.fit(X_train, y_train)
y_train_pred = ridge.predict(X_train)
y_test_pred = ridge.predict(X_test)
print(ridge.coef_)
print('MSE train: %.3f, test: %.3f' % (
mean_squared_error(y_train, y_train_pred),
mean_squared_error(y_test, y_test_pred)))
print('R^2 train: %.3f, test: %.3f' % (
r2_score(y_train, y_train_pred),
r2_score(y_test, y_test_pred)))
```
[ -1.20763405e-01 4.47530242e-02 5.54028575e-03 2.51088397e+00
-1.48003209e+01 3.86927965e+00 -1.14410953e-02 -1.48178154e+00
2.37723468e-01 -1.11654203e-02 -1.00209493e+00 6.89528729e-03
-4.87785027e-01]
MSE train: 19.964, test: 27.266
R^2 train: 0.764, test: 0.673
```python
from sklearn.linear_model import Lasso
lasso = Lasso(alpha=0.1) # alpha is like lambda above
lasso.fit(X_train, y_train)
y_train_pred = lasso.predict(X_train)
y_test_pred = lasso.predict(X_test)
print(lasso.coef_)
print('MSE train: %.3f, test: %.3f' % (
mean_squared_error(y_train, y_train_pred),
mean_squared_error(y_test, y_test_pred)))
print('R^2 train: %.3f, test: %.3f' % (
r2_score(y_train, y_train_pred),
r2_score(y_test, y_test_pred)))
```
[-0.11311792 0.04725111 -0.03992527 0.96478874 -0. 3.72289616
-0.02143106 -1.23370405 0.20469 -0.0129439 -0.85269025 0.00795847
-0.52392362]
MSE train: 20.926, test: 28.876
R^2 train: 0.753, test: 0.653
```python
from sklearn.linear_model import ElasticNet
from sklearn.metrics import mean_squared_error
alphas = [0.001, 0.01, 0.1, 1, 10, 100, 1000]
train_errors = []
test_errors = []
for alpha in alphas:
model = ElasticNet(alpha=alpha, l1_ratio=0.5)
model.fit(X_train, y_train)
y_train_pred = model.predict(X_train)
y_test_pred = model.predict(X_test)
train_errors.append(mean_squared_error(y_train, y_train_pred))
test_errors.append(mean_squared_error(y_test, y_test_pred))
print(train_errors)
print(test_errors)
```
[19.97680428517754, 20.309296016712558, 21.048576928265852, 24.381276557501547, 37.240261467371482, 63.612901488320496, 73.891622930755332]
[27.327995983333842, 28.070888490091839, 28.945165625782824, 31.873610817741049, 41.411383902170655, 65.773493017331489, 74.387546652368243]
# Turning a linear regression model into a curve - polynomial regression
Not everything can be explained by linear relationship
How to generalize?
Polynomial regression
$$
\begin{align}
y = w_0 + w_1 x + w_2 x^2 + \cdots + w_d x^d
\end{align}
$$
Still linear in terms of the weights $\mathbf{w}$
# Non-linear regression in scikit learn
Polynomial features
* recall kernel SVM
```python
X = np.array([258.0, 270.0, 294.0,
320.0, 342.0, 368.0,
396.0, 446.0, 480.0, 586.0])[:, np.newaxis]
y = np.array([236.4, 234.4, 252.8,
298.6, 314.2, 342.2,
360.8, 368.0, 391.2,
390.8])
```
```python
from sklearn.preprocessing import PolynomialFeatures
lr = LinearRegression()
pr = LinearRegression()
quadratic = PolynomialFeatures(degree=2) # e.g. from [a, b] to [1, a, b, a*a, a*b, b*b]
X_quad = quadratic.fit_transform(X)
```
```python
print(X[0, :])
print(X_quad[0, :])
print([1, X[0, :], X[0, :]**2])
```
[ 258.]
[ 1.00000000e+00 2.58000000e+02 6.65640000e+04]
[1, array([ 258.]), array([ 66564.])]
```python
# fit linear features
lr.fit(X, y)
X_fit = np.arange(250, 600, 10)[:, np.newaxis]
y_lin_fit = lr.predict(X_fit)
# fit quadratic features
pr.fit(X_quad, y)
y_quad_fit = pr.predict(quadratic.fit_transform(X_fit))
# plot results
plt.scatter(X, y, label='training points')
plt.plot(X_fit, y_lin_fit, label='linear fit', linestyle='--')
plt.plot(X_fit, y_quad_fit, label='quadratic fit')
plt.legend(loc='upper left')
plt.tight_layout()
# plt.savefig('./figures/poly_example.png', dpi=300)
plt.show()
```
The quadratic polynomial fits this dataset better than the linear one
It is not always a good idea to use higher-degree functions
* cost
* overfitting
```python
y_lin_pred = lr.predict(X)
y_quad_pred = pr.predict(X_quad)
```
```python
print('Training MSE linear: %.3f, quadratic: %.3f' % (
mean_squared_error(y, y_lin_pred),
mean_squared_error(y, y_quad_pred)))
print('Training R^2 linear: %.3f, quadratic: %.3f' % (
r2_score(y, y_lin_pred),
r2_score(y, y_quad_pred)))
```
Training MSE linear: 569.780, quadratic: 61.330
Training R^2 linear: 0.832, quadratic: 0.982
## Modeling nonlinear relationships in the Housing Dataset
Regression of MEDV (median house price) versus LSTAT
Compare polynomial curves
* linear
* quadratic
* cubic
```python
X = df[['LSTAT']].values
y = df['MEDV'].values
regr = LinearRegression()
# create quadratic features
quadratic = PolynomialFeatures(degree=2)
cubic = PolynomialFeatures(degree=3)
X_quad = quadratic.fit_transform(X)
X_cubic = cubic.fit_transform(X)
# fit features
X_fit = np.arange(X.min(), X.max(), 1)[:, np.newaxis]
regr = regr.fit(X, y)
y_lin_fit = regr.predict(X_fit)
linear_r2 = r2_score(y, regr.predict(X))
regr = regr.fit(X_quad, y)
y_quad_fit = regr.predict(quadratic.fit_transform(X_fit))
quadratic_r2 = r2_score(y, regr.predict(X_quad))
regr = regr.fit(X_cubic, y)
y_cubic_fit = regr.predict(cubic.fit_transform(X_fit))
cubic_r2 = r2_score(y, regr.predict(X_cubic))
# plot results
plt.scatter(X, y, label='training points', color='lightgray')
plt.plot(X_fit, y_lin_fit,
label='linear (d=1), $R^2=%.2f$' % linear_r2,
color='blue',
lw=2,
linestyle=':')
plt.plot(X_fit, y_quad_fit,
label='quadratic (d=2), $R^2=%.2f$' % quadratic_r2,
color='red',
lw=2,
linestyle='-')
plt.plot(X_fit, y_cubic_fit,
label='cubic (d=3), $R^2=%.2f$' % cubic_r2,
color='green',
lw=2,
linestyle='--')
plt.xlabel('% lower status of the population [LSTAT]')
plt.ylabel('Price in $1000\'s [MEDV]')
plt.legend(loc='upper right')
plt.tight_layout()
# plt.savefig('./figures/polyhouse_example.png', dpi=300)
plt.show()
```
Transforming the dataset based on our observations:
$$
\begin{align}
X_{log} &= \log{X}
\\
Y_{sqrt} &= \sqrt{Y}
\end{align}
$$
```python
X = df[['LSTAT']].values
y = df['MEDV'].values
# transform features
X_log = np.log(X)
y_sqrt = np.sqrt(y)
# training
regr = regr.fit(X_log, y_sqrt)
linear_r2 = r2_score(y_sqrt, regr.predict(X_log))
# fit features
X_fit = np.arange(X_log.min()-1, X_log.max()+1, 1)[:, np.newaxis]
y_lin_fit = regr.predict(X_fit)
# plot results
plt.scatter(X_log, y_sqrt, label='training points', color='lightgray')
plt.plot(X_fit, y_lin_fit,
label='linear (d=1), $R^2=%.2f$' % linear_r2,
color='blue',
lw=2)
plt.xlabel('log(% lower status of the population [LSTAT])')
plt.ylabel('$\sqrt{Price \; in \; \$1000\'s [MEDV]}$')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/transform_example.png', dpi=300)
plt.show()
```
# Dealing with nonlinear relationships using random forests
Decision trees can be applied for both
* classification (talked about this before)
* regression (next topic)
In classification, we associate a class label for each leaf node.
In regression, we fit a function for each leaf node.
In the simplest case, the function is a constant, which will be the mean of all y values within that node if the loss function is based on MSE.
In this case, the whole tree essentially fits a piecewise constant function to the training data.
Similar to building a decision tree for classification, a decision tree for regression can be built by iteratively splitting each node based on optimizing an objective function, such as MSE mentioned above.
Specifically,
$$IG(D_p) = I(D_p) - \sum_{j=1}^m \frac{N_j}{N_p} I(D_j)$$
, where $IG$ is the information gain we try to maximize for splitting each parent node $p$ with dataset $D_p$, $I$ is the information measure for a given dataset, and $N_p$ and $N_j$ are the number of data vectors within each parent and child nodes.
$m = 2$ for the usual binary split.
For MSE, we have
$$I(D) = \frac{1}{N_D} \sum_{i \in D} \left(y^{(i)} - \mu_D \right)^2$$
, where $\mu_D$ is the mean of the target $y$ values in dataset $D$ and $N_D$ is the number of samples in $D$.
## Decision tree regression
```python
from sklearn.tree import DecisionTreeRegressor
X = df[['LSTAT']].values
y = df['MEDV'].values
tree = DecisionTreeRegressor(max_depth=3)
tree.fit(X, y)
sort_idx = X.flatten().argsort() # sort from small to large for plotting below
lin_regplot(X[sort_idx], y[sort_idx], tree)
plt.xlabel('% lower status of the population [LSTAT]')
plt.ylabel('Price in $1000\'s [MEDV]')
# plt.savefig('./figures/tree_regression.png', dpi=300)
plt.show()
```
Notice the piecewise-constant segments of the fitted regression curve.
## Random forest regression
A random forest is a collection of decision trees
* randomization of training data and features
* generalizes better than individual trees
Can be used for
* classification (talked about this before)
* regression (next topic)
```python
X = df.iloc[:, :-1].values
y = df['MEDV'].values
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.4, random_state=1)
```
```python
from sklearn.ensemble import RandomForestRegressor
forest = RandomForestRegressor(n_estimators=1000,
criterion='mse',
random_state=1,
n_jobs=-1)
forest.fit(X_train, y_train)
y_train_pred = forest.predict(X_train)
y_test_pred = forest.predict(X_test)
print('MSE train: %.3f, test: %.3f' % (
mean_squared_error(y_train, y_train_pred),
mean_squared_error(y_test, y_test_pred)))
print('R^2 train: %.3f, test: %.3f' % (
r2_score(y_train, y_train_pred),
r2_score(y_test, y_test_pred)))
```
MSE train: 1.642, test: 11.052
R^2 train: 0.979, test: 0.878
Overfitting, but still good performance.
```python
plt.scatter(y_train_pred,
y_train_pred - y_train,
c='black',
marker='o',
s=35,
alpha=0.5,
label='Training data')
plt.scatter(y_test_pred,
y_test_pred - y_test,
c='lightgreen',
marker='s',
s=35,
alpha=0.7,
label='Test data')
plt.xlabel('Predicted values')
plt.ylabel('Residuals')
plt.legend(loc='upper left')
plt.hlines(y=0, xmin=-10, xmax=50, lw=2, color='red')
plt.xlim([-10, 50])
plt.tight_layout()
# plt.savefig('./figures/slr_residuals.png', dpi=300)
plt.show()
```
Looks better than the results shown above via other regressors.
# Summary
Regression: fit functions to explain continuous variables
Linear regression: simple and common
General function regression: more powerful but watch out for over-fitting
Analogous to classification in many aspects:
* training via optimization
* traing, validation, test data split
* regularization
* performance metrics
* scikit-learn provides a rich library
RANSAC: a general method for fitting noisy data
* works with different base regressors
We can observe a lot by watching.
* visualize the data before machine learning
# Reading
* PML Chapter 10
| 9d5951683add7a9437f8b5101f843c5a7579d71e | 619,676 | ipynb | Jupyter Notebook | code/ch10/ch10.ipynb | 1iyiwei/pyml | 9bc0fa94abd8dcb5de92689c981fbd9de2ed1940 | [
"MIT"
]
| 27 | 2016-12-29T05:58:14.000Z | 2021-11-17T10:27:32.000Z | code/ch10/ch10.ipynb | 1iyiwei/pyml | 9bc0fa94abd8dcb5de92689c981fbd9de2ed1940 | [
"MIT"
]
| null | null | null | code/ch10/ch10.ipynb | 1iyiwei/pyml | 9bc0fa94abd8dcb5de92689c981fbd9de2ed1940 | [
"MIT"
]
| 34 | 2016-09-02T04:59:40.000Z | 2020-10-05T02:11:37.000Z | 249.567459 | 208,102 | 0.908665 | true | 10,212 | Qwen/Qwen-72B | 1. YES
2. YES | 0.679179 | 0.760651 | 0.516618 | __label__eng_Latn | 0.683483 | 0.038605 |
## Logistic Regression
- Models the probability an object belongs to a class
- Values range from 0 to 1
- Can use a threshold on the probability to decide which class an object belongs to
- An S-shaped curve
$$
\begin{align}
\sigma(t) = \frac{1}{1 + e^{-t}}
\end{align}
$$
```python
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
import math
```
### What the logistic (sigmoid) function looks like
```python
from matplotlib import pyplot as plt
import math
x = [i for i in range(-10, 10)]
y = [1.0/(1+math.exp(-1*i)) for i in x]
ax = plt.figure(figsize=(8, 6))
plt.plot(x, y, color='black')
plt.axhline(0.5)
plt.show()
```
### Read the data and plot
```python
import pandas as pd
# Read the data
df_data = pd.read_csv('../data/2d_classification.csv')
# Plot the data
plt.rcParams['figure.figsize'] = [10, 7] # Size of the plots
colors = {0:'b', 1:'g', 2:'r', 3:'c', 4:'m', 5:'y', 6:'k'}
fig, ax = plt.subplots()
grouped = df_data.groupby('label')
for key, group in grouped:
group.plot(ax=ax, kind='scatter', x='x', y='y', label=key, color=colors[key])
plt.show()
```
### Another way to plot the data
```python
import pandas as pd
import numpy as np
# Read the data
df_data = pd.read_csv('../data/2d_classification.csv')
colors = {0:'b', 1:'g', 2:'r', 3:'c', 4:'m', 5:'y', 6:'k'}
plt.figure()
unq_labels = np.unique(df_data['label'])
for i in unq_labels:
df = df_data.loc[df_data['label'] == i][['x','y']]
x = df['x']
y = df['y']
plt.scatter(x, y, c=colors[i], alpha=1)
```
### Just using some friendly variable names
```python
data = df_data[['x','y']].values
label = df_data['label'].values
```
### Our first Logistic Regression model
```python
from sklearn.linear_model import LogisticRegression
from matplotlib import pyplot as plt
log_regr = LogisticRegression()
log_regr.fit(data, label)
predictions = log_regr.predict(data)
df_data['pred_label'] = predictions
```
```python
colors = {0:'b', 1:'g', 2:'r', 3:'c', 4:'m', 5:'y', 6:'k'}
plt.figure(figsize=(8,6))
unq_labels = np.unique(predictions)
for i in unq_labels:
df = df_data.loc[df_data['pred_label'] == i][['x','y']]
x = df['x']
y = df['y']
plt.scatter(x, y, c=colors[i], alpha=1)
```
#### Problem with the previous approach
- When we develop a model, training and evaluating on the complete dataset does not tell us the actual performance
- Instead, always keep held-out test data for evaluation
<font color='red'> <h1> Go to Notebook '03a - Data Split' to know about data split and return from this point </h1> </font>
## Train-Test Splits
```python
from sklearn.model_selection import train_test_split
data_train, data_test, label_train, label_test = train_test_split(data, label, test_size=0.20,
random_state=0, stratify=None)
```
```python
df_before_pred = pd.DataFrame(data_train, columns=['x','y'])
df_before_pred['label'] = label_train
# df_before_pred['label'] = df_before_pred['labels'].apply(lambda x:x+2)
df_before_pred_test = pd.DataFrame(data_test, columns=['x','y'])
df_before_pred_test['label'] = 4
df_before_pred = pd.concat([df_before_pred, df_before_pred_test], ignore_index=True)
```
__Train-test ratio of the split data__
```python
print('Complete data:', data.shape)
print('Train Data:', data_train.shape, 'Test Data:', data_test.shape)
```
Complete data: (100, 2)
Train Data: (80, 2) Test Data: (20, 2)
__How the data is split__
```python
from collections import Counter
print('Training Data split', Counter(label_train))
print('Testing Data split', Counter(label_test))
```
Training Data split Counter({0: 44, 1: 36})
Testing Data split Counter({1: 12, 0: 8})
#### Stratified split
```python
from sklearn.model_selection import train_test_split
data_train, data_test, label_train, label_test = train_test_split(data, label, test_size=0.20,
random_state=0, stratify=label)
print('Complete data:', data.shape)
print('Labels distribution', Counter(label))
print('Train Data:', data_train.shape, 'Test Data:', data_test.shape)
from collections import Counter
print('Training Data split', Counter(label_train))
print('Testing Data split', Counter(label_test))
```
Complete data: (100, 2)
Labels distribution Counter({0: 52, 1: 48})
Train Data: (80, 2) Test Data: (20, 2)
Training Data split Counter({0: 42, 1: 38})
Testing Data split Counter({1: 10, 0: 10})
### Logistic regression on the train-test stratified split
```python
from sklearn.linear_model import LogisticRegression
from matplotlib import pyplot as plt
# Run Logistic Regression
log_regr = LogisticRegression()
log_regr.fit(data_train, label_train)
predictions = log_regr.predict(data_test)
data = np.concatenate((data_train, data_test))
label = np.concatenate((label_train, predictions))
df_train_test_pred = pd.DataFrame(data, columns=['x','y'])
df_train_test_pred['label'] = label
```
```python
colors = {0:'b', 1:'g', 2:'r', 3:'c', 4:'m', 5:'y', 6:'k'}
plt.figure(figsize= (16, 6))
plt.subplot(1,2,1)
unq_labels = np.unique(df_before_pred['label'])
for i in unq_labels:
df = df_before_pred.loc[df_before_pred['label'] == i][['x','y']]
x = df['x']
y = df['y']
plt.scatter(x, y, c=colors[i], alpha=1)
plt.subplot(1,2,2)
unq_labels = np.unique(df_train_test_pred['label'])
for i in unq_labels:
df = df_train_test_pred.loc[df_train_test_pred['label'] == i][['x','y']]
x = df['x']
y = df['y']
plt.scatter(x, y, c=colors[i], alpha=1)
```
## Find the accuracy
```python
log_regr.score(data_test, label_test)
```
0.95
## Find the coefficients
```python
log_regr.coef_
```
array([[-2.87341187, -1.37330706]])
## Find the intercepts
```python
log_regr.intercept_
```
array([-0.41734966])
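The coefficients and intercept define the linear score that is passed through the sigmoid. A quick consistency check (a sketch, using the fitted `log_regr` and `data_test` from above):
```python
import numpy as np

# Linear score w^T x + b for each test point, then the sigmoid
scores = data_test @ log_regr.coef_.ravel() + log_regr.intercept_
p_manual = 1.0 / (1.0 + np.exp(-scores))

# Column 1 of predict_proba is P(label = 1)
p_sklearn = log_regr.predict_proba(data_test)[:, 1]
print(np.allclose(p_manual, p_sklearn))   # expected: True
```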
```python
```
| 7a5d186ee962466d2fc730a24745360ecb5bfe61 | 82,484 | ipynb | Jupyter Notebook | classification/notebooks/.ipynb_checkpoints/03 - Logistic Regression-checkpoint.ipynb | pshn111/Machine-Learning-Package | fbbaa44daf5f0701ea77e5b62eb57ef822e40ab2 | [
"MIT"
]
| null | null | null | classification/notebooks/.ipynb_checkpoints/03 - Logistic Regression-checkpoint.ipynb | pshn111/Machine-Learning-Package | fbbaa44daf5f0701ea77e5b62eb57ef822e40ab2 | [
"MIT"
]
| null | null | null | classification/notebooks/.ipynb_checkpoints/03 - Logistic Regression-checkpoint.ipynb | pshn111/Machine-Learning-Package | fbbaa44daf5f0701ea77e5b62eb57ef822e40ab2 | [
"MIT"
]
| null | null | null | 152.748148 | 18,756 | 0.899556 | true | 1,675 | Qwen/Qwen-72B | 1. YES
2. YES | 0.904651 | 0.882428 | 0.798289 | __label__eng_Latn | 0.527268 | 0.693025 |
```python
from scipy import signal
from scipy.signal import freqs
import numpy as np                 # numerical library
import sympy as sym                # symbolic math
import matplotlib.pyplot as plt    # import only pyplot from matplotlib
sym.init_printing()                # enable symbolic (pretty-printed) output in Jupyter
%matplotlib widget
```
```python
# Input parameters
FS=5000*10;
fp=[800, 1250];             # Passband [Hz]
fs=[200, 5000];             # Stopband [Hz]
Wp=np.dot(2*np.pi,fp);      # Passband [rad/s]
Ws=np.dot(2*np.pi,fs);      # Stopband [rad/s]
Ap=0.25;                    # Maximum passband attenuation (ripple) [dB]
As=30;                      # Minimum stopband attenuation [dB]
```
```python
N, Wn = signal.cheb1ord(Wp, Ws, Ap, As, analog=True)
b, a = signal.cheby1(N, Ap, Wn, btype='bandpass', analog=True)
w, h = signal.freqs(b, a)
```
```python
Filtro=signal.TransferFunction(b,a) # Computed transfer function
# Implementation as a high-pass/low-pass cascade
sos = signal.cheby1(N, Ap, Wn, btype='bandpass', output='sos', analog=True)
PasaBajo=signal.TransferFunction(2*sos[0,:3],sos[0,3:])
PasaAlto=signal.TransferFunction(1/2*sos[1,:3],sos[1,3:])
```
```python
plt.figure()
plt.semilogx(w, 20 * np.log10(abs(h)))
plt.title('Chebyshev I ')
plt.xlabel('Frequency [rad/s]')
plt.ylabel('Amplitude [dB]')
plt.grid(which='both', axis='both')
plt.show()
```
```python
Filtro
```
TransferFunctionContinuous(
array([16420899.35090685, 0. , 0. ]),
array([1.00000000e+00, 5.08000167e+03, 9.58572335e+07, 2.00550427e+11,
1.55854546e+15]),
dt: None
)
```python
PasaAlto
```
TransferFunctionContinuous(
array([0.5, 0. , 0. ]),
array([1.00000000e+00, 1.89557535e+03, 2.35000932e+07]),
dt: None
)
```python
PasaBajo
```
TransferFunctionContinuous(
array([32841798.70181369, 0. , 0. ]),
array([1.00000000e+00, 3.18442632e+03, 6.63208202e+07]),
dt: None
)
# Week 2 - Crossmatching Catalogues
####
```python
import numpy as np
import time
from astropy.coordinates import SkyCoord
from astropy import units as u
```
### Convert from HMS & DMS notation to Decimal degrees
```python
def hms2dec(h,m,s):
return 15*(h + m/60 + s/(60*60))
def dms2dec(h,m,s):
return h*(1 + m/(abs(h)*60) + s/(abs(h)*60*60))
# The first example from the question
print(hms2dec(23, 12, 6))
# The second example from the question
print(dms2dec(22, 57, 18))
# The third example from the question
print(dms2dec(-66, 5, 5.1))
```
348.025
22.955000000000002
-66.08475
####
### Haversine Formula for calculating Angular Distance
######
\begin{align}
d = 2 \arcsin \sqrt{ \sin^2 \frac{|\delta_1 - \delta_2|}{2} + \cos \delta_1 \cos \delta_2 \sin^2 \frac{|\alpha_1 - \alpha_2|}{2} }
\end{align}
```python
def angular_dist(r1,d1,r2,d2):
r1, r2, d1, d2 = np.radians(r1), np.radians(r2), np.radians(d1), np.radians(d2)
a = np.sin(np.abs(d1 - d2)/2)**2
b = np.cos(d1)*np.cos(d2)*np.sin(np.abs(r1 - r2)/2)**2
d = 2*np.arcsin(np.sqrt(a + b))
return np.degrees(d)
# Run your function with the first example in the question.
print(angular_dist(21.07, 0.1, 21.15, 8.2))
# Run your function with the second example in the question
print(angular_dist(10.3, -3, 24.3, -29))
```
8.100392318146504
29.208498180546595
####
### Reading data from AT20G BSS and SuperCOSMOS catalogues
```python
def import_bss(path):
data = np.loadtxt(path, usecols=range(1, 7))
out = []
for i, row in enumerate(data, 1):
out.append((i, hms2dec(row[0], row[1], row[2]), dms2dec(row[3], row[4], row[5])))
return out
def import_super(path):
data = np.loadtxt(path, delimiter=',', skiprows=1, usecols=[0, 1])
out = []
for i, row in enumerate(data, 1):
out.append((i, row[0], row[1]))
return out
# Output of the import_bss and import_super functions
bss_cat = import_bss('Data 2/bss_truncated.dat')
super_cat = import_super('Data 2/super_truncated.csv')
print('Object ID | Right Ascension° | Declination°\n')
print(bss_cat)
print(super_cat)
```
Object ID | Right Ascension° | Declination°
[(1, 1.1485416666666666, -47.60530555555555), (2, 2.6496666666666666, -30.463416666666664), (3, 2.7552916666666665, -26.209194444444442)]
[(1, 1.0583407, -52.9162402), (2, 2.6084425, -41.5005753), (3, 2.7302499, -27.706955)]
####
### Finding closest neighbour for a target source (RA°, Dec°) from a catalogue
```python
def find_closest(data, RA1, Dec1):
ind = 0
closest = angular_dist(RA1, Dec1, data[0][1], data[0][2])
for i, row in enumerate(data, 0):
test = angular_dist(RA1, Dec1, row[1], row[2])
if test < closest:
ind = i
closest = test
return (data[ind][0], closest)
cat = import_bss('Data 2/bss.dat')
print('ID | Angular Distance°\n')
# First example from the question
print(find_closest(cat, 175.3, -32.5))
# Second example in the question
print(find_closest(cat, 32.2, 40.7))
```
ID | Angular Distance°
(156, 3.7670580226469013)
(26, 57.729135775621295)
####
## Crossmatching 2 catalogues within a given distance
```python
def crossmatch(cat1, cat2, dist):
matches, no_matches = [], []
for i, row in enumerate(cat1,1):
test = find_closest(cat2, row[1], row[2])
if test[1] < dist:
matches.append((i, test[0], test[1]))
else:
no_matches.append(i)
return matches, no_matches
bss_cat = import_bss('Data 2/bss (2).dat')
super_cat = import_super('Data 2/super.csv')
# First example in the question
max_dist = 40/3600
matches, no_matches = crossmatch(bss_cat, super_cat, max_dist)
print('1st Object ID | 2nd Object ID | Angular Distance°\n')
print(matches[:3])
print('Unmatched IDs from 1st Catalogue - ', no_matches[:3])
print('No. of Unmatched objects in 1st Catalogue = ', len(no_matches), '\n')
# Second example in the question
max_dist = 5/3600
matches, no_matches = crossmatch(bss_cat, super_cat, max_dist)
print(matches[:3])
print('Unmatched IDs from 1st Catalogue - ', no_matches[:3])
print('No. of Unmatched objects in 1st Catalogue = ', len(no_matches))
```
1st Object ID | 2nd Object ID | Angular Distance°
[(1, 2, 0.00010988610939332616), (2, 4, 0.0007649845967220993), (3, 5, 0.00020863352870707666)]
Unmatched IDs from 1st Catalogue - [5, 6, 11]
No. of Unmatched objects in 1st Catalogue = 9
[(1, 2, 0.00010988610939332616), (2, 4, 0.0007649845967220993), (3, 5, 0.00020863352870707666)]
Unmatched IDs from 1st Catalogue - [5, 6, 11]
No. of Unmatched objects in 1st Catalogue = 40
####
### Microoptimising the crossmatch
```python
def angular_dist(r1,d1,r2,d2):
a = np.sin(np.abs(d1 - d2)/2)**2
b = np.cos(d1)*np.cos(d2)*np.sin(np.abs(r1 - r2)/2)**2
d = 2*np.arcsin(np.sqrt(a + b))
return d
def find_closest(data, RA1, Dec1):
ind = 0
closest = angular_dist(RA1, Dec1, data[0][0], data[0][1])
for i, row in enumerate(data, 0):
test = angular_dist(RA1, Dec1, row[0], row[1])
if test < closest:
closest = test
ind = i
return (ind, closest)
def crossmatch(cat1, cat2, dist):
start = time.perf_counter()
matches, no_matches = [], []
cat1 = np.radians(cat1)
cat2 = np.radians(cat2)
dist = np.radians(dist)
for i, row in enumerate(cat1,0):
test = find_closest(cat2, row[0], row[1])
if test[1] < dist:
matches.append((i, test[0], np.degrees(test[1])))
else:
no_matches.append(i)
seconds = time.perf_counter() - start
return matches, no_matches, seconds
# The example in the question
cat1 = np.array([[180, 30], [45, 10], [300, -45]])
cat2 = np.array([[180, 32], [55, 10], [302, -44]])
matches, no_matches, time_taken = crossmatch(cat1, cat2, 5)
print('1st Object ID | 2nd Object ID | Angular Distance°\n')
print('matches:', matches)
print('unmatched:', no_matches)
print('time taken:', time_taken, '\n')
# A function to create a random catalogue of size n
def create_cat(n):
ras = np.random.uniform(0, 360, size=(n, 1))
decs = np.random.uniform(-90, 90, size=(n, 1))
return np.hstack((ras, decs))
# Test your function on random inputs
np.random.seed(0)
cat1 = create_cat(10)
cat2 = create_cat(20)
matches, no_matches, time_taken = crossmatch(cat1, cat2, 5)
print('matches:', matches)
print('unmatched:', no_matches)
print('time taken:', time_taken)
```
1st Object ID | 2nd Object ID | Angular Distance°
matches: [(0, 0, 2.0000000000000027), (2, 2, 1.7420109046547023)]
unmatched: [1]
time taken: 0.0003623000000061438
matches: []
unmatched: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
time taken: 0.005214199999954872
####
### Vectorisation using NumPy
```python
def crossmatch_vect(cat1, cat2, max_radius):
start = time.perf_counter()
max_radius = np.radians(max_radius)
matches, no_matches = [], []
# Convert coordinates to radians
cat1 = np.radians(cat1)
cat2 = np.radians(cat2)
ra2s = cat2[:,0]
dec2s = cat2[:,1]
for id1, (ra1, dec1) in enumerate(cat1):
dists = angular_dist(ra1, dec1, ra2s, dec2s)
min_id = np.argmin(dists)
min_dist = dists[min_id]
if min_dist > max_radius:
no_matches.append(id1)
else:
matches.append((id1, min_id, np.degrees(min_dist)))
time_taken = time.perf_counter() - start
return matches, no_matches, time_taken
# The example in the question
ra1, dec1 = np.radians([180, 30])
cat2 = [[180, 32], [55, 10], [302, -44]]
cat2 = np.radians(cat2)
ra2s, dec2s = cat2[:,0], cat2[:,1]
dists = angular_dist(ra1, dec1, ra2s, dec2s)
print('Angular distance° - ', np.degrees(dists), '\n')
cat1 = np.array([[180, 30], [45, 10], [300, -45]])
cat2 = np.array([[180, 32], [55, 10], [302, -44]])
matches, no_matches, time_taken = crossmatch_vect(cat1, cat2, 5)
print('matches:', matches)
print('unmatched:', no_matches)
print('time taken:', time_taken, '\n')
# Test your function on random inputs
cat1 = create_cat(10) # Create a random catalogue of size 10
cat2 = create_cat(20)
matches, no_matches, time_taken = crossmatch_vect(cat1, cat2, 5)
print('matches:', matches)
print('unmatched:', no_matches)
print('time taken:', time_taken)
```
Angular distance° - [ 2. 113.72587199 132.64478705]
matches: [(0, 0, 2.0000000000000027), (2, 2, 1.7420109046547023)]
unmatched: [1]
time taken: 0.00032420000025012996
matches: []
unmatched: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
time taken: 0.0007706999999754771
####
### Breaking out after maximum match radius : Searching within -90° < δ° < (δ + r)°
```python
def crossmatch(cat1, cat2, max_radius):
start = time.perf_counter()
max_radius = np.radians(max_radius)
matches, no_matches = [], []
cat1 = np.radians(cat1)
cat2 = np.radians(cat2)
order = np.argsort(cat2[:,1])
cat2_ordered = cat2[order]
for id1, (ra1, dec1) in enumerate(cat1):
min_dist = np.inf
min_id2 = None
max_dec = dec1 + max_radius
for id2, (ra2, dec2) in enumerate(cat2_ordered):
if dec2 > max_dec:
break
dist = angular_dist(ra1, dec1, ra2, dec2)
if dist < min_dist:
min_id2 = order[id2]
min_dist = dist
if min_dist > max_radius:
no_matches.append(id1)
else:
matches.append((id1, min_id2, np.degrees(min_dist)))
time_taken = time.perf_counter() - start
return matches, no_matches, time_taken
# The example in the question
cat1 = np.array([[180, 30], [45, 10], [300, -45]])
cat2 = np.array([[180, 32], [55, 10], [302, -44]])
matches, no_matches, time_taken = crossmatch(cat1, cat2, 5)
print('matches:', matches)
print('unmatched:', no_matches)
print('time taken:', time_taken, '\n')
# Test your function on random inputs
np.random.seed(0)
cat1 = create_cat(10)
cat2 = create_cat(20)
matches, no_matches, time_taken = crossmatch(cat1, cat2, 5)
print('matches:', matches)
print('unmatched:', no_matches)
print('time taken:', time_taken)
```
matches: [(0, 0, 2.0000000000000027), (2, 2, 1.7420109046547023)]
unmatched: [1]
time taken: 0.015049199999793927
matches: []
unmatched: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
time taken: 0.006830000000263681
####
### Boxing Match : Searching within (δ - r)° < δ° < (δ + r)°
```python
def crossmatch_boxing(cat1, cat2, max_radius):
start = time.perf_counter()
max_radius = np.radians(max_radius)
matches, no_matches = [], []
cat1 = np.radians(cat1)
cat2 = np.radians(cat2)
order = np.argsort(cat2[:,1])
cat2_ordered = cat2[order]
for id1, (ra1, dec1) in enumerate(cat1):
min_dist = np.inf
min_id2 = None
max_dec = dec1 + max_radius
index = np.searchsorted(cat2_ordered[:,1], dec1 - max_radius, side='left')
for id2, (ra2, dec2) in enumerate(cat2_ordered[index:,:]):
if dec2 > max_dec:
break
dist = angular_dist(ra1, dec1, ra2, dec2)
if dist < min_dist:
min_id2 = order[index:][id2]
min_dist = dist
if min_dist > max_radius:
no_matches.append(id1)
else:
matches.append((id1, min_id2, np.degrees(min_dist)))
time_taken = time.perf_counter() - start
return matches, no_matches, time_taken
# The example in the question
cat1 = np.array([[180, 30], [45, 10], [300, -45]])
cat2 = np.array([[180, 32], [55, 10], [302, -44]])
matches, no_matches, time_taken = crossmatch_boxing(cat1, cat2, 5)
print('matches:', matches)
print('unmatched:', no_matches)
print('time taken:', time_taken, '\n')
# Test your function on random inputs
np.random.seed(0)
cat1 = create_cat(10)
cat2 = create_cat(20)
matches, no_matches, time_taken = crossmatch_boxing(cat1, cat2, 5)
print('matches:', matches)
print('unmatched:', no_matches)
print('time taken:', time_taken)
```
matches: [(0, 0, 2.0000000000000027), (2, 2, 1.7420109046547023)]
unmatched: [1]
time taken: 0.044778799999676266
matches: []
unmatched: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
time taken: 0.0005314999998518033
####
### Crossmatching with k-d Trees
```python
def crossmatch_kd(cat1, cat2, dist):
start = time.perf_counter()
sky_cat1 = SkyCoord(cat1*u.degree, frame='icrs')
sky_cat2 = SkyCoord(cat2*u.degree, frame='icrs')
closest_ids, closest_dists, closest_dists3d = sky_cat1.match_to_catalog_sky(sky_cat2)
matches, no_matches = [], []
for i, ele in enumerate(closest_dists.value):
if ele < dist:
matches.append((i, closest_ids[i], ele))
else:
no_matches.append(i)
seconds = time.perf_counter() - start
return matches, no_matches, seconds
# The example in the question
cat1 = np.array([[180, 30], [45, 10], [300, -45]])
cat2 = np.array([[180, 32], [55, 10], [302, -44]])
matches, no_matches, time_taken = crossmatch_kd(cat1, cat2, 5)
print('matches:', matches)
print('unmatched:', no_matches)
print('time taken:', time_taken, '\n')
# Test your function on random inputs
np.random.seed(0)
cat1 = create_cat(10)
cat2 = create_cat(20)
matches, no_matches, time_taken = crossmatch_kd(cat1, cat2, 5)
print('matches:', matches)
print('unmatched:', no_matches)
print('time taken:', time_taken)
```
matches: [(0, 0, 2.0000000000000036), (2, 2, 1.7420109046547163)]
unmatched: [1]
time taken: 2.324950500000341
matches: []
unmatched: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
time taken: 0.005834800000229734
<a href="https://colab.research.google.com/github/AnilZen/centpy/blob/master/notebooks/Euler_2d.ipynb" target="_parent"></a>
# Euler Equation with CentPy in 2d
### Import packages
```python
# Install the centpy package
!pip install centpy
```
Collecting centpy
Downloading https://files.pythonhosted.org/packages/92/89/7cbdc92609ea7790eb6444f8a189826582d675f0b7f59ba539159c43c690/centpy-0.1-py3-none-any.whl
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from centpy) (1.18.5)
Installing collected packages: centpy
Successfully installed centpy-0.1
```python
# Import numpy and centpy for the solution
import numpy as np
import centpy
```
```python
# Imports functions from matplotlib and setup for the animation
import matplotlib.pyplot as plt
from matplotlib import animation
from IPython.display import HTML
```
### Equation
We solve the Euler equations in 2D
\begin{equation}
\partial_t
\begin{bmatrix} \rho \\ \rho u_x \\ \rho u_y \\ E \end{bmatrix}
+
\partial_x
\begin{bmatrix} \rho u_x \\ \rho u_x^2 + p \\ \rho u_x u_y \\ (E+p) u_x \end{bmatrix}
+
\partial_y
\begin{bmatrix} \rho u_y \\ \rho u_y u_x \\ \rho u_y^2 +p \\ (E+p) u_y \end{bmatrix}
= 0
\end{equation}
with the equation of state
\begin{equation}
p = (\gamma-1) \left(E-\frac{1}{2} \rho (u_x^2 + u_y^2) \right), \qquad \gamma=1.4
\end{equation}
on the domain $(x,y,t)\in([0,1]\times[0,1]\times[0,0.4])$ with four-quadrant initial data for a *2D Riemann problem*; the constant states below are the ones set in `euler_data` in the code:
\begin{equation}
(\rho, u_x, u_y, p)_{t=0} =
\begin{cases}
(1.5,\; 0,\; 0,\; 1.5) & \text{if} & x>0.5,\; y>0.5 \\
(0.5323,\; 1.206,\; 0,\; 0.3) & \text{if} & x\leq 0.5,\; y>0.5 \\
(0.138,\; 1.206,\; 1.206,\; 0.029) & \text{if} & x\leq 0.5,\; y\leq 0.5 \\
(0.5323,\; 0,\; 1.206,\; 0.3) & \text{if} & x>0.5,\; y\leq 0.5
\end{cases}
\end{equation}
The boundary data are zero-gradient (outflow) conditions, with the corner ghost cells held at the initial quadrant states (see `boundary_conditions` below). The solution is computed using a 200 $\times$ 200 mesh and CFL number 0.475.
```python
pars = centpy.Pars2d(
x_init=0., x_final=1.,
y_init=0., y_final=1.,
J=200, K=200,
t_final=0.4,
dt_out=0.005,
cfl=0.475,
scheme="fd2",)
pars.gamma = 1.4
```
```python
# Euler equation
class Euler2d(centpy.Equation2d):
# Helper functions and definitions for the equation
def pressure(self, u):
return (self.gamma - 1.0) * (
u[:, :, 3] - 0.5 * (u[:, :, 1] ** 2 + u[:, :, 2] ** 2) / u[:, :, 0]
)
def euler_data(self):
gamma = self.gamma
p_one = 1.5
p_two = 0.3
p_three = 0.029
p_four = 0.3
upper_right, upper_left, lower_right, lower_left = np.ones((4, 4))
upper_right[0] = 1.5
upper_right[1] = 0.0
upper_right[2] = 0.0
upper_right[3] = (
p_one / (gamma - 1.0)
+ 0.5 * (upper_right[1] ** 2 + upper_right[2] ** 2) / upper_right[0]
)
upper_left[0] = 0.5323
upper_left[1] = 1.206 * upper_left[0]
upper_left[2] = 0.0
upper_left[3] = ( p_two / (gamma - 1.0)
+ 0.5 * (upper_left[1] ** 2 + upper_left[2] ** 2) / upper_left[0] )
lower_right[0] = 0.5323
lower_right[1] = 0.0
lower_right[2] = 1.206 * lower_right[0]
lower_right[3] = ( p_four / (gamma - 1.0)
+ 0.5 * (lower_right[1]**2 + lower_right[2]**2) / lower_right[0] )
lower_left[0] = 0.138
lower_left[1] = 1.206 * lower_left[0]
lower_left[2] = 1.206 * lower_left[0]
lower_left[3] = ( p_three / (gamma - 1.0)
+ 0.5 * (lower_left[1] ** 2 + lower_left[2] ** 2) / lower_left[0] )
return upper_right, upper_left, lower_right, lower_left
# Abstract class equation definitions
def initial_data(self):
u = np.empty((self.J + 4, self.K + 4, 4))
midJ = int(self.J / 2) + 2
midK = int(self.K / 2) + 2
one_matrix = np.ones(u[midJ:, midK:].shape)
upper_right, upper_left, lower_right, lower_left = self.euler_data()
u[midJ:, midK:] = upper_right * one_matrix
u[:midJ, midK:] = upper_left * one_matrix
u[midJ:, :midK] = lower_right * one_matrix
u[:midJ, :midK] = lower_left * one_matrix
return u
def boundary_conditions(self, u):
upper_right, upper_left, lower_right, lower_left = self.euler_data()
if self.odd:
j = slice(1, -2)
u[j, 0] = u[j, 1]
u[j, -2] = u[j, -3]
u[j, -1] = u[j, -3]
u[0, j] = u[1, j]
u[-2, j] = u[-3, j]
u[-1, j] = u[-3, j]
# one
u[-2, -2] = upper_right
u[-1, -2] = upper_right
u[-2, -1] = upper_right
u[-1, -1] = upper_right
# two
u[0, -2] = upper_left
u[0, -1] = upper_left
# three
u[0, 0] = lower_left
u[0, 1] = lower_left
u[1, 0] = lower_left
u[1, 1] = lower_left
# four
u[-2, 0] = lower_right
u[-1, 0] = lower_right
u[-2, 1] = lower_right
u[-1, 1] = lower_right
else:
j = slice(2, -1)
u[j, 0] = u[j, 2]
u[j, 1] = u[j, 2]
u[j, -1] = u[j, -2]
u[0, j] = u[2, j]
u[1, j] = u[2, j]
u[-1, j] = u[-2, j]
# one
u[-1, -2] = upper_right
u[-1, -1] = upper_right
# two
u[0, -2] = upper_left
u[0, -1] = upper_left
u[1, -2] = upper_left
u[1, -1] = upper_left
# three
u[0, 0] = lower_left
u[0, 1] = lower_left
u[1, 0] = lower_left
u[1, 1] = lower_left
# four
u[-1, 0] = lower_right
u[-1, 1] = lower_right
def flux_x(self, u):
f = np.empty_like(u)
p = self.pressure(u)
f[:, :, 0] = u[:, :, 1]
f[:, :, 1] = u[:, :, 1] ** 2 / u[:, :, 0] + p
f[:, :, 2] = u[:, :, 1] * u[:, :, 2] / u[:, :, 0]
f[:, :, 3] = (u[:, :, 3] + p) * u[:, :, 1] / u[:, :, 0]
return f
def flux_y(self, u):
g = np.empty_like(u)
p = self.pressure(u)
g[:, :, 0] = u[:, :, 2]
g[:, :, 1] = u[:, :, 1] * u[:, :, 2] / u[:, :, 0]
g[:, :, 2] = u[:, :, 2] ** 2 / u[:, :, 0] + p
g[:, :, 3] = (u[:, :, 3] + p) * u[:, :, 2] / u[:, :, 0]
return g
def spectral_radius_x(self, u):
j0 = centpy._helpers.j0
rho = u[j0, j0, 0]
vx = u[j0, j0, 1] / rho
vy = u[j0, j0, 2] / rho
p = (self.gamma - 1.0) * (u[j0, j0, 3] - 0.5 * rho * (vx ** 2 + vy ** 2))
c = np.sqrt(self.gamma * p / rho)
return np.abs(vx) + c
def spectral_radius_y(self, u):
j0 = centpy._helpers.j0
rho = u[j0, j0, 0]
vx = u[j0, j0, 1] / rho
vy = u[j0, j0, 2] / rho
p = (self.gamma - 1.0) * (u[j0, j0, 3] - 0.5 * rho * (vx ** 2 + vy ** 2))
c = np.sqrt(self.gamma * p / rho)
return np.abs(vy) + c
```
### Solution
```python
eqn = Euler2d(pars)
soln = centpy.Solver2d(eqn)
soln.solve()
```
### Animation
```python
# Animation
fig = plt.figure()
ax = plt.axes(xlim=(soln.x_init,soln.x_final), ylim=(soln.y_init, soln.y_final))
ax.contour(soln.x[1:-1], soln.y[1:-1], soln.u_n[0,1:-1,1:-1,0])
def animate(i):
ax.collections = []
ax.contour(soln.x[1:-1], soln.y[1:-1], soln.u_n[i,1:-1,1:-1,0])
plt.close()
anim = animation.FuncAnimation(fig, animate, frames=soln.Nt, interval=100, blit=False);
HTML(anim.to_html5_video())
```
# Survival Regression with `estimators.SurvivalModel`
<hr>
Author: ***Willa Potosnak*** <[email protected]>
# Contents
### 1. [Introduction](#introduction)
#### 1.1 [The SUPPORT Dataset](#support)
#### 1.2 [Preprocessing the Data](#preprocess)
### 2. [Cox Proportional Hazards (CPH)](#cph)
#### 2.1 [Fit CPH Model](#fitcph)
#### 2.2 [Evaluate CPH Model](#evalcph)
### 3. [Deep Cox Proportional Hazards (DCPH)](#fsn)
#### 3.1 [Fit DCPH Model](#fitfsn)
#### 3.2 [Evaluate DCPH Model](#evalfsn)
### 4. [Deep Survival Machines (DSM)](#dsm)
#### 4.1 [Fit DSM Model](#fitdsm)
#### 4.2 [Evaluate DSM Model](#evaldsm)
### 5. [Deep Cox Mixtures (DCM)](#dcm)
#### 5.1 [Fit DCM Model](#fitdcm)
#### 5.2 [Evaluate DCM Model](#evaldcm)
### 6. [Random Survival Forests (RSF)](#rsf)
#### 6.1 [Fit RSF Model](#fitrsf)
#### 6.2 [Evaluate RSF Model](#evalrsf)
<hr>
<a id="introduction"></a>
## 1. Introduction
The `SurvivalModel` class offers a streamlined approach to training three native `auton-survival` models and two baseline survival models for right-censored time-to-event data. The fit method requires the same inputs across all five models; however, model parameter types vary and must be defined and tuned for the specified model.
### Native `auton-survival` Models
* **Faraggi-Simon Net (FSN)/DeepSurv**
* **Deep Survival Machines (DSM)**
* **Deep Cox Mixtures (DCM)**
### External Models
* **Random survival Forests (RSF)**
* **Cox Proportional Hazards (CPH)**
$\textbf{Hyperparameter tuning}$ and $\textbf{model evaluation}$ can be performed using the following metrics, among others.
* $\textbf{Brier Score (BS)}$: the Mean Squared Error (MSE) around the probabilistic prediction at a certain time horizon. The Brier Score can be decomposed into components that measure both discriminative performance and calibration.
\begin{align}
\text{BS}(t) = \mathop{\mathbf{E}}_{x\sim\mathcal{D}}\big[ \big\| \mathbf{1}\{ T > t \} - \widehat{\mathbf{P}}(T>t|X) \big\|_{2}^{2} \big]
\end{align}
* $\textbf{Integrated Brier Score (IBS)}$: the integral of the time-dependent $\textbf{BS}$ over the interval $[t_1; t_{max}]$ where the weighting function is $w(t)= \frac{t}{t_{max}}$.
\begin{align}
\text{IBS} = \int_{t_1}^{t_{max}} \mathrm{BS}^{c}(t)dw(t)
\end{align}
* $\textbf{Area under ROC Curve (ROC-AUC)}$: survival model evaluation can be treated as binary classification to compute the **True Positive Rate (TPR)** and **False Positive Rate (FPR)** dependent on time, $t$. ROC-AUC is used to assess how well the model can distinguish samples that fail by a given time, $t$ from those that fail after this time.
\begin{align}
\widehat{AUC}(t) = \frac{\sum_{i=1}^{n} \sum_{j=1}^{n}I(y_j>t)I(y_i \leq t)w_iI(\hat{f}(x_j) \leq \hat{f}(x_i))}{(\sum_{i=1}^{n} I(y_i > t))(\sum_{i=1}^{n}I(y_i \leq t)w_i)}
\end{align}
* $\textbf{Time Dependent Concordance Index (C$^{td}$)}$: estimates ranking ability by exhaustively comparing relative risks across all pairs of individuals in the test set. We employ the ‘Time Dependent’ variant of Concordance Index that truncates the pairwise comparisons to the events occurring within a fixed time horizon.
\begin{align}
C^{td}(t) = P(\hat{F}(t|x_i) > \hat{F} (t|x_j)|\delta_i = 1, T_i < T_j, T_i \leq t)
\end{align}
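To make the Brier Score concrete, here is a minimal sketch of the uncensored version at a single horizon $t$, computed directly from its definition. It ignores the inverse-probability-of-censoring weighting that proper survival metrics (including the ones used later in this notebook) apply; the helper name, array values and horizon are made up for the example.
```python
import numpy as np

def brier_score_naive(event_times, surv_probs, t):
    """Mean squared error between the indicator 1{T > t} and the
    predicted survival probability at horizon t (censoring ignored)."""
    observed = (event_times > t).astype(float)  # 1{T > t}
    return np.mean((observed - surv_probs) ** 2)

# Hypothetical example: four individuals, predictions at t = 365 days
event_times = np.array([100., 400., 250., 800.])
surv_probs = np.array([0.30, 0.70, 0.45, 0.90])  # predicted P(T > 365 | X)
print(brier_score_naive(event_times, surv_probs, t=365))
```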
<a id="support"></a>
### 1.1. The SUPPORT Dataset
*For the original datasource, please refer to the following [website](https://biostat.app.vumc.org/wiki/Main/SupportDesc).*
Data features $x$ are stored in a pandas dataframe with rows corresponding to individual samples and columns as covariates. Data outcome consists of 'time', $t$, and 'event', $e$, that correspond to the time to event and the censoring indicator, respectively.
```python
import pandas as pd
import sys
sys.path.append('../')
from auton_survival.datasets import load_dataset
```
```python
# Load the SUPPORT dataset
outcomes, features = load_dataset(dataset='SUPPORT')
# Identify categorical (cat_feats) and continuous (num_feats) features
cat_feats = ['sex', 'dzgroup', 'dzclass', 'income', 'race', 'ca']
num_feats = ['age', 'num.co', 'meanbp', 'wblc', 'hrt', 'resp',
'temp', 'pafi', 'alb', 'bili', 'crea', 'sod', 'ph',
'glucose', 'bun', 'urine', 'adlp', 'adls']
# Let's take a look at the features
display(features.head(5))
# Let's take a look at the outcomes
display(outcomes.head(5))
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>sex</th>
<th>dzgroup</th>
<th>dzclass</th>
<th>income</th>
<th>race</th>
<th>ca</th>
<th>age</th>
<th>num.co</th>
<th>meanbp</th>
<th>wblc</th>
<th>...</th>
<th>alb</th>
<th>bili</th>
<th>crea</th>
<th>sod</th>
<th>ph</th>
<th>glucose</th>
<th>bun</th>
<th>urine</th>
<th>adlp</th>
<th>adls</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>male</td>
<td>Lung Cancer</td>
<td>Cancer</td>
<td>$11-$25k</td>
<td>other</td>
<td>metastatic</td>
<td>62.84998</td>
<td>0</td>
<td>97.0</td>
<td>6.000000</td>
<td>...</td>
<td>1.799805</td>
<td>0.199982</td>
<td>1.199951</td>
<td>141.0</td>
<td>7.459961</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>7.0</td>
<td>7.0</td>
</tr>
<tr>
<th>1</th>
<td>female</td>
<td>Cirrhosis</td>
<td>COPD/CHF/Cirrhosis</td>
<td>$11-$25k</td>
<td>white</td>
<td>no</td>
<td>60.33899</td>
<td>2</td>
<td>43.0</td>
<td>17.097656</td>
<td>...</td>
<td>NaN</td>
<td>NaN</td>
<td>5.500000</td>
<td>132.0</td>
<td>7.250000</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>1.0</td>
</tr>
<tr>
<th>2</th>
<td>female</td>
<td>Cirrhosis</td>
<td>COPD/CHF/Cirrhosis</td>
<td>under $11k</td>
<td>white</td>
<td>no</td>
<td>52.74698</td>
<td>2</td>
<td>70.0</td>
<td>8.500000</td>
<td>...</td>
<td>NaN</td>
<td>2.199707</td>
<td>2.000000</td>
<td>134.0</td>
<td>7.459961</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>1.0</td>
<td>0.0</td>
</tr>
<tr>
<th>3</th>
<td>female</td>
<td>Lung Cancer</td>
<td>Cancer</td>
<td>under $11k</td>
<td>white</td>
<td>metastatic</td>
<td>42.38498</td>
<td>2</td>
<td>75.0</td>
<td>9.099609</td>
<td>...</td>
<td>NaN</td>
<td>NaN</td>
<td>0.799927</td>
<td>139.0</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>0.0</td>
<td>0.0</td>
</tr>
<tr>
<th>4</th>
<td>female</td>
<td>ARF/MOSF w/Sepsis</td>
<td>ARF/MOSF</td>
<td>NaN</td>
<td>white</td>
<td>no</td>
<td>79.88495</td>
<td>1</td>
<td>59.0</td>
<td>13.500000</td>
<td>...</td>
<td>NaN</td>
<td>NaN</td>
<td>0.799927</td>
<td>143.0</td>
<td>7.509766</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>2.0</td>
</tr>
</tbody>
</table>
<p>5 rows × 24 columns</p>
</div>
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>event</th>
<th>time</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0</td>
<td>2029</td>
</tr>
<tr>
<th>1</th>
<td>1</td>
<td>4</td>
</tr>
<tr>
<th>2</th>
<td>1</td>
<td>47</td>
</tr>
<tr>
<th>3</th>
<td>1</td>
<td>133</td>
</tr>
<tr>
<th>4</th>
<td>0</td>
<td>2029</td>
</tr>
</tbody>
</table>
</div>
<a id="preprocess"></a>
### 1.2. Preprocess the Data
```python
import numpy as np
from sklearn.model_selection import train_test_split
# Split the SUPPORT data into training, validation, and test data
x_tr, x_te, y_tr, y_te = train_test_split(features, outcomes, test_size=0.2, random_state=1)
x_tr, x_val, y_tr, y_val = train_test_split(x_tr, y_tr, test_size=0.25, random_state=1)
print(f'Number of training data points: {len(x_tr)}')
print(f'Number of validation data points: {len(x_val)}')
print(f'Number of test data points: {len(x_te)}')
```
Number of training data points: 5463
Number of validation data points: 1821
Number of test data points: 1821
```python
from auton_survival.preprocessing import Preprocessor
# Fit the imputer and scaler to the training data and transform the training, validation and test data
preprocessor = Preprocessor(cat_feat_strat='ignore', num_feat_strat= 'mean')
transformer = preprocessor.fit(features, cat_feats=cat_feats, num_feats=num_feats,
one_hot=True, fill_value=-1)
x_tr = transformer.transform(x_tr)
x_val = transformer.transform(x_val)
x_te = transformer.transform(x_te)
```
<a id="cph"></a>
## 2. Cox Proportional Hazards (CPH)
The <b>CPH</b> [2] model assumes that individuals across the population have constant proportional hazards over time. In this model, the estimator of the survival function conditional on $X$, $S(\cdot|X) \triangleq P(T > t|X)$, is assumed to satisfy the proportional hazards assumption. Thus, the relative hazard between any two individuals is constant across time.
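Concretely, the Cox model factors the hazard into a shared baseline hazard and a covariate-dependent relative risk, so the hazard ratio between two individuals does not depend on $t$:
\begin{align}
\lambda(t|X=x) = \lambda_0(t)\, e^{\beta^\top x}, \qquad \frac{\lambda(t|x_1)}{\lambda(t|x_2)} = e^{\beta^\top (x_1 - x_2)}
\end{align}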
*For full details on CPH, please refer to the following paper*:
[2] [Cox, D. R. (1972). Regression models and life-tables. Journal of the Royal Statistical Society: Series B (Methodological).](https://www.jstor.org/stable/2985181)
<a id="fitcph"></a>
### 2.1. Fit CPH Model
```python
from auton_survival.estimators import SurvivalModel
from auton_survival.metrics import survival_regression_metric
from sklearn.model_selection import ParameterGrid
# Define parameters for tuning the model
param_grid = {'l2' : [1e-3, 1e-4]}
params = ParameterGrid(param_grid)
# Define the times for tuning the model hyperparameters and for evaluating the model
times = np.quantile(y_tr['time'][y_tr['event']==1], np.linspace(0.1, 1, 10)).tolist()
# Perform hyperparameter tuning
models = []
for param in params:
model = SurvivalModel('cph', random_seed=2, l2=param['l2'])
# The fit method is called to train the model
model.fit(x_tr, y_tr)
# Obtain survival probabilities for validation set and compute the Integrated Brier Score
predictions_val = model.predict_survival(x_val, times)
metric_val = survival_regression_metric('ibs', y_tr, y_val, predictions_val, times)
models.append([metric_val, model])
# Select the best model based on the mean metric value computed for the validation set
metric_vals = [i[0] for i in models]
first_min_idx = metric_vals.index(min(metric_vals))
model = models[first_min_idx][1]
```
<a id="evalcph"></a>
### 2.2. Evaluate CPH Model
```python
from estimators_demo_utils import plot_performance_metrics
# Obtain survival probabilities for test set
predictions_te = model.predict_survival(x_te, times)
# Compute the Brier Score and time-dependent concordance index for the test set to assess model performance
results = dict()
results['Brier Score'] = survival_regression_metric('brs', outcomes_train=y_tr, outcomes_test=y_te,
predictions=predictions_te, times=times)
results['Concordance Index'] = survival_regression_metric('ctd', outcomes_train=y_tr, outcomes_test=y_te,
predictions=predictions_te, times=times)
plot_performance_metrics(results, times)
```
<a id="fsn"></a>
## 3. Deep Cox Proportional Hazards (DCPH)
<b>DCPH</b> [2], [3] is an extension of the CPH model. DCPH models the proportional hazard ratios over individuals with deep neural networks, allowing it to learn non-linear hazard ratios.
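In other words, the linear log-hazard-ratio of CPH is replaced by a neural network; writing the network as $g_\theta$ (a symbol introduced here just for notation):
\begin{align}
\lambda(t|X=x) = \lambda_0(t)\, e^{g_\theta(x)}
\end{align}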
*For full details on DCPH models, Faraggi-Simon Net (FSN) and DeepSurv, please refer to the following papers*:
[2] [Faraggi, David, and Richard Simon. "A neural network model for survival data." Statistics in medicine 14.1 (1995): 73-82.](https://onlinelibrary.wiley.com/doi/abs/10.1002/sim.4780140108)
[3] [Katzman, Jared L., et al. "DeepSurv: personalized treatment recommender system using a Cox proportional hazards deep neural network." BMC medical research methodology 18.1 (2018): 1-12.](https://arxiv.org/abs/1606.00931v3)
<a id="fitfsn"></a>
### 3.1. Fit DCPH Model
```python
from auton_survival.estimators import SurvivalModel
from auton_survival.metrics import survival_regression_metric
from sklearn.model_selection import ParameterGrid
# Define parameters for tuning the model
param_grid = {'bs' : [100, 200],
'learning_rate' : [ 1e-4, 1e-3],
'layers' : [ [100], [100, 100] ]
}
params = ParameterGrid(param_grid)
# Define the times for tuning the model hyperparameters and for evaluating the model
times = np.quantile(y_tr['time'][y_tr['event']==1], np.linspace(0.1, 1, 10)).tolist()
# Perform hyperparameter tuning
models = []
for param in params:
model = SurvivalModel('dcph', random_seed=0, bs=param['bs'], learning_rate=param['learning_rate'], layers=param['layers'])
# The fit method is called to train the model
model.fit(x_tr, y_tr)
# Obtain survival probabilities for validation set and compute the Integrated Brier Score
predictions_val = model.predict_survival(x_val, times)
metric_val = survival_regression_metric('ibs', y_tr, y_val, predictions_val, times)
models.append([metric_val, model])
# Select the best model based on the mean metric value computed for the validation set
metric_vals = [i[0] for i in models]
first_min_idx = metric_vals.index(min(metric_vals))
model = models[first_min_idx][1]
```
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:03<00:00, 15.29it/s]
32%|███████████████████████████████████▌ | 16/50 [00:01<00:02, 12.67it/s]
64%|███████████████████████████████████████████████████████████████████████ | 32/50 [00:03<00:02, 8.56it/s]
20%|██████████████████████▏ | 10/50 [00:01<00:04, 8.21it/s]
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:02<00:00, 24.79it/s]
50%|███████████████████████████████████████████████████████▌ | 25/50 [00:01<00:01, 19.35it/s]
82%|███████████████████████████████████████████████████████████████████████████████████████████ | 41/50 [00:02<00:00, 16.10it/s]
16%|█████████████████▉ | 8/50 [00:00<00:03, 13.96it/s]
<a id="evalfsn"></a>
### 3.2. Evaluate DCPH Model
Compute the Brier Score and time-dependent concordance index for the test set. See notebook introduction for more details.
```python
from estimators_demo_utils import plot_performance_metrics
# Obtain survival probabilities for test set
predictions_te = model.predict_survival(x_te, times)
# Compute the Brier Score and time-dependent concordance index for the test set to assess model performance
results = dict()
results['Brier Score'] = survival_regression_metric('brs', outcomes_train=y_tr, outcomes_test=y_te,
predictions=predictions_te, times=times)
results['Concordance Index'] = survival_regression_metric('ctd', outcomes_train=y_tr, outcomes_test=y_te,
predictions=predictions_te, times=times)
plot_performance_metrics(results, times)
```
<a id="dsm"></a>
## 4. Deep Survival Machines (DSM)
<b>DSM</b> [5] is a fully parametric approach to modeling the event time distribution as a fixed size mixture over Weibull or Log-Normal distributions. The individual mixture distributions are parametrized with neural networks to learn complex non-linear representations of the data.
<b>Figure A:</b> DSM works by modeling the conditional distribution $P(T |X = x)$ as a mixture over $k$ well-defined, parametric distributions. DSM generates representation of the individual covariates, $x$, using a deep multilayer perceptron followed by a softmax over mixture size, $k$. This representation then interacts with the additional set of parameters, to determine the mixture weights $w$ and the parameters of each of $k$ underlying survival distributions $\{\eta_k, \beta_k\}^K_{k=1}$. The final individual survival distribution for the event time, $T$, is a weighted average over these $K$ distributions.
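In the notation of the description above, the predicted survival function is a weighted average over the $K$ mixture components:
\begin{align}
S(t|X=x) = \sum_{k=1}^{K} w_k(x)\, S_k\big(t \,;\, \eta_k, \beta_k\big)
\end{align}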
*For full details on Deep Survival Machines (DSM), please refer to the following paper*:
[5] [Chirag Nagpal, Xinyu Li, and Artur Dubrawski. Deep survival machines: Fully parametric survival regression and representation learning for censored data with competing risks. 2020.](https://arxiv.org/abs/2003.01176)
<a id="fitdsm"></a>
### 4.1. Fit DSM Model
```python
from auton_survival.estimators import SurvivalModel
from auton_survival.metrics import survival_regression_metric
from sklearn.model_selection import ParameterGrid
# Define parameters for tuning the model
param_grid = {'layers' : [[100], [100, 100], [200]],
'distribution' : ['Weibull', 'LogNormal'],
'max_features' : ['sqrt', 'log2']
}
params = ParameterGrid(param_grid)
# Define the times for tuning the model hyperparameters and for evaluating the model
times = np.quantile(y_tr['time'][y_tr['event']==1], np.linspace(0.1, 1, 10)).tolist()
# Perform hyperparameter tuning
models = []
for param in params:
model = SurvivalModel('dsm', random_seed=0, layers=param['layers'], distribution=param['distribution'], max_features=param['max_features'])
# The fit method is called to train the model
model.fit(x_tr, y_tr)
# Obtain survival probabilities for validation set and compute the Integrated Brier Score
predictions_val = model.predict_survival(x_val, times)
metric_val = survival_regression_metric('ibs', y_tr, y_val, predictions_val, times)
models.append([metric_val, model])
# Select the best model based on the mean metric value computed for the validation set
metric_vals = [i[0] for i in models]
first_min_idx = metric_vals.index(min(metric_vals))
model = models[first_min_idx][1]
```
18%|██████████████████▉ | 1799/10000 [00:04<00:20, 400.33it/s]
90%|████████████████████████████████████████████████████████████████████████████████████████████████████▊ | 9/10 [00:01<00:00, 5.31it/s]
18%|██████████████████▉ | 1799/10000 [00:04<00:22, 369.96it/s]
90%|████████████████████████████████████████████████████████████████████████████████████████████████████▊ | 9/10 [00:01<00:00, 4.96it/s]
18%|██████████████████▉ | 1799/10000 [00:05<00:23, 350.53it/s]
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:02<00:00, 4.25it/s]
18%|██████████████████▉ | 1799/10000 [00:04<00:21, 376.50it/s]
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:02<00:00, 4.96it/s]
18%|██████████████████▉ | 1799/10000 [00:04<00:22, 364.78it/s]
90%|████████████████████████████████████████████████████████████████████████████████████████████████████▊ | 9/10 [00:02<00:00, 4.01it/s]
18%|██████████████████▉ | 1799/10000 [00:05<00:24, 340.31it/s]
90%|████████████████████████████████████████████████████████████████████████████████████████████████████▊ | 9/10 [00:01<00:00, 4.63it/s]
12%|████████████▊ | 1224/10000 [00:03<00:21, 399.50it/s]
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:01<00:00, 5.22it/s]
12%|████████████▊ | 1224/10000 [00:03<00:22, 393.39it/s]
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:01<00:00, 5.22it/s]
12%|████████████▊ | 1224/10000 [00:03<00:23, 368.06it/s]
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:02<00:00, 4.61it/s]
12%|████████████▊ | 1224/10000 [00:03<00:23, 372.02it/s]
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:02<00:00, 4.72it/s]
12%|████████████▊ | 1224/10000 [00:02<00:16, 516.68it/s]
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:01<00:00, 5.32it/s]
12%|████████████▊ | 1224/10000 [00:03<00:22, 391.68it/s]
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:01<00:00, 5.24it/s]
<a id="evaldsm"></a>
### 4.2. Evaluate DSM Model
Compute the Brier Score and time-dependent concordance index for the test set. See notebook introduction for more details.
```python
from estimators_demo_utils import plot_performance_metrics
# Obtain survival probabilities for test set
predictions_te = model.predict_survival(x_te, times)
# Compute the Brier Score and time-dependent concordance index for the test set to assess model performance
results = dict()
results['Brier Score'] = survival_regression_metric('brs', outcomes_train=y_tr, outcomes_test=y_te,
predictions=predictions_te, times=times)
results['Concordance Index'] = survival_regression_metric('ctd', outcomes_train=y_tr, outcomes_test=y_te,
predictions=predictions_te, times=times)
plot_performance_metrics(results, times)
```
<a id="dcm"></a>
## 5. Deep Cox Mixtures (DCM)
<b>DCM</b> [2] generalizes the proportional hazards assumption via a mixture model, by assuming that there are latent groups and within each, the proportional hazards assumption holds. DCM allows the hazard ratio in each latent group, as well as the latent group membership, to be flexibly modeled by a deep neural network.
<b>Figure B:</b> DCM works by generating representation of the individual covariates, $x$, using an encoding neural network. The output representation, $xe$, then interacts with linear functions, $f$ and $g$, that determine the proportional hazards within each cluster $Z ∈ {1, 2, ...K}$ and the mixing weights $P(Z|X)$ respectively. For each cluster, baseline survival rates $Sk(t)$ are estimated non-parametrically. The final individual survival curve $S(t|x)$ is an average over the cluster specific individual survival curves weighted by the mixing probabilities $P(Z|X = x)$.
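Written out with the quantities described above (a Cox model within each latent cluster, averaged over the mixing probabilities):
\begin{align}
S(t|X=x) = \sum_{k=1}^{K} P(Z=k|X=x)\, S_k(t)^{\exp\left(f_k(x_e)\right)}
\end{align}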
*For full details on Deep Cox Mixtures (DCM), please refer to the following paper*:
[2] [Nagpal, C., Yadlowsky, S., Rostamzadeh, N., and Heller, K. (2021c). Deep cox mixtures for survival regression. In
Machine Learning for Healthcare Conference, pages 674–708. PMLR.](https://arxiv.org/abs/2101.06536)
<a id="fitdcm"></a>
### 5.1. Fit DCM Model
```python
from auton_survival.estimators import SurvivalModel
from auton_survival.metrics import survival_regression_metric
from sklearn.model_selection import ParameterGrid
# Define parameters for tuning the model
param_grid = {'k' : [2, 3],
'learning_rate' : [1e-3, 1e-4],
'layers' : [[100], [100, 100]]
}
params = ParameterGrid(param_grid)
# Define the times for tuning the model hyperparameters and for evaluating the model
times = np.quantile(y_tr['time'][y_tr['event']==1], np.linspace(0.1, 1, 10)).tolist()
# Perform hyperparameter tuning
models = []
for param in params:
model = SurvivalModel('dcm', random_seed=7, k=param['k'], learning_rate=param['learning_rate'], layers=param['layers'])
# The fit method is called to train the model
model.fit(x_tr, y_tr)
# Obtain survival probabilities for validation set and compute the Integrated Brier Score
predictions_val = model.predict_survival(x_val, times)
metric_val = survival_regression_metric('ibs', y_tr, y_val, predictions_val, times)
models.append([metric_val, model])
# Select the best model based on the mean metric value computed for the validation set
metric_vals = [i[0] for i in models]
first_min_idx = metric_vals.index(min(metric_vals))
model = models[first_min_idx][1]
```
0%| | 0/50 [00:00<?, ?it/s]C:\Users\Willa Potosnak\OneDrive\Documents\CMU Research\CMU_Projects\auton-survival\examples\..\auton_survival\models\dcm\dcm_utilities.py:105: RuntimeWarning: invalid value encountered in log
probs = gates+np.log(event_probs)
14%|███████████████▋ | 7/50 [00:02<00:14, 3.04it/s]C:\Users\Willa Potosnak\OneDrive\Documents\CMU Research\CMU_Projects\auton-survival\examples\..\auton_survival\models\dcm\dcm_utilities.py:105: RuntimeWarning: divide by zero encountered in log
probs = gates+np.log(event_probs)
32%|███████████████████████████████████▌ | 16/50 [00:05<00:12, 2.75it/s]C:\Users\Willa Potosnak\OneDrive\Documents\CMU Research\CMU_Projects\auton-survival\examples\..\auton_survival\models\dcm\dcm_utilities.py:58: RuntimeWarning: invalid value encountered in power
return spl(ts)**risks
C:\Users\Willa Potosnak\OneDrive\Documents\CMU Research\CMU_Projects\auton-survival\examples\..\auton_survival\models\dcm\dcm_utilities.py:53: RuntimeWarning: invalid value encountered in power
s0ts = (-risks)*(spl(ts)**(risks-1))
34%|█████████████████████████████████████▋ | 17/50 [00:06<00:12, 2.64it/s]
0%| | 0/50 [00:00<?, ?it/s]C:\Users\Willa Potosnak\OneDrive\Documents\CMU Research\CMU_Projects\auton-survival\examples\..\auton_survival\models\dcm\dcm_utilities.py:105: RuntimeWarning: invalid value encountered in log
probs = gates+np.log(event_probs)
14%|███████████████▋ | 7/50 [00:02<00:14, 3.02it/s]C:\Users\Willa Potosnak\OneDrive\Documents\CMU Research\CMU_Projects\auton-survival\examples\..\auton_survival\models\dcm\dcm_utilities.py:105: RuntimeWarning: divide by zero encountered in log
probs = gates+np.log(event_probs)
72%|███████████████████████████████████████████████████████████████████████████████▉ | 36/50 [00:12<00:04, 2.90it/s]
0%| | 0/50 [00:00<?, ?it/s]C:\Users\Willa Potosnak\OneDrive\Documents\CMU Research\CMU_Projects\auton-survival\examples\..\auton_survival\models\dcm\dcm_utilities.py:105: RuntimeWarning: invalid value encountered in log
probs = gates+np.log(event_probs)
6%|██████▋ | 3/50 [00:01<00:19, 2.41it/s]C:\Users\Willa Potosnak\OneDrive\Documents\CMU Research\CMU_Projects\auton-survival\examples\..\auton_survival\models\dcm\dcm_utilities.py:105: RuntimeWarning: divide by zero encountered in log
probs = gates+np.log(event_probs)
54%|███████████████████████████████████████████████████████████▉ | 27/50 [00:12<00:10, 2.18it/s]
0%| | 0/50 [00:00<?, ?it/s]C:\Users\Willa Potosnak\OneDrive\Documents\CMU Research\CMU_Projects\auton-survival\examples\..\auton_survival\models\dcm\dcm_utilities.py:105: RuntimeWarning: invalid value encountered in log
probs = gates+np.log(event_probs)
30%|█████████████████████████████████▎ | 15/50 [00:06<00:15, 2.22it/s]C:\Users\Willa Potosnak\OneDrive\Documents\CMU Research\CMU_Projects\auton-survival\examples\..\auton_survival\models\dcm\dcm_utilities.py:105: RuntimeWarning: divide by zero encountered in log
probs = gates+np.log(event_probs)
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:22<00:00, 2.27it/s]
0%| | 0/50 [00:00<?, ?it/s]C:\Users\Willa Potosnak\OneDrive\Documents\CMU Research\CMU_Projects\auton-survival\examples\..\auton_survival\models\dcm\dcm_utilities.py:105: RuntimeWarning: invalid value encountered in log
probs = gates+np.log(event_probs)
6%|██████▋ | 3/50 [00:01<00:21, 2.20it/s]C:\Users\Willa Potosnak\OneDrive\Documents\CMU Research\CMU_Projects\auton-survival\examples\..\auton_survival\models\dcm\dcm_utilities.py:105: RuntimeWarning: divide by zero encountered in log
probs = gates+np.log(event_probs)
52%|█████████████████████████████████████████████████████████▋ | 26/50 [00:11<00:10, 2.26it/s]C:\Users\Willa Potosnak\OneDrive\Documents\CMU Research\CMU_Projects\auton-survival\examples\..\auton_survival\models\dcm\dcm_utilities.py:58: RuntimeWarning: invalid value encountered in power
return spl(ts)**risks
C:\Users\Willa Potosnak\OneDrive\Documents\CMU Research\CMU_Projects\auton-survival\examples\..\auton_survival\models\dcm\dcm_utilities.py:53: RuntimeWarning: invalid value encountered in power
s0ts = (-risks)*(spl(ts)**(risks-1))
70%|█████████████████████████████████████████████████████████████████████████████▋ | 35/50 [00:15<00:06, 2.21it/s]
0%| | 0/50 [00:00<?, ?it/s]C:\Users\Willa Potosnak\OneDrive\Documents\CMU Research\CMU_Projects\auton-survival\examples\..\auton_survival\models\dcm\dcm_utilities.py:105: RuntimeWarning: invalid value encountered in log
probs = gates+np.log(event_probs)
4%|████▍ | 2/50 [00:00<00:15, 3.03it/s]C:\Users\Willa Potosnak\OneDrive\Documents\CMU Research\CMU_Projects\auton-survival\examples\..\auton_survival\models\dcm\dcm_utilities.py:105: RuntimeWarning: divide by zero encountered in log
probs = gates+np.log(event_probs)
72%|███████████████████████████████████████████████████████████████████████████████▉ | 36/50 [00:15<00:07, 1.84it/s]C:\Users\Willa Potosnak\OneDrive\Documents\CMU Research\CMU_Projects\auton-survival\examples\..\auton_survival\models\dcm\dcm_utilities.py:58: RuntimeWarning: invalid value encountered in power
return spl(ts)**risks
C:\Users\Willa Potosnak\OneDrive\Documents\CMU Research\CMU_Projects\auton-survival\examples\..\auton_survival\models\dcm\dcm_utilities.py:53: RuntimeWarning: invalid value encountered in power
s0ts = (-risks)*(spl(ts)**(risks-1))
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:23<00:00, 2.17it/s]
0%| | 0/50 [00:00<?, ?it/s]C:\Users\Willa Potosnak\OneDrive\Documents\CMU Research\CMU_Projects\auton-survival\examples\..\auton_survival\models\dcm\dcm_utilities.py:105: RuntimeWarning: invalid value encountered in log
probs = gates+np.log(event_probs)
2%|██▏ | 1/50 [00:00<00:27, 1.78it/s]C:\Users\Willa Potosnak\OneDrive\Documents\CMU Research\CMU_Projects\auton-survival\examples\..\auton_survival\models\dcm\dcm_utilities.py:105: RuntimeWarning: divide by zero encountered in log
probs = gates+np.log(event_probs)
78%|██████████████████████████████████████████████████████████████████████████████████████▌ | 39/50 [00:20<00:05, 1.91it/s]
0%| | 0/50 [00:00<?, ?it/s]C:\Users\Willa Potosnak\OneDrive\Documents\CMU Research\CMU_Projects\auton-survival\examples\..\auton_survival\models\dcm\dcm_utilities.py:105: RuntimeWarning: invalid value encountered in log
probs = gates+np.log(event_probs)
20%|██████████████████████▏ | 10/50 [00:04<00:21, 1.83it/s]C:\Users\Willa Potosnak\OneDrive\Documents\CMU Research\CMU_Projects\auton-survival\examples\..\auton_survival\models\dcm\dcm_utilities.py:105: RuntimeWarning: divide by zero encountered in log
probs = gates+np.log(event_probs)
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:22<00:00, 2.27it/s]
<a id="evaldcm"></a>
### 5.2. Evaluate DCM Model
Compute the Brier Score and time-dependent concordance index for the test set. See notebook introduction for more details.
```python
from estimators_demo_utils import plot_performance_metrics
# Obtain survival probabilities for test set
predictions_te = model.predict_survival(x_te, times)
# Compute the Brier Score and time-dependent concordance index for the test set to assess model performance
results = dict()
results['Brier Score'] = survival_regression_metric('brs', outcomes_train=y_tr, outcomes_test=y_te,
predictions=predictions_te, times=times)
results['Concordance Index'] = survival_regression_metric('ctd', outcomes_train=y_tr, outcomes_test=y_te,
predictions=predictions_te, times=times)
plot_performance_metrics(results, times)
```
<a id="rsf"></a>
## 6. Random Survival Forests (RSF)
<b>RSF</b> [4] is an extension of Random Forests to the survival setting, where risk scores are computed by creating Nelson-Aalen estimators in the splits induced by the Random Forest.
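For reference, the Nelson-Aalen estimator of the cumulative hazard built from the samples falling into a split is
\begin{align}
\widehat{H}(t) = \sum_{t_i \leq t} \frac{d_i}{n_i}
\end{align}
where $d_i$ is the number of events at time $t_i$ and $n_i$ is the number of individuals still at risk just before $t_i$.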
We observe that the performance of the Random Survival Forest model, especially in terms of calibration, is strongly influenced by the choice of hyperparameters for the number of features considered at each split and the minimum number of data samples required to continue growing a tree. We thus advise carefully tuning these hyperparameters while benchmarking RSF.
*For full details on Random Survival Forests (RSF), please refer to the following paper*:
[4] [Hemant Ishwaran et al. Random survival forests. The annals of applied statistics, 2(3):841–860, 2008.](https://arxiv.org/abs/0811.1645)
<a id="fitrsf"></a>
### 6.1. Fit RSF Model
```python
from auton_survival.estimators import SurvivalModel
from auton_survival.metrics import survival_regression_metric
from sklearn.model_selection import ParameterGrid
# Define parameters for tuning the model
param_grid = {'n_estimators' : [100, 300],
'max_depth' : [3, 5],
'max_features' : ['sqrt', 'log2']
}
params = ParameterGrid(param_grid)
# Define the times for tuning the model hyperparameters and for evaluating the model
times = np.quantile(y_tr['time'][y_tr['event']==1], np.linspace(0.1, 1, 10)).tolist()
# Perform hyperparameter tuning
models = []
for param in params:
model = SurvivalModel('rsf', random_seed=8, n_estimators=param['n_estimators'], max_depth=param['max_depth'], max_features=param['max_features'])
# The fit method is called to train the model
model.fit(x_tr, y_tr)
# Obtain survival probabilities for validation set and compute the Integrated Brier Score
predictions_val = model.predict_survival(x_val, times)
metric_val = survival_regression_metric('ibs', y_tr, y_val, predictions_val, times)
models.append([metric_val, model])
# Select the best model based on the mean metric value computed for the validation set
metric_vals = [i[0] for i in models]
first_min_idx = metric_vals.index(min(metric_vals))
model = models[first_min_idx][1]
```
<a id="evalrsf"></a>
### 6.2. Evaluate RSF Model
Compute the Brier Score and time-dependent concordance index for the test set. See notebook introduction for more details.
```python
from estimators_demo_utils import plot_performance_metrics
# Obtain survival probabilities for test set
predictions_te = model.predict_survival(x_te, times)
# Compute the Brier Score and time-dependent concordance index for the test set to assess model performance
results = dict()
results['Brier Score'] = survival_regression_metric('brs', outcomes_train=y_tr, outcomes_test=y_te,
predictions=predictions_te, times=times)
results['Concordance Index'] = survival_regression_metric('ctd', outcomes_train=y_tr, outcomes_test=y_te,
predictions=predictions_te, times=times)
plot_performance_metrics(results, times)
```
# The basic Solow model and extension with human capital
Imports for use in project:
```python
import numpy as np
from scipy import optimize
import sympy as sm
from types import SimpleNamespace
import matplotlib.pyplot as plt
import ipywidgets as widgets
import warnings
# Autoreload modules when code is run
%load_ext autoreload
%autoreload 2
# Import our python file
import modelproject as mp
```
# Basic Solow model
The basic Solow model describes an economy where output is produced with capital and labor, a constant fraction of income is saved and invested, and the labor force grows at a constant rate:

$$ Y_t = BK_t^\alpha L_t^{1-\alpha}, \quad 0<\alpha<1 $$
$$ K_{t+1} = sY_t + (1-\delta)K_t $$
$$ L_{t+1} = (1+n)L_t $$

Here $B$ is total factor productivity, $s$ is the savings rate, $\delta$ is the depreciation rate and $n$ is the population growth rate. Defining capital per worker $k_t = K_t/L_t$ and output per worker $y_t = Y_t/L_t = Bk_t^\alpha$, the model can be summarized by the transition equation for $k_t$ given in the numerical section below. The purpose of the model is to explain long-run capital accumulation and the level of output per worker.
## Analytical solution
The model allows for an analytical solution for the steady state. Imposing $k_{t+1}=k_t=k^*$ in the Solow equation and solving for $k^*$ gives a closed form as a function of the model parameters. Below, the steady state is derived symbolically with `sympy.solve` and then turned into a python function with `sympy.lambdify`.
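A minimal Sympy sketch of the derivation follows. This block is illustrative and not part of `modelproject.py`; the final numerical check plugs in the Denmark parameter values introduced in the numerical section below.
```python
# Symbols for capital per worker and the model parameters
k, s, B, n, delta, alpha = sm.symbols('k s B n delta alpha')

# Steady state condition from the Solow equation: s*B*k**alpha = (n + delta)*k
ss_condition = sm.Eq(s*B*k**alpha, (n + delta)*k)
k_star = sm.solve(ss_condition, k)[0]   # closed form, equivalent to (s*B/(n+delta))**(1/(1-alpha))

# Turn the symbolic solution into a numerical function and evaluate it
# at the parameter values for Denmark used in the numerical section below
ss_func = sm.lambdify((s, B, n, delta, alpha), k_star)
print(f'Analytical steady state: k* = {ss_func(0.333, 1, 0.008, 0.01, 1/3):.3f}')
```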
## Numerical solution
To find a numerical solution for the steady state of capital we use the Solow-equation:
$$ k_{t+1} - k_t = \frac{1}{1+n} \left[sBk_t^\alpha - (n+\delta) k_t \right] $$
And the fact that in steady state we have: $$ k_{t+1}=k_t=k^* $$
This gives us that in steady state it must follow that:
$$ 0 = \frac{1}{1+n} \left[sBk^{*\alpha} - (n+\delta) k^* \right]$$
$$\Leftrightarrow 0 = sBk^{*\alpha} - (n+\delta) k^* \tag{1} $$
As plausible values for our parameters we look at table A pages 349-351 in the book: **Introducing Advanced Macroeconomics - Growth and business cycles** by *Peter Birch Sørensen and Hans Jørgen Whitta-Jacobsen*, second edition. We choose values for Denmark.
```python
# Plausible parameter values as found in table A
par = SimpleNamespace()
par.alpha = 1/3
par.s = 0.333
par.B = 1
par.n = 0.008
par.delta = 0.01
```
We construct an algorithm to find the steady state of capital per worker and use it with the plausible values.
```python
# Algorithm to find the root of a function on the interval [a,b] with the bisect method
def bisection(f,a,b,tol=1e-8):
"""
Args:
f (function): Function you wish to find root of
a (float) : Lower bound
b (float) : Upper bound
tol (float) : Tolerance on solution
Returns:
The x value that solves equation f(x) = 0 for a <= x <= b.
"""
# Test inputs
if f(a)*f(b) >= 0:
print("Bisection method fails.") # If f(a) and f(b) both have same sign it will not have a root on given interval
return None
# Step 1: initialize
a_n = a
b_n = b
# Step 2-4:
while True:
# Step 2: midpoint and associated value
m_n = (a_n+b_n)/2 # The midpoint of interval is found
f_m_n = f(m_n) # The function value of the midpoint (m) is found
# Step 3: Check if solution is found and if not determine sub-interval for evaluation in next iteration
if abs(f_m_n) < tol: # If the function value of the midpoint is close enough to 0 we have found our solution
return m_n
elif f(a_n)*f_m_n < 0: # If f(a) and f(m) have different signs then m is the new upper bound
a_n = a_n
b_n = m_n
elif f(b_n)*f_m_n < 0: # If f(b) and f(m) have different signs then m is the new lower bound
a_n = m_n
b_n = b_n
else:
print("Bisection method fails.")
return None
# Step 4: results
return (a_n + b_n)/2
# Use the algorithm to find the steady state of capital (k*)
f = lambda k: par.s*par.B*k**par.alpha - (par.n + par.delta)*k # Equation (1) is the one we wish to solve
print(f'The steady state of capital found with algorithm is : k* = {bisection(f,0.1,100,1e-8):.3f}')
```
The steady state of capital found with algorithm is : k* = 79.572
We can also solve the problem by using a built-in optimizer from the scipy package that uses the bisection method.
```python
print(f'The steady state of capital found by using built in optimizer is :k* = {mp.ss_bas(par.n, par.s, par.B, par.alpha, par.delta):.3f}')
```
The steady state of capital found by using built in optimizer is :k* = 79.572
Unsurprisingly the results are the same.
From equation (1) we can see that, in order for the model to converge to a steady state of capital, the stability condition $$ n + \delta > 0 $$ must be fulfilled. If this condition is met, the model will converge for any starting values.
## Further analysis
We plot the basic solow model with interactive widgets to examine how changes in parameters affect the model. We also find the steady states of: capital per worker (k*), output per worker (y*) and consumption per worker (c*).
```python
widgets.interact(mp.ss_bas_plot,
alpha = widgets.FloatSlider(description = r'$\alpha$', min = 0.01, max = 0.6, step = 0.005, value = 0.3, readout_format='.3f'),
delta = widgets.FloatSlider(description = r'$\delta$', min = 0, max = 0.1, step = 0.005, value = 0.01, readout_format='.3f'),
s = widgets.FloatSlider(description = '$s$', min = 0.01, max = 0.7, step = 0.005, value = 0.4, readout_format='.3f'),
n = widgets.FloatSlider(description ='$n$', min = -0.05, max = 0.05, step = 0.005, value = 0.01, readout_format='.3f'),
B = widgets.fixed(1), #B is assumed to be the same across countries
k_max = widgets.IntSlider(description='k_max', min = 1, max = 1000, step = 10, value = 100)) # Changes the size of x-axis to zoom
```
interactive(children=(FloatSlider(value=0.01, description='$n$', max=0.05, min=-0.05, readout_format='.3f', st…
<function modelproject.ss_bas_plot(n, s, B, alpha, delta, k_max)>
From figure 1 and the interactive widgets we can see the ceteris paribus effects of changes in the different parameters. We find that: n is positively correlated with k*, y* and c*, s is positively correlated with k* and y* but negatively correlated with c*, $\alpha$ is positively correlated with k*, y* and c* while $\delta$ is negatively correlated with k*, y* and c*.
In order to ensure convergence we must have that $ n + \delta > 0 $
# Solow model with human capital
**Write out the model in equations here.**
Make sure you explain well the purpose of the model and comment so that other students who may not have seen it before can follow.
```python
# Plausible parameter values as found in table A
par.s_H = 0.1
par.s_K = 0.2
par.g = 0.02
par.n = 0.01
par.alpha = 1/3
par.phi = 1/3
par.delta = 0.01
```
## Analytical solution
If your model allows for an analytical solution, you should provide it here.
You may use Sympy for this. Then you can characterize the solution as a function of a parameter of the model.
To characterize the solution, first derive a steady state equation as a function of a parameter using Sympy.solve and then turn it into a python function by Sympy.lambdify. See the lecture notes for details.
## Numerical solution
The equations for the steady states of physical capital per capita and human capital per capita can be written as
\\[ 0 = \left(\dfrac{s_k^{1-\varphi}s_h^{\varphi}}{n+g+\delta+ng}\right)^{\frac{1}{1-\alpha-\varphi}} - \tilde{k}^{\ast} \\]
\\[ 0 = \left(\dfrac{s_k^{\alpha}s_h^{1-\alpha}}{n+g+\delta+ng}\right)^{\frac{1}{1-\alpha-\varphi}} - \tilde{h}^{\ast} \\]
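As a quick sanity check (reusing the `par` values defined above), these closed forms can also be evaluated directly; the root-finding approach below should reproduce essentially the same numbers:
```python
# Direct evaluation of the closed-form steady states (sanity check only)
denom = par.n + par.g + par.delta + par.n * par.g
k_tilde = (par.s_K**(1 - par.phi) * par.s_H**par.phi / denom)**(1 / (1 - par.alpha - par.phi))
h_tilde = (par.s_K**par.alpha * par.s_H**(1 - par.alpha) / denom)**(1 / (1 - par.alpha - par.phi))
print(f'closed form: k* = {k_tilde:.3f}, h* = {h_tilde:.3f}')  # about 61.6 and 30.8
```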
```python
warnings.filterwarnings("ignore", category=RuntimeWarning) # Some values of parameters give a runtimewarning, this is ignored
# We define a function containing our h- and k-functions, and a vector x = [h,k]
obj = lambda x: [mp.h_func(x[1],par.s_H,par.s_K,par.g,par.n,par.alpha,par.phi,par.delta,x[0]),mp.k_func(x[0],par.s_H,par.s_K,par.g,par.n,par.alpha,par.phi,par.delta,x[1])]
# We solve the vector functions, have nonlinear system of equation and therefore use broyden1 method
sol = optimize.root(obj,[1,1],method = 'broyden1')
# The numerical solution is found
num_sol1 = sol.x
print(f'Numerical solution is: k* = {num_sol1[1]:.3f}, h* = {num_sol1[0]:.3f}')
```
Numerical solution is: k* = 61.572, h* = 30.786
```python
par.incr = 0.01 # Define a variable called incr to examine the effects of increase in human capital
# Solve the functions, first with original value of human capital and then with increased human capital
k_vec, h_vec_DeltaK0, h_vec_DeltaH0 = mp.solve_ss(par.s_H,par.s_K,par.g,par.n,par.alpha,par.phi,par.delta)
k1_vec, h1_vec_DeltaK0, h1_vec_DeltaH0 = mp.solve_ss(par.s_H+par.incr,par.s_K,par.g,par.n,par.alpha,par.phi,par.delta)
# Find steady state values for new s_H
# # We define a function containing our h- and k-functions, and a vector x = [h,k] for increased human capital
obj1 = lambda x: [mp.h_func(x[1],par.s_H+par.incr,par.s_K,par.g,par.n,par.alpha,par.phi,par.delta,x[0]),mp.k_func(x[0],par.s_H+par.incr,par.s_K,par.g,par.n,par.alpha,par.phi,par.delta,x[1])]
# We solve the vector functions, have nonlinear system of equation and therefore use broyden1 method
sol1 = optimize.root(obj1,[1,1],method = 'broyden1')
# The numerical solution is found for increased human capital
num_sol2 = sol1.x
# Create the plot
fig2 = plt.figure(figsize=(7,7))
ax = fig2.add_subplot(1,1,1)
ax.plot(k_vec,h_vec_DeltaK0, label=r'$\Delta \tilde{k}=0$', c='lime')
ax.plot(k_vec,h_vec_DeltaH0, label=r'$\Delta \tilde{h}=0$', c='aqua')
ax.plot(k1_vec,h1_vec_DeltaH0, label=r'$\Delta \tilde{h_1}=0$', c='teal')
ax.set_xlabel(r'$\tilde{k}$')
ax.set_ylabel(r'$\tilde{h}$')
ax.legend()
# We mark the steady state
plt.scatter(sol.x[1],sol.x[0],color='black',s=80,zorder=3)
plt.scatter(sol1.x[1],sol1.x[0],color='black',s=80,zorder=3)
# Lines are drawn to mark ss-value on the axes
plt.axvline(sol.x[1],ymax=sol.x[0]/60,color='gray',linestyle='--')
plt.axhline(sol.x[0],xmax=sol.x[1]/80,color='gray',linestyle='--')
plt.axvline(sol1.x[1],ymax=sol1.x[0]/60,color='black',linestyle='--')
plt.axhline(sol1.x[0],xmax=sol1.x[1]/80,color='black',linestyle='--')
# Text is added to the plot
ax.text(0.15,sol.x[0]+1 , r'$\tilde{h}^*$', fontsize=12)
ax.text(sol.x[1]+1,0.15 , r'$\tilde{k}^*$', fontsize=12)
ax.text(0.15,sol1.x[0]+1 , r'$\tilde{h_1}^*$', fontsize=12)
ax.text(sol1.x[1]+1,0.15 , r'$\tilde{k_1}^*$', fontsize=12)
#The axis limits are chosen
ax.set(xlim=(0, 80), ylim=(0, 60))
ax.set_title('Figure 2. Phase diagram');
print(f'Numerical solution for original s_H is: k* = {num_sol1[1]:.3f}, h* = {num_sol1[0]:.3f}')
print(f'Numerical solution for increased s_H is: k* = {num_sol2[1]:.3f}, h* = {num_sol2[0]:.3f}')
```
Figure 2 shows the phase diagram and the steady state is found where the nullclines intersect. We can see that an increase in $s_H$ increases the steady states of both $\tilde{h}$ and $\tilde{k}$.
# Conclusion
Add concise conclusion.
| ef01ea8660a312a14cff06af889f70554d3bbc54 | 53,272 | ipynb | Jupyter Notebook | modelproject/Solow model.ipynb | NumEconCopenhagen/projects-2022-msm | 619192f54879f2494b3eab121f183a0869a1064d | [
"MIT"
]
| null | null | null | modelproject/Solow model.ipynb | NumEconCopenhagen/projects-2022-msm | 619192f54879f2494b3eab121f183a0869a1064d | [
"MIT"
]
| 2 | 2022-03-28T15:23:11.000Z | 2022-03-31T10:45:29.000Z | modelproject/Solow model.ipynb | NumEconCopenhagen/projects-2022-msm | 619192f54879f2494b3eab121f183a0869a1064d | [
"MIT"
]
| null | null | null | 104.454902 | 36,240 | 0.840273 | true | 3,287 | Qwen/Qwen-72B | 1. YES
2. YES | 0.867036 | 0.853913 | 0.740373 | __label__eng_Latn | 0.984809 | 0.558466 |
```python
from collections import defaultdict
import numpy as np
import scipy
import scipy.sparse as sps
import math
import matplotlib.pyplot as plt
import time
from sklearn.datasets import load_svmlight_file
import random
%matplotlib inline
```
# Support Vector Machines
## Classification Using SVM
Load dataset. We will use w1a dataset from LibSVM datasets https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/
The original optimization problem for the Support Vector Machine (SVM) is given by
\begin{equation}\label{eq:primal}
\min_{w \in R^d} \ \sum_{i=1}^n \ell(y_i A_i^\top w) + \frac\lambda2 \|w\|^2
\end{equation}
where $\ell : R\rightarrow R$, $\ell(z) := \max\{0,1-z\}$ is the hinge loss function.
Here for any $i$, $1\le i\le n$, the vector $A_i\in R^d$ is the $i$-th data example, and $y_i\in\{\pm1\}$ is the corresponding label.
The dual optimization problem for the SVM is given by
\begin{equation}\label{eq:dual}
\max_{\boldsymbol{\alpha} \in R^n } \ \alpha^\top\boldsymbol{1} - \tfrac1{2\lambda} \alpha^\top Y A A^\top Y\alpha
\text{ such that $0\le \alpha_i \le 1 \ \forall i$}
\end{equation}
where $Y := \mathop{diag}(y)$, and $A\in R^{n \times d}$ again collects all $n$ data examples as its rows.
Note that $w$ can be derived from $\alpha$ as
\begin{equation}
w(\alpha) = \frac{1}{\lambda} A^\top Y \alpha.
\end{equation}
```python
DATA_TRAIN_PATH = 'data/w1a'
A, y = load_svmlight_file(DATA_TRAIN_PATH)
A = A.toarray()
print(y.shape, A.shape)
```
(2477,) (2477, 300)
## Prepare cost and prediction functions
```python
def calculate_primal_objective(y, A, w, lambda_):
"""
Compute the full cost (the primal objective), that is loss plus regularizer.
y: +1 or -1 labels, shape = (num_examples)
A: Dataset matrix, shape = (num_examples, num_features)
w: Model weights, shape = (num_features)
return: scalar value
"""
# ***************************************************
s = 0
for i in range(len(y)):
loss = max(0, 1 - y[i] * (A[i].T @ w))
s += loss
w2 = np.sum(w ** 2)
obj = s + lambda_ / 2 * w2
# ***************************************************
return obj
```
```python
def calculate_accuracy(y, A, w):
"""
Compute the training accuracy on the training set (can be called for test set as well).
y: +1 or -1 labels, shape = (num_examples)
A: Dataset matrix, shape = (num_examples, num_features)
w: Model weights, shape = (num_features)
return: scalar value
"""
# ***************************************************
# compute predictions
pred = A @ w
# correct predictions to be -1 or 1
for i in range(len(pred)):
if pred[i] > 0:
pred[i] = 1
else:
pred[i] = -1
# compute accuracy
acc = np.mean(pred == y)
# ***************************************************
return acc
```
## Coordinate Descent (Ascent) for SVM
Compute the closed-form update for the i-th variable alpha, in the dual optimization problem, given alpha and the current corresponding w.
Hints:
- Differentiate the dual objective with respect to one `alpha[i]`.
- Set the derivative to zero to compute a new `alpha[i]`.
- Make sure the values of alpha stay inside a `[0, 1]` box.
- You can formulate the update as `alpha[i] = projection(alpha[i] + lambda_ * (some update))`.
- You can test the correctness of your implementation by checking if the difference between the dual objective and primal objective goes to zero. This difference, the duality gap, should get smaller than 10 in 700000 iterations.
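Following these hints, one way to get the closed form (using $w(\alpha) = \frac1\lambda A^\top Y\alpha$ from above, so that $A^\top Y\alpha = \lambda w$) is
$$\begin{align}
\frac{\partial}{\partial \alpha_i}\Big[\alpha^\top\boldsymbol{1} - \tfrac1{2\lambda} \alpha^\top Y A A^\top Y\alpha\Big] = 1 - y_i A_i^\top w, \qquad
\frac{\partial^2}{\partial \alpha_i^2}\Big[\,\cdot\,\Big] = -\tfrac1{\lambda}\|A_i\|^2.
\end{align}$$
Since the dual is quadratic in $\alpha_i$, setting this derivative to zero and projecting back onto $[0,1]$ gives
$$\begin{align}
\alpha_i \leftarrow \min\Big\{1,\max\Big\{0,\; \alpha_i + \frac{\lambda\,(1 - y_i A_i^\top w)}{\|A_i\|^2}\Big\}\Big\}, \qquad
w \leftarrow w + \tfrac1\lambda\,(\alpha_i^{\text{new}} - \alpha_i^{\text{old}})\, y_i A_i,
\end{align}$$
which is the update implemented below.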
```python
def calculate_coordinate_update(y, A, lambda_, alpha, w, i):
"""
Compute a coordinate update (closed form) for coordinate i.
y: +1 or -1 labels, shape = (num_examples)
A: Dataset matrix, shape = (num_examples, num_features)
lambda_: Regularization parameter, scalar
alpha: Dual variables, shape = (num_examples)
    w: Model weights, shape = (num_features)
i: Index of the entry of the dual variable 'alpha' that is to be updated
return: New weights w (shape (num_features)), New dual variables alpha (shape (num_examples))
"""
    # calculate the update of coordinate at index=i.
a_i, y_i = A[i], y[i]
old_alpha_i = np.copy(alpha[i])
# ***************************************************
C = old_alpha_i + lambda_ * ((1 - y_i * a_i.T @ w)/(a_i.T @ a_i))
if C < 0: C = 0
if C > 1: C = 1
alpha[i] = C
# update w
w += (alpha[i] - old_alpha_i) * y_i * a_i * (1/lambda_)
# ***************************************************
return w, alpha
```
```python
def calculate_dual_objective(y, A, w, alpha, lambda_):
"""
Calculate the objective for the dual problem.
Follow the formula given above.
y: +1 or -1 labels, shape = (num_examples)
A: Dataset matrix, shape = (num_examples, num_features)
alpha: Dual variables, shape = (num_examples)
lambda_: Regularization parameter, scalar
return: Scalar value
"""
# ***************************************************
w2 = np.sum(w ** 2)
obj = np.sum(alpha) - lambda_/2 * w2
# ***************************************************
return obj
```
```python
def coordinate_descent_for_svm_demo(y, A, trace=False):
max_iter = 1000000
lambda_ = 0.01
history = defaultdict(list) if trace else None
num_examples, num_features = A.shape
w = np.zeros(num_features)
alpha = np.zeros(num_examples)
for it in range(max_iter):
# i = sample one data point uniformly at random from the columns of A
i = random.randint(0,num_examples-1)
w, alpha = calculate_coordinate_update(y, A, lambda_, alpha, w, i)
if it % 100000 == 0:
# primal objective
primal_value = calculate_primal_objective(y, A, w, lambda_)
# dual objective
dual_value = calculate_dual_objective(y, A, w, alpha, lambda_)
# primal dual gap
duality_gap = primal_value - dual_value
print('iteration=%i, primal:%.5f, dual:%.5f, gap:%.5f'%(
it, primal_value, dual_value, duality_gap))
if it % 1000 == 0:
primal_value = calculate_primal_objective(y, A, w, lambda_)
if trace:
history["objective_function"] += [primal_value]
history['iter'].append(it)
print("training accuracy = {l}".format(l=calculate_accuracy(y, A, w)))
return history
history_cd = coordinate_descent_for_svm_demo(y, A, trace=True)
```
iteration=0, primal:2198.07161, dual:0.00018, gap:2198.07143
c:\users\melanija\appdata\local\programs\python\python37\lib\site-packages\ipykernel_launcher.py:17: RuntimeWarning: divide by zero encountered in double_scalars
iteration=100000, primal:279.61403, dual:216.25006, gap:63.36397
iteration=200000, primal:263.39900, dual:222.35389, gap:41.04512
iteration=300000, primal:244.70078, dual:223.61763, gap:21.08315
iteration=400000, primal:237.81812, dual:224.60328, gap:13.21483
iteration=500000, primal:245.41629, dual:225.51759, gap:19.89869
iteration=600000, primal:241.74883, dual:225.98354, gap:15.76530
iteration=700000, primal:235.64178, dual:226.36782, gap:9.27396
iteration=800000, primal:235.74450, dual:226.68425, gap:9.06025
iteration=900000, primal:237.35067, dual:226.99495, gap:10.35572
training accuracy = 0.994751715785224
```python
# plot training curve
plt.plot(history_cd["iter"], history_cd["objective_function"], label="Coordinate Descent")
plt.yscale('log')
plt.legend()
```
# Stochastic gradient descent for SVM
Let's now compare it with SGD on the original (primal) problem for the SVM. In this part, you will implement stochastic gradient descent on the primal SVM objective. The stochasticity comes from sampling data points.
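For a sampled index $i$, a subgradient of the stochastic estimate $n\,\ell(y_i A_i^\top w) + \frac\lambda2\|w\|^2$ of the primal objective is
$$\begin{align}
g_i(w) = \begin{cases} \lambda w - n\, y_i A_i & \text{if } y_i A_i^\top w < 1,\\ \lambda w & \text{otherwise,}\end{cases}
\end{align}$$
and this is what the helper below computes; the factor $n$ keeps the stochastic gradient an unbiased estimate of the gradient of the full sum.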
```python
def compute_stoch_gradient_svm(A_sample, b_sample, lambda_, w_t, num_data_points):
"""
Calculate stochastic gradient over A_batch, b_batch.
A_sample: A data sample, shape=(num_features)
b_sample: Corresponding +1 or -1 label, scalar
w_t: Model weights, shape=(num_features)
num_data_points: Total size of the dataset, scalar integer
"""
# ***************************************************
z = A_sample.dot(w_t) * b_sample
if z < 1:
gradient = (lambda_ * w_t) - (num_data_points * b_sample * A_sample)
else:
gradient = (lambda_ * w_t)
# ***************************************************
return gradient.reshape(-1)
```
```python
def stochastic_gradient_descent_svm_demo(A, b, gamma, batch_size=1, trace=False):
history = defaultdict(list) if trace else None
num_data_points, num_features = np.shape(A)
max_iter = 1000000
lambda_ = 0.01
w_t = np.zeros(num_features)
current_iter = 0
while (current_iter < max_iter):
i = random.randint(0,num_data_points - 1)
b_batch, A_batch = b[i], A[i]
gradient = compute_stoch_gradient_svm(A_batch, b_batch, lambda_, w_t, num_data_points)
w_t = w_t - gamma * gradient
if current_iter % 100000 == 0:
primal_value = calculate_primal_objective(y, A, w_t, lambda_)
print('iteration=%i, primal:%.5f'%(
current_iter, primal_value))
if current_iter % 1000 == 0:
primal_value = calculate_primal_objective(y, A, w_t, lambda_)
if trace:
history['objective_function'].append(primal_value)
history['iter'].append(current_iter)
current_iter += 1
print("training accuracy = {l}".format(l=calculate_accuracy(y, A, w_t)))
return history
```
Try different stepsizes and find the best one
```python
lrs = [0.1, 0.01, 0.05, 0.001, 0.0005]
results = []
for lr in lrs:
print('Running SGD with lr=', lr)
sgd = stochastic_gradient_descent_svm_demo(A, y, lr, trace=True)
results.append(sgd)
print('---------------------------------------------')
```
Running SGD with lr= 0.1
iteration=0, primal:2477.00000
iteration=100000, primal:110097.59283
iteration=200000, primal:66169.28576
iteration=300000, primal:60418.10744
iteration=400000, primal:66759.96304
iteration=500000, primal:51609.27102
iteration=600000, primal:56694.14760
iteration=700000, primal:54628.00356
iteration=800000, primal:59701.83805
iteration=900000, primal:62832.66969
training accuracy = 0.9604360113039968
---------------------------------------------
Running SGD with lr= 0.01
iteration=0, primal:2704.13212
iteration=100000, primal:2997.47830
iteration=200000, primal:2902.55030
iteration=300000, primal:2608.37200
iteration=400000, primal:2990.86377
iteration=500000, primal:3821.56364
iteration=600000, primal:3999.55021
iteration=700000, primal:2648.27864
iteration=800000, primal:2887.71066
iteration=900000, primal:2295.59859
training accuracy = 0.9802180056519983
---------------------------------------------
Running SGD with lr= 0.05
iteration=0, primal:7996.44112
iteration=100000, primal:19427.61448
iteration=200000, primal:14790.81258
iteration=300000, primal:23330.29655
iteration=400000, primal:38011.09413
iteration=500000, primal:34758.62805
iteration=600000, primal:26086.74112
iteration=700000, primal:17469.29459
iteration=800000, primal:22557.04987
iteration=900000, primal:25115.32774
training accuracy = 0.9729511505853855
---------------------------------------------
Running SGD with lr= 0.001
iteration=0, primal:2477.00000
iteration=100000, primal:390.66261
iteration=200000, primal:368.84870
iteration=300000, primal:361.60669
iteration=400000, primal:364.48207
iteration=500000, primal:409.86861
iteration=600000, primal:393.81841
iteration=700000, primal:391.60344
iteration=800000, primal:384.84073
iteration=900000, primal:351.65206
training accuracy = 0.9899071457408155
---------------------------------------------
Running SGD with lr= 0.0005
iteration=0, primal:1117.83303
iteration=100000, primal:284.79567
iteration=200000, primal:303.49552
iteration=300000, primal:288.66791
iteration=400000, primal:275.83349
iteration=500000, primal:297.25770
iteration=600000, primal:291.40584
iteration=700000, primal:280.49232
iteration=800000, primal:286.82717
iteration=900000, primal:277.28330
training accuracy = 0.9919257165926524
---------------------------------------------
Plot learning curves
```python
# plotting
for i in range(len(results)):
sgd = results[i]
plt.plot(sgd["iter"], sgd["objective_function"], label="SGD lr = {:1.5f}".format(lrs[i]))
plt.yscale('log')
plt.legend()
```
It seems that the first 3 learning rates are too big.
Repeat experiment with even smaller learning rates (lr <= 0.001):
```python
lrs = [0.001, 0.0005, 0.0001, 0.00005, 0.00001]
results = []
for lr in lrs:
print('Running SGD with lr=', lr)
sgd = stochastic_gradient_descent_svm_demo(A, y, lr, trace=True)
results.append(sgd)
print('---------------------------------------------')
```
Running SGD with lr= 0.001
iteration=0, primal:988.62016
iteration=100000, primal:349.17183
iteration=200000, primal:364.29257
iteration=300000, primal:369.87797
iteration=400000, primal:366.53268
iteration=500000, primal:377.20163
iteration=600000, primal:398.52138
iteration=700000, primal:357.86760
iteration=800000, primal:335.80020
iteration=900000, primal:411.15856
training accuracy = 0.9862737182075091
---------------------------------------------
Running SGD with lr= 0.0005
iteration=0, primal:2234.06902
iteration=100000, primal:283.66106
iteration=200000, primal:385.30826
iteration=300000, primal:284.98281
iteration=400000, primal:295.02302
iteration=500000, primal:290.33378
iteration=600000, primal:287.14758
iteration=700000, primal:287.17260
iteration=800000, primal:277.65706
iteration=900000, primal:304.45808
training accuracy = 0.9903108599111828
---------------------------------------------
Running SGD with lr= 0.0001
iteration=0, primal:2477.00000
iteration=100000, primal:252.70261
iteration=200000, primal:243.05037
iteration=300000, primal:240.04018
iteration=400000, primal:238.99765
iteration=500000, primal:238.66440
iteration=600000, primal:244.46554
iteration=700000, primal:238.59503
iteration=800000, primal:242.71816
iteration=900000, primal:235.88044
training accuracy = 0.9951554299555915
---------------------------------------------
Running SGD with lr= 5e-05
iteration=0, primal:2330.37550
iteration=100000, primal:254.48615
iteration=200000, primal:243.52702
iteration=300000, primal:237.63993
iteration=400000, primal:237.11102
iteration=500000, primal:235.35983
iteration=600000, primal:234.34964
iteration=700000, primal:234.18111
iteration=800000, primal:234.55288
iteration=900000, primal:233.14598
training accuracy = 0.994751715785224
---------------------------------------------
Running SGD with lr= 1e-05
iteration=0, primal:2377.12739
iteration=100000, primal:276.42534
iteration=200000, primal:259.53120
iteration=300000, primal:252.75191
iteration=400000, primal:249.99986
iteration=500000, primal:247.15174
iteration=600000, primal:243.67913
iteration=700000, primal:241.48185
iteration=800000, primal:239.45611
iteration=900000, primal:237.73139
training accuracy = 0.9939442874444893
---------------------------------------------
```python
# plotting
for i in range(len(results)):
sgd = results[i]
plt.plot(sgd["iter"], sgd["objective_function"], label="SGD lr = {:f}".format(lrs[i]))
plt.yscale('log')
plt.legend()
```
The best learning rate is 0.0001, with a training accuracy of 0.9951.
## Compare SGD with Coordinate Descent
Compare two algorithms in terms of convergence, time complexities per iteration. Which one is easier to use?
```python
# plot CD and best SGD for comparison
#index of best lr
i = 2
plt.plot(results[i]["iter"], results[i]["objective_function"], label="SGD")
plt.plot(history_cd["iter"], history_cd["objective_function"], label="CD")
plt.yscale('log')
plt.legend()
```
We can observe that the convergence of CD is somewhat unstable at the beginning. However, it converges after roughly the same number of iterations as SGD, even though for SGD we have tuned the learning rate. Both methods have the same time complexity per iteration.
| 063d4db470f6f0df81c5e3b9ad7722fb74963956 | 144,168 | ipynb | Jupyter Notebook | exercises/12_Coordinate_descent/.ipynb_checkpoints/Coordinate_descent-checkpoint.ipynb | mozzafiato/Optimization-methods | e99c6f887c58ebecc320c467c7fc08158cb046b6 | [
"CC0-1.0"
]
| null | null | null | exercises/12_Coordinate_descent/.ipynb_checkpoints/Coordinate_descent-checkpoint.ipynb | mozzafiato/Optimization-methods | e99c6f887c58ebecc320c467c7fc08158cb046b6 | [
"CC0-1.0"
]
| null | null | null | exercises/12_Coordinate_descent/.ipynb_checkpoints/Coordinate_descent-checkpoint.ipynb | mozzafiato/Optimization-methods | e99c6f887c58ebecc320c467c7fc08158cb046b6 | [
"CC0-1.0"
]
| null | null | null | 184.59411 | 54,764 | 0.8853 | true | 4,765 | Qwen/Qwen-72B | 1. YES
2. YES | 0.868827 | 0.826712 | 0.718269 | __label__eng_Latn | 0.500141 | 0.507112 |
Polynomial $A(x) = \sum_{j=0}^{n-1} a_j x^j$
let coeffs $A = (a_0,...,a_{n-1})$.
then, for $k=0,...,n-1$,
$$
\begin{align}
\text{DFT}(A) &= (\sum_{j=0}^{n-1} a_j e^{-\frac{2\pi i}{n} jk},...)_k \\
&= (A(\omega_n^k),...)_k,\ \ \ \omega_n^k = \exp(-\frac{2\pi i}{n} k)
\end{align}
$$
then consider convolution, or multiplication of $A(x)$ and $B(x)$. we wanna know coeffs of $(A*B)(x) = A(x)B(x)$.
for simplicity write $F=DFT$, notice that, $F(A) = (A(\omega_n^k))_k$, that
$$
\begin{align}
F(A*B) &= ((A*B)(\omega_n^k))_k = (A(\omega_n^k)B(\omega_n^k))_k = F(A)\cdot F(B) \\
A*B &= F^{-1}[F(A)\cdot F(B)]
\end{align}
$$
so what we need next is to compute $F, F^{-1}$ in $O(n\log n)$
Before continue, note the lemmas
$$
\begin{align}
\omega_{dn}^{dk} = \omega_n^k, n,k,d \geq 0 \\
\omega_n^{n/2} = \omega_2^1 = -1, n>0 \\
(\omega_n^{k+n/2})^2=(\omega_n^k)^2=\omega_{n/2}^k, n>0, \text{even} \\
\omega_n^{k+n/2} = -\omega_n^k, n\geq 0
\end{align}
$$
then by separate odd,even, i.e. $A(x) = A_0(x^2) + xA_1(x^2)$. combine above lemmas, get
$$
\begin{align}
A(\omega_n^k) = A_0(\omega_{n/2}^k) + \omega_n^k A_1(\omega_{n/2}^k), 0\leq k < n/2 \\
A(\omega_n^{k+n/2}) = A_0(\omega_{n/2}^k) - \omega_n^k A_1(\omega_{n/2}^k), k+n/2 \geq n/2
\end{align}
$$
Thus, by divide and conquer solved. what about $F^{-1}$. notice that matrix form
$$
\begin{align}
y=F(A) = W a \\
w_{k,j} = \omega_n^{kj}, k,j=0,...n-1\\
a = F^{-1}(y) = W^{-1}y
\end{align}
$$
by the special form of $W$
$$
\begin{align}
W^{-1} = \frac{1}{n} \bar{W}\\
(W^{-1})_{kj} = \frac{1}{n} \bar{\omega}_n^{kj}
\end{align}
$$
So, just conjugate and divide by $n$; then the same method solves it.
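A compact (unoptimized) Python sketch of the recursion above, together with the conjugate trick for $F^{-1}$:
```python
import cmath

def fft(a, invert=False):
    """Recursive radix-2 FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return list(a)
    a0 = fft(a[0::2], invert)   # even-index coefficients
    a1 = fft(a[1::2], invert)   # odd-index coefficients
    sign = 1 if invert else -1  # invert = conjugated roots of unity
    out = [0] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)
        t = w * a1[k]
        out[k] = a0[k] + t
        out[k + n // 2] = a0[k] - t
    return out

def multiply(a, b):
    """Polynomial multiplication via FFT (integer coefficients, rounded at the end)."""
    n = 1
    while n < len(a) + len(b):
        n *= 2
    fa = fft(list(a) + [0] * (n - len(a)))
    fb = fft(list(b) + [0] * (n - len(b)))
    res = fft([x * y for x, y in zip(fa, fb)], invert=True)
    return [round((v / n).real) for v in res]   # divide by n once for the inverse

print(multiply([1, 2], [3, 4]))   # (1 + 2x)(3 + 4x) -> [3, 10, 8, 0]
```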
### NTT
let $\alpha$ replace $\omega_n$ in integer field.
DFT require
$$
\begin{align}
\sum_{j=0}^{n-1} \alpha^{kj} = 0, k=1,...,n-1
\end{align}
$$
and $\alpha^n=1, \alpha^k \neq 1, k=1,...,n-1$ is sufficient
if $n$ is power of $2$, $\alpha^{n/2} = -1$ is sufficient
if we get the $\alpha$ on $Z_p$, then $p=c2^k+1$, that $\alpha^c$ can be the $\alpha'$ to express conv length $\leq 2^{k-1}$
For in place NTT, leaves' addr. are bit-reversed. (NEED rigorous proof)
and if for $F^{-1}$, one way is calc $\alpha^{-1}$, for each iter. Here is another way. Notice the matrix form. suppose $a$ divided by $n$, then we reverse $[a_1,...,a_{n-1}]$. by the fact $\alpha^{k(n-j)} = \alpha^{-kj}$, thus result is exactly $F^{-1}$.
#### inverse
we wanna know $BA \equiv 1 (\mod x^n)$. by the step that double $B$'s size, with init $B_0=1/A_0$. wlog write $B_1$ repre. $B_n$, $B_2$ repre. $B_{2n}$. that
$$
\begin{align}
& (B_1 A - 1)^2 \equiv 0 (\mod x^2) \\
\Rightarrow & B_2 = 2B_1 - B_1^2 A (\mod x^2)
\end{align}
$$
note, in impl. $B_1^2A$ has $B_1(\mod x)$ part in there since $B_1A \equiv 1(\mod x)$. aka, $(\mod x)$ part remain $B_1$, actually, we only need modify the higher order part, $-B_1^2 A ([x\leq..<x^2])$
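A tiny sketch of the doubling step (naive multiplication stands in for the NTT here, so this version is quadratic):
```python
def poly_mult(a, b, m):
    """Naive truncated multiplication: first m coefficients of a*b."""
    res = [0] * m
    for i, ai in enumerate(a[:m]):
        if ai == 0:
            continue
        for j, bj in enumerate(b):
            if i + j >= m:
                break
            res[i + j] += ai * bj
    return res

def poly_inverse(A, n):
    """First n coefficients of B with A*B = 1 (mod x^n); assumes A[0] == 1."""
    B = [1]                               # B_0 = 1 / A_0
    m = 1
    while m < n:
        m *= 2
        AB = poly_mult(A, B, m)           # A * B_1   (mod x^m)
        corr = poly_mult(B, AB, m)        # B_1^2 * A (mod x^m)
        B = [2 * b for b in B + [0] * (m - len(B))]
        B = [bi - ci for bi, ci in zip(B, corr)]   # B_2 = 2*B_1 - B_1^2*A
    return B[:n]

print(poly_inverse([1, -1, 0, 0, 0, 0, 0, 0], 8))   # 1/(1-x) -> [1, 1, 1, 1, 1, 1, 1, 1]
```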
#### sqrt
also, the double size technique
$$
\begin{align}
& (B_1^2 - A)^2 \equiv 0 (\mod x^2) \\
\Rightarrow & B_2 = \frac{1}{2} [B_1 + A B_1^{-1}](\mod x^2)
\end{align}
$$
again, notice the old part $B_1$ shall remain, we can only modify new part in impl. and careful $B_0^2=A_0$ when $\neq 1$
#### log
$\log P = \int \frac{P'}{P}$
#### exponent
double step, $B_2 = B_1(1 + A - \log B_1)$
### FWHT(FHT)
Hadamard transform, def, with $H_0 = 1$
$$
\begin{align}
H_m = \frac{1}{\sqrt{2}} \begin{bmatrix} H_{m-1} & H_{m-1} \\ H_{m-1} & -H_{m-1} \end{bmatrix} \\
(H_m)_{i,j} = \frac{1}{2^{m/2}} (-1)^{\langle i, j \rangle_b}
\end{align}
$$
where $i,j= 0,1,..,2^m-1$ and $\langle .,.\rangle_b$ is bitwise dot product, aka
```
__builtin_popcount(i&j)
```
when doing fwht, we don't need bit reversal or a generator. besides, there is the nice property $H_m^2= I$.
One can easily show $H_1^2 = I$ by direct computation or quantum notation, $H_1 = |+\rangle \langle 0| + |-\rangle \langle 1|$; then by induction, $H_m = H_1 \otimes H_{m-1}$, so $H_m^2 = H_1^2 \otimes H_{m-1}^2 = I$, which means we can do $H^{-1}$ by directly doing $H$!
note in impl. we often omit the $2^{n/2}$ normalization, i.e. use $H' = 2^{n/2}H$; so when doing the inverse, we need to divide by $2^n$.
still, we can do multiply by $A*B = H^{-1} [H(A) \cdot H(B)]$. note $\cdot$ is element-wise mult.
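A minimal in-place sketch (unnormalized, so one division by $2^m$ at the end, as noted above), used here for XOR convolution:
```python
def fwht(a):
    """In-place unnormalized Walsh–Hadamard transform; len(a) must be a power of two."""
    h, n = 1, len(a)
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a

def xor_convolution(a, b):
    """c[k] = sum over i ^ j == k of a[i] * b[j]."""
    n = len(a)                                   # len(a) == len(b), a power of two
    fa, fb = fwht(list(a)), fwht(list(b))
    c = fwht([x * y for x, y in zip(fa, fb)])
    return [v // n for v in c]                   # both transforms unnormalized -> divide by 2^m once

print(xor_convolution([1, 2, 3, 4], [5, 6, 7, 8]))   # [70, 68, 62, 60]
```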
also, surprisingly, for solving the equation $A*B = C$: if we apply $H$ on each side, then
$H(A) \cdot H(B) = H(C)$, and since $\cdot$ is element-wise, we can get $A = H^{-1} [H(C) / H(B)]$, element-wise.
Note. for $\sum_i B_i = 0$ (i.e., the rows are not linearly independent) the solution may not be unique; notice that for $R = (1,1,...,1)$ we have $R*B = 0$. e.g. agc034f.
so we can subtract an arbitrary multiple of $R$ and still have a solution, but using the specific conditions of the problem, we can pick the right one.
### useful links
https://cp-algorithms.com/algebra/fft.html
https://cp-algorithms.com/algebra/polynomial.html#toc-tgt-4
https://en.wikipedia.org/wiki/Discrete_Fourier_transform#Polynomial_multiplication
https://en.wikipedia.org/wiki/Discrete_Fourier_transform_(general)
https://codeforces.com/blog/entry/43499
https://codeforces.com/blog/entry/48798
https://crypto.stanford.edu/pbc/notes/numbertheory/gen.html
https://csacademy.com/blog/fast-fourier-transform-and-variations-of-it
```python
```
| ba2c899d1c4766738dc6c34cee0ca0e8186e3f78 | 7,624 | ipynb | Jupyter Notebook | notes/FFT.ipynb | sogapalag/problems | 0ea7d65448e1177f8b3f81124a82d187980d659c | [
"MIT"
]
| 1 | 2020-04-04T14:56:12.000Z | 2020-04-04T14:56:12.000Z | notes/FFT.ipynb | sogapalag/problems | 0ea7d65448e1177f8b3f81124a82d187980d659c | [
"MIT"
]
| null | null | null | notes/FFT.ipynb | sogapalag/problems | 0ea7d65448e1177f8b3f81124a82d187980d659c | [
"MIT"
]
| null | null | null | 35.460465 | 275 | 0.485703 | true | 2,107 | Qwen/Qwen-72B | 1. YES
2. YES | 0.924142 | 0.865224 | 0.79959 | __label__eng_Latn | 0.703217 | 0.696048 |
# sympy
Since we want to work interactively with `sympy`, we will import the complete module. Note that this pollutes the namespace, and is not recommended in general.
```python
from sympy import *
```
Enable pretty printing in this notebook.
```python
init_printing()
```
## Expression manipulation
Define a number of symbols to work with, as well as an example expression.
```python
x, y, a, b, c = symbols('x y a b c')
```
```python
expr = (a*x**2 - b*y**2 + 5)/(c*x + y)
```
Check the expression's type.
```python
expr.func
```
sympy.core.mul.Mul
Although the expression was defined as a division, it is represented as a multiplication by `sympy`. The `args` attribute of an expression stores the operands of the top-level operator.
```python
expr.args
```
Although the first factor appears to be a division, it is in fact a power. The denominator of this expression would be given by:
```python
expr.args[0].func
```
sympy.core.power.Pow
```python
expr.args[0].args[0]
```
The expression $\frac{1}{a x + b}$ can alternatively be defined as follows, which highlights the internal representation of expressions.
```python
expr = Pow(Add(Mul(a, x), b), -1)
```
```python
pprint(expr)
```
1
───────
a⋅x + b
```python
expr.args
```
```python
expr.args[0].args[0]
```
This may be a bit surprising when you look at the mathematical representation of the expression, but the order of the terms is different from its rendering on the screen.
```python
expr.args[0].args
```
Since the addition operation is commutative, this makes no difference mathematically.
```python
expr.args[0].args[1].args
```
```python
expr = x**2 + 2*a*x + y**2
```
```python
expr2 = expr.subs(y, a)
expr2
```
Most expression manipulation algorithms can be called as functions, or as methods on expressions.
```python
factor(expr2)
```
```python
expr2.factor()
```
```python
x, y = symbols('x y', positive=True)
```
```python
(log(x) + log(y)).simplify()
```
## Calculus
### Series expansion
```python
x, a = symbols('x a')
```
```python
expr = sin(a*x)/x
```
```python
expr2 = series(expr, x, 0, n=7)
```
```python
expr2
```
A term of a specific order in a given variable can be selected easily.
```python
expr2.taylor_term(2, x)
```
When the order is unimportant, or when the expression should be used to define a function, the order term can be removed.
```python
expr2.removeO()
```
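For instance, the truncated polynomial can then be turned into a numerical function of `x` and `a` (a small illustration using the series computed above):
```python
approx = lambdify((x, a), expr2.removeO())
approx(0.1, 2.0)   # close to sin(2.0*0.1)/0.1
```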
Adding two series deals with the order correctly.
```python
s1 = series(sin(x), x, 0, n=7)
```
```python
s2 = series(cos(x), x, 0, n=4)
```
```python
s1 + s2
```
### Derivatives and integrals
```python
expr = a*x**2 + b*x + c
```
```python
expr.diff(x)
```
```python
expr.integrate(x)
```
| 636c1cc737cf7039e87d6460d0a84abdfe279b01 | 44,016 | ipynb | Jupyter Notebook | source-code/sympy/sympy.ipynb | gjbex/Scientific-Python | b4b7ca06fdedf1de37a0ad537d69c128e24c747c | [
"CC-BY-4.0"
]
| 11 | 2021-03-24T08:05:29.000Z | 2022-01-06T13:45:23.000Z | source-code/sympy/sympy.ipynb | gjbex/Scientific-Python | b4b7ca06fdedf1de37a0ad537d69c128e24c747c | [
"CC-BY-4.0"
]
| 1 | 2020-01-15T07:17:50.000Z | 2020-01-15T07:17:50.000Z | source-code/sympy/sympy.ipynb | gjbex/Scientific-Python | b4b7ca06fdedf1de37a0ad537d69c128e24c747c | [
"CC-BY-4.0"
]
| 10 | 2020-12-07T08:06:05.000Z | 2022-01-25T13:00:48.000Z | 50.944444 | 4,034 | 0.773923 | true | 781 | Qwen/Qwen-72B | 1. YES
2. YES | 0.927363 | 0.917303 | 0.850673 | __label__eng_Latn | 0.991226 | 0.814731 |
```python
# pip install pyreadr
```
```python
# Import relevant packages
import pandas as pd
import numpy as np
import pyreadr
import math
```
```python
# Import relevant packages
rdata_read = pyreadr.read_r("d:/Users/Manuela/Documents/GitHub/ECO224/Labs/data/wage2015_subsample_inference.Rdata")
# Extracting the data frame from rdata_read
data = rdata_read[ 'data' ]
data.shape
```
(5150, 20)
To start our analysis, we compare the sample means given gender:
```python
Z_scl = data[data[ 'scl' ] == 1 ]
Z_clg = data[data[ 'clg' ] == 1 ]
Z_data = pd.concat([Z_scl,Z_clg])
Z = Z_data[ ["lwage","sex","shs","hsg","scl","clg","ad","ne","mw","so","we","exp1"] ]
data_female = Z_data[Z_data[ 'sex' ] == 1 ]
Z_female = data_female[ ["lwage","sex","shs","hsg","scl","clg","ad","ne","mw","so","we","exp1"] ]
data_male = Z_data[ Z_data[ 'sex' ] == 0 ]
Z_male = data_male[ [ "lwage","sex","shs","hsg","scl","clg","ad","ne","mw","so","we","exp1" ] ]
table = np.zeros( (12, 3) )
table[:, 0] = Z.mean().values
table[:, 1] = Z_male.mean().values
table[:, 2] = Z_female.mean().values
table_pandas = pd.DataFrame( table, columns = [ 'All', 'Men', 'Women'])
table_pandas.index = ["Log Wage","Sex","Less then High School","High School Graduate","Some College","Collage Graduate","Advanced Degree", "Northeast","Midwest","South","West","Experience"]
table_html = table_pandas.to_html()
table_pandas
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>All</th>
<th>Men</th>
<th>Women</th>
</tr>
</thead>
<tbody>
<tr>
<th>Log Wage</th>
<td>3.000022</td>
<td>3.038412</td>
<td>2.956904</td>
</tr>
<tr>
<th>Sex</th>
<td>0.470991</td>
<td>0.000000</td>
<td>1.000000</td>
</tr>
<tr>
<th>Less then High School</th>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>High School Graduate</th>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>Some College</th>
<td>0.466754</td>
<td>0.481824</td>
<td>0.449827</td>
</tr>
<tr>
<th>Collage Graduate</th>
<td>0.533246</td>
<td>0.518176</td>
<td>0.550173</td>
</tr>
<tr>
<th>Advanced Degree</th>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>Northeast</th>
<td>0.226532</td>
<td>0.219347</td>
<td>0.234602</td>
</tr>
<tr>
<th>Midwest</th>
<td>0.265971</td>
<td>0.261245</td>
<td>0.271280</td>
</tr>
<tr>
<th>South</th>
<td>0.285854</td>
<td>0.290819</td>
<td>0.280277</td>
</tr>
<tr>
<th>West</th>
<td>0.221643</td>
<td>0.228589</td>
<td>0.213841</td>
</tr>
<tr>
<th>Experience</th>
<td>12.700945</td>
<td>12.433148</td>
<td>13.001730</td>
</tr>
</tbody>
</table>
</div>
```python
print( table_html )
```
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>All</th>
<th>Men</th>
<th>Women</th>
</tr>
</thead>
<tbody>
<tr>
<th>Log Wage</th>
<td>3.000022</td>
<td>3.038412</td>
<td>2.956904</td>
</tr>
<tr>
<th>Sex</th>
<td>0.470991</td>
<td>0.000000</td>
<td>1.000000</td>
</tr>
<tr>
<th>Less then High School</th>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>High School Graduate</th>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>Some College</th>
<td>0.466754</td>
<td>0.481824</td>
<td>0.449827</td>
</tr>
<tr>
<th>Collage Graduate</th>
<td>0.533246</td>
<td>0.518176</td>
<td>0.550173</td>
</tr>
<tr>
<th>Advanced Degree</th>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>Northeast</th>
<td>0.226532</td>
<td>0.219347</td>
<td>0.234602</td>
</tr>
<tr>
<th>Midwest</th>
<td>0.265971</td>
<td>0.261245</td>
<td>0.271280</td>
</tr>
<tr>
<th>South</th>
<td>0.285854</td>
<td>0.290819</td>
<td>0.280277</td>
</tr>
<tr>
<th>West</th>
<td>0.221643</td>
<td>0.228589</td>
<td>0.213841</td>
</tr>
<tr>
<th>Experience</th>
<td>12.700945</td>
<td>12.433148</td>
<td>13.001730</td>
</tr>
</tbody>
</table>
```python
data_female['lwage'].mean() - data_male['lwage'].mean()
```
-0.08150855508736754
We find that the unconditional gender wage gap is about $8.15$% for this group of never-married workers with some college or a college degree (women get paid less on average). We also observe that, within this group, never-married working women are relatively more educated than working men and have slightly more work experience.
This unconditional (predictive) effect of gender equals the coefficient $\beta$ in the univariate ols regression of $Y$ on $D$:
$$\begin{align}
\log(Y) =\beta D + \epsilon.
\end{align}$$
```python
import statsmodels.api as sm
import statsmodels.formula.api as smf
```
```python
nocontrol_model = smf.ols( formula = 'lwage ~ sex', data = Z_data)
nocontrol_est = nocontrol_model.fit().summary2().tables[1]['Coef.']['sex']
HCV_coefs = nocontrol_model.fit().cov_HC0
nocontrol_se = np.power( HCV_coefs.diagonal() , 0.5)[1]
# print unconditional effect of gender and the corresponding standard error
print( f'The estimated gender coefficient is {nocontrol_est} and the corresponding robust standard error is {nocontrol_se}' )
```
The estimated gender coefficient is -0.08150855508736031 and the corresponding robust standard error is 0.019579647767772337
Next, we run an ols regression of $Y$ on $(D,W)$ to control for the effect of covariates summarized in $W$:
$$\begin{align}
\log(Y) =\beta_1 D + \beta_2' W + \epsilon.
\end{align}$$
$W$ controls for experience, education, region, and occupation and industry indicators plus transformations and two-way interactions.
Now, we are going to run the ols regression with controls.
# Ols regression with controls
```python
flex = 'lwage ~ sex + (exp1+exp2+exp3+exp4)*(clg+occ2+ind2+mw+so+we)'
# The smf api replicates R script when it transform data
control_model = smf.ols( formula = flex, data = Z_data )
control_est = control_model.fit().summary2().tables[1]['Coef.']['sex']
print(control_model.fit().summary2().tables[1])
print( f"Coefficient for OLS with controls {control_est}" )
HCV_coefs = control_model.fit().cov_HC0
control_se = np.power( HCV_coefs.diagonal() , 0.5)[1]
```
Coef. Std.Err. t P>|t| [0.025 0.975]
Intercept 2.985101 0.336482 8.871492 1.250129e-18 2.325327 3.644876
occ2[T.10] 0.091982 0.243220 0.378184 7.053225e-01 -0.384925 0.568888
occ2[T.11] -0.499418 0.436858 -1.143202 2.530511e-01 -1.356010 0.357175
occ2[T.12] 0.190101 0.341142 0.557249 5.774012e-01 -0.478810 0.859012
occ2[T.13] -0.194529 0.271881 -0.715492 4.743637e-01 -0.727633 0.338575
... ... ... ... ... ... ...
exp3:we -0.230864 0.184398 -1.251987 2.106777e-01 -0.592431 0.130704
exp4:clg -0.013467 0.020134 -0.668849 5.036463e-01 -0.052945 0.026012
exp4:mw 0.014287 0.025814 0.553477 5.799802e-01 -0.036328 0.064902
exp4:so -0.003759 0.022547 -0.166725 8.675981e-01 -0.047968 0.040450
exp4:we 0.028286 0.023812 1.187890 2.349761e-01 -0.018405 0.074978
[231 rows x 6 columns]
Coefficient for OLS with controls -0.053062340357755505
The estimated regression coefficient $\beta_1\approx-0.053$ measures how our linear prediction of wage changes if we set the gender variable $D$ from 0 to 1, holding the controls $W$ fixed.
We can call this the *predictive effect* (PE), as it measures the impact of a variable on the prediction we make. Overall, we see that the unconditional wage gap of size $8$\% for women decreases to about $5$\% after controlling for worker characteristics. Also, we can see that people who completed college earn about $24$\% more than those with some college.
The next step applies the Frisch-Waugh-Lovell theorem from the lecture, partialling out the linear effect of the controls via OLS.
# Partialling-Out using ols
```python
# models
# model for Y
flex_y = 'lwage ~ (exp1+exp2+exp3+exp4)*(shs+hsg+scl+clg+occ2+ind2+mw+so+we)'
# model for D
flex_d = 'sex ~ (exp1+exp2+exp3+exp4)*(shs+hsg+scl+clg+occ2+ind2+mw+so+we)'
# partialling-out the linear effect of W from Y
t_Y = smf.ols( formula = flex_y , data = Z_data ).fit().resid
# partialling-out the linear effect of W from D
t_D = smf.ols( formula = flex_d , data = Z_data ).fit().resid
data_res = pd.DataFrame( np.vstack(( t_Y.values , t_D.values )).T , columns = [ 't_Y', 't_D' ] )
# regression of Y on D after partialling-out the effect of W
partial_fit = smf.ols( formula = 't_Y ~ t_D' , data = data_res ).fit()
partial_est = partial_fit.summary2().tables[1]['Coef.']['t_D']
print("Coefficient for D via partialling-out", partial_est)
# standard error
HCV_coefs = partial_fit.cov_HC0
partial_se = np.power( HCV_coefs.diagonal() , 0.5)[1]
# confidence interval
partial_fit.conf_int( alpha=0.05 ).iloc[1, :]
```
Coefficient for D via partialling-out -0.053062340357753604
0 -0.089571
1 -0.016554
Name: t_D, dtype: float64
Again, the estimated coefficient measures the linear predictive effect (PE) of $D$ on $Y$ after taking out the linear effect of $W$ on both of these variables. This coefficient equals the estimated coefficient from the ols regression with controls.
We know that the partialling-out approach works well when the dimension of $W$ is low in relation to the sample size $n$. When the dimension of $W$ is relatively high, we need to use variable selection or penalization for regularization purposes.
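For instance, a rough sketch of a lasso-based version of the same partialling-out steps (assuming `sklearn` and `patsy` are available, and noting that in practice one would standardize the controls first) could look like:
```python
from sklearn.linear_model import LassoCV
import patsy

# Design matrices implied by the flexible specifications above
y_mat, W_y = patsy.dmatrices(flex_y, Z_data, return_type='dataframe')
d_mat, W_d = patsy.dmatrices(flex_d, Z_data, return_type='dataframe')

# Partial out W from Y and from D with cross-validated lasso instead of OLS
res_Y = y_mat.values.ravel() - LassoCV(cv=5).fit(W_y, y_mat.values.ravel()).predict(W_y)
res_D = d_mat.values.ravel() - LassoCV(cv=5).fit(W_d, d_mat.values.ravel()).predict(W_d)

# Final step is the same bivariate regression of the residuals
np.polyfit(res_D, res_Y, 1)[0]   # slope: regularized estimate of the gender gap
```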
# Figure
```python
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
```
```python
sns.set_theme(color_codes=True)
```
```python
sns.lmplot(x="exp1", y="lwage", data=Z_data,
lowess=True);
```
```python
```
| 4ed1ccea9da4298dc1ee79ac09f79a92720a1d60 | 74,353 | ipynb | Jupyter Notebook | _portfolio/group3_lab1_python.ipynb | thaisbazanb/thaisbazanb.github.io | 7ad06a9573cad043fb3018153c570f922796837c | [
"MIT"
]
| null | null | null | _portfolio/group3_lab1_python.ipynb | thaisbazanb/thaisbazanb.github.io | 7ad06a9573cad043fb3018153c570f922796837c | [
"MIT"
]
| null | null | null | _portfolio/group3_lab1_python.ipynb | thaisbazanb/thaisbazanb.github.io | 7ad06a9573cad043fb3018153c570f922796837c | [
"MIT"
]
| null | null | null | 120.117932 | 55,382 | 0.843342 | true | 3,696 | Qwen/Qwen-72B | 1. YES
2. YES | 0.763484 | 0.675765 | 0.515935 | __label__eng_Latn | 0.676261 | 0.03702 |
# Optimization of differentiable scalar functions with `SymPy`
> - Optimization yields elegant solutions both in theory and in certain applications.
> - Optimization theory uses elements starting from elementary calculus and basic linear algebra, and is then extended with functional and convex analysis.
> - Applications of optimization involve science, engineering, economics, finance and industry.
> - The wide and growing use of optimization makes it essential for students and professionals in any branch of science and technology.
**Reference:**
- http://www.math.uwaterloo.ca/~hwolkowi//henry/reports/talks.d/t06talks.d/06msribirs.d/optimportance.shtml
Some applications are:
1. Engineering
 - Finding the equilibrium composition of a mixture of different atoms.
 - Path planning for a robot (or unmanned aerial vehicle).
 - Planning the optimal workforce on a construction site or production plant.
2. Optimal resource allocation.
 - Allocation of flight routes.
 - Finding an optimal diet.
 - Optimal route planning.
3. Financial optimization
 - Risk management.
 - Investment portfolios.
In this class we will cover basic aspects of optimization. Specifically, we will see how to obtain maxima and minima of a scalar function of one variable (as in differential calculus).
___
## 0. Libraries we will use
As we said in the first class, `python` is the programming language (which is high level). However, `python` only has a few primitive commands, and to make it easier to use in our engineering simulation activities, other people have already written certain libraries for us.
### 0.1 `NumPy`
`NumPy` (Numerical Python) is the fundamental library for scientific (numerical) computing with `Python`. It contains, among other things:
- a very powerful N-dimensional array object
- sophisticated functions
- linear algebra, Fourier transform and random number functions.
For these reasons, `NumPy` is widely used in the scientific and engineering community (because of its handling of vector quantities). It is likewise used to store data. For our purposes, it can be used freely.
**Reference:**
- http://www.numpy.org/
`NumPy` is already included in the standard Anaconda installation by default. To start using it, we only need to import it:
```python
# import the numpy library
```
### 0.2 `SymPy`
`SymPy` (Symbolic Python) is a `Python` library for symbolic mathematics. Its goal is to become a full-featured computer algebra system, while keeping the code as simple as possible so that it remains understandable.
**Reference:**
- http://www.sympy.org/en/index.html
`SymPy` is already included in the standard Anaconda installation by default. To start using it, we only need to import it:
```python
# import the sympy library
```
```python
# print in latex format
```
The ability to print in LaTeX format that `SymPy` gives us through the `mathjax` project makes `SymPy` a very attractive tool...
Note that `SymPy` and `NumPy` contain functions with the same name, but they take different data types...
```python
```
```python
# differences between sympy and numpy functions
```
```python
```
```python
```
```python
```
Explain the use of the syntax `from numpy import *` and its dangers (not recommended).
```python
# import with * and see what happens
# from numpy import *
# from sympy import *
# Not recommended
```
### 0.3 `PyPlot` from `matplotlib`
The `PyPlot` module of the `matplotlib` library contains functions that let us generate a large number of plots quickly. The functions in this module are written with the same names as the plotting functions in `Matlab`.
**Reference:**
- https://matplotlib.org/api/pyplot_summary.html
```python
# import matplotlib.pyplot
# command so that the plots appear in the same window (inline)
```
Now that we have reviewed all the libraries we will use, let's start with the class itself...
___
We base all the results on the following theorems:
## 1. Fermat's theorem (analysis)
If a function $f(x)$ attains a local maximum or minimum at $x=c$, and if the derivative $f'(c)$ exists at the point $c$, then $f'(c) = 0$.
### Example
We know that the function $f(x)=x^2$ has a global minimum at $x=0$, since
$$f(x)=x^2\geq0,\qquad\text{and}\qquad f(x)=x^2=0 \qquad\text{if and only if}\qquad x=0.$$
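A compact sketch of what the cells below are meant to produce (shown only as a reference):
```python
import sympy as sym

x = sym.symbols('x', real=True)   # declare the real variable x
f = x**2                          # declare f = x^2
df = sym.diff(f, x)               # differentiate f with respect to x
sym.solve(df, x)                  # solve f'(x) = 0  ->  [0]
```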
```python
# declare the real variable x
```
```python
# now declare f = x^2 and display it
```
```python
# differentiate f with respect to x and display it
```
```python
# solve f'(x)=0 and display the solutions
```
```python
```
Let's look at the graph...
```python
# convert f into a function that can be evaluated numerically (lambdify function from the sympy library)
```
```python
# x coordinates (abscissas)
```
```python
# plot
# Create the figure window and set its size
# Draws the plot and sets its characteristics
# Label for the x axis of the plot
# Label for the y axis
# Places the plot legend
# Draws the grid
```
See the differences between f and f_num
```python
# try evaluating f and f_num
```
```python
```
```python
```
**Another way to do the above**
The concept of a function...
```python
```
```python
```
```python
```
```python
```
```python
```
```python
# plot
# Create the figure window and set its size
# Draws the plot and sets its characteristics
#plt.plot(xnum, funcion_de_clase(xnum), 'k', label='$y=x^2$')
# Label for the x axis of the plot
# Label for the y axis
# Places the plot legend
# Draws the grid
```
The converse of the theorem above is not true.
### Activity
Consider $g(x)=x^3$.
- Using `sympy`, show that $g'(0)=0$.
- Nevertheless, rule out that $x=0$ is an extremum of $g(x)$ by looking at its **graph**.
```python
# Declare the symbolic variable x
```
```python
# Define the function g(x)
```
```python
# Differentiate g(x)
```
```python
# Critical points
```
```python
# plot
# Create the figure window and set its size
# Draws the plot and sets its characteristics
# Label for the x axis of the plot
# Label for the y axis
# Places the plot legend
# Draws the grid
```
## 2. Second derivative test
Let $f(x)$ be a function such that $f'(c)=0$ and whose second derivative exists on an open interval containing $c$.
- If $f''(c)>0$, then $f(c)$ is a relative minimum.
- If $f''(c)<0$, then $f(c)$ is a relative maximum.
- If $f''(c)=0$, then the test is inconclusive.
### Example
Show, using `sympy`, that the function $f(x)=x^2$ has a relative minimum at $x=0$.
We already saw that $f'(0)=0$. Note that:
```python
```
```python
# Take the second derivative
```
```python
```
Therefore, by the second derivative test, $f(0)=0$ is a relative minimum (in fact, the global minimum).
### Example
What happens with $g(x)=x^3$ when we try to use the second derivative test? (use `sympy`).
```python
```
```python
```
```python
```
Since $g''(0)=0$, the second derivative test is inconclusive.
### Activity
What happens with $h(x)=x^4$ when we try to use the second derivative test?
```python
```
```python
```
```python
```
```python
```
```python
```
## 3. Method for finding the absolute extrema of a continuous function y=f(x) on [a,b]
- Find all critical values $c_1, c_2, c_3, \dots, c_n$ in $(a,b)$.
- Evaluate $f$ at all critical values and at the endpoints $x=a$ and $x=b$.
- The largest and smallest of the values in the list $f(a), f(b), f(c_1), f(c_2), \dots, f(c_n)$ are the absolute maximum and the absolute minimum, respectively, of f on the interval [a,b].
### Example
Find the absolute extrema of $f(x)=x^2-6x$ on $\left[0,5\right]$.
We obtain the critical points of $f$ on $\left[0,5\right]$:
```python
```
```python
# Differentiate f
```
```python
```
We evaluate $f$ at the endpoints and at the critical points:
```python
```
We conclude that the absolute maximum of $f$ on $\left[0,5\right]$ is $0$, attained at $x=0$, and that the absolute minimum is $-9$, attained at $x=3$.
```python
# plot
# Create the figure window and set its size
# Draws the plot and sets its characteristics
# Label for the x axis of the plot
# Label for the y axis
# Places the plot legend
# Draws the grid
```
### Activity
Find the absolute extreme values of $h(x)=x^3-3x$ on $\left[-2.2,1.8\right]$, using `sympy`. Show them on a graph.
```python
```
```python
```
```python
```
```python
```
```python
```
```python
```
```python
```
### In several variables...
The procedure is analogous.
If a function $f:\mathbb{R}^n\to\mathbb{R}$ attains a local maximum or minimum at $\boldsymbol{x}=\boldsymbol{c}\in\mathbb{R}^n$, and $f$ is differentiable at the point $\boldsymbol{x}=\boldsymbol{c}$, then $\left.\frac{\partial f}{\partial \boldsymbol{x}}\right|_{\boldsymbol{x}=\boldsymbol{c}}=\boldsymbol{0}$ (all partial derivatives at the point $\boldsymbol{x}=\boldsymbol{c}$ are zero).
**Second derivative test:** to decide whether it is a maximum or a minimum, take the second derivative (the Jacobian of the gradient, i.e. the Hessian) and check for negative or positive definiteness, respectively.
If the problem is restricted to a certain region, there are specific techniques. The most general, but also the most complex, is that of **Lagrange multipliers**.
**Example:** work it out by hand at the same time to double-check...
```python
sym.var('x y')
x, y
```
```python
def f(x, y):
return x**2 + y**2
```
```python
dfx = sym.diff(f(x,y), x)
dfy = sym.diff(f(x,y), y)
dfx, dfy
```
```python
xy_c = sym.solve([dfx, dfy], [x, y])
xy_c
```
```python
x_c, y_c = xy_c[x], xy_c[y]
x_c, y_c
```
```python
d2fx = sym.diff(f(x,y), x, 2)
d2fy = sym.diff(f(x,y), y, 2)
dfxy = sym.diff(f(x,y), x, y)
Jf = sym.Matrix([[d2fx, dfxy], [dfxy, d2fy]])
Jf.eigenvals()
```
```python
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
```
```python
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
x = np.linspace(-2, 2, 100)
y = x
X, Y = np.meshgrid(x, y)
ax.plot_surface(X, Y, f(X, Y))
ax.plot([x_c], [y_c], [f(x_c,y_c)], '*r')
```
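As a brief aside on the constrained case mentioned above, a rough `sympy` sketch of Lagrange multipliers (illustrative only: minimizing $f(x,y)=x^2+y^2$ subject to $x+y=1$):
```python
x, y, lam = sym.symbols('x y lambda', real=True)   # redeclare symbols (x, y were arrays above)
L = f(x, y) - lam * (x + y - 1)                    # Lagrangian for the constraint x + y = 1
sym.solve([sym.diff(L, v) for v in (x, y, lam)], [x, y, lam])
# -> {x: 1/2, y: 1/2, lambda: 1}
```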
# Announcements
## 1. [Suggested free course](https://www.kaggle.com/learn/python)
<footer id="attribution" style="float:right; color:#808080; background:#fff;">
Created with Jupyter by Esteban Jiménez Rodríguez.
</footer>
| 28ce367b0e9a6ba6ae4c5e5e9821d172035932c0 | 23,471 | ipynb | Jupyter Notebook | Modulo1/Clase2_OptimizacionSympy.ipynb | HissamQA/simmatp2021 | 63bb82c65d25148a871ab520325a023fcd876ebc | [
"MIT"
]
| null | null | null | Modulo1/Clase2_OptimizacionSympy.ipynb | HissamQA/simmatp2021 | 63bb82c65d25148a871ab520325a023fcd876ebc | [
"MIT"
]
| null | null | null | Modulo1/Clase2_OptimizacionSympy.ipynb | HissamQA/simmatp2021 | 63bb82c65d25148a871ab520325a023fcd876ebc | [
"MIT"
]
| null | null | null | 24.34751 | 425 | 0.547782 | true | 3,309 | Qwen/Qwen-72B | 1. YES
2. YES | 0.76908 | 0.887205 | 0.682332 | __label__spa_Latn | 0.988256 | 0.423616 |
# Linear Pathway
# Preliminaries
```python
from src.surfaceAnalyzer import SurfaceAnalyzer
from common_python.ODEModel.LTIModel import LTIModel
from common_python.sympy import sympyUtil as su
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
import numpy as np
import tellurium as te
import pandas as pd
import seaborn as sn
import sympy
import os
```
```python
NRMSE = "nrmse" # Normalized root of the mean square error (residuals)
```
```python
MODEL_DIR = os.path.join(os.getcwd(), "../models")
```
# Helper Functions
```python
MODEL = """
J1: $X0 -> x; k1*X0
J2: x -> $X1; k2*x
X0 = 1
x = 0
k1 = 1
k2 = 1
"""
```
```python
rr = te.loada(MODEL)
trueData = rr.simulate()
rr.plot(trueData)
```
```python
PARAMETER_DCT = {"k1": 1, "k2": 2}
analyzer = SurfaceAnalyzer(MODEL, PARAMETER_DCT)
scales = [10, 2, 0.1, 0.03]
k2s = [0.5, 1.0, 2.0]
k2s = [1.0]
for scale in scales:
for k2 in k2s:
parameterDct = dict(PARAMETER_DCT)
parameterDct["k2"] = k2
analyzer = SurfaceAnalyzer(MODEL, parameterDct)
analyzer.runExperiments(0.5, 50)
title = "k2: %2.2f, scale: %2.4f" % (k2, scale)
analyzer.plotSurface(scale=scale, title=title, xlim=[0.5, 1.5], ylim=[0.5, 1.5])
```
# Constructing Linear Pathway Models
```python
def mkParameterName(num):
return "k%d" % num
#TESTS
name = mkParameterName(1)
assert(isinstance(name, str))
assert(len(name) == 2)
```
```python
def mkRandomParameterDct(numReaction, minValue=0.1, maxValue=100):
"""
Constructs the parameter dictionary for a linear pathway with random parameter values.
"""
return {mkParameterName(idx): np.random.uniform(minValue, maxValue) for idx in range(numReaction)}
# TESTS
dct = mkRandomParameterDct(3)
assert(len(dct) == 3)
assert(isinstance(dct, dict))
```
```python
def mkFixedParameterDct(numReaction, value=1):
"""
Constructs the parameter dictionary for a linear pathway with fixed parameter values.
"""
return {mkParameterName(idx): value for idx in range(numReaction)}
# TESTS
dct = mkFixedParameterDct(3)
assert(len(dct) == 3)
assert(isinstance(dct, dict))
```
```python
def mkModel(parameterDct, speciesDct=None, isFirstFixed=True, isLastFixed=True, fixedSpeciesValue=10):
"""
Creates an antimony model for a linear pathway.
The number of reactions is len(parameterDct).
Species are named "S*". There is one more species than reaction.
"""
def mkInitializations(dct):
"""
Constructs initializations for the name, value pairs.
Parameters
----------
dct: dict
Returns
-------
list-str
"""
initializations = []
for name, value in dct.items():
assignment = "%s = %s" % (name, str(value))
initializations.append(assignment)
return initializations
#
def mkSpecies(num):
if isFirstFixed and (num == 0):
prefix = "$"
elif isLastFixed and (num == len(parameterDct)):
prefix = "$"
else:
prefix = ""
return "%sS%d" % (prefix, num)
#
if speciesDct is None:
speciesDct = {mkSpecies(n): fixedSpeciesValue if n == 0 else 0
for n in range(len(parameterDct))}
#
numReaction = len(parameterDct)
parameters = list(parameterDct.keys())
reactions = []
# Construct the reactions
for idx in range(numReaction):
reactant = mkSpecies(idx)
product = mkSpecies(idx + 1)
kinetics = "%s*%s" % (parameters[idx], reactant)
reaction = "%s -> %s; %s" % (reactant, product, kinetics)
reactions.append(reaction)
    # Initialization statements
parameterInitializations = mkInitializations(parameterDct)
speciesInitializations = mkInitializations(speciesDct)
# Assemble the model
model = "// Model with %d reactions\n\n" % numReaction
model = model + "// Reactions\n"
model = model + "\n".join(reactions)
model = model + "\n\n// Parameter Initializations\n"
model = model + "\n".join(parameterInitializations)
model = model + "\n\n// Species Initializations\n"
model = model + "\n".join(speciesInitializations)
return model
# Tests
model = mkModel({"k0": 1, "k1": 2}, isFirstFixed=False)
print(model)
rr = te.loada(model)
rr.plot(rr.simulate())
```
# Numerical Studies
```python
def analyzeModel(numReaction, parameterValue=1, fixedSpeciesValue=1, isPlot=True):
parameterDct = mkFixedParameterDct(numReaction, value=parameterValue)
model = mkModel(parameterDct, fixedSpeciesValue=fixedSpeciesValue)
analyzer = SurfaceAnalyzer(model, parameterDct)
parameterNames= [mkParameterName(0), mkParameterName(numReaction-1)]
analyzer.runExperiments(0.5, 50, parameterNames=parameterNames)
analyzer.plotSurface(isPlot=isPlot, title="No. Reaction: %d" % numReaction)
return analyzer
# TESTS
analyzer = analyzeModel(2, isPlot=False)
assert(isinstance(analyzer, SurfaceAnalyzer))
```
```python
for numReaction in range(10, 20):
analyzeModel(numReaction)
```
| f6eeeda80018eacc8b8cd7ad36fa927d1f69ac67 | 272,439 | ipynb | Jupyter Notebook | notebooks/linear_pathway.ipynb | ScienceStacks/FittingSurface | 7994995c7155817ea4334f10dcd21e691cee46da | [
"MIT"
]
| null | null | null | notebooks/linear_pathway.ipynb | ScienceStacks/FittingSurface | 7994995c7155817ea4334f10dcd21e691cee46da | [
"MIT"
]
| null | null | null | notebooks/linear_pathway.ipynb | ScienceStacks/FittingSurface | 7994995c7155817ea4334f10dcd21e691cee46da | [
"MIT"
]
| null | null | null | 511.142589 | 30,716 | 0.946869 | true | 1,397 | Qwen/Qwen-72B | 1. YES
2. YES | 0.746139 | 0.771843 | 0.575903 | __label__eng_Latn | 0.507561 | 0.176344 |
Notes on functional analysis.
---
Author: [André ROCHA](https://github.com/rochamatcomp)
---
# The spaces $\ell^{p}(n)$
#### Definition.
Let $1 \le p \le \infty$. The space $\ell^{p}(n)$ is defined as the space $\mathbb{R}^{n}$ equipped with the norm:
\begin{equation}
||x||_{p} = \left( \sum\limits_{i=1}^{n} |x_{i}|^{p} \right)^{1/p}, \quad \text{if } 1 \le p < \infty
\end{equation}
\begin{equation}
||x||_{\infty} = \max\limits_{1 \le i \le n} |x_{i}|. \quad (p = \infty).
\end{equation}
#### Propositions to prove.
The space $\ell^{p}(n)$ is a complete normed vector space (a Banach space).
##### Remark.
> Note that $\ell^{2}(n)$ is the space $\mathbb{R}^{n}$ equipped with the Euclidean norm, which is derived from an inner product.
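As a quick numerical illustration (a sketch added here, not part of the original notes), the norms above satisfy the triangle (Minkowski) inequality, which can be checked on random vectors:
```python
import numpy as np

def lp_norm(v, p):
    if np.isinf(p):
        return np.max(np.abs(v))
    return np.sum(np.abs(v)**p)**(1.0 / p)

rng = np.random.default_rng(0)
x, y = rng.normal(size=6), rng.normal(size=6)
for p in [1, 2, 3, np.inf]:
    assert lp_norm(x + y, p) <= lp_norm(x, p) + lp_norm(y, p) + 1e-12
print("Triangle inequality holds for the sampled vectors.")
```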
```python
```
| c607fd43bce1c095d3974c671e820ba68c2a9109 | 2,230 | ipynb | Jupyter Notebook | manuscrit/analyse_fonctionnelle.ipynb | rochamatcomp/mathematiques-modelisation | 425aacc95c6f404fd0d236a14444328684f81cd9 | [
"MIT"
]
| null | null | null | manuscrit/analyse_fonctionnelle.ipynb | rochamatcomp/mathematiques-modelisation | 425aacc95c6f404fd0d236a14444328684f81cd9 | [
"MIT"
]
| null | null | null | manuscrit/analyse_fonctionnelle.ipynb | rochamatcomp/mathematiques-modelisation | 425aacc95c6f404fd0d236a14444328684f81cd9 | [
"MIT"
]
| null | null | null | 26.235294 | 144 | 0.456951 | true | 442 | Qwen/Qwen-72B | 1. YES
2. YES | 0.919643 | 0.843895 | 0.776082 | __label__fra_Latn | 0.541924 | 0.64143 |
```python
# ============================================================
# Notebook setup
# ============================================================
%load_ext autoreload
%autoreload 2
# Control figure size
interactive_figures = True
if interactive_figures:
# Normal behavior
%matplotlib widget
figsize=(9, 3)
else:
# PDF export behavior
figsize=(14, 3)
from util import nn
import numpy as np
from matplotlib import pyplot as plt
import pandas as pd
# Load HPC data
data_folder = '/app/data'
hpc = pd.read_csv(data_folder+ '/hpc.csv', parse_dates=['timestamp'])
# Identify input columns
hpc_in = hpc.columns[1:-1]
# Standardization
tr_end, val_end = 3000, 4500
hpcs = hpc.copy()
tmp = hpcs.iloc[:tr_end]
hpcs[hpc_in] = (hpcs[hpc_in] - tmp[hpc_in].mean()) / tmp[hpc_in].std()
# Training, validation, and test set
trdata = hpcs.iloc[:tr_end]
valdata = hpcs.iloc[tr_end:val_end]
tsdata = hpcs.iloc[val_end:]
# Anomaly labels
hpc_labels = pd.Series(index=hpc.index, data=(hpc['anomaly'] != 0), dtype=int)
# Cost model
c_alarm, c_missed, tolerance = 1, 5, 12
cmodel = nn.HPCMetrics(c_alarm, c_missed, tolerance)
```
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
# Density Estimation with Neural Models
## Density Estimation vs Autoencoders
**Anomaly detection can be formulated as density estimation**
* This is probably _the cleanest formulation_ for the problem
* ...And usually leads to good results
**KDE as an estimation technique**
* ...Works reasonably well for low-dimensional data
* ...Becomes _slower and more data hungry_ for higher-dimensional data
**Autoencoders overcome some of these limitations**
* They are _faster and less data hungry_ for high-dimensional data
* They can provide _additional insight_ in the anomalies
* ...But they tend to be _worse_ than D.E. in terms of _pure detection power_
**Let's try to understand why this may be the case...**
## Density Estimation vs Autoencoders
**Anomaly Detection based on D.E. checks whether:**
$$
- \log f({\bf x}, \lambda) \geq \theta
$$
* Where $\bf x$ is the input vector, $f$ the density estimator, and $\lambda$ its parameter vector
* $\theta$ is the anomaly detection threshold
**Anomaly Detection based on autoencoders usually relies on:**
$$
\|g({\bf x}, \lambda) - {\bf x}\|_2^2 \geq \theta^\prime
$$
* Where $g$ is the autoencoder, with parameter vector $\lambda$
* $\theta^\prime$ is again a suitably-chosen detection threshold
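To make the two detection rules above concrete, here is a schematic sketch (the `kde` and `autoencoder` objects and the thresholds are hypothetical placeholders, assuming an sklearn-style density estimator and a keras-style autoencoder):
```python
import numpy as np

def de_alarm(kde, x, theta):
    # Density-estimation rule: raise an alarm when the negative log density is large
    return -kde.score_samples(x.reshape(1, -1))[0] >= theta

def ae_alarm(autoencoder, x, theta_prime):
    # Autoencoder rule: raise an alarm when the reconstruction error is large
    rec = autoencoder.predict(x.reshape(1, -1))[0]
    return np.sum((rec - x)**2) >= theta_prime
```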
## Density Estimation vs Autoencoders
**The detection condition for autoencoders admits a probabilistic interpretation**
Like we did for linear regression, we can rewrite:
$$
\|g({\bf x}, \lambda) - {\bf x}\|_2^2 \longrightarrow
\sum_{j=1}^m (g_j({\bf x}, \lambda) - x_j)^2 \longrightarrow
\log \prod_{j=1}^m \exp\left((g_j({\bf x}, \lambda) - x_j)^2\right)
$$
From which, with an _affine transformation_, for some fixed $\sigma$ we get:
$$
m \log \frac{1}{\sigma\sqrt{2\pi}} - \frac{1}{2\sigma^2} \log \prod_{j=1}^m \exp \left((g_j({\bf x}, \lambda) - x_j)^2\right) \quad \longrightarrow\\
\longrightarrow\quad \log \prod_{j=1}^m \frac{1}{\sigma\sqrt{2\pi}} \exp \left(-\frac{1}{2}\left(\frac{g_j({\bf x}, \lambda) - x_j}{\sigma}\right)^2\right)
$$
* The affine transformation preserves the location of all the optimal points (minima of the MSE become maxima of this expression)
## Density Estimation vs Autoencoders
**Therefore, optimizing the MSE is equivalent to optimizing**
$$
-\log \prod_{j=1}^m \varphi (x_j \mid g_j({\bf x}, \lambda), \sigma)
$$
* I.e. the log likelihood (estimated conditional probability of the data)...
* ...Assuming that the prediction for each $x_i$ is _independent and normally distributed_
* ...with _mean_ equal to the predictions $g_j({\bf x}, \lambda)$ and fixed _standard deviation_ $\sigma$
**This is similar to what we observed for Linear Regression**
* In LR, we assume normality, independence and fixed variance _on the samples_
* Here, we do it _also on the features_
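A quick numerical check of this equivalence (a sketch added here, not from the original notebook): for fixed $\sigma$, the Gaussian negative log likelihood is an affine function of the MSE, so both have exactly the same minimizers.
```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 5))                      # "true" values
g = x + rng.normal(scale=0.3, size=x.shape)        # "reconstructions"
sigma = 1.0

mse = np.mean((g - x)**2)
nll = np.mean(0.5*((g - x) / sigma)**2 + np.log(sigma*np.sqrt(2*np.pi)))

# nll == 0.5/sigma**2 * mse + log(sigma*sqrt(2*pi)), up to floating point error
print(np.isclose(nll, 0.5/sigma**2 * mse + np.log(sigma*np.sqrt(2*np.pi))))
```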
## Density Estimation vs Autoencoders
**The bottomline**
* Even with autoencoders, at training time we _solve a density estimation problem_
* ...But we do it _with some limiting assumptions_
> **This is why D.E.-based anomaly detection _tends to work better_**
**So we have**
* Either a density estimator with issues on high-dimensional data (KDE)
* ...Or a worse D.E. with good support for high-dimensional data (autoencoders)
> **Can we get the best of both worlds?**
## Flow Models
**Ideally, we wish _a neural approach for density estimation_**
There are only a handful of approaches, often referred to as _flow models_:
* [Normalizing Flows](https://arxiv.org/abs/1505.05770)
* [Real Non-Volume Preserving transformations (Real NVP)](https://arxiv.org/abs/1605.08803)
* [Generative Flow with 1x1 convolutions (Glow)](https://arxiv.org/abs/1807.03039)
**These are all (fairly) advanced and recent approaches**
* Main idea: transforming _a simple (and known) probability distribution_...
* ..._Into a complex (and unknown) distribution_ that matches that of the available data
Like many ML models, they are trained for maximum likelihood
* I.e. to maximize the estimated probability of the available data
## Flow Models
**All flow models rely on the _change of variable formula_**
* Let $x$ be a random variable representing the source of our data
* Let $p_x(x)$ be its (unknown) density function
* Let $z$ be a random _latent variable_ with known distribution $p_z$
* Let $f$ be a _bijective_ (i.e. invertible) transformation
Then, the change of variable formula states that:
$$
p_x(x) = p_z(f(x)) \left| \det \left(\frac{\partial f(x)}{\partial x^T} \right)\right|
$$
* Where $\det$ is the determinant and $\partial f / \partial x^T$ is the Jacobian of $f$
**The formula links the two distributions via _the flow model $f$_**
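As a 1D sanity check of the formula (an illustrative sketch, not part of the original notebook), take $x \sim \mathcal{N}(\mu, \sigma)$ and the bijection $f(x) = (x - \mu)/\sigma$, so that $z = f(x) \sim \mathcal{N}(0, 1)$:
```python
import numpy as np
from scipy.stats import norm

mu, sigma = 2.0, 0.5
x = 2.3
z = (x - mu) / sigma            # f(x)
det_jac = 1.0 / sigma           # |df/dx|

p_x_via_formula = norm.pdf(z) * det_jac         # p_z(f(x)) * |det(df/dx)|
p_x_direct = norm.pdf(x, loc=mu, scale=sigma)   # true density of x

print(np.isclose(p_x_via_formula, p_x_direct))  # True
```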
## Flow Models
**Let's consider how we can use the formula**
$$
p_x(x) = p_z(f(x)) \left| \det \left(\frac{\partial f(x)}{\partial x^T} \right)\right|
$$
* Given _an example $x$_ (e.g. from our dataset)
* We _compute the mapping $f(x)$_, i.e. the corresponding value for the latent variable $z$
* ...Plus the _determinant of the Jacobian_ $\partial f / \partial x^T$ in $x$
* Then we can use the formula to compute the _probability of the example_
**The challenge is defining the transformation $f$ (i.e. the mapping)**
* It must be _invertible_ (for the formula to hold)
* It must be _non-linear_ (to handle any distribution)
* It should allow for an _easy computation of the determinant_
## Real NVP
**We will use [Real Non-Volume Preserving transformations](https://arxiv.org/abs/1605.08803) as an example**
Real NVPs are _a type of neural network_
* _Input:_ a vector $x$ representing an example
* _Output:_ a vector $z$ of values for the latent variable
* _Key property:_ $z$ should have a chosen probability distribution
* ...Typically: standard Normal distribution for each $z_i$:
$$
z \sim \mathcal{N}({\bf 0}, I)
$$
In other words
* $z$ follows a multivariate distribution
* ...But the covariance matrix is diagonal, i.e. each component is independent
## Real NVP
**A Real NVP architecture consists of a stack of _affine coupling layers_**
Each layer treats its input $x$ as split into two components, i.e. $x = (x^1, x^2)$
* One component is _passed forward_ as it is
* The second is processed via an _affine transformation_
$$\begin{align}
y^1 &= x^1 \\
y^2 &= e^{s(x^1)} \odot x^2 + t(x^1)
\end{align}$$
**The affine transformation is parameterized with two functions:**
* $x^2$ is _scaled_ using $e^{s(x^1)}$, $x^2$ is _translated_ using $t(x^1)$
* $\odot$ is the element-wise product (Hadamard product)
Since we have functions rather than fixed vectors, _the transformation is non-linear_
## Real NVP - Affine Coupling Layers
**Visually, each layer has the following _compute graph:_**
*(figure: compute graph of the affine coupling layer)*
* We are using part of the input (i.e. $x^1$)...
* ...To transform the remaining part (i.e. $x^2$)
**Both $s$ and $t$ are usually implemented as Multilayer Perceptrons**
* I.e. pretend there are a few fully connected layers when you see $s$ and $t$
## Real NVP - Affine Coupling Layers
**Each affine coupling layer is _easy to invert_**
*(figure: the inverted affine coupling layer)*
Since part of the input (i.e. $x^1$) has been passed forward unchanged, we have that:
$$\begin{align}
x^1 &= y^1 \\
x^2 &= (y^2 - t(y^1)) \oslash e^{s(y^1)}
\end{align}$$
* $\oslash$ is the element-wise division
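A minimal `numpy` sketch (with made-up $s$ and $t$ functions, used purely for illustration) showing that the layer is exactly invertible and that the log determinant is just the sum of the $s$ values:
```python
import numpy as np

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=3), rng.normal(size=3)

def s(u): return np.tanh(u)        # hypothetical scale network
def t(u): return 0.5*u + 1.0       # hypothetical translation network

# Forward pass
y1 = x1
y2 = np.exp(s(x1)) * x2 + t(x1)

# Inverse pass
x1_rec = y1
x2_rec = (y2 - t(y1)) / np.exp(s(y1))

print(np.allclose(x2, x2_rec))     # True: the layer is invertible
print(np.sum(s(x1)))               # log |det| of the forward Jacobian
```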
## Real NVP - Affine Coupling Layers
**The _determinant_ of each layer is easy to compute**
The Jacobian of the transformation is:
$$
\frac{\partial y}{\partial x^T} = \left(\begin{array}{cc}
I & 0 \\
\frac{\partial t(x^1)}{\partial x^T} & \text{diag}(e^{s(x^1)})
\end{array}\right)
$$
The most (only, actually) important thing is that _the matrix is triangular:_
* ...Hence, its determinant is the product of the terms on the main diagonal:
$$
\det\left(\frac{\partial y}{\partial x^T}\right) = \prod_{j} e^{s(x^1)_j} = \exp \left( \sum_{j} s(x^1)_j \right)
$$
## Real NVP - Considerations
**Overall, we have a transformation that:**
* ...Is _non-linear_, and can be made arbitrarily _deep_
* ...Is _Invertible_ (so as to allow application of the change of variable formula)
* ...Is well suited for _determinant computation_
**Depth and non-linearity are very important:**
* The whole approach works _only if_ we can construct a mapping between $x$ and $z$...
* ...I.e. if we can transform one probability distribution into the other
A poor mapping will lead to poor estimates
## Real NVP - Considerations
**At training time we maximize the log likelihood...**
...Hence we care about _log probabilities_:
$$
\log p_x(x) = \log p_z(f(x)) +\log\, \left| \det \left(\frac{\partial f(x)}{\partial x^T} \right)\right|
$$
* If we choose a Normal distribution for $z$, the log _cancels all exponentials in the formula_
* I.e. the one in the Normal PDF and the one in the determinant computation
**In general, we want to make sure that all variables are transformed**
* We need to be careful to define the $x^1, x^2$ components on different layers...
* ...So that no variable is passed forward unchanged along the whole network
A simple approach: _alternate the roles_ (i.e. swap the role of $x^1, x^2$ at every layer)
## Real NVP as Generative Models
**Since Real NVPs are invertible, they can be used as _generative models_**
Formally, they can _sample_ from the distribution they have learned
* We just need to sample from $p_z$, i.e. on the latent space
  - ...And this is easy since the distribution is simple and known
* Then we go through the whole architecture _backwards_
- ...Using the inverted version of the affine coupling layers
**In fact, generating data is often their _primary purpose_**
They can (or could) be used for:
* Super resolution
* Procedural content generation
* Data augmentation (relevant in an industrial context)
Recent versions allow for [data generation with controlled attributes](https://openai.com/blog/glow/)
# Implementing Real NVPs
## Implementing Real NVPs
**We will now see how to implement Real NVPs**
The basis for our code comes from the [official keras documentation](https://keras.io/examples/generative/real_nvp/)
* It will rely partially on low-level APIs of keras
We start by importing several packages:
```python
import tensorflow as tf
import tensorflow_probability as tfp
from tensorflow.keras.layers import Dense
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.regularizers import l2
from sklearn.datasets import make_moons
```
* `tensorflow_probability` is a tensorflow extension for probabilistic computations
* ...And allows for easy manipulation of probability distributions
## Affine Coupling Layer
**Then we define a function to build each _affine coupling layer_:**
```python
def coupling(input_shape, nunits=64, nhidden=2, reg=0.01):
assert(nhidden >= 0)
x = keras.layers.Input(shape=input_shape)
# Build the layers for the t transformation (translation)
t = x
for i in range(nhidden):
t = Dense(nunits, activation="relu", kernel_regularizer=l2(reg))(t)
t = Dense(input_shape, activation="linear", kernel_regularizer=l2(reg))(t)
# Build the layers for the s transformation (scale)
s = x
for i in range(nhidden):
s = Dense(nunits, activation="relu", kernel_regularizer=l2(reg))(s)
s = Dense(input_shape, activation="tanh", kernel_regularizer=l2(reg))(s)
# Return the layers, wrapped in a keras Model object
return keras.Model(inputs=x, outputs=[s, t])
```
## Affine Coupling Layer
**This part of the code builds _the translation (i.e. $t$) function:_**
```python
def coupling(input_shape, nunits=64, nhidden=2, reg=0.01):
...
x = keras.layers.Input(shape=input_shape)
t = x
for i in range(nhidden):
t = Dense(nunits, activation="relu", kernel_regularizer=l2(reg))(t)
t = Dense(input_shape, activation="linear", kernel_regularizer=l2(reg))(t)
...
```
* It's _just a Multi-Layer Perceptron_ built using the functional API
* The output represents an offset, hence the "linear" activation function in the last layer
## Affine Coupling Layer
**This part of the code builds _the translation (i.e. $t$) function:_**
```python
def coupling(input_shape, nunits=64, nhidden=2, reg=0.01):
...
x = keras.layers.Input(shape=input_shape)
t = x
for i in range(nhidden):
t = Dense(nunits, activation="relu", kernel_regularizer=l2(reg))(t)
t = Dense(input_shape, activation="linear", kernel_regularizer=l2(reg))(t)
...
```
* The output and input have the same shape, but $x^1$ and $x^2$ may have _different size_
* This will be resolved by _masking_ some of the output of the affine layer
* ...The masked portions _will have no effect_, with effectively the same result
* The main drawback is higher memory consumption (and computational cost)
## Affine Coupling Layer
**This part of the code builds _the scaling (i.e. $s$) function:_**
```python
def coupling(input_shape, nunits=64, nhidden=2, reg=0.01):
...
x = keras.layers.Input(shape=input_shape)
...
s = x
for i in range(nhidden):
s = Dense(nunits, activation="relu", kernel_regularizer=l2(reg))(s)
s = Dense(input_shape, activation="tanh", kernel_regularizer=l2(reg))(s)
...
```
* Another MLP, with a bipolar sigmoid ("tanh") activation function in the output layer
* Using "tanh" limits the amount of scaling per affine coupling layer
* ...Which in turn makes training more numerically stable
* For the same reason, we use L2 regularizers on the MLP weights
## RNVP Model
**Then, we define a Real NVP architecture by subclassing keras.model**
```python
class RealNVP(keras.Model):
def __init__(self, input_shape, num_coupling, units_coupling=32, depth_coupling=0,
reg_coupling=0.01): ...
@property
def metrics(self): ...
def call(self, x, training=True): ...
def log_loss(self, x): ...
def score_samples(self, x): ...
def train_step(self, data): ...
def test_step(self, data): ...
```
* We will now discuss _the most important methods_
* Sometimes with a few simplifications (for sake of clarity)
## RNVP Model
**The `__init__` method (constructor) initializes the internal fields**
```python
def __init__(self, input_shape, num_coupling, units_coupling=32, depth_coupling=0,
reg_coupling=0.01):
    super(RealNVP, self).__init__()
    self.num_coupling = num_coupling
self.distribution = tfp.distributions.MultivariateNormalDiag(
loc=np.zeros(input_shape, dtype=np.float32),
scale_diag=np.ones(input_shape, dtype=np.float32)
)
half_n = int(np.ceil(input_shape/2))
m1 = ([0, 1] * half_n)[:input_shape]
m2 = ([1, 0] * half_n)[:input_shape]
self.masks = np.array([m1, m2] * (num_coupling // 2), dtype=np.float32)
self.loss_tracker = keras.metrics.Mean(name="loss")
self.layers_list = [coupling(input_shape, units_coupling, depth_coupling, reg_coupling)
for i in range(num_coupling)]
```
## RNVP Model
**The `__init__` method (constructor) initializes the internal fields**
```python
def __init__(self, input_shape, num_coupling, units_coupling=32, depth_coupling=0,
reg_coupling=0.01):
...
self.distribution = tfp.distributions.MultivariateNormalDiag(
loc=np.zeros(input_shape, dtype=np.float32),
scale_diag=np.ones(input_shape, dtype=np.float32)
)
...
```
Here we build a `tfp` object to handle the known distribution
* As is customary, we chose a Multivariate Normal distribution
* ...With independent components, zero mean, and unit standard deviation
## RNVP Model
**The `__init__` method (constructor) initializes the internal fields**
```python
def __init__(self, input_shape, num_coupling, units_coupling=32, depth_coupling=0,
reg_coupling=0.01):
...
half_n = int(np.ceil(input_shape/2))
m1 = ([0, 1] * half_n)[:input_shape]
m2 = ([1, 0] * half_n)[:input_shape]
self.masks = np.array([m1, m2] * (num_coupling // 2), dtype=np.float32)
...
```
Here we build the masks to discriminate the $x_1$ and $x_2$ components at each layer
* As in the original RNVP paper, we use an _alternating checkerboard pattern_
- I.e. we take even indexes at one layer, and odd indexes at the next layer
* ...So that all variables are transformed, if we have at least 2 affine coupling layers
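For instance (an illustrative snippet, not in the original notebook), with `input_shape=5` and `num_coupling=4` the constructor above produces the following checkerboard masks:
```python
import numpy as np

input_shape, num_coupling = 5, 4
half_n = int(np.ceil(input_shape / 2))
m1 = ([0, 1] * half_n)[:input_shape]
m2 = ([1, 0] * half_n)[:input_shape]
masks = np.array([m1, m2] * (num_coupling // 2), dtype=np.float32)
print(masks)
# [[0. 1. 0. 1. 0.]
#  [1. 0. 1. 0. 1.]
#  [0. 1. 0. 1. 0.]
#  [1. 0. 1. 0. 1.]]
```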
## RNVP Model
**The `__init__` method (constructor) initializes the internal fields**
```python
def __init__(self, input_shape, num_coupling, units_coupling=32, depth_coupling=0,
reg_coupling=0.01):
...
self.layers_list = [coupling(input_shape, units_coupling, depth_coupling, reg_coupling)
for i in range(num_coupling)]
```
Finally, here we build the model layers
* Each one consists of an affine coupling
* ...And contains in turn two Multi Layer Perceptrons
* Recall that we need at least 2 affine couplings to transform all variables
## RNVP Model
**The `call` method handles the transformation, in both directions**
```python
def call(self, x, training=True):
log_det_inv, direction = 0, 1
if training: direction = -1
for i in range(self.num_coupling)[::direction]:
x_masked = x * self.masks[i]
reversed_mask = 1 - self.masks[i]
s, t = self.layers_list[i](x_masked)
s, t = s*reversed_mask, t*reversed_mask
gate = (direction - 1) / 2
x = reversed_mask * (x * tf.exp(direction * s) + direction * t * tf.exp(gate * s)) \
+ x_masked
log_det_inv += gate * tf.reduce_sum(s, axis=1)
return x, log_det_inv
```
## RNVP Model
**The `call` method handles the transformation, in both directions**
```python
def call(self, x, training=True):
log_det_inv, direction = 0, 1
if training: direction = -1
for i in range(self.num_coupling)[::direction]:
...
```
The `direction` variable controls the direction of the transformation
* By default, this implementation transforms $z$ into $x$
- I.e. it works _backwards_, compared to our theoretical discussion
* This is the case since RNVP are often mainly used as _generative models_
* At training time, we always want to transform $x$ into $z$
* ...And this is why `direction = -1` when `training` is `True`
## RNVP Model
**The `call` method handles the transformation, in both directions**
```python
def call(self, x, training=True):
for i in range(self.num_coupling)[::direction]:
x_masked = x * self.masks[i]
reversed_mask = 1 - self.masks[i]
s, t = self.layers_list[i](x_masked)
s, t = s*reversed_mask, t*reversed_mask
...
```
* Here we mask $x$, i.e. filter the $x_1$ subset of variables
* ...We compute the values of the $s$ and $t$ functions
* Then we filter such values using the reversed (i.e. negated) mask
* I.e. prepare $s$ and $t$ for their application to the $x_2$ subset
## RNVP Model
**The `call` method handles the transformation, in both directions**
```python
def call(self, x, training=True):
...
gate = (direction - 1) / 2
x = reversed_mask * (x * tf.exp(direction * s) + direction * t * tf.exp(gate * s)) \
+ x_masked
...
```
Here we compute the main transformation (backwards, as mentioned):
* If `training = True`, we have `direction = -1` and we compute:
$$\begin{align}
x^1 &= y^1 \\
x^2 &= (y^2 - t(y^1)) \oslash e^{s(y^1)}
\end{align}$$
## RNVP Model
**The `call` method handles the transformation, in both directions**
```python
def call(self, x, training=True):
...
gate = (direction - 1) / 2
x = reversed_mask * (x * tf.exp(direction * s) + direction * t * tf.exp(gate * s)) \
+ x_masked
...
```
Here we compute the main transformation (backwards, as mentioned):
* If `training = False`, we have `direction = 1` and we compute:
$$\begin{align}
y^1 &= x^1 \\
y^2 &= e^{s(x^1)} \odot x^2 + t(x^1)
\end{align}$$
## RNVP Model
**The `call` method handles the transformation, in both directions**
```python
def call(self, x, training=True):
...
for i in range(self.num_coupling)[::direction]:
...
log_det_inv += gate * tf.reduce_sum(s, axis=1)
return x, log_det_inv
```
At each layer, we also compute the $\log \det$ of the Jacobian
* ...Which is simply the sum of the $s$ function values
* Determinants of different layers should be multiplied (due to the chain rule)...
* ...Which means that their $\log$ is simply summed
At the end of the process, the determinant has been computed
## RNVP Model
**The `score_samples` method performs _density estimation_**
```python
def score_samples(self, x):
y, logdet = self(x)
log_probs = self.distribution.log_prob(y) + logdet
return log_probs
```
The process relies on the change of variable formula:
* First, it triggers the `call` method with `training=True`
- I.e. transforms data points $x$ into their latent representation $z$
* Then, it computes the (log) density of $z$
  - Using `tensorflow_probability` comes in handy at this point
* ...And then sums the log determinant
## RNVP Model
**The `log_loss` method computes the _loss function_**
```python
def log_loss(self, x):
log_densities = self.score_samples(x)
return -tf.reduce_mean(log_densities)
```
This is done by:
* Obtaining the estimated densities via `score_samples`
* ...Averaging the log densities (a sum in log scale corresponds to a product in the original scale)
* ...And finally swapping the sign of the result
- ...Since we want to _maximize_ the likelihood
## RNVP Model
**The `train_step` method is called by the keras `fit` method**
```python
def train_step(self, data):
with tf.GradientTape() as tape:
loss = self.log_loss(data)
g = tape.gradient(loss, self.trainable_variables)
self.optimizer.apply_gradients(zip(g, self.trainable_variables))
self.loss_tracker.update_state(loss)
return {"loss": self.loss_tracker.result()}
```
The `GradientTape` is _how tensorflow handles differentiation_
* All tensor operations made in the scope of a `GradientTape` are tracked
* ...So that a gradient can then be extracted
* Then we apply the gradient to the model weights (using the optimizer)
* ...And finally we track the loss
# Using Real NVPs
## Using Real NVP
**We are ready to test our model**
We will use a classical benchmark for density estimation (shaped like two half moons)
```python
from sklearn.datasets import make_moons
data = make_moons(3000, noise=0.05)[0].astype(np.float32)
nn.plot_distribution_2D(samples=data, figsize=figsize)
```
* We use `float32` numbers for easier interplay with tensorflow
## Training
**Now, we need to train a Real NVP model**
* We will use the whole dataset (this is just a simple test)
* ...But first, we need to _standardize_ it
```python
data_s = (data - data.mean(axis=0)) / data.std(axis=0)
```
**Standardization is very important when using Real NVPs**
* This is true for Neural Networks in general, for the usual reasons
* But even more in this case, since _the distribution for $z$ is standardized_
- Standardizing the data makes it easier to learn a mapping
## Training
**Next we can perform training, as usual in keras**
```python
from tensorflow.keras.callbacks import EarlyStopping
model = nn.RealNVP(input_shape=2, num_coupling=10, units_coupling=32, depth_coupling=2, reg_coupling=0.01)
model.compile(optimizer='Adam')
cb = [EarlyStopping(monitor='loss', patience=40, min_delta=0.0001, restore_best_weights=True)]
history = model.fit(data_s, batch_size=256, epochs=200, verbose=1, callbacks=cb)
```
Epoch 1/200
12/12 [==============================] - 4s 4ms/step - loss: 2.9582
Epoch 2/200
12/12 [==============================] - 0s 4ms/step - loss: 2.7342
Epoch 3/200
12/12 [==============================] - 0s 4ms/step - loss: 2.5690
Epoch 4/200
12/12 [==============================] - 0s 4ms/step - loss: 2.5072
Epoch 5/200
12/12 [==============================] - 0s 4ms/step - loss: 2.4623
Epoch 6/200
12/12 [==============================] - 0s 4ms/step - loss: 2.4218
Epoch 7/200
12/12 [==============================] - 0s 4ms/step - loss: 2.3766
Epoch 8/200
12/12 [==============================] - 0s 4ms/step - loss: 2.3357
Epoch 9/200
12/12 [==============================] - 0s 4ms/step - loss: 2.2893
Epoch 10/200
12/12 [==============================] - 0s 4ms/step - loss: 2.2384
Epoch 11/200
12/12 [==============================] - 0s 4ms/step - loss: 2.1972
Epoch 12/200
12/12 [==============================] - 0s 4ms/step - loss: 2.1474
Epoch 13/200
12/12 [==============================] - 0s 4ms/step - loss: 2.0999
Epoch 14/200
12/12 [==============================] - 0s 4ms/step - loss: 2.0578
Epoch 15/200
12/12 [==============================] - 0s 4ms/step - loss: 2.0285
Epoch 16/200
12/12 [==============================] - 0s 4ms/step - loss: 2.0015
Epoch 17/200
12/12 [==============================] - 0s 4ms/step - loss: 1.9644
Epoch 18/200
12/12 [==============================] - 0s 4ms/step - loss: 1.9246
Epoch 19/200
12/12 [==============================] - 0s 4ms/step - loss: 1.8893
Epoch 20/200
12/12 [==============================] - 0s 4ms/step - loss: 1.8999
Epoch 21/200
12/12 [==============================] - 0s 4ms/step - loss: 1.8473
Epoch 22/200
12/12 [==============================] - 0s 4ms/step - loss: 1.8471
Epoch 23/200
12/12 [==============================] - 0s 4ms/step - loss: 1.8188
Epoch 24/200
12/12 [==============================] - 0s 4ms/step - loss: 1.8037
Epoch 25/200
12/12 [==============================] - 0s 4ms/step - loss: 1.7702
Epoch 26/200
12/12 [==============================] - 0s 4ms/step - loss: 1.7690
Epoch 27/200
12/12 [==============================] - 0s 4ms/step - loss: 1.7703
Epoch 28/200
12/12 [==============================] - 0s 5ms/step - loss: 1.7137
Epoch 29/200
12/12 [==============================] - 0s 6ms/step - loss: 1.7134
Epoch 30/200
12/12 [==============================] - 0s 6ms/step - loss: 1.6914
Epoch 31/200
12/12 [==============================] - 0s 6ms/step - loss: 1.6766
Epoch 32/200
12/12 [==============================] - 0s 6ms/step - loss: 1.6859
Epoch 33/200
12/12 [==============================] - 0s 6ms/step - loss: 1.6902
Epoch 34/200
12/12 [==============================] - 0s 6ms/step - loss: 1.6717
Epoch 35/200
12/12 [==============================] - 0s 5ms/step - loss: 1.6347
Epoch 36/200
12/12 [==============================] - 0s 5ms/step - loss: 1.6327
Epoch 37/200
12/12 [==============================] - 0s 4ms/step - loss: 1.6462
Epoch 38/200
12/12 [==============================] - 0s 4ms/step - loss: 1.6361
Epoch 39/200
12/12 [==============================] - 0s 4ms/step - loss: 1.6378
Epoch 40/200
12/12 [==============================] - 0s 4ms/step - loss: 1.6284
Epoch 41/200
12/12 [==============================] - 0s 4ms/step - loss: 1.6124
Epoch 42/200
12/12 [==============================] - 0s 4ms/step - loss: 1.6401
Epoch 43/200
12/12 [==============================] - 0s 4ms/step - loss: 1.5696
Epoch 44/200
12/12 [==============================] - 0s 4ms/step - loss: 1.5517
Epoch 45/200
12/12 [==============================] - 0s 4ms/step - loss: 1.5557
Epoch 46/200
12/12 [==============================] - 0s 4ms/step - loss: 1.5577
Epoch 47/200
12/12 [==============================] - 0s 5ms/step - loss: 1.5653
Epoch 48/200
12/12 [==============================] - 0s 6ms/step - loss: 1.5537
Epoch 49/200
12/12 [==============================] - 0s 5ms/step - loss: 1.5318
Epoch 50/200
12/12 [==============================] - 0s 4ms/step - loss: 1.5194
Epoch 51/200
12/12 [==============================] - 0s 4ms/step - loss: 1.5062
Epoch 52/200
12/12 [==============================] - 0s 4ms/step - loss: 1.5209
Epoch 53/200
12/12 [==============================] - 0s 4ms/step - loss: 1.4900
Epoch 54/200
12/12 [==============================] - 0s 5ms/step - loss: 1.5006
Epoch 55/200
12/12 [==============================] - 0s 4ms/step - loss: 1.5075
Epoch 56/200
12/12 [==============================] - 0s 4ms/step - loss: 1.4839
Epoch 57/200
12/12 [==============================] - 0s 4ms/step - loss: 1.5291
Epoch 58/200
12/12 [==============================] - 0s 4ms/step - loss: 1.4826
Epoch 59/200
12/12 [==============================] - 0s 5ms/step - loss: 1.4759
Epoch 60/200
12/12 [==============================] - 0s 5ms/step - loss: 1.4576
Epoch 61/200
12/12 [==============================] - 0s 4ms/step - loss: 1.4511
Epoch 62/200
12/12 [==============================] - 0s 4ms/step - loss: 1.4506
Epoch 63/200
12/12 [==============================] - 0s 4ms/step - loss: 1.6070
Epoch 64/200
12/12 [==============================] - 0s 4ms/step - loss: 1.5913
Epoch 65/200
12/12 [==============================] - 0s 4ms/step - loss: 1.5237
Epoch 66/200
12/12 [==============================] - 0s 4ms/step - loss: 1.4656
Epoch 67/200
12/12 [==============================] - 0s 4ms/step - loss: 1.4327
Epoch 68/200
12/12 [==============================] - 0s 4ms/step - loss: 1.4296
Epoch 69/200
12/12 [==============================] - 0s 4ms/step - loss: 1.4188
Epoch 70/200
12/12 [==============================] - 0s 4ms/step - loss: 1.4284
Epoch 71/200
12/12 [==============================] - 0s 4ms/step - loss: 1.4163
Epoch 72/200
12/12 [==============================] - 0s 4ms/step - loss: 1.4116
Epoch 73/200
12/12 [==============================] - 0s 4ms/step - loss: 1.4076
Epoch 74/200
12/12 [==============================] - 0s 4ms/step - loss: 1.3908
Epoch 75/200
12/12 [==============================] - 0s 4ms/step - loss: 1.3748
Epoch 76/200
12/12 [==============================] - 0s 4ms/step - loss: 1.3666
Epoch 77/200
12/12 [==============================] - 0s 4ms/step - loss: 1.3662
Epoch 78/200
12/12 [==============================] - 0s 4ms/step - loss: 1.3726
Epoch 79/200
12/12 [==============================] - 0s 4ms/step - loss: 1.3892
Epoch 80/200
12/12 [==============================] - 0s 4ms/step - loss: 1.3593
Epoch 81/200
12/12 [==============================] - 0s 4ms/step - loss: 1.3530
Epoch 82/200
12/12 [==============================] - 0s 5ms/step - loss: 1.3609
Epoch 83/200
12/12 [==============================] - 0s 6ms/step - loss: 1.3697
Epoch 84/200
12/12 [==============================] - 0s 6ms/step - loss: 1.3839
Epoch 85/200
12/12 [==============================] - 0s 5ms/step - loss: 1.3756
Epoch 86/200
12/12 [==============================] - 0s 5ms/step - loss: 1.3377
Epoch 87/200
12/12 [==============================] - 0s 4ms/step - loss: 1.3218
Epoch 88/200
12/12 [==============================] - 0s 4ms/step - loss: 1.3029
Epoch 89/200
12/12 [==============================] - 0s 4ms/step - loss: 1.3069
Epoch 90/200
12/12 [==============================] - 0s 4ms/step - loss: 1.3089
Epoch 91/200
12/12 [==============================] - 0s 4ms/step - loss: 1.3012
Epoch 92/200
12/12 [==============================] - 0s 4ms/step - loss: 1.3344
Epoch 93/200
12/12 [==============================] - 0s 4ms/step - loss: 1.3451
Epoch 94/200
12/12 [==============================] - 0s 4ms/step - loss: 1.3361
Epoch 95/200
12/12 [==============================] - 0s 4ms/step - loss: 1.3095
Epoch 96/200
12/12 [==============================] - 0s 4ms/step - loss: 1.3090
Epoch 97/200
12/12 [==============================] - 0s 4ms/step - loss: 1.2886
Epoch 98/200
12/12 [==============================] - 0s 4ms/step - loss: 1.2938
Epoch 99/200
12/12 [==============================] - 0s 4ms/step - loss: 1.3136
Epoch 100/200
12/12 [==============================] - 0s 4ms/step - loss: 1.3002
Epoch 101/200
12/12 [==============================] - 0s 4ms/step - loss: 1.2870
Epoch 102/200
12/12 [==============================] - 0s 4ms/step - loss: 1.2937
Epoch 103/200
12/12 [==============================] - 0s 5ms/step - loss: 1.3117
Epoch 104/200
12/12 [==============================] - 0s 5ms/step - loss: 1.2813
Epoch 105/200
12/12 [==============================] - 0s 5ms/step - loss: 1.2606
Epoch 106/200
12/12 [==============================] - 0s 5ms/step - loss: 1.3260
Epoch 107/200
12/12 [==============================] - 0s 5ms/step - loss: 1.2983
Epoch 108/200
12/12 [==============================] - 0s 5ms/step - loss: 1.2689
Epoch 109/200
12/12 [==============================] - 0s 5ms/step - loss: 1.2598
Epoch 110/200
12/12 [==============================] - 0s 5ms/step - loss: 1.2481
Epoch 111/200
12/12 [==============================] - 0s 5ms/step - loss: 1.2718
Epoch 112/200
12/12 [==============================] - 0s 6ms/step - loss: 1.3031
Epoch 113/200
12/12 [==============================] - 0s 4ms/step - loss: 1.3096
Epoch 114/200
12/12 [==============================] - 0s 4ms/step - loss: 1.3070
Epoch 115/200
12/12 [==============================] - 0s 4ms/step - loss: 1.2780
Epoch 116/200
12/12 [==============================] - 0s 4ms/step - loss: 1.2654
Epoch 117/200
12/12 [==============================] - 0s 4ms/step - loss: 1.2539
Epoch 118/200
12/12 [==============================] - 0s 4ms/step - loss: 1.2532
Epoch 119/200
12/12 [==============================] - 0s 4ms/step - loss: 1.2457
Epoch 120/200
12/12 [==============================] - 0s 5ms/step - loss: 1.2600
Epoch 121/200
12/12 [==============================] - 0s 5ms/step - loss: 1.2409
Epoch 122/200
12/12 [==============================] - 0s 5ms/step - loss: 1.2498
Epoch 123/200
12/12 [==============================] - 0s 5ms/step - loss: 1.2358
Epoch 124/200
12/12 [==============================] - 0s 4ms/step - loss: 1.2445
Epoch 125/200
12/12 [==============================] - 0s 4ms/step - loss: 1.2543
Epoch 126/200
12/12 [==============================] - 0s 4ms/step - loss: 1.2642
Epoch 127/200
12/12 [==============================] - 0s 4ms/step - loss: 1.2511
Epoch 128/200
12/12 [==============================] - 0s 5ms/step - loss: 1.2336
Epoch 129/200
12/12 [==============================] - 0s 4ms/step - loss: 1.2400
Epoch 130/200
12/12 [==============================] - 0s 4ms/step - loss: 1.2448
Epoch 131/200
12/12 [==============================] - 0s 4ms/step - loss: 1.2364
Epoch 132/200
12/12 [==============================] - 0s 4ms/step - loss: 1.2631
Epoch 133/200
12/12 [==============================] - 0s 4ms/step - loss: 1.2741
Epoch 134/200
12/12 [==============================] - 0s 4ms/step - loss: 1.2498
Epoch 135/200
12/12 [==============================] - 0s 4ms/step - loss: 1.2297
Epoch 136/200
12/12 [==============================] - 0s 4ms/step - loss: 1.2220
Epoch 137/200
12/12 [==============================] - 0s 4ms/step - loss: 1.2168
Epoch 138/200
12/12 [==============================] - 0s 4ms/step - loss: 1.2350
Epoch 139/200
12/12 [==============================] - 0s 4ms/step - loss: 1.2172
Epoch 140/200
12/12 [==============================] - 0s 5ms/step - loss: 1.2114
Epoch 141/200
12/12 [==============================] - 0s 6ms/step - loss: 1.2108
Epoch 142/200
12/12 [==============================] - 0s 6ms/step - loss: 1.2151
Epoch 143/200
12/12 [==============================] - 0s 5ms/step - loss: 1.2098
Epoch 144/200
12/12 [==============================] - 0s 4ms/step - loss: 1.2037
Epoch 145/200
12/12 [==============================] - 0s 4ms/step - loss: 1.2017
Epoch 146/200
12/12 [==============================] - 0s 5ms/step - loss: 1.2081
Epoch 147/200
12/12 [==============================] - 0s 6ms/step - loss: 1.2162
Epoch 148/200
12/12 [==============================] - 0s 6ms/step - loss: 1.2217
Epoch 149/200
12/12 [==============================] - 0s 5ms/step - loss: 1.2057
Epoch 150/200
12/12 [==============================] - 0s 4ms/step - loss: 1.2211
Epoch 151/200
12/12 [==============================] - 0s 4ms/step - loss: 1.2013
Epoch 152/200
12/12 [==============================] - 0s 4ms/step - loss: 1.1969
Epoch 153/200
12/12 [==============================] - 0s 4ms/step - loss: 1.2202
Epoch 154/200
12/12 [==============================] - 0s 4ms/step - loss: 1.2212
Epoch 155/200
12/12 [==============================] - 0s 4ms/step - loss: 1.2261
Epoch 156/200
12/12 [==============================] - 0s 5ms/step - loss: 1.2378
Epoch 157/200
12/12 [==============================] - 0s 6ms/step - loss: 1.2072
Epoch 158/200
12/12 [==============================] - 0s 6ms/step - loss: 1.2131
Epoch 159/200
12/12 [==============================] - 0s 6ms/step - loss: 1.2159
Epoch 160/200
12/12 [==============================] - 0s 5ms/step - loss: 1.2079
Epoch 161/200
12/12 [==============================] - 0s 4ms/step - loss: 1.2056
Epoch 162/200
12/12 [==============================] - 0s 4ms/step - loss: 1.1981
Epoch 163/200
12/12 [==============================] - 0s 4ms/step - loss: 1.2145
Epoch 164/200
12/12 [==============================] - 0s 4ms/step - loss: 1.2155
Epoch 165/200
12/12 [==============================] - 0s 4ms/step - loss: 1.1989
Epoch 166/200
12/12 [==============================] - 0s 5ms/step - loss: 1.1963
Epoch 167/200
12/12 [==============================] - 0s 6ms/step - loss: 1.2261
Epoch 168/200
12/12 [==============================] - 0s 6ms/step - loss: 1.2210
Epoch 169/200
12/12 [==============================] - 0s 4ms/step - loss: 1.2309
Epoch 170/200
12/12 [==============================] - 0s 4ms/step - loss: 1.2015
Epoch 171/200
12/12 [==============================] - 0s 4ms/step - loss: 1.1927
Epoch 172/200
12/12 [==============================] - 0s 4ms/step - loss: 1.2029
Epoch 173/200
12/12 [==============================] - 0s 4ms/step - loss: 1.2130
Epoch 174/200
12/12 [==============================] - 0s 6ms/step - loss: 1.2294
Epoch 175/200
12/12 [==============================] - 0s 6ms/step - loss: 1.1953
Epoch 176/200
12/12 [==============================] - 0s 5ms/step - loss: 1.1965
Epoch 177/200
12/12 [==============================] - 0s 4ms/step - loss: 1.1975
Epoch 178/200
12/12 [==============================] - 0s 4ms/step - loss: 1.1932
Epoch 179/200
12/12 [==============================] - 0s 4ms/step - loss: 1.1830
Epoch 180/200
12/12 [==============================] - 0s 5ms/step - loss: 1.1871
Epoch 181/200
12/12 [==============================] - 0s 6ms/step - loss: 1.1859
Epoch 182/200
12/12 [==============================] - 0s 5ms/step - loss: 1.1701
Epoch 183/200
12/12 [==============================] - 0s 4ms/step - loss: 1.1729
Epoch 184/200
12/12 [==============================] - 0s 4ms/step - loss: 1.1856
Epoch 185/200
12/12 [==============================] - 0s 4ms/step - loss: 1.1811
Epoch 186/200
12/12 [==============================] - 0s 5ms/step - loss: 1.1907
Epoch 187/200
12/12 [==============================] - 0s 6ms/step - loss: 1.2068
Epoch 188/200
12/12 [==============================] - 0s 5ms/step - loss: 1.2103
Epoch 189/200
12/12 [==============================] - 0s 4ms/step - loss: 1.1961
Epoch 190/200
12/12 [==============================] - 0s 4ms/step - loss: 1.2080
Epoch 191/200
12/12 [==============================] - 0s 4ms/step - loss: 1.1792
Epoch 192/200
12/12 [==============================] - 0s 4ms/step - loss: 1.1700
Epoch 193/200
12/12 [==============================] - 0s 6ms/step - loss: 1.1747
Epoch 194/200
12/12 [==============================] - 0s 5ms/step - loss: 1.1817
Epoch 195/200
12/12 [==============================] - 0s 4ms/step - loss: 1.1716
Epoch 196/200
12/12 [==============================] - 0s 4ms/step - loss: 1.1947
Epoch 197/200
12/12 [==============================] - 0s 4ms/step - loss: 1.1874
Epoch 198/200
12/12 [==============================] - 0s 5ms/step - loss: 1.1886
Epoch 199/200
12/12 [==============================] - 0s 5ms/step - loss: 1.2214
Epoch 200/200
12/12 [==============================] - 0s 6ms/step - loss: 1.2064
## Training
**As usual with NNs, choosing the right architecture can be complicated**
```python
model = nn.RealNVP(input_shape=2, num_coupling=10, units_coupling=32, depth_coupling=2, reg_coupling=0.01)
```
* We went for a relatively deep model (10 affine couplings)
* Each coupling also has a good degree of non-linearity (2 hidden layers)
* We used a small degree of L2 regularization to stabilize the training process
**We also use relatively _large batch size_**
```python
history = model.fit(data_s, batch_size=256, epochs=200, verbose=2, callbacks=cb)
```
* Large batch sizes are usually a good choice with density estimation approaches
* Batches should ideally be representative of the distribution
## Training
**Let's see the evolution of the training loss over time**
```python
nn.plot_training_history(history, figsize=figsize)
```
## Latent Space Representation
**We can obtain the latent space representation by calling the trained model**
This will trigger the `call` method with default parameters (i.e. `training=True`)
```python
z, _ = model(data_s)
nn.plot_distribution_2D(samples=z, figsize=figsize)
```
## Density Estimation
**We can estimate the density of any data point**
```python
nn.plot_distribution_2D(estimator=model, xr=np.linspace(-2, 2, 100, dtype=np.float32),
yr=np.linspace(-2, 2, 100, dtype=np.float32), figsize=figsize)
```
* A good approximation! With a strange low-density connection between the moons
## Data Generation
**We can also generate data, by sampling from $p_z$ and then calling `predict`**
This will trigger the `call` method with `training=False`
```python
samples = model.distribution.sample(3000)
x, _ = model.predict(samples)
nn.plot_distribution_2D(samples=x, figsize=figsize)
```
## Data Generation
**We can also plot the mapping for selected data points...**
...Which gives an intuition of how the transformation works
```python
nn.plot_rnvp_transformation(model, figsize=figsize)
```
# RNVP for Anomaly Detection
## RNVP for Anomaly Detection
**RNVPs can be used for anomaly detection like any other density estimator**
First, we build and compile the model (for the HPC data)
```python
input_shape = len(hpc_in)
hpc_rnvp = nn.RealNVP(input_shape=input_shape,
num_coupling=6, units_coupling=32, depth_coupling=1, reg_coupling=0.01)
hpc_rnvp.compile(optimizer='Adam')
```
We chose a _simpler_ architecture this time
* With RNVP, dealing with higher dimensional data has actually some advantage
* In particular, we have richer input for the $s$ and $t$ functions
- In the "moons" dataset, $s$ and $t$ had 2/2 = 1 input feature
- Now we have 159/2 = 79--80 features
## RNVP for Anomaly Detection
**Then we perform training as usual**
```python
X = trdata[hpc_in].astype(np.float32).values
cb = [EarlyStopping(monitor='loss', patience=10, min_delta=0.001, restore_best_weights=True)]
history = hpc_rnvp.fit(X, batch_size=256, epochs=100, verbose=1, callbacks=cb)
```
Epoch 1/100
12/12 [==============================] - 2s 7ms/step - loss: 719.0934
Epoch 2/100
12/12 [==============================] - 0s 7ms/step - loss: 905.2411
Epoch 3/100
12/12 [==============================] - 0s 7ms/step - loss: 412.6078
Epoch 4/100
12/12 [==============================] - 0s 7ms/step - loss: 1169.5496
Epoch 5/100
12/12 [==============================] - 0s 6ms/step - loss: 754.5610
Epoch 6/100
12/12 [==============================] - 0s 7ms/step - loss: 678.0566
Epoch 7/100
12/12 [==============================] - 0s 7ms/step - loss: 434.5121
Epoch 8/100
12/12 [==============================] - 0s 7ms/step - loss: 365.8380
Epoch 9/100
12/12 [==============================] - 0s 7ms/step - loss: 264.4407
Epoch 10/100
12/12 [==============================] - 0s 7ms/step - loss: 195.1885
Epoch 11/100
12/12 [==============================] - 0s 6ms/step - loss: 151.8016
Epoch 12/100
12/12 [==============================] - 0s 7ms/step - loss: 123.2256
Epoch 13/100
12/12 [==============================] - 0s 7ms/step - loss: 85.4567
Epoch 14/100
12/12 [==============================] - 0s 6ms/step - loss: 126.3301
Epoch 15/100
12/12 [==============================] - 0s 6ms/step - loss: 859.1494
Epoch 16/100
12/12 [==============================] - 0s 6ms/step - loss: 63.1145
Epoch 17/100
12/12 [==============================] - 0s 6ms/step - loss: 49.0259
Epoch 18/100
12/12 [==============================] - 0s 6ms/step - loss: 41.8723
Epoch 19/100
12/12 [==============================] - 0s 6ms/step - loss: 23.2193
Epoch 20/100
12/12 [==============================] - 0s 6ms/step - loss: 1.9452
Epoch 21/100
12/12 [==============================] - 0s 6ms/step - loss: -11.8411
Epoch 22/100
12/12 [==============================] - 0s 7ms/step - loss: -21.3956
Epoch 23/100
12/12 [==============================] - 0s 6ms/step - loss: -29.9412
Epoch 24/100
12/12 [==============================] - 0s 6ms/step - loss: -36.1167
Epoch 25/100
12/12 [==============================] - 0s 6ms/step - loss: -42.5414
Epoch 26/100
12/12 [==============================] - 0s 8ms/step - loss: -47.1233
Epoch 27/100
12/12 [==============================] - 0s 8ms/step - loss: -52.5653
Epoch 28/100
12/12 [==============================] - 0s 6ms/step - loss: -50.0787
Epoch 29/100
12/12 [==============================] - 0s 6ms/step - loss: -61.3525
Epoch 30/100
12/12 [==============================] - 0s 7ms/step - loss: -65.5290
Epoch 31/100
12/12 [==============================] - 0s 6ms/step - loss: -69.6306
Epoch 32/100
12/12 [==============================] - 0s 7ms/step - loss: -73.5066
Epoch 33/100
12/12 [==============================] - 0s 7ms/step - loss: -76.8681
Epoch 34/100
12/12 [==============================] - 0s 6ms/step - loss: -79.3996
Epoch 35/100
12/12 [==============================] - 0s 6ms/step - loss: -83.0936
Epoch 36/100
12/12 [==============================] - 0s 6ms/step - loss: -86.4273
Epoch 37/100
12/12 [==============================] - 0s 7ms/step - loss: -57.7764
Epoch 38/100
12/12 [==============================] - 0s 6ms/step - loss: 4.6803
Epoch 39/100
12/12 [==============================] - 0s 7ms/step - loss: 78.7103
Epoch 40/100
12/12 [==============================] - 0s 7ms/step - loss: 77.7866
Epoch 41/100
12/12 [==============================] - 0s 7ms/step - loss: 518.9639
Epoch 42/100
12/12 [==============================] - 0s 7ms/step - loss: -18.1266
Epoch 43/100
12/12 [==============================] - 0s 6ms/step - loss: -51.2412
Epoch 44/100
12/12 [==============================] - 0s 6ms/step - loss: -63.3980
Epoch 45/100
12/12 [==============================] - 0s 7ms/step - loss: -79.5090
Epoch 46/100
12/12 [==============================] - 0s 7ms/step - loss: -86.7409
Epoch 47/100
12/12 [==============================] - 0s 6ms/step - loss: -91.8204
Epoch 48/100
12/12 [==============================] - 0s 6ms/step - loss: -95.7429
Epoch 49/100
12/12 [==============================] - 0s 7ms/step - loss: -94.0352
Epoch 50/100
12/12 [==============================] - 0s 7ms/step - loss: -93.8613
Epoch 51/100
12/12 [==============================] - 0s 8ms/step - loss: -95.4797
Epoch 52/100
12/12 [==============================] - 0s 7ms/step - loss: -81.3049
Epoch 53/100
12/12 [==============================] - 0s 7ms/step - loss: -96.0608
Epoch 54/100
12/12 [==============================] - 0s 7ms/step - loss: -106.8405
Epoch 55/100
12/12 [==============================] - 0s 7ms/step - loss: -109.5008
Epoch 56/100
12/12 [==============================] - 0s 8ms/step - loss: -114.8781
Epoch 57/100
12/12 [==============================] - 0s 6ms/step - loss: -116.7885
Epoch 58/100
12/12 [==============================] - 0s 6ms/step - loss: -118.7055
Epoch 59/100
12/12 [==============================] - 0s 6ms/step - loss: -120.1718
Epoch 60/100
12/12 [==============================] - 0s 6ms/step - loss: -122.3495
Epoch 61/100
12/12 [==============================] - 0s 7ms/step - loss: -117.5011
Epoch 62/100
12/12 [==============================] - 0s 7ms/step - loss: -126.3227
Epoch 63/100
12/12 [==============================] - 0s 6ms/step - loss: -128.0181
Epoch 64/100
12/12 [==============================] - 0s 6ms/step - loss: -129.3858
Epoch 65/100
12/12 [==============================] - 0s 6ms/step - loss: -130.7806
Epoch 66/100
12/12 [==============================] - 0s 6ms/step - loss: -131.5900
Epoch 67/100
12/12 [==============================] - 0s 6ms/step - loss: -133.4695
Epoch 68/100
12/12 [==============================] - 0s 6ms/step - loss: -134.7427
Epoch 69/100
12/12 [==============================] - 0s 6ms/step - loss: -136.1255
Epoch 70/100
12/12 [==============================] - 0s 6ms/step - loss: -132.4985
Epoch 71/100
12/12 [==============================] - 0s 6ms/step - loss: -138.4926
Epoch 72/100
12/12 [==============================] - 0s 6ms/step - loss: -139.9982
Epoch 73/100
12/12 [==============================] - 0s 6ms/step - loss: -139.2789
Epoch 74/100
12/12 [==============================] - 0s 6ms/step - loss: -141.9384
Epoch 75/100
12/12 [==============================] - 0s 6ms/step - loss: -142.7077
Epoch 76/100
12/12 [==============================] - 0s 6ms/step - loss: -122.0286
Epoch 77/100
12/12 [==============================] - 0s 6ms/step - loss: -138.0880
Epoch 78/100
12/12 [==============================] - 0s 6ms/step - loss: -141.1452
Epoch 79/100
12/12 [==============================] - 0s 6ms/step - loss: -146.9416
Epoch 80/100
12/12 [==============================] - 0s 6ms/step - loss: -149.3540
Epoch 81/100
12/12 [==============================] - 0s 6ms/step - loss: -151.3710
Epoch 82/100
12/12 [==============================] - 0s 6ms/step - loss: -152.5702
Epoch 83/100
12/12 [==============================] - 0s 6ms/step - loss: -153.2488
Epoch 84/100
12/12 [==============================] - 0s 6ms/step - loss: -154.0018
Epoch 85/100
12/12 [==============================] - 0s 6ms/step - loss: -156.0649
Epoch 86/100
12/12 [==============================] - 0s 6ms/step - loss: -156.7068
Epoch 87/100
12/12 [==============================] - 0s 6ms/step - loss: -157.5935
Epoch 88/100
12/12 [==============================] - 0s 7ms/step - loss: -158.7546
Epoch 89/100
12/12 [==============================] - 0s 7ms/step - loss: -159.1322
Epoch 90/100
12/12 [==============================] - 0s 6ms/step - loss: -160.5154
Epoch 91/100
12/12 [==============================] - 0s 6ms/step - loss: -151.3180
Epoch 92/100
12/12 [==============================] - 0s 7ms/step - loss: -159.1134
Epoch 93/100
12/12 [==============================] - 0s 7ms/step - loss: -159.9041
Epoch 94/100
12/12 [==============================] - 0s 7ms/step - loss: -161.6100
Epoch 95/100
12/12 [==============================] - 0s 6ms/step - loss: -162.6764
Epoch 96/100
12/12 [==============================] - 0s 6ms/step - loss: -162.8461
Epoch 97/100
12/12 [==============================] - 0s 6ms/step - loss: -163.9352
Epoch 98/100
12/12 [==============================] - 0s 6ms/step - loss: -165.5593
Epoch 99/100
12/12 [==============================] - 0s 6ms/step - loss: -166.6348
Epoch 100/100
12/12 [==============================] - 0s 6ms/step - loss: -167.3380
## RNVP for Anomaly Detection
**Here is the loss evolution over time**
```python
nn.plot_training_history(history, figsize=figsize)
```
## RNVP for Anomaly Detection
**Then we can generate a signal as usual**
```python
X = hpcs[hpc_in].astype(np.float32).values
signal_hpc = pd.Series(index=hpcs.index, data=-hpc_rnvp.score_samples(X))
nn.plot_signal(signal_hpc, hpc_labels, figsize=figsize)
```
* The signal is very similar to that of KDE (not a surprise)
## RNVP for Anomaly Detection
**Finally, we can tune the threshold**
```python
th_range = np.linspace(1e5, 1.5e6, 100)
thr, val_cost = nn.opt_threshold(signal_hpc[tr_end:val_end],
valdata['anomaly'],
th_range, cmodel)
print(f'Best threshold: {thr:.3f}')
tr_cost = cmodel.cost(signal_hpc[:tr_end], hpcs['anomaly'][:tr_end], thr)
print(f'Cost on the training set: {tr_cost}')
print(f'Cost on the validation set: {val_cost}')
ts_cost = cmodel.cost(signal_hpc[val_end:], hpcs['anomaly'][val_end:], thr)
print(f'Cost on the test set: {ts_cost}')
```
Best threshold: 1217171.717
Cost on the training set: 0
Cost on the validation set: 269
Cost on the test set: 265
* Once again, the performance is on par with KDE
* ...But we have better support for high-dimensional data!
| 8f1916cd4393c5995325b29479ba369813b1634b | 780,645 | ipynb | Jupyter Notebook | notebooks/2. Density Estimation with Neural Networks.ipynb | lompabo/aiiti-course-2021-04 | 48871506e88455adab0249dd5f8dda39b558b42c | [
"MIT"
]
| null | null | null | notebooks/2. Density Estimation with Neural Networks.ipynb | lompabo/aiiti-course-2021-04 | 48871506e88455adab0249dd5f8dda39b558b42c | [
"MIT"
]
| null | null | null | notebooks/2. Density Estimation with Neural Networks.ipynb | lompabo/aiiti-course-2021-04 | 48871506e88455adab0249dd5f8dda39b558b42c | [
"MIT"
]
| null | null | null | 124.943182 | 134,168 | 0.855084 | true | 18,028 | Qwen/Qwen-72B | 1. YES
2. YES | 0.72487 | 0.721743 | 0.52317 | __label__eng_Latn | 0.358161 | 0.053829 |
```python
from sympy import init_printing; init_printing();
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
```
```python
from silkpy import ParametricSurface
from sympy import symbols, sin, cos, pi, cot, Array, refine, Q
from silkpy.sympy_utility import dot
u, v = symbols('u, v', real=True)
```
```python
surf_choice = 'torus'
if surf_choice=='cylindrical':
R = symbols('R', positive=True)
s = ParametricSurface([u, v], [R*cos(u), R*sin(u), v])
elif surf_choice=='cone':
w = symbols('omega', real=True)
s = ParametricSurface([u, v], [v*cos(u), v*sin(u), v*cot(w)])
elif surf_choice=='Mobius':
theta = symbols('theta', real=True)
s = ParametricSurface([theta, v],
Array([cos(theta), sin(theta), 0 ]) +
Array([sin(theta/2) * cos(theta), sin(theta/2) * sin(theta), cos(theta/2)]) * v)
elif surf_choice=='torus':
from sympy import Q, ask
from sympy.assumptions import global_assumptions
a, r = symbols('a, r', real=True, positive=True)
# global_assumptions.add(Q.positive(a - r))
global_assumptions.add(Q.positive(a + r*cos(u)))
s = ParametricSurface([u, v], [ (a+r*cos(u)) * cos(v), (a+r*cos(u)) * sin(v), r*sin(u)])
```
```python
s.christoffel_symbol.tensor()
```
```python
```
```python
s.metric_tensor.tensor()
s.metric_tensor.change_config('uu').tensor()
s.christoffel_symbol.tensor()
r_u, r_v = s.expr().diff(u), s.expr().diff(v); r_u, r_v
a_, b_ = r_u, r_v
s.weingarten_matrix
```
```python
Wa = s.weingarten_transform(a_)
Wb = s.weingarten_transform(b_)
dot(Wa, b_), dot(a_, Wb)
s.K_H
s.prin_curvature_and_vector
from silkpy.sympy_utility import dot
(_, vec1), (_, vec2) = s.prin_curvature_and_vector
dot(vec1, vec2) # The two principal curvature vectors are perpendicular to each other.
```
```python
InteractiveShell.ast_node_interactivity = "last"
```
```python
from sympy import sin, cos, pi
from silkpy.numeric.surface.geodesic import geodesic_ncurve
theta = pi / 24 # symbols('theta', real=True)
t_arr, (u_arr, v_arr) = geodesic_ncurve(
s.subs({a:5, r:2}), [pi/4, pi/4], [cos(theta), sin(theta)])
```
```python
from sympy import sin, cos, pi
from silkpy.numeric.surface.geodesic import geodesic_polar_ncoordinate
rho_arr, theta_arr, u_grid, v_grid = geodesic_polar_ncoordinate(
s.subs({a:5, r:2}), [pi/4, pi/4], rho1=2.4, nrho=12, ntheta=48)
from silkpy.symbolic.geometry_map import lambdify
x_grid, y_grid, z_grid = lambdify(s.subs({a:5, r:2}))(u_grid, v_grid)
```
```python
from silkpy.symbolic.surface.draw import draw_surface_plotly
import plotly.graph_objects as go
if surf_choice=='cylindrical':
R = 1.0
s = ParametricSurface([u, v], [R*cos(u), R*sin(u), v])
elif surf_choice=='cone':
w = float(pi) / 4
s = ParametricSurface([u, v], [v*cos(u), v*sin(u), v*cot(w)] )
fig = draw_surface_plotly(s, domain=[(-2*float(pi), 2*float(pi)), (4, 6)])
elif surf_choice=='torus':
fig = draw_surface_plotly(s.subs({a: 5, r:2}), domain=[(-float(pi), float(pi)), (-float(pi), float(pi))])
# fig.add_trace(go.Scatter3d(
# x=x_arr, y=y_arr, z=z_arr,
# mode='lines',
# line=dict(color=t_arr, width=2)
# ))
import numpy as np
for i in range(len(theta_arr)):
fig.add_trace(go.Scatter3d(
x=x_grid[:, i],
y=y_grid[:, i],
z=z_grid[:, i],
mode='lines',
line=dict(color=rho_arr, width=2)
))
for i in range(len(rho_arr)):
fig.add_trace(go.Scatter3d(
x=np.r_[x_grid[i,:], x_grid[i,:]],
y=np.r_[y_grid[i,:], y_grid[i,:]],
z=np.r_[z_grid[i,:], z_grid[i,:]],
mode='lines',
line=dict(color=rho_arr[i], width=2)
))
fig.show()
```
## Not yet done
```python
from sympy import series, Eq
t0 = symbols('t_0', real=True)
```
```python
t0 = 0
exprs[0].subs(t, t0) + (t-t0) * exprs[0].diff(t, 1).subs(t, t0)
exprs[1].subs(t, t0) + (t-t0) * exprs[1].diff(t, 1).subs(t, t0)
```
```python
exprs[0].evalf(subs={t:0}) + exprs[0].diff(t, 1).evalf(subs={t:0})
```
```python
from sympy import Eq
import sympy.solvers.ode as ode
ode.systems.dsolve_system([
Eq(linearized_exprs[0], 0),
Eq(linearized_exprs[1], 0)], funcs=[u1, u2])
```
```python
```
```python
def curvature_curve(surface):
from sympy import Matrix, Array, Eq
from sympy import Function, symbols
import sympy.solvers.ode as ode
t = symbols('t', real=True)
# u1, u2 = symbols('u1, u2', real=True, cls=Function)
u1 = Function(surface.sym(0), real=True)(t)
u2 = Function(surface.sym(1), real=True)(t)
curvature_curve_mat = Matrix([
[u1.diff(t)**2, -u1.diff(t) * u2.diff(t), u2.diff(t)**2],
Array(surface.E_F_G).subs(surface.sym(0), u1),
Array(surface.L_M_N).subs(surface.sym(1), u2)])
# typically there would be two solutions
sol_with_u1_equal_t = ode.systems.dsolve_system(
[Eq(curvature_curve_mat.det(), 0 ), Eq(u1.diff(t), 1)])[0]
sol_with_u2_equal_t = ode.systems.dsolve_system(
[Eq(curvature_curve_mat.det(), 0 ), Eq(u2.diff(t), 1)])[0]
return [sol_with_u1_equal_t, sol_with_u2_equal_t]
```
```python
curvature_curve(s)
```
```python
```
| 2a0e7bd5aeaee4d1fdc542dce06e4c2bea73b869 | 9,184 | ipynb | Jupyter Notebook | nb/construct_surface.ipynb | jiaxin1996/silkpy | 7720d47b33b731d9e11e67d99c8574514b8f177b | [
"MIT"
]
| null | null | null | nb/construct_surface.ipynb | jiaxin1996/silkpy | 7720d47b33b731d9e11e67d99c8574514b8f177b | [
"MIT"
]
| null | null | null | nb/construct_surface.ipynb | jiaxin1996/silkpy | 7720d47b33b731d9e11e67d99c8574514b8f177b | [
"MIT"
]
| null | null | null | 28.171779 | 118 | 0.524064 | true | 1,689 | Qwen/Qwen-72B | 1. YES
2. YES | 0.865224 | 0.782662 | 0.677178 | __label__eng_Latn | 0.263883 | 0.411644 |
# Introduction to TensorFlow
# Notebook - Automatic Differentiation
*Disclosure: This notebook is an adaptation of Toronto's Neural Networks and Deep Learning Course (CSC421) tutorial material.*
## Setup of python packages
Google Colaboratory comes with a preinstalled Python runtime
```
# Show used python version
import sys
print(f"Used python version: {sys.version}")
```
Used python version: 3.6.9 (default, Apr 18 2020, 01:56:04)
[GCC 8.4.0]
Install any python package via pip within the virtual machine provided by colab using
!pip install <package>
Note: each time the runtime is reset, you have to reinstall all required packages.
```
# Install symbolic python package
!pip install sympy
```
Requirement already satisfied: sympy in /usr/local/lib/python3.6/dist-packages (1.1.1)
Requirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.6/dist-packages (from sympy) (1.1.0)
```
# Import the installed package
import sympy as sp
```
## Approaches for computing derivatives
* **Numeric differentiation:** Approximating derivatives by finite differences:
$$
\frac{\partial}{\partial x_i} f(x_1, \dots, x_N) = \lim_{h \to 0} \frac{f(x_1, \dots, x_i + h, \dots, x_N) - f(x_1, \dots, x_i - h, \dots, x_N)}{2h}
$$
* **Symbolic differentiation:** automatic manipulation of mathematical expressions to get derivatives
    - Takes a math expression (e.g. sigmoid) and returns a math expression: $$\sigma(x) = \frac{1}{e^{-x} + 1} \rightarrow \frac{d\sigma(x)}{dx} = \frac{e^{-x}}{(e^{-x} + 1)^2} = \sigma(x)(1 - \sigma(x))$$
- Used in SymPy or Mathematica
```
# Use SymPy for symbolic differentiation
# E.g. sigmoid of convolution of two inputs
x_1, x_2, w_1, w_2, b_1 = sp.symbols('x_1, x_2, w_1, w_2, b_1', real=True)
g = 1 / (1 + sp.exp(-(w_1 * x_1 + w_2 * x_2 + b_1)))
print(f'Mathematical Expression: $$g(x) = {sp.latex(g)}$$')
print(f'Partial Derivatives:')
for var in [x_1, x_2]:
print(f'$$ \\frac{{\\partial g}}{{\\partial {str(var)} }} = {sp.latex(sp.simplify(g.diff(var)))} $$')
```
Mathematical Expression: $$g(x) = \frac{1}{e^{- b_{1} - w_{1} x_{1} - w_{2} x_{2}} + 1}$$
Partial Derivatives:
$$ \frac{\partial g}{\partial x_1 } = \frac{w_{1} e^{- b_{1} - w_{1} x_{1} - w_{2} x_{2}}}{\left(e^{- b_{1} - w_{1} x_{1} - w_{2} x_{2}} + 1\right)^{2}} $$
$$ \frac{\partial g}{\partial x_2 } = \frac{w_{2} e^{- b_{1} - w_{1} x_{1} - w_{2} x_{2}}}{\left(e^{- b_{1} - w_{1} x_{1} - w_{2} x_{2}} + 1\right)^{2}} $$
If posted to a text cell, the code snippet generates the following latex expressions.
Mathematical Expression: $$g(x) = \frac{1}{e^{- b_{1} - w_{1} x_{1} - w_{2} x_{2}} + 1}$$
Partial Derivatives:
$$ \frac{\partial g}{\partial x_1 } = \frac{w_{1} e^{- b_{1} - w_{1} x_{1} - w_{2} x_{2}}}{\left(e^{- b_{1} - w_{1} x_{1} - w_{2} x_{2}} + 1\right)^{2}} $$
$$ \frac{\partial g}{\partial x_2 } = \frac{w_{2} e^{- b_{1} - w_{1} x_{1} - w_{2} x_{2}}}{\left(e^{- b_{1} - w_{1} x_{1} - w_{2} x_{2}} + 1\right)^{2}} $$
* **Automatic differentiation:** Takes code that computes a function and returns code that computes the derivative of that function.
- Reverse Mode AD: A method to get exact derivatives efficiently, by storing information as you go forward that you can reuse as you go backwards
    - The goal isn't to obtain closed-form solutions, but to be able to write a program that efficiently computes the derivatives (Backpropagation)
## Autograd
* [Autograd](https://github.com/HIPS/autograd) is a Python package for automatic differentiation.
* There are a lot of great [examples](https://github.com/HIPS/autograd/tree/master/examples) provided with the source code
### What can Autograd do?
From the Autograd Github repository:
* Autograd can automatically differentiate native Python and Numpy code.
* It can handle a large subset of Python's features, including loops, conditional statements (if/else), recursion and closures
* It can also compute higher-order derivatives (see the short sketch below)
* It uses reverse-mode differentiation (a.k.a. backpropagation) so it can efficiently take gradients of scalar-valued functions with respect to array-valued arguments.
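A minimal sketch of the last two points (the function `f` below is a made-up example, not taken from the Autograd docs): ordinary Python control flow is differentiated just fine, and higher-order derivatives come from nesting `grad`.
```
import autograd.numpy as np
from autograd import grad

def f(x):
    # plain Python loop and conditional expression; Autograd traces the operations at runtime
    total = 0.0
    for k in range(1, 4):
        total = total + x**k if k % 2 == 1 else total - x**k
    return total              # f(x) = x - x**2 + x**3

df = grad(f)                  # first derivative:  1 - 2x + 3x^2
d2f = grad(grad(f))           # second derivative: -2 + 6x (higher-order via nesting)
print(df(2.0), d2f(2.0))      # 9.0 10.0
```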
### Autograd vs Deep Learning Frameworks
Many Deep Learning packages implement automatic differentiation using _domain-specific languages_ within Python. Older versions, such as TensorFlow 1.X, required you to _explicitly_ construct a computation graph; Autograd constructs a computation graph _implicitly_, by tracking the sequence of operations that have been performed during the execution of a program.
Note: There is no direct GPU support for Autograd. If you're interested in automatic differentiation with support for hardware accelerators have a look at [JAX](https://github.com/google/jax). It is a successor that provides Just-in-Time compilation like PyTorch or TensorFlow.
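For comparison, a rough JAX equivalent of the same idea is sketched below. This assumes `jax` is installed, which this notebook does not do; the calls shown (`jax.grad`, `jax.jit`, `jax.numpy`) mirror the Autograd API closely.
```
import jax
import jax.numpy as jnp

def sigmoid(x):
    return 1.0 / (1.0 + jnp.exp(-x))

grad_sigmoid = jax.jit(jax.grad(sigmoid))   # gradient function, JIT-compiled
print(grad_sigmoid(2.0))                    # ~0.105, the same value Autograd gives later in this notebook
```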
## Autograd Basic Usage
Autograd wraps the NumPy package providing an almost identical API to the NumPy functionality, but performs additional bookkeeping in the background to build the computation graph.
```
# Install autograd
!pip install autograd
```
Requirement already satisfied: autograd in /usr/local/lib/python3.6/dist-packages (1.3)
Requirement already satisfied: numpy>=1.12 in /usr/local/lib/python3.6/dist-packages (from autograd) (1.18.4)
Requirement already satisfied: future>=0.15.2 in /usr/local/lib/python3.6/dist-packages (from autograd) (0.16.0)
```
# Import autograd
import autograd.numpy as np # Import thinly-wrapped NumPy
from autograd import grad # The only function of Autograd, you need to call
```
```
# Define a function like normal, using Python and Numpy
def tanh(x):
y = np.exp(-x)
return (1.0 - y) / (1.0 + y)
# Create a *function* that computes the gradient of tanh
grad_tanh = grad(tanh) # autograd.grad takes a function as input
# Evaluate the gradient at x = 1.0
print(f'Autograd gradient: {grad_tanh(1.0)}')
# Compare to numeric gradient computed using finite differences
h = 0.0001
print(f'Finite differences: {(tanh(1.0001) - tanh(0.9999)) / (2*h)}')
```
Autograd gradient: 0.39322386648296376
Finite differences: 0.39322386636453377
## Autograd vs Manual Gradients via Staged Computation
In this example, we will see how the computation of a function can be written as a composition of simpler functions. This provides a scalable strategy for computing gradients using the chain rule.
Say we want to write a function to compute the gradient of the *sigmoid function*:
$$
\sigma(x) = \frac{1}{e^{-x} + 1}
$$
We can write $\sigma(x)$ as a composition of several elementary functions, as $\sigma(x) = s(c(b(a(x))))$, where:
$$
a(x) = -x
$$
$$
b(a) = e^a
$$
$$
c(b) = 1 + b
$$
$$
s(c) = \frac{1}{c}
$$
Here, we have "staged" the computation such that it contains several intermediate variables, each of which are basic expressions for which we can easily compute the local gradients.
The input to this function is $x$, and the final output is represented by $s$. We wish to compute the gradient of $s$ with respect to $x$, $\frac{\partial s}{\partial x}$. In order to make use of our intermediate computations, we can use the chain rule as follows:
$$
\frac{\partial s}{\partial x} = \frac{\partial s}{\partial c} \frac{\partial c}{\partial b} \frac{\partial b}{\partial a} \frac{\partial a}{\partial x}
$$
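Each factor in this product is the local derivative of one of the staged expressions above:
$$
\frac{\partial s}{\partial c} = -\frac{1}{c^2}, \quad
\frac{\partial c}{\partial b} = 1, \quad
\frac{\partial b}{\partial a} = e^a, \quad
\frac{\partial a}{\partial x} = -1
$$
These four factors are exactly what the backward pass in the code below multiplies together.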
```
def grad_sigmoid_manual(x):
"""Implements the gradient of the logistic sigmoid function
$\sigma(x) = 1 / (1 + e^{-x})$ using staged computation
"""
# Forward pass, keeping track of intermediate values for use in the
# backward pass
a = -x # -x in denominator
b = np.exp(a) # e^{-x} in denominator
c = 1 + b # 1 + e^{-x} in denominator
s = 1.0 / c # Final result, 1.0 / (1 + e^{-x})
# Backward pass
dsdc = (-1.0 / (c**2))
dsdb = dsdc * 1
dsda = dsdb * np.exp(a)
dsdx = dsda * (-1)
return dsdx
def sigmoid(x):
y = 1.0 / (1.0 + np.exp(-x))
return y
# Instead of writing grad_sigmoid_manual manually, we can use
# Autograd's grad function:
grad_sigmoid_automatic = grad(sigmoid)
# Compare the results of manual and automatic gradient functions:
print(f'Autograd gradient: {grad_sigmoid_automatic(2.0)}')
print(f'Manual gradient: {grad_sigmoid_manual(2.0)}')
```
Autograd gradient: 0.1049935854035065
Manual gradient: 0.1049935854035065
# Example
## Linear Regression with Autograd
The next section of the notebook shows an example of using Autograd in the context of **1-D linear regression** by gradient descent.
We try to fit a model to a function $y = wx + b$
### Review
We are given a set of data points $\{ (x_1, t_1), (x_2, t_2), \dots, (x_N, t_N) \}$, where each point $(x_i, t_i)$ consists of an *input value* $x_i$ and a *target value* $t_i$.
The **model** we use is:
$$
y_i = wx_i + b
$$
We want each predicted value $y_i$ to be close to the ground truth value $t_i$. In linear regression, we use squared error to quantify the disagreement between $y_i$ and $t_i$. The **loss function** for a single example is:
$$
\mathcal{L}(y_i,t_i) = \frac{1}{2} (y_i - t_i)^2
$$
The **cost function** is the loss averaged over all the training examples:
$$
\mathcal{C}(w,b) = \frac{1}{N} \sum_{i=1}^N \mathcal{L}(y_i, t_i) = \frac{1}{N} \sum_{i=1}^N \frac{1}{2} \left(wx_i + b - t_i \right)^2
$$
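For reference, the partial derivatives that gradient descent needs (and that Autograd will compute for us automatically below) are:
$$
\frac{\partial \mathcal{C}}{\partial w} = \frac{1}{N} \sum_{i=1}^N \left(wx_i + b - t_i\right) x_i, \qquad
\frac{\partial \mathcal{C}}{\partial b} = \frac{1}{N} \sum_{i=1}^N \left(wx_i + b - t_i\right)
$$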
```
import autograd.numpy as np # Import wrapped NumPy from Autograd
from autograd import grad # To compute gradients
import matplotlib.pyplot as plt # Most common plotting / data visualization tool in Python
```
## Generate Synthetic Data
We generate a synthetic dataset $\{ (x_i, t_i) \}$ by first taking the $x_i$ to be linearly spaced in the range $[0, 1]$ and generating the corresponding value of $t_i$ using the following equation (where $w = 2$ and $b=0.5$):
$$
t_i = 2 x_i + 0.5 + \epsilon
$$
Here, $\epsilon \sim \mathcal{N}(0, 0.01)$ (that is, $\epsilon$ is drawn from a Gaussian distribution with mean 0 and variance 0.01). This introduces some random fluctuation in the data, to mimic real data that has an underlying regularity, but for which individual observations are corrupted by random noise.
```
# In our synthetic data, we have w = 2 and b = 0.5
N = 100 # Number of training data points
x = np.random.uniform(size=(N,))
eps = np.random.normal(size=(len(x),), scale=0.1)
t = 2.0 * x + 0.5 + eps
plt.plot(x, t, 'r.')
```
```
# Initialize random parameters
w = np.random.normal(0, 1)
b = np.random.normal(0, 1)
params = { 'w': w, 'b': b } # One option: aggregate parameters in a dictionary
def cost(params):
y = params['w'] * x + params['b']
return (1 / N) * np.sum(0.5 * np.square(y - t))
# Find the gradient of the cost function using Autograd
grad_cost = grad(cost)
num_epochs = 2000 # Number of epochs of training
alpha = 0.025 # Learning rate
for i in range(num_epochs):
# Evaluate the gradient of the current parameters stored in params
cost_params = grad_cost(params)
# Gradient Descent step
# Update parameters w and b
params['w'] = params['w'] - alpha * cost_params['w']
params['b'] = params['b'] - alpha * cost_params['b']
print(params)
```
{'w': 1.926380013972517, 'b': 0.5288309549899681}
```
# Plot the training data again, together with the line defined by y = wx + b
# where w and b are our final learned parameters
plt.plot(x, t, 'r.')
plt.plot([0, 1], [params['b'], params['w'] + params['b']], 'b-')
```
| 28bf1a699f3d75de3e93d71aee0a7ae9de4fc2b2 | 39,151 | ipynb | Jupyter Notebook | Tutorials/Autodiff.ipynb | SoumyadeepB/DeepLearning | 5dee1bbe0416ec2ca06fa4ebdaf2df50283e42fe | [
"MIT"
]
| 3 | 2020-07-30T11:14:33.000Z | 2021-05-12T09:33:59.000Z | Tutorials/Autodiff.ipynb | SoumyadeepB/DeepLearning | 5dee1bbe0416ec2ca06fa4ebdaf2df50283e42fe | [
"MIT"
]
| 1 | 2020-08-17T12:02:17.000Z | 2020-08-17T12:02:17.000Z | Tutorials/Autodiff.ipynb | SoumyadeepB/DeepLearning | 5dee1bbe0416ec2ca06fa4ebdaf2df50283e42fe | [
"MIT"
]
| 1 | 2021-05-12T09:34:07.000Z | 2021-05-12T09:34:07.000Z | 39,151 | 39,151 | 0.822916 | true | 3,429 | Qwen/Qwen-72B | 1. YES
2. YES | 0.853913 | 0.917303 | 0.783296 | __label__eng_Latn | 0.967964 | 0.658192 |
# Linear Algebra
## Dot Products
A dot product is defined as
$ a \cdot b = \sum_{i=1}^{n} a_{i}b_{i} = a_{1}b_{1} + a_{2}b_{2} + a_{3}b_{3} + \dots + a_{n}b_{n}$
The geometric definition of a dot product is 
$ a \cdot b = \|a\|\,\|b\|\cos{\theta} $, where $\theta$ is the angle between the two vectors.
### What does a dot product conceptually mean?
A dot product is a representation of the similarity between two components, because it is calculated based upon shared elements.
The actual value of a dot product reflects the direction of change:
* **Zero**: we don't have any growth in the original direction
* **Positive** number: we have some growth in the original direction
* **Negative** number: we have negative (reverse) growth in the original direction
```python
A = [0,2]
B = [0,1]
def dot_product(x,y):
return sum(a*b for a,b in zip(x,y))
dot_product(A,B)
# What will the dot product of A and B be?
```
2
```python
A = [1,2]
B = [2,4]
# What will the dot product of A and B be?
```
```python
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
data_corpus = ["John likes to watch movies. Mary likes movies too.",
"John also likes to watch football games. Mary does not like football much."]
X = vectorizer.fit_transform(data_corpus)
print(vectorizer.get_feature_names())
```
['also', 'does', 'football', 'games', 'john', 'like', 'likes', 'mary', 'movies', 'much', 'not', 'to', 'too', 'watch']
# Bag of Words Models
You can use **`sklearn.feature_extraction.text.CountVectorizer`** to easily convert your corpus into a bag of words matrix:
```python
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
data_corpus = ["John likes to watch movies. Mary likes movies too.",
"John also likes to watch football games. Mary does not like football much."]
X = vectorizer.fit_transform(data_corpus)
```
Note that the output `X` here is not your traditional Numpy matrix! Calling **`type(X)`** here will yield **`<class 'scipy.sparse.csr.csr_matrix'>`**, which is a **CSR ([compressed sparse row format matrix](https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.sparse.csr_matrix.html))**. To convert it into an actual matrix, call the `toarray()` method:
```python
X.toarray()
```
Your output will be
```
array([[0, 0, 0, 0, 1, 0, 2, 1, 2, 0, 0, 1, 1, 1],
[1, 1, 2, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1]], dtype=int64)
```
Notice that using **`X.shape`** $\rightarrow$ `(2,14)`, indicating a total vocabulary size $V$ of 14. To get what word each of the 14 columns corresponds to, use **`vectorizer.get_feature_names()`**:
```
['also', 'does', 'football', 'games', 'john', 'like', 'likes', 'mary', 'movies', 'much', 'not', 'to', 'too', 'watch']
```
Notice, however, that as the vocabulary size $V$ increases, the percent of the matrix taken up by zero values increases:
```python
corpus = [
"Some analysts think demand could drop this year because a large number of homeowners take on remodeling projectsafter buying a new property. With fewer homes selling, home values easing, and mortgage rates rising, they predict home renovations could fall to their lowest levels in three years.",
"Most home improvement stocks are expected to report fourth-quarter earnings next month.",
"The conversation boils down to how much leverage management can get out of its wide-ranging efforts to re-energize operations, branding, digital capabilities, and the menu–and, for investors, how much to pay for that.",
"RMD’s software acquisitions, efficiency, and mix overcame pricing and its gross margin improved by 90 bps Y/Y while its operating margin (including amortization) improved by 80 bps Y/Y. Since RMD expects the slower international flow generator growth to continue for the next few quarters, we have lowered our organic growth estimates to the mid-single digits."
]
X = vectorizer.fit_transform(corpus).toarray()
```
```python
corpus = [
"Some analysts think demand could drop this year because a large number of homeowners take on remodeling projectsafter buying a new property. With fewer homes selling, home values easing, and mortgage rates rising, they predict home renovations could fall to their lowest levels in three years.",
"Most home improvement stocks are expected to report fourth-quarter earnings next month.",
"The conversation boils down to how much leverage management can get out of its wide-ranging efforts to re-energize operations, branding, digital capabilities, and the menu–and, for investors, how much to pay for that.",
"RMD’s software acquisitions, efficiency, and mix overcame pricing and its gross margin improved by 90 bps Y/Y while its operating margin (including amortization) improved by 80 bps Y/Y. Since RMD expects the slower international flow generator growth to continue for the next few quarters, we have lowered our organic growth estimates to the mid-single digits. "
]
X = vectorizer.fit_transform(corpus).toarray()
import numpy as np
from sys import getsizeof
zeroes = np.where(X.flatten() == 0)[0].size
percent_sparse = zeroes / X.size
print(f"The bag of words feature space is {round(percent_sparse * 100,2)}% sparse. \n\
That's approximately {round(getsizeof(X) * percent_sparse,2)} bytes of wasted memory. This is why sklearn uses CSR (compressed sparse rows) instead of normal matrices!")
```
The bag of words feature space is 72.63% sparse.
That's approximately 2777.34 bytes of wasted memory. This is why sklearn uses CSR (compressed sparse rows) instead of normal matrices!
# Distance Measures
## Euclidean Distance
Euclidean distances can range from 0 (completely identical) to $\infty$ (extremely dissimilar). **Magnitude** plays an extremely important role:
```python
from math import sqrt
def euclidean_distance_1(x,y):
distance = sum((a-b)**2 for a, b in zip(x, y))
return sqrt(distance)
```
There's typically an easier way to write this function that takes advantage of Numpy's vectorization capabilities:
```python
import numpy as np
def euclidean_distance_2(x,y):
x = np.array(x)
y = np.array(y)
return np.linalg.norm(x-y)
```
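A quick illustration of the magnitude point (made-up vectors): two vectors pointing in exactly the same direction but with different lengths are still far apart by Euclidean distance, even though their cosine similarity (covered below) would be 1.
```python
A = [1, 2, 3]
B = [10, 20, 30]   # same direction as A, 10x the magnitude
print(euclidean_distance_1(A, B))   # ~33.67
print(euclidean_distance_2(A, B))   # same value from the vectorized version
```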
# Similarity Measures
Similarity measures will always range between -1 and 1. A similarity of -1 means the two objects are complete opposites, while a similarity of 1 indicates the objects are identical.
## Pearson Correlation Coefficient
* We use **ρ** when the correlation is being measured from the population, and **r** when it is being generated from a sample.
* An r value of 1 represents a **perfect linear** relationship, and a value of -1 represents a perfect inverse linear relationship.
The equation for Pearson's correlation coefficient is
$$
\rho_{X,Y} = \frac{\mathrm{cov}(X,Y)}{\sigma_X \sigma_Y}
$$
### Intuition Behind Pearson Correlation Coefficient
#### When $\rho_{X,Y} = 1$ or $\rho_{X,Y} = -1$
This requires **$\mathrm{cov}(X,Y) = \sigma_X\sigma_Y$** or **$\mathrm{cov}(X,Y) = -\sigma_X\sigma_Y$** (in the case of $\rho = -1$). This corresponds to all the data points lying perfectly on the same line.
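The coefficient itself is easy to compute directly with NumPy; a minimal sketch (`x` and `y` below are made-up example vectors, not data from this notebook):
```python
import numpy as np

def pearson_r(x, y):
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    cov_xy = np.mean((x - x.mean()) * (y - y.mean()))  # covariance of x and y
    return cov_xy / (x.std() * y.std())                # divide by sigma_x * sigma_y

x = [1.0, 2.0, 3.0, 4.0]
y = [2.5, 4.5, 6.5, 8.5]            # perfectly linear in x, so r should be 1
print(pearson_r(x, y))              # 1.0 (up to floating point)
print(np.corrcoef(x, y)[0, 1])      # cross-check against NumPy's built-in
```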
## Cosine Similarity
The cosine similarity of two vectors (each vector will usually represent one document) is a measure that calculates $ cos(\theta)$, where $\theta$ is the angle between the two vectors.
Therefore, if the vectors are **orthogonal** to each other (90 degrees), $cos(90) = 0$. If the vectors are in exactly the same direction, $\theta = 0$ and $cos(0) = 1$.
Cosine similarity **does not care about the magnitude of the vector, only the direction** in which it points. This can help normalize when comparing across documents that are different in terms of word count.
### Shift Invariance
* The Pearson correlation coefficient between X and Y does not change when you transform $X \rightarrow a + bX$ and $Y \rightarrow c + dY$, assuming $a$, $b$, $c$, and $d$ are constants and $b$ and $d$ are positive.
* Cosine similarity does, however, change when transformed in this way (a short numerical check follows below).
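A short numerical check of both bullets (made-up vectors; `np.corrcoef` returns Pearson's r, and scipy's `cosine` returns the cosine *distance*, i.e. 1 - similarity):
```python
import numpy as np
from scipy.spatial.distance import cosine

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 1.0, 4.0, 3.0])
x_shifted = 10.0 + 2.0 * x   # the transform X -> a + bX with a=10, b=2

print(np.corrcoef(x, y)[0, 1], np.corrcoef(x_shifted, y)[0, 1])   # identical r values
print(1 - cosine(x, y), 1 - cosine(x_shifted, y))                 # cosine similarity changes
```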
<h1><span style="background-color: #FFFF00">Exercise (20 minutes):</span></h1>
>In Python, find the **cosine similarity** and the **Pearson correlation coefficient** of the two following sentences, assuming a **one-hot encoded binary bag of words** model. You may use a library to create the BoW feature space, but do not use libraries other than `numpy` or `scipy` to compute Pearson and cosine similarity:
>`A = "John likes to watch movies. Mary likes movies too"`
>`B = "John also likes to watch football games, but he likes to watch movies on occasion as well"`
# Use the Example Below to Create Your Own Cosine Similarity Function
#### 1. Create a list of all the **vocabulary $V$**
Using **`sklearn`**'s **`CountVectorizer`**:
```python
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
data_corpus = ["John likes to watch movies. Mary likes movies too",
"John also likes to watch football games, but he likes to watch movies on occasion as well"]
X = vectorizer.fit_transform(data_corpus)
V = vectorizer.get_feature_names()
```
##### Native Implementation:
```python
def get_vocabulary(sentences):
    vocabulary = set() # create an empty set - question: Why not a list?
for sentence in sentences:
# this is a very crude form of "tokenization", would not actually use in production
for word in sentence.split(" "):
if word not in vocabulary:
vocabulary.add(word)
return vocabulary
```
#### 2. Create your Bag of Words model
```python
X = X.toarray()
print(X)
```
Your console output:
```python
[[0 0 0 1 2 1 2 1 1 1]
[1 1 1 1 1 0 0 1 0 1]]
```
#### 3. Define your cosine similarity functions
```python
from scipy.spatial.distance import cosine # we are importing this library to check that our own cosine similarity func works
from numpy import dot # to calculate dot product
from numpy.linalg import norm # to calculate the norm
def cosine_similarity(A, B):
numerator = dot(A, B)
denominator = norm(A) * norm(B)
return numerator / denominator
def cosine_distance(A,B):
    return 1 - cosine_similarity(A, B)
A = [0,2,3,4,1,2]
B = [1,3,4,0,0,2]
# check that your native implementation and 3rd party library function produce the same values
assert round(cosine_similarity(A,B),4) == round(1 - cosine(A,B),4) # scipy's cosine() is a distance, so compare to 1 - cosine
```
#### 4. Get the two documents from the BoW feature space and calculate cosine similarity
```python
cosine_similarity(X[0], X[1])
```
>0.5241424183609592
```python
from scipy.spatial.distance import cosine
from numpy import dot
import numpy as np
from numpy.linalg import norm
def cosine_similarity(A, B):
numerator = dot(A, B)
denominator = norm(A) * norm(B)
    return numerator / denominator # remember, scipy's cosine() returns a distance: similarity = 1 - distance
def cosine_distance(A,B):
    return 1 - cosine_similarity(A, B)
A = [0,2,3,4,1,2]
B = [1,3,4,0,0,2]
# check that your native implementation and 3rd party library function produce the same values
assert round(cosine_similarity(A,B),4) == round(1 - cosine(A,B),4)
# note: scipy's cosine() returns the cosine distance (1 - similarity) between A and B
cosine(np.array(A), B)
```
0.31115327980633567
```python
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
# take two very similar sentences, should have high similarity
# edit these sentences to become less similar, and the similarity score should decrease
data_corpus = ["John likes to watch movies. Mary likes movies too.",
"John also likes to watch football games"]
X = vectorizer.fit_transform(data_corpus)
X = X.toarray()
print(vectorizer.get_feature_names())
cosine_similarity(X[0], X[1])
```
['also', 'football', 'games', 'john', 'likes', 'mary', 'movies', 'to', 'too', 'watch']
0.5241424183609592
# Pointwise Mutual Information (We'll Cover Next Week)
Pointwise mutual information measures the ratio between the **joint probability of two events happening** with the probabilities of the two events happening, assuming they are independent. It can be defined with the following equation:
$$
\begin{equation}
MI_{i,j} = log(\frac{P(i,j)}{P(i)P(j)})
\end{equation}
$$
Remember that when two events are independent, $P(i,j) = P(i)P(j)$.
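As a quick worked example with made-up numbers, if $P(i,j) = 0.10$, $P(i) = 0.20$ and $P(j) = 0.25$:
$$
MI_{i,j} = log(\frac{0.10}{0.20 \times 0.25}) = log(2) \approx 0.69
$$
(using the natural log). A positive value means the pair co-occurs more often than independence would predict, independence gives exactly 0, and a negative value means the pair co-occurs less often than chance.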
```python
from nltk.collocations import BigramCollocationFinder, BigramAssocMeasures
from nltk.stem import WordNetLemmatizer
from nltk import word_tokenize
lemmatizer = WordNetLemmatizer()
from nltk.corpus import stopwords
stopwords = set(stopwords.words('english') + [".",",",":", "''", "'s", "'", "``"])
```
```python
documents = []
articles = [
"bbcsport/football/001.txt",
"bbcsport/football/002.txt",
"bbcsport/football/003.txt"
]
for article in articles:
article = open(article)
for line in article.readlines():
line = line.replace("\n", "")
if len(line) > 0:
line = [lemmatizer.lemmatize(token) for token in word_tokenize(line)]
for word in line:
if word in stopwords:
line.remove(word)
documents.append(line)
```
```python
collocation_finder = BigramCollocationFinder.from_documents(documents)
measures = BigramAssocMeasures()
collocation_finder.nbest(measures.raw_freq, 10)
```
[('Manchester', 'United'),
('Van', 'Nistelrooy'),
('``', 'I'),
('said', '``'),
('``', 'He'),
('.', "''"),
('23', 'minute'),
('Alex', 'Ferguson'),
('But', 'wa'),
('City', 'Sunday')]
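The `nbest` call above ranks bigrams by raw frequency; to rank by the PMI measure defined earlier, `BigramAssocMeasures` also exposes a `pmi` scorer. A sketch (the frequency filter is optional but common, because PMI tends to over-reward very rare pairs):
```python
# Rank collocations by pointwise mutual information instead of raw frequency
collocation_finder.apply_freq_filter(2)        # optional: drop bigrams seen fewer than 2 times
collocation_finder.nbest(measures.pmi, 10)
```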
```python
```
| 4fd9971d58f179fff2529305fb60b0f5e99f2915 | 19,460 | ipynb | Jupyter Notebook | week2/Linear Algebra, Distance and Similarity (Completed).ipynb | ychennay/dso-599-text-analytics-nlp | e02e136d24e5a704e2ad69c599e89f9173072968 | [
"MIT"
]
| 19 | 2019-03-06T02:34:41.000Z | 2021-12-28T23:06:57.000Z | week2/Linear Algebra, Distance and Similarity (Completed).ipynb | ychennay/dso-599-text-analytics-nlp | e02e136d24e5a704e2ad69c599e89f9173072968 | [
"MIT"
]
| null | null | null | week2/Linear Algebra, Distance and Similarity (Completed).ipynb | ychennay/dso-599-text-analytics-nlp | e02e136d24e5a704e2ad69c599e89f9173072968 | [
"MIT"
]
| 30 | 2019-03-06T02:25:01.000Z | 2021-04-09T14:02:09.000Z | 36.373832 | 378 | 0.579188 | true | 3,429 | Qwen/Qwen-72B | 1. YES
2. YES | 0.810479 | 0.803174 | 0.650955 | __label__eng_Latn | 0.990148 | 0.350718 |
# ASSIGNMENT 1
```python
import sympy as sp
import numpy as np
import pandas as pd
from astropy import units as u
from astropy.coordinates import solar_system_ephemeris
from astropy.time import Time
from astropy import constants as const
solar_system_ephemeris.set("jpl")
import matplotlib.pyplot as plt
from sympy.utilities.lambdify import lambdify
from scipy.integrate import odeint
from matplotlib.collections import LineCollection
from matplotlib.colors import ListedColormap, BoundaryNorm
```
```python
from poliastro.bodies import Earth, Jupiter, Sun
from poliastro.twobody import Orbit
from poliastro.plotting import OrbitPlotter
plt.style.use("seaborn")
earth = Orbit.from_body_ephem(Earth)
jupiter = Orbit.from_body_ephem(Jupiter)
sun = Orbit.from_body_ephem(Sun)
# frame = OrbitPlotter()
# frame.plot(earth, label="Earth")
# frame.plot(jupiter, label="Jupiter")
EPOCH = Time.now()
EPOCH = Time(EPOCH, scale='tdb')
earth = Orbit.from_body_ephem(Earth, EPOCH)
jupiter = Orbit.from_body_ephem(Jupiter, EPOCH)
sun = Orbit.from_body_ephem(Sun, EPOCH)
```
## 2D n-body problem
Set up the symbols used by the functions that build the n-body problem.
### Symbol space
```python
r_i_x, r_i_y, r_j_x, r_j_y = sp.symbols('r_i_x, r_i_y, r_j_x, r_j_y', real=True) # Positions
V_i_x, V_i_y, V_j_x, V_j_y = sp.symbols('V_i_x, V_i_y, V_j_x, V_j_y', real=True) # Velocities
G = sp.symbols('G', real=True)
M, m_i, m_j = sp.symbols('M, m_i, m_j', real=True)
r_i_vec = sp.Matrix([r_i_x, r_i_y])
r_j_vec = sp.Matrix([r_j_x, r_j_y])
V_i_vec = sp.Matrix([V_i_x, V_i_y])
V_j_vec = sp.Matrix([V_j_x, V_j_y])
r_ij_vec = r_j_vec - r_i_vec
r_i_norm, r_j_norm, r_ij_norm = sp.symbols(['|r_i|', '|r_j|', '|r_ij|'])
r_i_sym, r_j_sym, r_ij_sym = sp.MatrixSymbol('r_i', 2, 1), sp.MatrixSymbol('r_j', 2, 1), sp.MatrixSymbol('r_ij', 2, 1)
```
### Equations of Motion: Barycentric form
```python
"""
The following symbolic equations are those defining the n-body
problem with respect to the barycenter of the system. The following
are the respective outputs of the expressions using sympy.pprint().
It should be noted that the following samples are only between two
bodies.
"""
eom_bc1_vec = - G * M / (r_i_norm ** 3) * r_i_sym
eom_bc2_vec = G * m_j * (1/ (r_ij_norm **3) - 1/(r_i_norm**3)) * r_ij_sym
"""
----------------------
Vector representation.
----------------------
>>> from sympy import pprint
>>> pprint(eom_bc1_vec + eom_bc2_vec)
-G⋅M ⎛ 1 1 ⎞
──────⋅rᵢ +G⋅m_j⋅⎜- ────── + ───────⎟⋅r_ij
3 ⎜ 3 3⎟
|r_i| ⎝ |r_i| |r_ij| ⎠
"""
eom_bc1 = - G * M / (r_i_vec.norm() ** 3) * r_i_vec
eom_bc2 = G * m_j * (1/ (r_ij_vec.norm() ** 3) - 1/(r_i_vec.norm() **3) ) * r_ij_vec
"""
------------------------
Component representation.
------------------------
>>> from sympy import pprint, latex
>>> print(latex(eom_bc1 + eom_bc2))
The image below shows the latex rendering of the above code output.
"""
pass
```
Using the previous general definition for the barycentric EOM between i and j, we can now create a function that builds the system of equations for any list of bodies. This is what `_barycentric_eom(bodies, vector=False)` below does.
```python
def _barycentric_eom(bodies, vector=False):
"""
Returns the equations of motion for all bodies within the n-body barycentric reference frame.
-G⋅M ⎛ 1 1 ⎞
──────⋅rᵢ +G⋅m_j⋅⎜- ────── + ───────⎟⋅r_ij
3 ⎜ 3 3⎟
|r_i| ⎝ |r_i| |r_ij| ⎠
"""
_system = []
if vector is False:
for body_i in bodies:
_body_system = []
# Subscript symbol of body_i
sub_i = body_i.name[0]
# Parameter symbols of body_i
var_i = {
m_i: sp.symbols("m_{}".format(sub_i)),
r_i_x: sp.symbols("r_{}_x".format(sub_i)),
r_i_y: sp.symbols("r_{}_y".format(sub_i)),
}
# Add two-body influence from EOM
_body_system.append(eom_bc1.subs(var_i))
for body_j in bodies:
# Ensure that body_j is not body_i, else skip.
if body_j != body_i:
# Subscript symbol of body_j
sub_j = body_j.name[0]
# Parameter symbols of body_j
var_j = {
m_j: sp.symbols("m_{}".format(sub_j)),
r_j_x: sp.symbols("r_{}_x".format(sub_j)),
r_j_y: sp.symbols("r_{}_y".format(sub_j)),
}
# Add body_j perturbations from EOM
_body_system.append(eom_bc2.subs({**var_j, **var_i}))
# Skip if body_j == body_i
else:
pass
lhs = sp.Matrix([*sp.symbols(['a_{}_x'.format(sub_i), 'a_{}_y'.format(sub_i)])])
rhs = sum(_body_system, sp.zeros(2,1))
_system.append(sp.Eq(
lhs[0], rhs[0]
))
_system.append(sp.Eq(
lhs[1], rhs[1]
))
return _system
"""
------------------------
Component representation.
------------------------
>>> bodies = [Earth, Sun]
>>> print(latex(_barycentric_eom(bodies)))
The image below shows the latex rendering of the above code output.
# TODO: Output format changed from below to sets of equations.
"""
pass
```
Sample output is shown below in vector format. sympy's integrated LaTeX formatter makes this easy to export, which is how the render below was created.
```python
def _eq_linear_momentum(bodies): # barycentre
"""
returns Eq in vector format
"""
_eq = []
req = []
for _body in bodies:
sub_i = _body.name[0]
_eq.append( (sp.symbols("m_{}".format(sub_i)) *
sp.Matrix(sp.symbols("V_{}_x V_{}_y".format(sub_i, sub_i)))
))
shape = _eq[0].shape
start = sp.zeros(*shape)
m = sum(_eq, sp.zeros(*shape))
return [sp.Eq(0, m[0]), sp.Eq(0, m[1])]
pprint(_eq_linear_momentum(bodies))
```
[0 = V_E_x⋅m_E + V_J_x⋅m_J + V_S_x⋅m_S, 0 = V_E_y⋅m_E + V_J_y⋅m_J + V_S_y⋅m_S]
```python
def _eq_angular_momentum(bodies): # 2D
_eq = []
for _body in bodies:
_n = _body.name
_eq.append(sp.symbols("m_{}".format(_n)) *
(
sp.symbols("r_{}_x".format(_n)) * sp.symbols("V_{}_y".format(_n)) - sp.symbols("r_{}_y".format(_n)) * sp.symbols("V_{}_x".format(_n))
)
)
return [sp.Eq(sp.symbols("H_z"), sum(_eq, 0))]
```
```python
from mpmath import power
def _eq_energy_conservation(bodies, vector=False):
"""
Returns the equation for the n-body system defining the total energy of the system.
"""
_eq = []
E_k = 0.5 * m_i * V_i_vec.norm() ** 2
E_p = - 0.5 * G * (m_i * m_j) / r_ij_vec.norm()
for i in bodies:
sub_i=i.name[0]
var_i={
m_i:i.mass.si.value,
r_i_x:sp.symbols('r_{}_x'.format(sub_i)),
r_i_y:sp.symbols('r_{}_y'.format(sub_i)),
V_i_x:sp.symbols('V_{}_x'.format(sub_i)),
V_i_y:sp.symbols('V_{}_y'.format(sub_i))
}
_eq.append(E_k.subs(var_i))
for j in bodies:
if i != j:
sub_j=j.name[0]
var_j={
m_j:j.mass.si.value,
r_j_x:sp.symbols('r_{}_x'.format(sub_j)),
r_j_y:sp.symbols('r_{}_y'.format(sub_j)),
V_j_x:sp.symbols('V_{}_x'.format(sub_j)),
V_j_y:sp.symbols('V_{}_y'.format(sub_j))
}
_eq.append(E_p.subs({**var_i, **var_j}))
else:
pass
return sp.Eq(sp.symbols("C"), sum(_eq, 0))
```
```python
def _state_matrix(bodies):
"""
Creates a symbolic vector of the state of the system given the bodies.
"""
states = []
for _body in bodies:
sub_i = _body.name[0]
for s in 'r_{}_x r_{}_y'.split(' '):
states.append(sp.symbols(s.format(sub_i)))
for _body in bodies:
sub_i = _body.name[0]
for s in 'V_{}_x V_{}_y'.split(' '):
states.append(sp.symbols(s.format(sub_i)))
return sp.Matrix(states)
def _derivative_matrix(bodies):
"""
Create a symbolic vector for the state derivative of the system given the bodies.
"""
states = []
eom = _barycentric_eom(bodies)
for _body in bodies:
sub_i = _body.name[0]
for s in 'V_{}_x V_{}_y'.split(' '):
states.append(sp.symbols(s.format(sub_i)))
for _body in bodies:
sub_i = _body.name[0]
for s in 'a_{}_x a_{}_y'.split(' '):
states.append(sp.symbols(s.format(sub_i)))
return sp.Matrix(states).subs([(eom[i].lhs, eom[i].rhs) for i in range(len(eom))])
```
```python
def var(bodies):
"""
Function built to return all constant parameters for a function
prior to a function being lambdified.
"""
_var = {
G: const.G.si.value,
M: sum([b.mass.si.value for b in bodies])
}
for body in bodies:
_sub_i = body.name[0]
_var_b = {
sp.symbols("m_{}".format(_sub_i)): body.mass.si.value,
}
_var = {**_var, **_var_b}
return _var
```
```python
def S0(bodies):
"""
Returns the initial state vector given the list of involved bodies.
It must be noted that the calculations below for Jupiter are only
valid when Jupiter is part of the input argument. Otherwise it is
    ignored in the calculation of the barycentre and velocity of bodies.
Some important information:
===============================
1) Imposed uncertainty |||
===============================
Name = Gravitational constant
Value = 6.67408e-11
Uncertainty = 3.1e-15
Unit = m3 / (kg s2)
Reference = CODATA 2014
===============================
2) Parameters used |||
===============================
Jupiter -----------------------
SMA: 5.2044 AU
Earth -------------------------
SMA: 1.0 AU
"""
# Step 1: Assume two-body problem positioning in arbitrary frame on x-axis.
_a_Earth = u.AU.to(u.m)
_a_Jupiter = u.AU.to(u.m)* 5.2044
## Initialised positions for bodies
_pos_x = {
Earth: _a_Earth,
Jupiter: _a_Jupiter,
Sun: 0.0}
# Step 2: Calculate circular velocity using the SMA
_V_circ_Earth = np.sqrt(Sun.k.si.value/_a_Earth)
_V_circ_Jupiter = np.sqrt(Sun.k.si.value/_a_Jupiter)
# Step 3: Calculate the position of the Barycentre in perifocal frame.
_num = sum([b.mass.si.value * _pos_x[b] for b in bodies])
_M = sum([b.mass.si.value for b in bodies])
_r_cm_x = _num/_M
# Step 4: Offset x_position of bodies by r_cm
for b in bodies:
_pos_x[b] += - _r_cm_x
# Step 5: Calculate velocity of Sun for sum of linear momentum = 0
_st = {
sp.symbols('r_E_x'):_pos_x[Earth],
sp.symbols('r_E_y'):0.0,
sp.symbols('r_S_x'):_pos_x[Sun],
sp.symbols('r_S_y'): 0.0,
sp.symbols('r_J_x'):_pos_x[Jupiter],
sp.symbols('r_J_y'):0.0,
sp.symbols('V_E_x'):0.0,
sp.symbols('V_E_y'):_V_circ_Earth,
sp.symbols('V_J_x'):0.0,
sp.symbols('V_J_y'):_V_circ_Jupiter,
sp.symbols('m_E'):Earth.mass.si.value,
sp.symbols('m_J'):Jupiter.mass.si.value,
sp.symbols('m_S'):Sun.mass.si.value
}
## Solving the set of linear equations for the entire system's linear momentum.
linear_momentum_eqs = [_eq.subs(_st) for _eq in _eq_linear_momentum(bodies)]
sol = sp.solve(linear_momentum_eqs, dict=True)
_st[sp.symbols("V_S_x")] = sol[0][sp.symbols('V_S_x')]
_st[sp.symbols("V_S_y")] = sol[0][sp.symbols('V_S_y')]
## Generate state vector depending on given bodies.
_state = [_st[_s] for _s in np.array(S).flatten()]
# Step 6: Return the state vector!
return np.array(_state).flatten().astype(float)
```
## Prepare for propagation (Earth + Sun)
```python
# Define bodies for n-body system.
bodies = [Earth, Sun]
# Instantiate state-vector from bodies list.
S = _state_matrix(bodies=bodies)
# Instantiate state-vector derivative from bodies list.
F = _derivative_matrix(bodies)
# Energy equation for evaluation.
E = _eq_energy_conservation(bodies).subs(var(bodies))
# Lambdify for increased computation of energy.
E = lambdify((S), (E.rhs))
# Substitute constants to increase speed through propagation.
F = F.subs(var(bodies))
```
## Prepare for propagation (Earth + Sun + Jupiter)
```python
# Define bodies for n-body system.
bodies = [Earth, Sun, Jupiter]
# Instantiate state-vector from bodies list.
S = _state_matrix(bodies=bodies)
# Instantiate state-vector derivative from bodies list.
F = _derivative_matrix(bodies)
# Energy equation for evaluation.
E = _eq_energy_conservation(bodies).subs(var(bodies))
# Lambdify for increased computation of energy.
E = lambdify((S), (E.rhs))
# Substitute constants to increase speed through propagation.
F = F.subs(var(bodies))
bodies = [Earth, Sun, Jupiter]
from sympy import pprint, latex
print(latex(sp.Matrix(S0(bodies))))
```
\left[\begin{matrix}148854763525.635\\0.0\\-743107174.364845\\0.0\\777824051096.715\\0.0\\0.0\\29784.6920652165\\0.0\\-12.5551530705683\\0.0\\13055.9290198896\end{matrix}\right]
```python
# Lambdify the symbolic expression for increased computational speed.
"""
The Lambdified equation for dS is used in the following way, and returns accordingly.
>>> dS(*S)
::returns::
[V_Earth_x V_Earth_y V_Sun_x V_Sun_y a_Earth_x a_Earth_y a_Sun_x a_Sun_y]
::type:: np.ndarray
"""
dS = lambdify((S), (F))
# Define function for the propagation procedure.
def dS_dt(_S, t):
"""
Integration of the governing vector differential equation for
    the barycentric form of the two-body problem.
[example]
S = [r_Earth_x r_Earth_y r_Sun_x r_Sun_y V_Earth_x V_Earth_y V_Sun_x V_Sun_y]
F = [V_Earth_x V_Earth_y V_Sun_x V_Sun_y a_Earth_x a_Earth_y a_Sun_x a_Sun_y]
"""
return np.array(dS(*_S)).flatten().astype(float)
```
```python
# Define the time-steps for the propagation.
t = np.arange(0.0, 365*24*60*60, 100)
# Calculate the results from the propagation.
S_l = odeint(dS_dt, S0(bodies), t)
"""
The following plot is for the energy throughout the time domain.
- Setting the time-step to 0.0000001 for np.arange(0, 0.00001)
shows that the jump in energy is not a result of integration error,
but implementation error.
"""
# Plot graph of Energy throughout time domain of entire system.
# figure(figsize=(6,4), dpi=300, facecolor='w', edgecolor='k')
# plt.plot(t, [E(*S_l[i,:]) for i in range(len(t))])
# plt.axes().set_xlabel('t [s]')
# plt.axes().set_ylabel('C [J]')
# ax = plt.gca()
# ax.ticklabel_format(useOffset=False)
# plt.show()
# figure(figsize=(7,4), dpi=300, facecolor='w', edgecolor='k')
# plt.axes().set_aspect('equal')
# plt.axes().set_xlabel('x [m]')
# plt.axes().set_ylabel('y [m]')
# for idx, body in enumerate(bodies):
# plt.plot(S_l[:,idx*2], S_l[:,idx*2+1], label=body.name)
# plt.show()
"""
Velocity plots for Earth and Sun respectively.
"""
# plt.plot(S_l[:,4], S_l[:,5])
# plt.plot(S_l[:,6], S_l[:,7])
# plt.show()
pass
```
```python
x = S_l[:,2]
y = S_l[:,3]
dx = S_l[:,8]
dy = S_l[:,9]
dydx = t
# Create a set of line segments so that we can color them individually
# This creates the points as a N x 1 x 2 array so that we can stack points
# together easily to get the segments. The segments array for line collection
# needs to be (numlines) x (points per line) x 2 (for x and y)
points = np.array([x, y]).T.reshape(-1, 1, 2)
segments = np.concatenate([points[:-1], points[1:]], axis=1)
points2 = np.array([dx, dy]).T.reshape(-1, 1, 2)
segments2 = np.concatenate([points2[:-1], points2[1:]], axis=1)
fig, axs = plt.subplots(1,2, figsize=(10,5), dpi=300, facecolor='w', edgecolor='k')
axs[0].set_aspect('equal', 'datalim')
axs[0].set_xlim(np.min(x)*1.05, np.max(x)*1.05)
axs[0].set_ylim(np.min(y)*1.05, np.max(y)*1.05)
# Create a continuous norm to map from data points to colors
norm = plt.Normalize(dydx.min(), dydx.max())
lc = LineCollection(segments, cmap='viridis', norm=norm)
lc.set_array(dydx)
lc.set_linewidth(1)
line = axs[0].add_collection(lc)
fig.colorbar(line, ax=axs[0], label='Time [s]')
axs[0].set_ylabel('y [m]')
axs[0].set_xlabel('x [m]')
axs[1].set_aspect('equal', 'datalim')
axs[1].set_xlim(np.min(dx)*1.05, np.max(dx)*1.05)
axs[1].set_ylim(np.min(dy)*1.05, np.max(dy)*1.05)
# Create a continuous norm to map from data points to colors
norm = plt.Normalize(dydx.min(), dydx.max())
lc = LineCollection(segments2, cmap='viridis', norm=norm)
lc.set_array(dydx)
lc.set_linewidth(1)
line = axs[1].add_collection(lc)
# fig.colorbar(line, ax=axs[0], label='Time [s]')
plt.subplots_adjust(left=0.01,wspace=0.30)
# left = 0.125 # the left side of the subplots of the figure
# right = 0.9 # the right side of the subplots of the figure
# bottom = 0.1 # the bottom of the subplots of the figure
# top = 0.9 # the top of the subplots of the figure
# wspace = 0.2 # the amount of width reserved for space between subplots,
# # expressed as a fraction of the average axis width
# hspace = 0.2 # the amount of height reserved for space between subplots,
# # expressed as a fraction of the average axis height
axs[1].set_ylabel('y [m/s]')
axs[1].set_xlabel('x [m/s]')
plt.show()
```
```python
```
```python
```
```python
```
| 41f32b2569a114f7f800f4b4855de5cdce6d3c76 | 163,495 | ipynb | Jupyter Notebook | planetary_sciences_assignment_1.ipynb | MattTurnock/PlanetarySciencesMatt | 81954d2182d9577bd7327a98c45963ae42968df4 | [
"MIT"
]
| null | null | null | planetary_sciences_assignment_1.ipynb | MattTurnock/PlanetarySciencesMatt | 81954d2182d9577bd7327a98c45963ae42968df4 | [
"MIT"
]
| null | null | null | planetary_sciences_assignment_1.ipynb | MattTurnock/PlanetarySciencesMatt | 81954d2182d9577bd7327a98c45963ae42968df4 | [
"MIT"
]
| null | null | null | 206.694058 | 137,056 | 0.885226 | true | 5,329 | Qwen/Qwen-72B | 1. YES
2. YES | 0.912436 | 0.835484 | 0.762325 | __label__eng_Latn | 0.725868 | 0.609469 |
# Fitting Laplace equation
11.6 $\mu$m-sized droplet making a contact angle of 140$^{\circ}$ on the AFM cantilever.
Volume of the spherical cap is given by
\begin{equation*}
V = \frac{\pi}{3} R^3 (2 + \cos \theta) (1-\cos \theta)^2
\end{equation*}
```python
import numpy as np
import matplotlib.pyplot as plt
import os
import glob
import pandas as pd
import seaborn as sns
from scipy import interpolate
import warnings
warnings.filterwarnings('ignore')
sns.set_style("ticks")
color_b = sns.xkcd_rgb["marine"]
theta = 140
cos_t = np.cos(theta/180*np.pi) # contact angle
R_ = (11.6/2)*1e-6 # um
r_ = R_*np.sin(theta/180*np.pi)
V_ = np.pi/3*R_**3*(2+cos_t)*(1-cos_t)**2
print ("Contact size on cantilever is {:e} m".format(2*r_))
print ("Droplet volume is {:e} m^3".format(V_))
H_init = (1-cos_t)*R_
print ("Initial height is {:e} m".format(H_init))
```
Contact size on cantilever is 7.456336e-06 m
Droplet volume is 7.863491e-16 m^3
Initial height is 1.024306e-05 m
Reading text file containing force spectroscopy measurements
```python
file_name = 'droplet.txt'
col_names = ["Vertical Tip Position", "Vertical Deflection", "Precision 5", "Height",
"Height (measured & smoothed)", "Height (measured)", "Series Time", "Segment Time"]
df = pd.read_csv(file_name, comment='#', sep=' ', names = col_names)
x1 = df["Vertical Tip Position"]
x1 = x1 + 0.1*1e-6
y1 = df["Vertical Deflection"]*1e9 # nN
t = df["Series Time"]
h = df["Height (measured)"]
i_start = 0
i_end = 500000
i_turn = np.where(h == np.amin(h))[0][0]
i_skip = 1
x1_approach = x1[i_start:i_turn:i_skip]*1e6
y1_approach = y1[i_start:i_turn:i_skip]
x1_retract = x1[i_turn:i_end:i_skip]*1e6
y1_retract = y1[i_turn:i_end:i_skip]
x_shift_approach = 0
x1_approach = x1_approach.values-x_shift_approach
y1_approach = y1_approach.values
x_shift_retract = 0
x1_retract = x1_retract-x_shift_retract
```
# Fitting force curve
## Solving non-dimensionalized Laplace equation
Following the approach of Butt et al, Curr. Opin. Colloid Interface Sci. 19, 343-354 (2014), we non-dimensionalize the variables by $a$ and $\gamma$, i.e.
\begin{align}
\hat{F} &= F \gamma^{-1} a^{-1} \\
\hat{r} &= r a^{-1} \\
\hat{V} &= V a^{-3} \\
\Delta \hat{P} &= \Delta P a \gamma^{-1},\mathrm{etc}
\end{align}
and hence, we get the non-dimensionalized Young-Laplace equation
\begin{align}
\frac{\hat{u}''}{(1+\hat{u}'^{2})^{3/2}} - \frac{1}{\hat{u}\sqrt{1 + \hat{u}'^{2}}}
&= -\Delta \hat{P} \; \mathrm{for} \; \hat{z} \in (0, \hat{h}) \\
\end{align}
subject to the boundary conditions
\begin{align}
\hat{u}(0) &= \hat{r} \\
\hat{u}(\hat{h}) &= 1 \\
\int_{0}^{h} \pi \hat{u}^{2}d\hat{z} &= \hat{V} \\
\end{align}
and
\begin{align}
F &= -\frac{2 \pi \hat{r}}{\sqrt{1 + \hat{u}'(0)^{2}}} + \pi r^{2}\Delta P.
\end{align}
Using the shooting method, we chose to frame the Young-Laplace equation as an initial value problem, with initial values
\begin{align}
\hat{u}(0) &= \hat{r} \\
\hat{u}'(0) &= -\cot \theta,
\end{align}
while requiring that
$$\mathbf{\hat{m}}(\hat{r}, \Delta \hat{P}, \theta) =
\begin{pmatrix} \hat{u}(\hat{h})- 1 \\
\int_{0}^{\hat{h}} \pi \hat{u}^{2}d\hat{z} - \hat{V}\\
\hat{F} + 2 \pi \gamma \hat{r} \sin \theta - \pi \hat{r}^{2}\Delta \hat{P}
\end{pmatrix} = 0$$.
This converts the boundary value problem into a root-finding problem of $\mathbf{\hat{m}}=0$, as implemented below
# Results
```python
from scipy.integrate import odeint
from scipy.integrate import solve_ivp
from scipy.optimize import newton, fsolve
from scipy.integrate import simps
n_point = 100 # number of points to plot droplet profile
def ode(u, z):
return np.array([u[1], (1+u[1]**2)**(1.5)*(1/(u[0]*np.sqrt(1+u[1]**2)) - u[2]), 0])
# Calculate the residue
def m_res(x, h_f, F_exp):
r = x[0]
dP = x[1]
theta = x[2]
z_arr = np.linspace(0, h_f, n_point)
y_start = np.array([r, -1/np.tan(theta), dP])
sol = odeint(ode, y_start, z_arr, atol=1e-7)
# sol = solve_ivp(ode, z_arr, y_start, atol=1e-11)
u_arr = sol[:,0]
dP_arr = sol[:,2]
u_arr_1 = u_arr[dP_arr == dP]
z_arr_1 = z_arr[dP_arr == dP]
V_int = simps(np.pi*u_arr_1**2, z_arr_1)
u_h = u_arr[-1]
F = 2*np.pi*r*np.sin(theta) - dP*np.pi*r**2
return np.array([1-u_h, V_norm-V_int, F-F_exp])
def calculate_sol(x, hf):
r = x[0]
dP = x[1]
theta = x[2]
z_arr = np.linspace(0, hf, n_point)
y_start = np.array([r, -1/np.tan(theta), dP])
sol = odeint(ode, y_start, z_arr, atol=1e-7)
# sol = solve_ivp(ode, z_arr, y_start, atol=1e-11)
u_arr = sol[:,0]
dP_arr = sol[:,2]
u_arr_1 = u_arr[dP_arr == dP]
z_arr_1 = z_arr[dP_arr == dP]
V_int = simps(np.pi*u_arr_1**2, z_arr_1)
F_sol = 2*np.pi*r*np.sin(theta) - dP*np.pi*r**2
return z_arr, u_arr, dP, V_int, F_sol
```
```python
fig, ax = plt.subplots(2, 1, figsize=(8,10), gridspec_kw={'height_ratios': [1.2, 1]})
i_point = 50000
H = (H_init+x1[i_point])*1e6 # Droplet height in um
F = y1[i_point] # Force in nN
print (H_init, H)
V = V_*1e15 # Droplet volume in pl
r = r_*1e6 # Contact size in um
gamma = 630. # Surface tension in mN/m
# Non-dimensionalized parameters
V_norm = V/r**3*1e3
H_norm = H/r
F_norm = -F/(r*gamma)
# guess values (non-dimensionalized)
dP = 1.0
r0 = 0.3
t0 = 140./180*np.pi
x_guess = np.array([r0, dP, t0])
# root-finding solver, look for solution m=0
x_opt = fsolve(m_res, x_guess, args=(H_norm, F_norm))
print (m_res(x_opt, H_norm, F_norm))
z_sol, u_sol, p_sol, V_sol, F_sol = calculate_sol(x_opt, H_norm)
ax[1].plot(u_sol*r, z_sol*r, color=color_b)
ax[1].plot(-u_sol*r, z_sol*r, color=color_b)
ax[1].plot([-40, 40], [0, 0], '-k') # plotting the base
ax[1].plot([-u_sol[-1]*r, u_sol[-1]*r], [H, H], '-k', alpha=0.5, lw=5)
ax[1].set_xlim([-8, 8])
ax[1].set_ylim([-1,13])
ax[1].set_xlabel("x (um)")
ax[1].set_ylabel("h (um)")
ax[1].set_aspect('equal')
#print out the fitted results
r_fit = x_opt[0]
P_fit = x_opt[1]
t_fit = x_opt[2]
ax[1].text(-1.2, 1.5, r"$\dot{\theta}$ = " + "{}".format(np.round(t_fit/np.pi*180,1))+r"$^{\circ}$")
print ("Contact size is {} um".format(np.round(2*r_fit*r,1)))
print ("Contact angle is {} deg".format(np.round(t_fit/np.pi*180,1)))
print ("Pressure inside droplet is {} kPa".format(np.round(P_fit*gamma/r,1)))
x_shift = 0.0
ax[0].plot(x1_approach-x_shift, y1_approach, color=color_b, lw=1, ls='--')
ax[0].plot(x1_retract-x_shift, y1_retract, color_b, lw=1, ls='-')
ax[0].set_xlim([-1.5, 2])
ax[0].scatter(x1[i_point]*1e6, y1[i_point])
ax[0].set_xlabel("z (um)")
ax[0].set_ylabel("F (nN)")
```
| bc05d7b0442285f272f527394b19f397e917af32 | 50,102 | ipynb | Jupyter Notebook | Fitting_Laplace.ipynb | ddaniel331/laplace_solver_2 | b5070181ecce936d2ef52d31bd3d7e789d6df043 | [
"MIT"
]
| null | null | null | Fitting_Laplace.ipynb | ddaniel331/laplace_solver_2 | b5070181ecce936d2ef52d31bd3d7e789d6df043 | [
"MIT"
]
| null | null | null | Fitting_Laplace.ipynb | ddaniel331/laplace_solver_2 | b5070181ecce936d2ef52d31bd3d7e789d6df043 | [
"MIT"
]
| null | null | null | 136.146739 | 39,116 | 0.863858 | true | 2,494 | Qwen/Qwen-72B | 1. YES
2. YES | 0.875787 | 0.800692 | 0.701236 | __label__eng_Latn | 0.319539 | 0.467537 |
```python
import numpy as np
import numba
import matplotlib.pyplot as plt
import sympy as sym
plt.style.use('presentation')
%matplotlib notebook
colors_cycle=plt.rcParams.get('axes.prop_cycle')
colors = [item['color'] for item in colors_cycle]
def d2np(d):
names = []
numbers = ()
dtypes = []
for item in d:
names += item
if type(d[item]) == float:
numbers += (d[item],)
dtypes += [(item,float)]
if type(d[item]) == int:
numbers += (d[item],)
dtypes += [(item,int)]
if type(d[item]) == np.ndarray:
numbers += (d[item],)
dtypes += [(item,np.float64,d[item].shape)]
return np.array([numbers],dtype=dtypes)
```
```python
psi_ds,psi_qs,psi_dr,psi_qr = sym.symbols('psi_ds,psi_qs,psi_dr,psi_qr')
i_ds,i_qs,i_dr,i_qr = sym.symbols('i_ds,i_qs,i_dr,i_qr')
di_ds,di_qs,di_dr,di_qr = sym.symbols('di_ds,di_qs,di_dr,di_qr')
L_s,L_r,L_m = sym.symbols('L_s,L_r,L_m')
R_s,R_r = sym.symbols('R_s,R_r')
omega_s,omega_r,sigma = sym.symbols('omega_s,omega_r,sigma')
v_ds,v_qs,v_dr,v_qr = sym.symbols('v_ds,v_qs,v_dr,v_qr')
eq_ds = (L_s+L_m)*i_ds + L_m*i_dr - psi_ds
eq_qs = (L_s+L_m)*i_qs + L_m*i_qr - psi_qs
eq_dr = (L_r+L_m)*i_dr + L_m*i_ds - psi_dr
eq_qr = (L_r+L_m)*i_qr + L_m*i_qs - psi_qr
dpsi_ds = v_ds - R_s*i_ds + omega_s*psi_qs
dpsi_qs = v_qs - R_s*i_qs - omega_s*psi_ds
dpsi_dr = v_dr - R_r*i_dr + sigma*omega_s*psi_qr
dpsi_qr = v_qr - R_r*i_qr - sigma*omega_s*psi_dr
'''
s = sym.solve([ eq_dr, eq_qr, dpsi_ds, dpsi_qs, dpsi_dr, dpsi_qr],
[ i_ds, i_qs, psi_ds, psi_qs, v_dr, v_qr])
s = sym.solve([dpsi_ds,dpsi_qs,dpsi_dr,dpsi_qr],
[ i_ds, i_qs, i_dr, i_qr,
psi_ds, psi_qs, i_dr, psi_qr])
s = sym.solve([ eq_ds, eq_qs, eq_dr, eq_qr,
dpsi_ds,dpsi_qs,dpsi_dr,dpsi_qr],
[ i_ds, i_qs, v_dr, v_qr,
psi_ds, psi_qs, psi_dr, psi_qr])
'''
s = sym.solve([ eq_dr, eq_qr,
dpsi_ds,dpsi_qs,dpsi_dr,dpsi_qr],
[ i_ds, i_qs, v_dr, v_qr,
psi_dr, psi_qr])
s = sym.solve([ eq_ds, eq_qs, eq_dr, eq_qr,
dpsi_ds,dpsi_qs,dpsi_dr,dpsi_qr],
[ i_ds, i_qs, v_dr, v_qr,
psi_ds, psi_qs, psi_dr, psi_qr])
for item in s:
print(item, '=', sym.simplify(s[item]))
```
psi_dr = (L_m**2*R_s*i_qr*omega_s - L_m**2*i_dr*omega_s**2*(L_m + L_s) + L_m*R_s*v_ds + L_m*omega_s*v_qs*(L_m + L_s) + i_dr*(L_m + L_r)*(R_s**2 + omega_s**2*(L_m + L_s)**2))/(R_s**2 + omega_s**2*(L_m + L_s)**2)
v_dr = (L_m**2*R_s*i_dr*omega_s**2*sigma + L_m**2*i_qr*omega_s**3*sigma*(L_m + L_s) - L_m*R_s*omega_s*sigma*v_qs + L_m*omega_s**2*sigma*v_ds*(L_m + L_s) + R_r*i_dr*(R_s**2 + omega_s**2*(L_m + L_s)**2) - i_qr*omega_s*sigma*(L_m + L_r)*(R_s**2 + omega_s**2*(L_m + L_s)**2))/(R_s**2 + omega_s**2*(L_m + L_s)**2)
i_qs = (-L_m*R_s*i_dr*omega_s - L_m*i_qr*omega_s**2*(L_m + L_s) + R_s*v_qs - omega_s*v_ds*(L_m + L_s))/(R_s**2 + omega_s**2*(L_m + L_s)**2)
i_ds = (L_m*R_s*i_qr*omega_s - L_m*i_dr*omega_s**2*(L_m + L_s) + R_s*v_ds + omega_s*v_qs*(L_m + L_s))/(R_s**2 + omega_s**2*(L_m + L_s)**2)
psi_qs = (L_m*R_s**2*i_qr - L_m*R_s*i_dr*omega_s*(L_m + L_s) + R_s*v_qs*(L_m + L_s) - omega_s*v_ds*(L_m + L_s)**2)/(R_s**2 + omega_s**2*(L_m + L_s)**2)
psi_qr = (-L_m**2*R_s*i_dr*omega_s - L_m**2*i_qr*omega_s**2*(L_m + L_s) + L_m*R_s*v_qs - L_m*omega_s*v_ds*(L_m + L_s) + i_qr*(L_m + L_r)*(R_s**2 + omega_s**2*(L_m + L_s)**2))/(R_s**2 + omega_s**2*(L_m + L_s)**2)
psi_ds = (L_m*R_s**2*i_dr + L_m*R_s*i_qr*omega_s*(L_m + L_s) + R_s*v_ds*(L_m + L_s) + omega_s*v_qs*(L_m + L_s)**2)/(R_s**2 + omega_s**2*(L_m + L_s)**2)
v_qr = (L_m**2*R_s*i_qr*omega_s**2*sigma - L_m**2*i_dr*omega_s**3*sigma*(L_m + L_s) + L_m*R_s*omega_s*sigma*v_ds + L_m*omega_s**2*sigma*v_qs*(L_m + L_s) + R_r*i_qr*(R_s**2 + omega_s**2*(L_m + L_s)**2) + i_dr*omega_s*sigma*(L_m + L_r)*(R_s**2 + omega_s**2*(L_m + L_s)**2))/(R_s**2 + omega_s**2*(L_m + L_s)**2)
```python
# [1] T. Demiray, F. Milano, and G. Andersson,
# “Dynamic phasor modeling of the doubly-fed induction generator under unbalanced conditions,” 2007 IEEE Lausanne POWERTECH, Proc., no. 2, pp. 1049–1054, 2007.
@numba.jit(nopython=True, cache=True)
def dfim_wecs(struct,i,m):
'''
'''
@numba.jit(nopython=True, cache=True)
def dfim_wecs_ctrl(struct,i,m):
'''
'''
x_idx = struct[i]['dfim_wecs_ctrl_idx']
L_m = struct[i]['L_m']
L_r = struct[i]['L_r']
L_s = struct[i]['L_s']
R_r = struct[i]['R_r']
R_s = struct[i]['R_s']
N_pp = struct[i]['N_pp']
Dt = struct[i]['Dt']
i_dr_ref = struct[i]['i_dr_ref']
i_qr_ref = struct[i]['i_qr_ref']
i_dr = i_dr_ref
i_qr = i_qr_ref
v_ds = struct[i]['v_ds']
v_qs = struct[i]['v_qs']
omega_r = struct[i]['omega_r']
omega_s = struct[i]['omega_s']
sigma = (omega_s - omega_r)/omega_s
den = R_s**2 + omega_s**2*(L_m + L_s)**2
i_qs = (-L_m*R_s*i_dr*omega_s - L_m*i_qr*omega_s**2*(L_m + L_s) + R_s*v_qs - omega_s*v_ds*(L_m + L_s))/den
i_ds = ( L_m*R_s*i_qr*omega_s - L_m*i_dr*omega_s**2*(L_m + L_s) + R_s*v_ds + omega_s*v_qs*(L_m + L_s))/den
v_qr = R_r*i_qr + omega_s*sigma*(L_m*i_dr + L_m*i_ds + L_r*i_dr)
v_dr = R_r*i_dr - omega_s*sigma*(L_m*i_qr + L_m*i_qs + L_r*i_qr)
psi_dr = L_m*i_dr + L_m*i_ds + L_r*i_dr
psi_qs = (R_s*i_ds - v_ds)/omega_s
psi_ds = (-R_s*i_qs + v_qs)/omega_s
psi_qr = L_m*i_qr + L_m*i_qs + L_r*i_qr
tau_e = 3.0/2.0*N_pp*(psi_qr*i_dr - psi_dr*i_qr)
struct[i]['v_dr'] = v_dr
struct[i]['v_qr'] = v_qr
struct[i]['i_ds'] = i_ds
struct[i]['i_qs'] = i_qs
struct[i]['i_dr'] = i_dr
struct[i]['i_qr'] = i_qr
struct[i]['psi_ds'] = psi_ds
struct[i]['psi_qs'] = psi_qs
struct[i]['psi_dr'] = psi_dr
struct[i]['psi_qr'] = psi_qr
struct[i]['tau_e'] = tau_e
struct[i]['sigma'] = sigma
struct[i]['p_s'] = 3.0/2.0*(v_ds*i_ds + v_qs*i_qs)
struct[i]['q_s'] = 3.0/2.0*(v_ds*i_qs - v_qs*i_ds)
struct[i]['p_r'] = 3.0/2.0*(v_dr*i_dr + v_qr*i_qr)
struct[i]['q_r'] = 3.0/2.0*(v_dr*i_qr - v_qr*i_dr)
return tau_e
@numba.jit(nopython=True, cache=True)
def vsc_grid_ctrl(struct,i,m):
x_idx = struct[i]['mech_idx']
omega_t = struct[i]['x'][x_idx,0] # rad/s
tau_t = struct[i]['tau_t']
tau_r = struct[i]['tau_r']
J_t = struct[i]['J_t']
N_tr = struct[i]['N_tr']
Dt = struct[i]['Dt']
domega_t = 1.0/J_t*(tau_t - N_tr*tau_r)
omega_r = N_tr*omega_t
struct[i]['f'][x_idx,0] = domega_t
struct[i]['omega_r'] = omega_r
struct[i]['omega_t'] = omega_t
return omega_t
@numba.jit(nopython=True, cache=True)
def vsc_rotor_ctrl(struct,i,m):
x_idx = struct[i]['mech_idx']
omega_t = struct[i]['x'][x_idx,0] # rad/s
tau_t = struct[i]['tau_t']
tau_r = struct[i]['tau_r']
J_t = struct[i]['J_t']
N_tr = struct[i]['N_tr']
Dt = struct[i]['Dt']
domega_t = 1.0/J_t*(tau_t - N_tr*tau_r)
omega_r = N_tr*omega_t
struct[i]['f'][x_idx,0] = domega_t
struct[i]['omega_r'] = omega_r
struct[i]['omega_t'] = omega_t
return omega_t
@numba.jit(nopython=True, cache=True)
def vsc_pitch_ctrl(struct,i,m):
x_idx = struct[i]['mech_idx']
omega_t = struct[i]['x'][x_idx,0] # rad/s
tau_t = struct[i]['tau_t']
tau_r = struct[i]['tau_r']
J_t = struct[i]['J_t']
N_tr = struct[i]['N_tr']
Dt = struct[i]['Dt']
domega_t = 1.0/J_t*(tau_t - N_tr*tau_r)
omega_r = N_tr*omega_t
struct[i]['f'][x_idx,0] = domega_t
struct[i]['omega_r'] = omega_r
struct[i]['omega_t'] = omega_t
return omega_t
```
```python
@numba.jit(nopython=True, cache=True)
def dfim_ctrl2(struct,i,m):
'''
Control level 2 for DFIM for stator active and reactive power.
'''
x_idx = struct[i]['ctrl2r_idx']
xi_p_s = float(struct[i]['x'][x_idx+0,0])
xi_q_s = float(struct[i]['x'][x_idx+1,0])
K_r_p = struct[i]['K_r_p']
K_r_i = struct[i]['K_r_i']
p_s_ref = struct[i]['p_s_ref']
q_s_ref = struct[i]['q_s_ref']
p_s = struct[i]['p_s']
q_s = struct[i]['q_s']
S_b = struct[i]['S_b']
omega_r = struct[i]['omega_r']
omega_s = struct[i]['omega_s']
R_r = struct[i]['R_r']
I_b = S_b/(np.sqrt(3)*690.0)
sigma = (omega_s - omega_r)/omega_s
error_p_s = (p_s_ref - p_s)/S_b
error_q_s = (q_s_ref - q_s)/S_b
dxi_p_s = error_p_s
dxi_q_s = error_q_s
struct[i]['f'][x_idx+0,0] = dxi_p_s
struct[i]['f'][x_idx+1,0] = dxi_q_s
struct[i]['i_dr_ref'] = -I_b*(K_r_p*error_p_s + K_r_i*xi_p_s)
struct[i]['i_qr_ref'] = -I_b*(K_r_p*error_q_s + K_r_i*xi_q_s)
return struct[0]['i_dr_ref'],struct[0]['i_qr_ref']
```
```python
Omega_b = 2.0*np.pi*50.0
S_b = 2.0e6
U_b = 690.0
Z_b = U_b**2/S_b
#nu_w =np.linspace(0.1,15,N)
H = 2.0
N_pp = 2
N_tr = 20
# H = 0.5*J*Omega_t_n**2/S_b
S_b = 2.0e6
Omega_t_n = Omega_b/N_pp/N_tr
J_t = 2*H*S_b/Omega_t_n**2
#Z_b = 1.0
#Omega_b = 1.0
d =dict(S_b = S_b,
Omega_b = Omega_b,
R_r = 0.01*Z_b,
R_s = 0.01*Z_b,
L_r = 0.08*Z_b/Omega_b,
L_s = 0.1*Z_b/Omega_b,
L_m = 3.0*Z_b/Omega_b,
N_pp = N_pp,
psi_ds = 0.0,
psi_qs = 0.0,
p_s = 0.0,
q_s = 0.0,
p_r = 0.0,
q_r = 0.0,
psi_dr = 0.0,
psi_qr = 0.0,
p_s_ref = 0.0,
q_s_ref = 0.0,
i_ds = 0.0,
i_qs = 0.0,
i_dr = 0.0,
i_qr = 0.0,
i_dr_ref = 0.0,
i_qr_ref = 0.0,
v_ds = 0.0,
v_qs = 0.0,
v_dr = 0.0,
v_qr = 0.0,
omega_r = Omega_b/N_pp,
omega_s = Omega_b/N_pp,
sigma = 0.0,
tau_e = 0.0,
x = np.zeros((3,1)),
f = np.zeros((3,1)),
Dt = 0.0,
J_t = J_t,
omega_t = 0.0,
tau_t = 0.0,
tau_r = 0.0,
N_tr = N_tr,
K_r_p = 0.02,
K_r_i = 20.0,
dfim_idx = 0,
mech_idx = 0,
ctrl2r_idx = 1
)
struct = d2np(d)
struct = np.hstack((struct[0],np.copy(struct[0])))
#wecs_mech_1(struct,0)
dfim_alg_ctrl1(struct,0,0)
dfim_ctrl2(struct,0,0)
dfim_alg_ctrl1(struct,1,0)
dfim_ctrl2(struct,1,0)
print(struct[0]['p_s']/1e6,struct[0]['q_s']/1e6,struct[0]['tau_e'])
print(struct[1]['p_s']/1e6,struct[0]['q_s']/1e6,struct[0]['tau_e'])
```
0.0 0.0 0.0
0.0 0.0 0.0
```python
struct = d2np(d)
struct = np.hstack((struct[0],np.copy(struct[0])))
sys_d = dict(x = np.zeros((6,1)),
f = np.zeros((6,1)))
sys_struct = d2np(sys_d)
@numba.jit(nopython=True, cache=True)
def f_eval(sys_struct,struct):
for i in range(2):
struct[i]['x'][:,0] = sys_struct[0]['x'][3*i:3*(i+1),0]
wecs_mech_1(struct,i,2)
dfim_ctrl2(struct,i,2)
dfim_alg_ctrl1(struct,i,2)
sys_struct[0]['f'][3*i:3*(i+1),:] = struct[i]['f']
return 0
```
```python
@numba.jit(nopython=True, cache=True)
def run(sys_struct,struct):
N_steps = 1000
N_states = 6
Dt = 10.0e-3
Omega_r = np.zeros((N_steps,1))
Omega_t = np.zeros((N_steps,1))
P_s_1 = np.zeros((N_steps,1))
Q_s_1 = np.zeros((N_steps,1))
P_r_1 = np.zeros((N_steps,1))
Q_r_1 = np.zeros((N_steps,1))
P_s_2 = np.zeros((N_steps,1))
Q_s_2 = np.zeros((N_steps,1))
P_r_2 = np.zeros((N_steps,1))
Q_r_2 = np.zeros((N_steps,1))
V_dr = np.zeros((N_steps,1))
V_qr = np.zeros((N_steps,1))
I_dr = np.zeros((N_steps,1))
I_qr = np.zeros((N_steps,1))
I_ds = np.zeros((N_steps,1))
I_qs = np.zeros((N_steps,1))
Tau_e = np.zeros((N_steps,1))
Tau_t = np.zeros((N_steps,1))
T = np.zeros((N_steps,1))
X = np.zeros((N_steps,N_states))
p_ref = 0.0
q_ref = 0.0
xi_p = 0.0
xi_q = 0.0
struct[0]['x'][:,0] = np.copy(sys_struct[0]['x'][0:3,0])
struct[1]['x'][:,0] = np.copy(sys_struct[0]['x'][3:6,0])
for it in range(N_steps):
t = Dt*float(it)
# perturbations and references
struct[0]['p_s_ref'] = 0.0
struct[1]['p_s_ref'] = 0.0
struct[0]['q_s_ref'] = 0.0
struct[1]['q_s_ref'] = 0.0
if t>1.0:
Omega_t_b = struct[0]['Omega_b']/struct[0]['N_tr']/struct[0]['N_pp']
struct[0]['tau_t'] = 1.5e6/Omega_t_b+np.random.normal(500e3,100e3)/Omega_t_b
if t>1.5:
struct[0]['p_s_ref'] = 1.0e6
if t>3.0:
struct[1]['p_s_ref'] = 1.50e6
if t>4.0:
struct[0]['q_s_ref'] = 0.5e6
if t>5.0:
struct[1]['q_s_ref'] = -0.7e6
## solver
f_eval(sys_struct,struct)
f1 = np.copy(sys_struct[0]['f'])
x1 = np.copy(sys_struct[0]['x'])
sys_struct[0]['x'][:]= np.copy(x1 + Dt*f1)
f_eval(sys_struct,struct)
f2 = np.copy(sys_struct[0]['f'])
sys_struct[0]['x'][:]= np.copy(x1 + 0.5*Dt*(f1 + f2))
for i in range(2):
struct[i]['x'][:,0] = sys_struct[0]['x'][3*i:3*(i+1),0]
struct[0]['tau_r'] = struct[0]['tau_e']
struct[1]['tau_r'] = struct[1]['tau_e']
T[it,0] = t
P_s_1[it,0] = float(struct[0]['p_s'])
Q_s_1[it,0] = float(struct[0]['q_s'])
P_r_1[it,0] = float(struct[0]['p_r'])
Q_r_1[it,0] = float(struct[0]['q_r'])
P_s_2[it,0] = float(struct[1]['p_s'])
Q_s_2[it,0] = float(struct[1]['q_s'])
P_r_2[it,0] = float(struct[1]['p_r'])
Q_r_2[it,0] = float(struct[1]['q_r'])
I_dr[it,0] = float(struct[0]['i_dr'])
I_qr[it,0] = float(struct[0]['i_qr'])
I_ds[it,0] = float(struct[0]['i_ds'])
I_qs[it,0] = float(struct[0]['i_qs'])
Omega_r[it,0] = float(struct[0]['omega_r'])
Omega_t[it,0] = float(struct[0]['omega_t'])
V_dr[it,0] = float(struct[0]['v_dr'])
V_qr[it,0] = float(struct[0]['v_qr'])
Tau_e[it,0] = float(struct[0]['tau_e'])
Tau_t[it,0] = float(struct[0]['tau_t'])
X[it,:] = sys_struct[0]['x'][:].T
return T,X,Tau_e,P_s_1,Q_s_1,P_r_1,Q_r_1,P_s_2,Q_s_2,P_r_2,Q_r_2,V_dr,V_qr,Omega_r,Omega_t,I_dr,I_qr,I_ds,I_qs,Tau_t
%timeit run(sys_struct, struct)
```
The slowest run took 5.81 times longer than the fastest. This could mean that an intermediate result is being cached.
1000 loops, best of 3: 1.33 ms per loop
```python
sys_struct['x'][:]= np.zeros((6,1))
struct['v_qs'] = 0.0
struct['v_ds'] = 690.0*np.sqrt(2.0/3.0)
struct['tau_t'] = 0.0
sys_struct[0]['x'][0,0] = Omega_b*0.9/struct[0]['N_tr']/struct[0]['N_pp']
sys_struct[0]['x'][3,0] = Omega_b*1.1/struct[1]['N_tr']/struct[0]['N_pp']
T,X,Tau_e,P_s_1,Q_s_1,P_r_1,Q_r_1,P_s_2,Q_s_2,P_r_2,Q_r_2,V_dr,V_qr,Omega_r,Omega_t,I_dr,I_qr,I_ds,I_qs,Tau_t = run(sys_struct, struct)
```
```python
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(8, 5), sharex = True)
axes.plot(T,Tau_e)
fig.savefig('dfim_tau_e.svg', bbox_inches='tight')
```
<IPython.core.display.Javascript object>
```python
fig, axes = plt.subplots(nrows=2, ncols=1, figsize=(9, 8), sharex = True)
axes[0].plot(T,P_s_1/1e6, label='$\sf p_{s1}$')
axes[0].plot(T,Q_s_1/1e6, label='$\sf q_{s1}$')
#axes[0].plot(T,P_s_2/1e6, label='$\sf p_{s2}$')
#axes[0].plot(T,Q_s_2/1e6, label='$\sf q_{s2}$')
axes[1].plot(T,P_r_1/1e6, label='$\sf p_{r1}$')
axes[1].plot(T,Q_r_1/1e6, label='$\sf q_{r1}$')
#axes[1].plot(T,P_r_2/1e6, label='$\sf p_{r2}$')
#axes[1].plot(T,Q_r_2/1e6, label='$\sf q_{r2}$')
axes[0].legend(loc='best')
axes[1].legend(loc='best')
axes[0].set_ylabel('Stator powers (MVA)')
axes[1].set_ylabel('Rotor powers (MVA)')
axes[1].set_xlabel('Time (s)')
axes[0].set_ylim([-0.1,1.1])
#axes[0].set_xlim([0,3.0])
fig.savefig('dfim_pq_s_pq_r.svg', bbox_inches='tight')
```
<IPython.core.display.Javascript object>
```python
fig, axes = plt.subplots(nrows=2, ncols=1, figsize=(8, 8), sharex = True)
axes[0].plot(T,Omega_t)
axes[1].plot(T,Tau_t/20.0/1000, label='$\sf \\tau_t$')
axes[1].plot(T,Tau_e/1000, label='$\sf \\tau_e$')
#axes[1].plot(T,P_r_2/1e6, label='$\sf p_{r2}$')
#axes[1].plot(T,Q_r_2/1e6, label='$\sf q_{r2}$')
axes[0].legend(loc='best')
axes[1].legend(loc='best')
axes[0].set_ylabel('Rotor speed')
axes[1].set_ylabel('Torques (kNm)')
axes[1].set_xlabel('Time (s)')
#axes[0].set_ylim([0,2.5])
#axes[0].set_xlim([0,3.0])
fig.savefig('dfim_omega_taus.svg', bbox_inches='tight')
```
<IPython.core.display.Javascript object>
/home/jmmauricio/bin/anaconda3/lib/python3.5/site-packages/matplotlib/axes/_axes.py:531: UserWarning: No labelled objects found. Use label='...' kwarg on individual plots.
warnings.warn("No labelled objects found. "
```python
fig, axes = plt.subplots(nrows=2, ncols=1, figsize=(9, 8), sharex = True)
axes[0].plot(T,V_dr, label='$\sf v_{dr}$')
axes[0].plot(T,V_qr, label='$\sf v_{qr}$')
axes[1].plot(T,P_r_1/1e6, label='$\sf p_{r}$', color = colors[2])
axes[1].plot(T,Q_r_1/1e6, label='$\sf q_{r}$', color = colors[3])
axes[1].plot(T,(P_r_1**2+Q_r_1**2)**0.5/1e6, label='$\sf s_{r}$', color = colors[4])
axes[0].legend(loc='best')
axes[1].legend(loc='best')
axes[0].set_ylabel('Rotor voltages (V)')
axes[1].set_ylabel('Rotor powers (MVA)')
axes[1].set_xlabel('Time (s)')
#axes[0].set_ylim([0,2.5])
#axes[0].set_xlim([0,3.0])
fig.savefig('dfim_rotor_v_powers.svg', bbox_inches='tight')
```
<IPython.core.display.Javascript object>
```python
fig, axes = plt.subplots(nrows=2, ncols=1, figsize=(9, 8), sharex = True)
axes[0].plot(T,I_dr/1000, label='$\sf i_{dr}$')
axes[0].plot(T,I_qr/1000, label='$\sf i_{qr}$')
axes[1].plot(T,I_ds/1000, label='$\sf i_{ds}$')
axes[1].plot(T,I_qs/1000, label='$\sf i_{qs}$')
axes[0].legend()
axes[1].legend()
axes[0].set_ylabel('Stator currents (kA)')
axes[1].set_ylabel('Rotor currents (kA)')
axes[1].set_xlabel('Time (s)')
#axes[0].set_ylim([0,2.5])
#axes[0].set_xlim([0,3.0])
fig.savefig('dfim_i_s_i_r.svg', bbox_inches='tight')
```
<IPython.core.display.Javascript object>
```python
```
```python
```
```python
```
```python
```
| da4aae6df7fa237d12ac74a69aba34a055b249da | 413,280 | ipynb | Jupyter Notebook | models/dfim_wecs.ipynb | pydgrid/pydgrid | c56073c385f42883c79333533f7cfb8383a173aa | [
"MIT"
]
| 15 | 2019-01-29T08:22:39.000Z | 2022-01-13T20:41:32.000Z | models/dfim_wecs.ipynb | pydgrid/pydgrid | c56073c385f42883c79333533f7cfb8383a173aa | [
"MIT"
]
| 1 | 2017-11-28T21:34:52.000Z | 2017-11-28T21:34:52.000Z | models/dfim_wecs.ipynb | pydgrid/pydgrid | c56073c385f42883c79333533f7cfb8383a173aa | [
"MIT"
]
| 4 | 2018-02-15T02:12:47.000Z | 2020-02-16T17:52:15.000Z | 88.724775 | 61,081 | 0.730301 | true | 7,130 | Qwen/Qwen-72B | 1. YES
2. YES | 0.867036 | 0.7773 | 0.673947 | __label__kor_Hang | 0.089185 | 0.404135 |
<a href="https://colab.research.google.com/github/marianasmoura/tecnicas-de-otimizacao/blob/main/Otimizacao_irrestrita_Mono_Bissecao.ipynb" target="_parent"></a>
UNIVERSIDADE FEDERAL DO PIAUÍ
UNDERGRADUATE PROGRAM IN ELECTRICAL ENGINEERING
COURSE: OPTIMIZATION TECHNIQUES
INSTRUCTOR: ALDIR SILVA SOUSA
STUDENT: MARIANA DE SOUSA MOURA
---
Assignment 2: Unconstrained Optimization via the Bisection Method - Single Variable
**The Parametros class**
The purpose of this class is to pass, in a single variable, all the parameters needed to run the method.
For the bisection method, we need the function to be minimized, the initial interval of uncertainty, and the required tolerance.
```python
import numpy as np
import sympy as sym  # To create symbolic variables.
class Parametros:
def __init__(self,f,vars,eps,a,b):
self.f = f
self.a = a
self.b = b
        self.vars = vars  # symbolic variables
self.eps = eps
```
```python
def eval(sym_f,vars,x):
map = dict()
map[vars[0]] = x
return float(sym_f.subs(map))
import pandas as pd
import math
def bissecao(params):
n = math.ceil( -math.log(params.eps/(params.b-params.a),2) )
f = params.f
    diff = sym.diff(f)  # returns the symbolic derivative of f
cols = ['a','b','x','f(x)','df(x)']
a = params.a
b = params.b
df = pd.DataFrame([], columns=cols)
for k in range(n):
x = float((b + a)/2)
        fx = eval(f,params.vars,x)  # Not strictly necessary; kept only for debugging
dfx = eval(diff,params.vars,x)
#fx = float(fx)
#dfx = float(dfx)
row = pd.DataFrame([[a,b,x,fx,dfx]],columns=cols)
df = df.append(row, ignore_index=True)
        if (dfx == 0): break  # Minimum found. Stop.
if (dfx > 0 ):
            # Step 2: derivative > 0, so the minimum lies to the left; move the upper bound
b = x
else:
            # Step 3: derivative < 0, so the minimum lies to the right; move the lower bound
a = x
x = float((b + a)/2)
return x,df
```
**Exercises**
**1)** Solve
min $x^2 - \cos(x) + e^{-2x}$
subject to: $-1 \leq x \leq 1$
```python
import numpy as np
import sympy as sym
x = sym.Symbol('x');
vars = [x]
a = -1
b = 1
l = 1e-3
f1 = x*x - sym.cos(x) + sym.exp(-2*x)
params = Parametros(f1,vars,l,a,b)
x,df=bissecao(params)
```
```python
df
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>a</th>
<th>b</th>
<th>x</th>
<th>f(x)</th>
<th>df(x)</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>-1</td>
<td>1</td>
<td>0.000000</td>
<td>0.000000</td>
<td>-2.000000</td>
</tr>
<tr>
<th>1</th>
<td>0</td>
<td>1</td>
<td>0.500000</td>
<td>-0.259703</td>
<td>0.743667</td>
</tr>
<tr>
<th>2</th>
<td>0</td>
<td>0.5</td>
<td>0.250000</td>
<td>-0.299882</td>
<td>-0.465657</td>
</tr>
<tr>
<th>3</th>
<td>0.25</td>
<td>0.5</td>
<td>0.375000</td>
<td>-0.317516</td>
<td>0.171539</td>
</tr>
<tr>
<th>4</th>
<td>0.25</td>
<td>0.375</td>
<td>0.312500</td>
<td>-0.318650</td>
<td>-0.138084</td>
</tr>
<tr>
<th>5</th>
<td>0.3125</td>
<td>0.375</td>
<td>0.343750</td>
<td>-0.320502</td>
<td>0.018857</td>
</tr>
<tr>
<th>6</th>
<td>0.3125</td>
<td>0.34375</td>
<td>0.328125</td>
<td>-0.320189</td>
<td>-0.059068</td>
</tr>
<tr>
<th>7</th>
<td>0.328125</td>
<td>0.34375</td>
<td>0.335938</td>
<td>-0.320498</td>
<td>-0.019971</td>
</tr>
<tr>
<th>8</th>
<td>0.335938</td>
<td>0.34375</td>
<td>0.339844</td>
<td>-0.320538</td>
<td>-0.000523</td>
</tr>
<tr>
<th>9</th>
<td>0.339844</td>
<td>0.34375</td>
<td>0.341797</td>
<td>-0.320529</td>
<td>0.009175</td>
</tr>
<tr>
<th>10</th>
<td>0.339844</td>
<td>0.341797</td>
<td>0.340820</td>
<td>-0.320536</td>
<td>0.004328</td>
</tr>
</tbody>
</table>
</div>
```python
x
```
0.34033203125
**2)** The location of the centroid of a circular sector
is given by:
$\overline{x} = \frac{2r \sin(\theta)}{3\theta}$
Determine the angle $\theta$ for which $\overline{x} = r/2$.
```python
# x = 2*r*sin(teta)/(3*teta)
# x = r/2
# r/2 = 2*r*sin(teta)/(3*teta)
# 3*teta - 4*sin(teta) = 0
# z = 3*teta - 4*sin(teta)
import numpy as np
import sympy as sym
x = sym.Symbol('x');
vars = [x]
a = 70
b = 80
a = (a*sym.pi)/180
b = (b*sym.pi)/180
l = 1e-3
f2 = (3*x -4*sym.sin(x))**2
params = Parametros(f2,vars,l,a,b)
x,df=bissecao(params)
```
```python
df
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>a</th>
<th>b</th>
<th>x</th>
<th>f(x)</th>
<th>df(x)</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>7*pi/18</td>
<td>4*pi/9</td>
<td>1.308997</td>
<td>4.005309e-03</td>
<td>0.248685</td>
</tr>
<tr>
<th>1</th>
<td>7*pi/18</td>
<td>1.309</td>
<td>1.265364</td>
<td>3.525637e-04</td>
<td>-0.067490</td>
</tr>
<tr>
<th>2</th>
<td>1.26536</td>
<td>1.309</td>
<td>1.287180</td>
<td>4.554619e-04</td>
<td>0.080273</td>
</tr>
<tr>
<th>3</th>
<td>1.26536</td>
<td>1.28718</td>
<td>1.276272</td>
<td>1.112400e-06</td>
<td>0.003879</td>
</tr>
<tr>
<th>4</th>
<td>1.26536</td>
<td>1.27627</td>
<td>1.270818</td>
<td>7.952763e-05</td>
<td>-0.032425</td>
</tr>
<tr>
<th>5</th>
<td>1.27082</td>
<td>1.27627</td>
<td>1.273545</td>
<td>1.556920e-05</td>
<td>-0.014429</td>
</tr>
<tr>
<th>6</th>
<td>1.27354</td>
<td>1.27627</td>
<td>1.274908</td>
<td>2.099881e-06</td>
<td>-0.005314</td>
</tr>
<tr>
<th>7</th>
<td>1.27491</td>
<td>1.27627</td>
<td>1.275590</td>
<td>3.923802e-08</td>
<td>-0.000727</td>
</tr>
</tbody>
</table>
</div>
```python
x
```
1.2759311309013235
**3)** Fixed-installment financing methodology. Computation with compound interest and monthly capitalization.
$q_0 = \frac{1-(1+j)^{-n}}{j}p$
Where:
* n = number of months
* j = monthly interest rate
* p = installment value
* q0 = amount financed.
Source: BC (Central Bank of Brazil).
Maria wants to buy a car that costs R\$ 80,000. She has at most R\$ 20,000 available as a down payment, and the remainder would be financed. Maria's bank proposes that she pay the R\$ 20,000 down payment and split the remainder into 36 installments of R\$ 2,300. What is the monthly interest rate of the deal proposed by the bank?
```python
# q0 = (1-(1+j)**(-n))*p/j
# q0 - (1-(1+j)**(-n))*p/j = 0
# z = q0 - (1-(1+j)**(-n))*p/j
import numpy as np
import sympy as sym
j = sym.Symbol('j');
vars = [j]
a = 0
b = 1
q0 = 60000
p = 2300
n = 36
l = 1e-5
f3 = (q0 - (1-(1+j)**(-n))*p/j)**2
params = Parametros(f3,vars,l,a,b)
juros,df=bissecao(params)
```
```python
df
```
```python
juros
```
0.018566131591796875
**Conclusion**
Both methods can solve a minimization or maximization problem for a differentiable function. The Newton's method code achieves much higher precision than the bisection method, but for Newton's method to converge it needs a starting point close to the solution; otherwise the code will loop without converging to the correct answer. The bisection method does not have this problem, since the sign of the derivative at each midpoint guides the search, pointing towards the local minimum even when the initial search interval is not close to it. For applications where precision is the priority, the advantages of both methods can be combined: start the search with the bisection method and then refine the result by feeding the point it produces into Newton's method as the starting guess.
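As a rough illustration of that hybrid strategy, the sketch below reuses the `Parametros` class and the `bissecao` function defined above and then refines the bisection result with SciPy's `newton` routine applied to the derivative. The Newton implementation referred to in the conclusion lives in a separate notebook, so calling `scipy.optimize.newton` here is purely an illustrative assumption.
```python
# Sketch of the hybrid strategy: bisection for a robust start, Newton for extra precision.
# Reuses the Parametros class and bissecao() defined above; everything else is redefined here.
import sympy as sym
from scipy.optimize import newton

x_sym = sym.Symbol('x')
f1 = x_sym*x_sym - sym.cos(x_sym) + sym.exp(-2*x_sym)   # objective from exercise 1

params = Parametros(f1, [x_sym], 1e-3, -1, 1)   # coarse tolerance for the bisection stage
x0, _ = bissecao(params)                        # robust, low-precision starting point

# Minimizing f1 is equivalent to finding a root of its derivative; Newton refines x0.
dfdx  = sym.lambdify(x_sym, sym.diff(f1, x_sym))
d2fdx = sym.lambdify(x_sym, sym.diff(f1, x_sym, 2))
x_star = newton(dfdx, x0, fprime=d2fdx, tol=1e-12)
print(x_star)
```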
| 8417e9755956a95795b85fce9a89f98d6e09f582 | 132,035 | ipynb | Jupyter Notebook | Otimizacao_irrestrita_Mono_Bissecao.ipynb | marianasmoura/tecnicas-de-otimizacao | 755153b20e2100237904af2835d2e850d2daa0a0 | [
"MIT"
]
| null | null | null | Otimizacao_irrestrita_Mono_Bissecao.ipynb | marianasmoura/tecnicas-de-otimizacao | 755153b20e2100237904af2835d2e850d2daa0a0 | [
"MIT"
]
| null | null | null | Otimizacao_irrestrita_Mono_Bissecao.ipynb | marianasmoura/tecnicas-de-otimizacao | 755153b20e2100237904af2835d2e850d2daa0a0 | [
"MIT"
]
| null | null | null | 194.741888 | 110,448 | 0.868906 | true | 3,273 | Qwen/Qwen-72B | 1. YES
2. YES | 0.779993 | 0.763484 | 0.595512 | __label__por_Latn | 0.453296 | 0.221904 |
```python
import numpy as np
import matplotlib.pyplot as plt
```
# More on Numeric Optimization
Recall that in homework 2, in one problem you were asked to maximize the following function:
\begin{align}
f(x) & = -7x^2 + 930x + 30
\end{align}
Using calculus, you found that $x^* = 930/14=66.42857$ maximizes $f$. You also used a brute force method to find $x^*$ that involved computing $f$ over a grid of $x$ values. That approach works but is inefficient.
An alternative would be to use an optimization algorithm that takes an initial guess and proceeds in a deliberate way. The `fmin` function from `scipy.optimize` executes such an algorithm. `fmin` takes as arguments a *function* and an initial guess. It iterates, computing updates to the initial guess until the function appears to be close to a *minimum*. It's standard for optimization routines to minimize functions. If you want to maximize a function, supply the negative of the desired function to `fmin`.
## Example using `fmin`
Let's use `fmin` to solve the problem from Homework 2. First, import `fmin`.
```python
from scipy.optimize import fmin
```
Next, define a function that returns $-(-7x^2 + 930x + 30)$. We'll talk in class later about how to do this.
```python
def quadratic(x):
return -(-7*x**2 + 930*x + 30)
```
Now call `fmin`. We know the exact solution, but let's guess something kind of far off, like $x_0 = 10$.
```python
x_star = fmin(quadratic,x0=10)
print()
print('fmin solution: ',x_star[0])
print('exact solution:',930/14)
```
Optimization terminated successfully.
Current function value: -30919.285714
Iterations: 26
Function evaluations: 52
fmin solution: 66.4285888671875
exact solution: 66.42857142857143
`fmin` iterated 26 times and evaluated the function $f$ only 52 times. The solution is accurate to 4 digits. The same accuracy in the assignment would be obtained by setting the step to 0.00001 in constructing `x`. With min and max values of 0 and 100, `x` would have 10,000,000 elements, implying that the function $f$ would have to be evaluated that many times. Greater accuracy would imply ever larger numbers of function evaluations.
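For comparison, here is a minimal sketch of that brute-force grid search (the bounds and step size follow the numbers quoted above; the original assignment code may have differed):
```python
import numpy as np

# Brute-force maximization on a grid from 0 to 100 with step 0.00001
x_grid = np.arange(0, 100, 0.00001)        # 10,000,000 grid points
f_grid = -7*x_grid**2 + 930*x_grid + 30    # 10,000,000 function evaluations
print('brute force solution:', x_grid[np.argmax(f_grid)])
```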
To get a sense of the iterative process that `fmin` uses, we can request that the function return the value of $x$ at each iteration using the argument `retall=True`.
```python
x_star, x_values = fmin(quadratic,x0=10,retall=True)
print()
print('fmin solution: ',x_star[0])
print('exact solution:',930/14)
```
Optimization terminated successfully.
Current function value: -30919.285714
Iterations: 26
Function evaluations: 52
fmin solution: 66.4285888671875
exact solution: 66.42857142857143
We can plot the iterated values to see how the routine converges.
```python
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.set_xlabel('Iteration of fmin')
ax.set_ylabel('x')
ax.plot(x_values,label="Computed by fmin")
ax.plot(np.zeros(len(x_values))+930/14,'--',label="True $x^*$")
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
```
Accuracy of the `fmin` result can be improved by reducing the `xtol` and `ftol` arguments. These arguments specify the maximum change in $x$ and in the function value between iterations that is acceptable for the algorithm to be considered converged. Both default to 0.0001.
Let's try `xtol=1e-7`.
```python
fmin(quadratic,x0=10,xtol=1e-7)
```
Optimization terminated successfully.
Current function value: -30919.285714
Iterations: 36
Function evaluations: 75
array([66.42857122])
The result is accurate to an additional decimal place. Greater accuracy will be hard to achieve with `fmin` because the function is large in absolute value at the maximum. We can improve accuracy by scaling the function by 1/30,000.
```python
def quadratic_2(x):
return -(-7*(x)**2 + 930*(x) + 30)/30000
x_star = fmin(quadratic_2,x0=930/14,xtol=1e-7)
print()
print('fmin solution: ',x_star[0])
print('exact solution:',930/14)
```
Optimization terminated successfully.
Current function value: -1.030643
Iterations: 26
Function evaluations: 54
fmin solution: 66.42857142857143
exact solution: 66.42857142857143
Now the computed solution is accurate to 14 decimal places.
## Another example
Consider the polynomial function:
\begin{align}
f(x) & = -\frac{(x-1)(x-2)(x-7)(x-9)}{200}
\end{align}
The function has two local maxima which can be seen by plotting.
```python
def polynomial(x):
    '''Function for computing the NEGATIVE of the polynomial'''
return (x-1)*(x-2)*(x-7)*(x-9)/200
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.set_xlabel('y')
ax.set_ylabel('x')
ax.set_title('$f(x) = -(x-1)(x-2)(x-7)(x-9)/200$')
x = np.linspace(0,10,1000)
plt.plot(x,-polynomial(x))
```
Now, let's use `fmin` to compute the maximum of $f(x)$. Suppose that our initial guess is $x_0=4$.
```python
x_star,x_values = fmin(polynomial,x0=4,retall=True)
print()
print('fmin solution: ',x_star[0])
```
Optimization terminated successfully.
Current function value: -0.051881
Iterations: 18
Function evaluations: 36
fmin solution: 1.4611328124999978
The routine apparently converges on a value that is only a local maximum because the initial guess was not properly chosen. To see how `fmin` proceeded, plot the steps of the iterations on the curve:
```python
# Redefine x_values because it is a list of one-dimensional Numpy arrays. Not convenient.
x_values = np.array(x_values).T[0]
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.set_xlabel('y')
ax.set_ylabel('x')
ax.set_title('$f(x) = -(x-1)(x-2)(x-7)(x-9)/200$')
plt.plot(x,-polynomial(x))
plt.plot(x_values,-polynomial(x_values),'o',alpha=0.5,label='iterated values')
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
```
`fmin` takes the initial guess and climbs the hill to the left. So apparently the ability of the routine to find the maximum depends on the quality of the initial guess. That's why plotting is important. We can see that beyond about 5.5, the function ascends to the global max. So let's guess $x_0 = 6$.
```python
x_star,x_values = fmin(polynomial,x0=6,retall=True)
print()
print('fmin solution: ',x_star[0])
```
Optimization terminated successfully.
Current function value: -0.214917
Iterations: 17
Function evaluations: 34
fmin solution: 8.147973632812505
```python
# Redefine x_values because it is a list of one-dimensional Numpy arrays. Not convenient.
x_values = np.array(x_values).T[0]
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.set_xlabel('y')
ax.set_ylabel('x')
ax.set_title('$f(x) = -(x-1)(x-2)(x-7)(x-9)/200$')
plt.plot(x,-polynomial(x))
plt.plot(x_values,-polynomial(x_values),'o',alpha=0.5,label='iterated values')
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
```
`fmin` converges to the global maximum.
## Solving systems of equations
A related problem to numeric optimization is finding the solutions to systems of equations. Consider the problem of maximizing utility:
\begin{align}
U(x_1,x_2) & = x_1^{\alpha} x_2^{\beta}
\end{align}
subject to the budget constraint:
\begin{align}
M & = p_1x_1 + p_2x_2
\end{align}
by choosing $x_1$ and $x_2$. Solve this by constructing the Lagrangian function:
\begin{align}
\mathcal{L}(x_1,x_2,\lambda) & = x_1^{\alpha} x_2^{\beta} + \lambda \left(M - p_1x_1 - p_2x_2\right)
\end{align}
where $\lambda$ is the Lagrange multiplier on the constraint. The first-order conditions represent a system of equations to be solved:
\begin{align}
\alpha x_1^{\alpha-1} x_2^{\beta} - \lambda p_1 & = 0\\
\beta x_1^{\alpha} x_2^{\beta-1} - \lambda p_2 & = 0\\
M - p_1x_1 - p_2 x_2 & = 0\\
\end{align}
Solved by hand, you find:
\begin{align}
x_1^* & = \left(\frac{\alpha}{\alpha+\beta}\right)\frac{M}{p_1}\\
x_2^* & = \left(\frac{\beta}{\alpha+\beta}\right)\frac{M}{p_2}\\
\lambda^* & = \left(\frac{\alpha}{p_1}\right)^{\alpha}\left(\frac{\beta}{p_2}\right)^{\beta}\left(\frac{M}{\alpha+\beta}\right)^{\alpha+\beta - 1}
\end{align}
But solving this problem by hand was tedious. If we knew values for $\alpha$, $\beta$, $p_1$, $p_2$, and $M$, then we could use an equation solver to solve the system. The one we'll use is called `fsolve` from `scipy.optimize`.
For the rest of the example, assume the following parameter values:
| $\alpha$ | $\beta$ | $p_1$ | $p_2$ | $M$ |
|----------|---------|-------|-------|-------|
| 0.25 | 0.75 | 1 | 2 | 100 |
First, import `fsolve`.
```python
from scipy.optimize import fsolve
```
Define variables to store parameter values and compute exact solution
```python
# Parameters
alpha = 0.25
beta = 0.75
p1 = 1
p2 = 2
m = 100
# Solution
x1_star = m/p1*alpha/(alpha+beta)
x2_star = m/p2*beta/(alpha+beta)
lam_star = alpha**alpha*beta**beta*p1**-alpha*p2**-beta  # the (M/(alpha+beta))**(alpha+beta-1) factor equals 1 here since alpha + beta = 1
exact_soln = np.array([x1_star,x2_star,lam_star])
```
Next, define a function that returns the system of equations arranged so that each equation equals zero. I.e., when the solution is input into the function, it returns an array of zeros.
```python
def system(x):
x1,x2,lam = x
retval = np.zeros(3)
retval[0] = alpha*x1**(alpha-1)*x2**beta - lam*p1
retval[1] = beta*x1**alpha*x2**(beta-1) - lam*p2
retval[2] = m - p1*x1 - p2*x2
return retval
```
Solve the system with `fsolve`. Set initial guess for $x_1$, $x_2$, and $\lambda$ to 1, 1, and 1.
```python
approx_soln = fsolve(system,x0=[1,1,1])
print('Approximated solution:',approx_soln)
print('Exact solution: ',exact_soln)
```
Approximated solution: [25. 37.5 0.33885075]
Exact solution: [25. 37.5 0.33885075]
Apparently the solution from `fsolve` is highly accurate. However, we can (and should) verify that the original system is in fact equal to zero at the values returned by `fsolve`. Use `np.isclose` to test.
```python
np.isclose(system(approx_soln),0)
```
array([ True, True, True])
Note that like `fmin`, the results of `fsolve` are sensitive to the initial guess. Suppose we guess 1000 for $x_1$ and $x_2$.
```python
approx_soln = fsolve(system,x0=[1000,1000,1])
approx_soln
```
<ipython-input-16-e6b246438e1a>:7: RuntimeWarning: invalid value encountered in double_scalars
retval[0] = alpha*x1**(alpha-1)*x2**beta - lam*p1
<ipython-input-16-e6b246438e1a>:8: RuntimeWarning: invalid value encountered in double_scalars
retval[1] = beta*x1**alpha*x2**(beta-1) - lam*p2
/Users/bcjenkin/opt/anaconda3/lib/python3.8/site-packages/scipy/optimize/minpack.py:175: RuntimeWarning: The iteration is not making good progress, as measured by the
improvement from the last ten iterations.
warnings.warn(msg, RuntimeWarning)
array([1000., 1000., 1.])
The routine does not converge on the solution. The lesson is that with numerical routines for optimization and equation solving, you have to use judgment in setting initial guesses and it helps to think carefully about the problem that you are solving beforehand.
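One way to exercise that judgment here is to start from an economically sensible allocation. The sketch below is an illustration rather than part of the original example: it guesses that half the budget is spent on each good and starts the multiplier at 1.
```python
# Economically motivated initial guess: split the budget evenly across the two goods.
x0_guess = [m/(2*p1), m/(2*p2), 1.0]
better_soln = fsolve(system, x0=x0_guess)
print('Solution from informed guess:', better_soln)
print('Equations close to zero?     ', np.isclose(system(better_soln), 0))
```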
| 6d13eacd3af5ed600a9ba6ca27b3454d77bcda12 | 96,158 | ipynb | Jupyter Notebook | Examples/Optimization_and_Equation_Solving.ipynb | letsgoexploring/econ126 | 05f50d2392dd1c7c38b14950cb8d7eff7ff775ee | [
"MIT"
]
| 2 | 2020-12-12T16:28:44.000Z | 2021-02-24T12:11:04.000Z | Examples/Optimization_and_Equation_Solving.ipynb | letsgoexploring/econ126 | 05f50d2392dd1c7c38b14950cb8d7eff7ff775ee | [
"MIT"
]
| 1 | 2019-04-29T08:50:41.000Z | 2019-04-29T08:51:05.000Z | Examples/.ipynb_checkpoints/Optimization_and_Equation_Solving-checkpoint.ipynb | letsgoexploring/econ126 | 05f50d2392dd1c7c38b14950cb8d7eff7ff775ee | [
"MIT"
]
| 19 | 2019-03-08T18:49:19.000Z | 2022-03-07T23:27:16.000Z | 131.723288 | 21,412 | 0.875798 | true | 3,306 | Qwen/Qwen-72B | 1. YES
2. YES | 0.912436 | 0.938124 | 0.855978 | __label__eng_Latn | 0.95531 | 0.827058 |
# Worksheet 6
```
%matplotlib inline
```
## Question 1
Explain the shooting method for BVPs.
### Answer Question 1
The boundary value problem for $y(x)$ with boundary data at $x = a, b$ is converted to an initial value problem for $y(x)$ by, at first, guessing the additional (initial) boundary data $z$ at $x = a$ that is required for a properly posed (i.e., completely specified) IVP. The IVP can then be solved using any appropriate solver to get some solution $y(x; z)$ that depends on the guessed initial data $z$. By comparing against the required boundary data at $y(b)$ we can check if we have the correct solution of the original BVP. To be precise, we can write
\begin{equation}
\phi (z) = \left. y(x; z) \right|_{x=b} − y(b),
\end{equation}
a nonlinear equation for $z$. At the root where $\phi(z) = 0$ we have the appropriate initial data $z$ such that the solution of the IVP is also a solution of the original BVP. The root of this nonlinear equation can be found using any standard method such as bisection or the secant method.
## Question 2
Give a complete algorithm for solving the BVP
\begin{equation}
y'' − 3 y' + 2y = 0, \quad y(0) = 0, \quad y(1) = 1
\end{equation}
using the finite difference method. Include the description of the grid, the grid spacing, the treatment of the boundary conditions, the finite difference operators and a description of the linear system to be solved. You do not need to say which method would be used to solve the linear system, but should mention any special properties of the system that might make it easier to solve.
### Answer Question 2
We first choose the grid. We will use $N + 2$ points to cover the domain $x \in [0, 1]$; this implies that we have a grid spacing $h = 1 / (N + 1)$ and we can explicitly write the coordinates of the grid points as
\begin{equation}
x_i = h i, \quad i = 0, 1, \dots , N + 1.
\end{equation}
We denote the value of the (approximate finite difference) solution at the grid points as $y_i (\approx y(x_i))$. We will impose the boundary conditions using
\begin{align}
y_0 & = y(0) & y_{N +1} & = y(1) \\
& = 0 & & = 1.
\end{align}
We will use central differencing which gives
\begin{align}
\left. y'(x) \right|_{x = x_i} & \approx \frac{y_{i+1} − y_{i−1}}{2 h}, \\
\left. y''(x) \right|_{x = x_i} & \approx \frac{y_{i+1} + y_{i−1} - 2 y_i}{h^2}.
\end{align}
We can then substitute all of these definitions into the original differential equation to find the finite difference equation that holds for the interior points $i = 1, \dots , N$:
\begin{equation}
y_{i+1} \left( 1 − \frac{3}{2} h \right) + y_i \left( −2 + 2 h^2 \right) + y_{i−1} \left( 1 + \frac{3}{2} h \right) = 0.
\end{equation}
This defines a linear system for the unknowns $y_i , i = 1, \dots , N$ of the form
\begin{equation}
T {\bf y} = {\bf f}.
\end{equation}
We can see that the matrix $T$ is tridiagonal and has the form
\begin{equation}
T =
\begin{pmatrix}
-2 + 2 h^2 & 1 - \tfrac{3}{2} h & 0 & 0 & 0 & \dots & 0 \\
1 + \tfrac{3}{2} h & -2 + 2 h^2 & 1 - \frac{3}{2} h & 0 & 0 & \dots & 0 \\
0 & 1 + \tfrac{3}{2} h & -2 + 2 h^2 & 1 - \frac{3}{2} h & 0 & \dots & 0 \\
0 & 0 & \ddots & \ddots & \ddots & \dots & 0 \\
0 & \dots & 0 & 1 + \tfrac{3}{2} h & -2 + 2 h^2 & 1 - \frac{3}{2} h & 0 \\
0 & \dots & \dots & 0 & 1 + \tfrac{3}{2} h & -2 + 2 h^2 & 1 - \frac{3}{2} h \\
0 & \dots & \dots & \dots & 0 & 1 + \tfrac{3}{2} h & -2 + 2 h^2
\end{pmatrix}
\end{equation}
The right hand side vector results from the boundary data and is
\begin{equation}
{\bf f} = \begin{pmatrix} - \left( 1 + \tfrac{3}{2} h \right) y_0 \\ 0 \\ \vdots \\ 0 \\ - \left( 1 - \tfrac{3}{2} h \right) y_{N+1} \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ - \left( 1 - \tfrac{3}{2} h \right) \end{pmatrix}.
\end{equation}
As the system is given by a tridiagonal matrix it is simple and cheap to solve using, e.g., the Thomas algorithm.
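For reference, a generic sketch of the Thomas algorithm is given below; it is a standard textbook forward-elimination/back-substitution implementation and is not the solver used in the coding answers that follow, which simply assemble the full matrix and call a dense solver.
```
def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a (a[0] ignored), diagonal b,
    super-diagonal c (c[-1] ignored) and right hand side d."""
    import numpy as np
    n = len(b)
    cp = np.zeros(n)
    dp = np.zeros(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    # Forward elimination
    for i in range(1, n):
        m = b[i] - a[i] * cp[i-1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i-1]) / m
    # Back substitution
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n-2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i+1]
    return x
```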
## Question 3
Explain how your algorithm would have to be modified to solve the BVP where the boundary condition at $x = 1$ becomes the Neumann condition
\begin{equation}
y'(1) = 1 + \frac{e}{e − 1}.
\end{equation}
### Answer Question 3
First a finite difference representation of the boundary condition is required. A first order representation would be to use backward differencing
\begin{equation}
\frac{y_{N + 1} − y_N}{h} = 1 + \frac{e}{e - 1}.
\end{equation}
This can be rearranged to give
\begin{equation}
y_{N + 1} = y_N + h \left( 1 + \frac{e}{e − 1} \right).
\end{equation}
So now, wherever the previous algorithm used the boundary value $y_{N+1}$ to represent $y(1)$, we must instead substitute the expression above, which involves the known boundary data and the unknown interior value $y_N$.
Explicitly, this modifies the matrix $T$ to
\begin{equation}
T =
\begin{pmatrix}
-2 + 2 h^2 & 1 - \tfrac{3}{2} h & 0 & 0 & 0 & \dots & 0 \\
1 + \tfrac{3}{2} h & -2 + 2 h^2 & 1 - \frac{3}{2} h & 0 & 0 & \dots & 0 \\
0 & 1 + \tfrac{3}{2} h & -2 + 2 h^2 & 1 - \frac{3}{2} h & 0 & \dots & 0 \\
0 & 0 & \ddots & \ddots & \ddots & \dots & 0 \\
0 & \dots & 0 & 1 + \tfrac{3}{2} h & -2 + 2 h^2 & 1 - \frac{3}{2} h & 0 \\
0 & \dots & \dots & 0 & 1 + \tfrac{3}{2} h & -2 + 2 h^2 & 1 - \frac{3}{2} h \\
0 & \dots & \dots & \dots & 0 & 1 + \tfrac{3}{2} h & -2 + 2 h^2 + \color{red}{\left(1 - \frac{3}{2} h \right)}
\end{pmatrix}
\end{equation}
and the right hand side vector ${\bf f}$ to
\begin{equation}
{\bf f} = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ \color{red}{- \left( 1 - \frac{3}{2} h \right) h \left( 1 + \frac{e}{e - 1} \right)} \end{pmatrix}.
\end{equation}
## Coding Question 1
Write a simple shooting method to solve the BVP
\begin{equation}
y'' − 3 y' + 2 y = 0, \quad y(0) = 0, \quad y(1) = 1.
\end{equation}
Use standard black-box methods to solve the ODE, rewritten in first order form, and either a simple bisection method or the standard black-box methods to find the root. Compare your estimate against the answer
\begin{equation}
y(x) = \frac{e^{2 x − 1} − e^{x − 1}}{e − 1}.
\end{equation}
### Answer Coding Question 1
```
def shooting_Dirichlet(f, ivp_interval, guess_interval, y_bc, method = 'brentq', tolerance = 1.e-8, MaxSteps = 100):
"""Solve the BVP z' = f(x, z) on x \in ivp_interval = [a, b] where z = [y, y'], subject to boundary conditions y(a) = y_bc[0], y(b) = y_bc[1]."""
import numpy as np
import scipy.integrate as si
import scipy.optimize as so
# Define the function computing the error in the boundary condition at b
def shooting_phi(guess):
"""Internal function for the root-finding"""
import numpy as np
import scipy.integrate as si
# The initial conditions from the guess and the boundary conditions
y0 = [y_bc[0], guess]
# Solve the IVP
y = si.odeint(f, y0, np.linspace(ivp_interval[0], ivp_interval[1]))
# Compute the error at the final point
return y[-1, 0] - y_bc[1]
# Choose between the root-finding methods
if (method == 'bisection'):
guess_min = guess_interval[0]
guess_max = guess_interval[1]
phi_min = shooting_phi(guess_min)
phi_max = shooting_phi(guess_max)
assert(phi_min * phi_max < 0.0)
for i in range(MaxSteps):
guess = (guess_min + guess_max) / 2.0
phi = shooting_phi(guess)
if (phi_min * phi < 0.0):
guess_max = guess
phi_max = phi
else:
guess_min = guess
phi_min = phi
if (abs(phi) < tolerance) or (guess_max - guess_min < tolerance):
break
elif (method == 'brentq'):
guess = so.brentq(shooting_phi, guess_interval[0], guess_interval[1])
else:
raise Exception("method parameter must be in ['brentq', 'bisection']")
# The initial conditions from the boundary, and the now "correct" value from the root-find
y0 = [y_bc[0], guess]
# Solve the IVP
x = np.linspace(ivp_interval[0], ivp_interval[1])
y = si.odeint(f, y0, x)
return [x, y]
# Define the specific ODE to be solved
def f_bvp(y, x):
"""First order form of the above ODE"""
import numpy as np
dydx = np.zeros_like(y)
dydx[0] = y[1]
dydx[1] = 3.0 * y[1] - 2.0 * y[0]
return dydx
# Define the exact solution for comparison
def y_exact(x):
"""Exact solution as given above."""
import numpy as np
return (np.exp(2.0*x - 1.0) - np.exp(x - 1.0)) / (np.exp(1.0) - 1.0)
# Now test it on the BVP to be solved
import numpy as np
x, y_brentq = shooting_Dirichlet(f_bvp, [0.0, 1.0], [-10.0, 10.0], [0.0, 1.0])
x, y_bisection = shooting_Dirichlet(f_bvp, [0.0, 1.0], [-10.0, 10.0], [0.0, 1.0], method = 'bisection')
import matplotlib.pyplot as plt
plt.figure(figsize = (12, 8))
plt.plot(x, y_brentq[:, 0], 'kx', x, y_bisection[:, 0], 'ro', x, y_exact(x), 'b-')
plt.xlabel('$x$', size = 16)
plt.ylabel('$y$', size = 16)
plt.legend(('Shooting, brentq method', 'Shooting, bisection', 'Exact'), loc = "upper left")
plt.figure(figsize = (12, 8))
plt.semilogy(x, np.absolute(y_brentq[:, 0] - y_exact(x)), 'kx', x, np.absolute(y_bisection[:, 0] - y_exact(x)), 'ro')
plt.xlabel('$x$', size = 16)
plt.ylabel('$|$Error$|$', size = 16)
plt.legend(('Shooting, brentq method', 'Shooting, bisection'), loc = "lower right")
plt.show()
```
## Coding Question 2
Implement your finite difference algorithm above to solve this BVP, using a standard black-box linear system solver. Show that your result converges to the correct answer.
### Answer Coding Question 2
```
def bvp_FD_Dirichlet(p, q, f, interval, y_bc, N = 100):
"""Solve linear BVP y'' + p(x) y' + q(x) y = f(x) on the given interval = [a, b] using y(a) = y_bc[0], y(b) = y_bc[1]."""
import numpy as np
import scipy.linalg as la
h = (interval[1] - interval[0]) / (N + 1.0)
# The grid, including boundaries, and set up final solution (fix at boundaries)
x = np.linspace(interval[0], interval[1], N+2)
y = np.zeros_like(x)
y[0] = y_bc[0]
y[-1] = y_bc[1]
# Set up diagonal entries of the matrix. Call sub-diagonal, diagonal, and super-diagonal vectors VE, VF, VG.
VE = 1.0 - h / 2.0 * p(x[2:-1])
VF = -2.0 + h**2 * q(x[1:-1])
VG = 1.0 + h / 2.0 * p(x[1:-2])
# Set up RHS vector F
F = h**2 * f(x[1:-1])
# Include boundary contributions
F[0] -= y_bc[0] * (1.0 - h / 2.0 * p(x[1]))
F[-1] -= y_bc[1] * (1.0 + h / 2.0 * p(x[-2]))
# Be lazy: set up full matrix
T = np.diag(VE, -1) + np.diag(VF) + np.diag(VG, +1)
y[1:-1] = la.solve(T, F)
return [x, y]
# Define the problem to be solved
def bvp_p(x):
"""Term proportional to y' in definition of BVP"""
import numpy as np
return -3.0 * np.ones_like(x)
def bvp_q(x):
"""Term proportional to y in definition of BVP"""
import numpy as np
return 2.0 * np.ones_like(x)
def bvp_f(x):
"""Term on RHS in definition of BVP"""
import numpy as np
return np.zeros_like(x)
# Define the exact solution for comparison
def y_exact(x):
"""Exact solution as given above."""
import numpy as np
return (np.exp(2.0*x - 1.0) - np.exp(x - 1.0)) / (np.exp(1.0) - 1.0)
# Now solve the problem
import numpy as np
x, y = bvp_FD_Dirichlet(bvp_p, bvp_q, bvp_f, [0.0, 1.0], [0.0, 1.0])
import matplotlib.pyplot as plt
plt.figure(figsize = (12, 8))
plt.plot(x, y, 'kx', x, y_exact(x), 'b-')
plt.xlabel('$x$', size = 16)
plt.ylabel('$y$', size = 16)
plt.legend(('Finite difference solution', 'Exact'), loc = "upper left")
# Now do a convergence test
import scipy.linalg as la
levels = np.array(range(4, 10))
Npoints = 2**levels
err_2norm = np.zeros(len(Npoints))
for i in range(len(Npoints)):
x, y = bvp_FD_Dirichlet(bvp_p, bvp_q, bvp_f, [0.0, 1.0], [0.0, 1.0], Npoints[i])
err_2norm[i] = la.norm(y - y_exact(x), 2) / np.sqrt(Npoints[i])
# Best fit to the errors
h = 1.0 / Npoints
p = np.polyfit(np.log(h), np.log(err_2norm), 1)
fig = plt.figure(figsize = (12, 8), dpi = 50)
plt.loglog(h, err_2norm, 'kx')
plt.loglog(h, np.exp(p[1]) * h**(p[0]), 'b-')
plt.xlabel('$h$', size = 16)
plt.ylabel('$\|$Error$\|_1$', size = 16)
plt.legend(('Finite difference errors', "Best fit line slope {0:.3}".format(p[0])), loc = "upper left")
plt.show()
```
## Coding Question 3
Modify your algorithm for the Neumann boundary condition above. Check that it converges to the same answer as for the Dirichlet case.
### Answer Coding Question 3
```
def bvp_FD_DirichletNeumann(p, q, f, interval, y_bc, N = 100):
"""Solve linear BVP y'' + p(x) y' + q(x) y = f(x) on the given interval = [a, b] using y(a) = y_bc[0], y'(b) = y_bc[1]."""
import numpy as np
import scipy.linalg as la
h = (interval[1] - interval[0]) / (N + 1.0)
# The grid, including boundaries, and set up final solution (fix at boundaries)
x = np.linspace(interval[0], interval[1], N+2)
y = np.zeros_like(x)
y[0] = y_bc[0]
# Neumann boundary condition at the right end, so value of solution unknown
# Set up diagonal entries of the matrix. Call sub-diagonal, diagonal, and super-diagonal vectors VE, VF, VG.
VE = 1.0 - h / 2.0 * p(x[2:-1])
VF = -2.0 + h**2 * q(x[1:-1])
VG = 1.0 + h / 2.0 * p(x[1:-2])
# Set up RHS vector F
F = h**2 * f(x[1:-1])
# Include boundary contributions
F[0] -= y_bc[0] * (1.0 - h / 2.0 * p(x[1]))
# Neumann boundary condition at the right end - modify matrix and RHS vector
VF[-1] += (1.0 + h / 2.0 * p(x[-2]))
F[-1] -= (1.0 + h / 2.0 * p(x[-2])) * h * y_bc[1]
# Be lazy: set up full matrix
T = np.diag(VE, -1) + np.diag(VF) + np.diag(VG, +1)
y[1:-1] = la.solve(T, F)
# Finally set the solution at the right boundary
y[-1] = y[-2] + h * y_bc[1]
return [x, y]
# Define the problem to be solved
def bvp_p(x):
"""Term proportional to y' in definition of BVP"""
import numpy as np
return -3.0 * np.ones_like(x)
def bvp_q(x):
"""Term proportional to y in definition of BVP"""
import numpy as np
return 2.0 * np.ones_like(x)
def bvp_f(x):
"""Term on RHS in definition of BVP"""
import numpy as np
return np.zeros_like(x)
# Define the exact solution for comparison
def y_exact(x):
"""Exact solution as given above."""
import numpy as np
return (np.exp(2.0*x - 1.0) - np.exp(x - 1.0)) / (np.exp(1.0) - 1.0)
# Now solve the problem
import numpy as np
x, y = bvp_FD_DirichletNeumann(bvp_p, bvp_q, bvp_f, [0.0, 1.0], [0.0, 1.0 + np.exp(1.0) / (np.exp(1.0) - 1.0)])
import matplotlib.pyplot as plt
plt.figure(figsize = (12, 8))
plt.plot(x, y, 'kx', x, y_exact(x), 'b-')
plt.xlabel('$x$', size = 16)
plt.ylabel('$y$', size = 16)
plt.legend(('Finite difference solution', 'Exact'), loc = "upper left")
# Now do a convergence test
import scipy.linalg as la
levels = np.array(range(4, 10))
Npoints = 2**levels
err_DN_2norm = np.zeros(len(Npoints))
for i in range(len(Npoints)):
x, y = bvp_FD_DirichletNeumann(bvp_p, bvp_q, bvp_f, [0.0, 1.0], [0.0, 1.0 + np.exp(1.0) / (np.exp(1.0) - 1.0)], Npoints[i])
err_DN_2norm[i] = la.norm(y - y_exact(x), 2) / np.sqrt(Npoints[i])
# Best fit to the errors
h = 1.0 / Npoints
p = np.polyfit(np.log(h), np.log(err_DN_2norm), 1)
fig = plt.figure(figsize = (12, 8), dpi = 50)
plt.loglog(h, err_DN_2norm, 'kx')
plt.loglog(h, np.exp(p[1]) * h**(p[0]), 'b-')
plt.xlabel('$h$', size = 16)
plt.ylabel('$\|$Error$\|_2$', size = 16)
plt.legend(('Finite difference errors (Neumann BC)', "Best fit line slope {0:.3}".format(p[0])), loc = "upper left")
plt.show()
```
```
from IPython.core.display import HTML
def css_styling():
styles = open("../../IPythonNotebookStyles/custom.css", "r").read()
return HTML(styles)
css_styling()
```
<style>
@font-face {
font-family: "Computer Modern";
src: url('http://mirrors.ctan.org/fonts/cm-unicode/fonts/otf/cmunss.otf');
}
div.cell{
width:800px;
margin-left:16% !important;
margin-right:auto;
}
h1 {
font-family: Verdana, Arial, Helvetica, sans-serif;
}
h2 {
font-family: Verdana, Arial, Helvetica, sans-serif;
}
h3 {
font-family: Verdana, Arial, Helvetica, sans-serif;
}
div.text_cell_render{
font-family: Gill, Verdana, Arial, Helvetica, sans-serif;
line-height: 110%;
font-size: 120%;
width:700px;
margin-left:auto;
margin-right:auto;
}
.CodeMirror{
font-family: "Source Code Pro", source-code-pro,Consolas, monospace;
}
/* .prompt{
display: None;
}*/
.text_cell_render h5 {
font-weight: 300;
font-size: 12pt;
color: #4057A1;
font-style: italic;
margin-bottom: .5em;
margin-top: 0.5em;
display: block;
}
.warning{
color: rgb( 240, 20, 20 )
}
</style>
> (The cell above executes the style for this notebook. It closely follows the style used in the [12 Steps to Navier Stokes](http://lorenabarba.com/blog/cfd-python-12-steps-to-navier-stokes/) course.)
| 42a5fe94855f9a91d27857d2e932446b2b790cd4 | 198,825 | ipynb | Jupyter Notebook | Worksheets/Worksheet6_Notebook.ipynb | alistairwalsh/NumericalMethods | fa10f9dfc4512ea3a8b54287be82f9511858bd22 | [
"MIT"
]
| 1 | 2021-12-01T09:15:04.000Z | 2021-12-01T09:15:04.000Z | Worksheets/Worksheet6_Notebook.ipynb | indranilsinharoy/NumericalMethods | 989e0205565131057c9807ed9d55b6c1a5a38d42 | [
"MIT"
]
| null | null | null | Worksheets/Worksheet6_Notebook.ipynb | indranilsinharoy/NumericalMethods | 989e0205565131057c9807ed9d55b6c1a5a38d42 | [
"MIT"
]
| 1 | 2021-04-13T02:58:54.000Z | 2021-04-13T02:58:54.000Z | 263.693634 | 32,393 | 0.885849 | true | 5,878 | Qwen/Qwen-72B | 1. YES
2. YES | 0.685949 | 0.921922 | 0.632392 | __label__eng_Latn | 0.888773 | 0.307589 |
# Ekman spiral derivation
Upon Reynolds-averaging, the primitive equations for the atmosphere can be written as
$$ \frac{\partial u}{\partial t} + u_i \frac{\partial u}{\partial x_i} - fv = -\frac{1}{\rho}\frac{\partial P}{\partial x} - \frac{\partial \overline{u'u'}}{\partial x} - \frac{\partial \overline{u'v'}}{\partial y} - \frac{\partial \overline{u'w'}}{\partial z} $$
$$ \frac{\partial v}{\partial t} + u_i \frac{\partial v}{\partial x_i} + fu = -\frac{1}{\rho}\frac{\partial P}{\partial y} - \frac{\partial \overline{v'u'}}{\partial x} - \frac{\partial \overline{v'v'}}{\partial y} - \frac{\partial \overline{v'w'}}{\partial z} $$
If we neglect large-scale advection and subsidence and assume stationarity, the first two terms vanish. The pressure gradient term can be eliminated by substituting the expressions for geostrophic balance,
$$ fv_g = \frac{1}{\rho}\frac{\partial P}{\partial x} $$
$$ -fu_g = \frac{1}{\rho}\frac{\partial P}{\partial y} $$
Finally, we assume that the momentum fluxes in the last terms can be approximated with first order closure,
$$ \overline{u'w'} = - K \frac{\partial u}{\partial z}$$
$$ \overline{v'w'} = - K \frac{\partial v}{\partial z}$$
We could do likewise for the horizontal fluxes, but considering that horizontal gradients are generally small compared to vertical gradients, we also discard these terms. This leaves us with
$$ -fv = -fv_g + \frac{\partial}{\partial z} \left(K \frac{\partial u}{\partial z}\right)$$
$$ fu = fu_g + \frac{\partial}{\partial z} \left(K \frac{\partial v}{\partial z}\right)$$
We can rewrite this as
$$ -f(v-v_g) = \frac{\partial}{\partial z} \left(K \frac{\partial u}{\partial z}\right)$$
$$ f(u-u_g) = \frac{\partial}{\partial z} \left(K \frac{\partial v}{\partial z}\right)$$
These coupled equations define the Ekman spiral. If K were constant, the expressions would simplify to
$$ K\frac{\partial^2u}{\partial z^2} = -f(v-v_g) $$
$$ K\frac{\partial^2v}{\partial z^2} = f(u-u_g) $$
An analytical solution for this system can be found by defining a complex velocity $ W = u + iv $, such that the two equations can be combined into one. To this end, the meridional equation is multiplied by $i = \sqrt{-1}$ and subsequently added to the zonal equation:
$$ iK\frac{\partial^2v}{\partial z^2} = if(u-u_g) $$
$$ K\frac{\partial^2u}{\partial z^2}+iK\frac{\partial^2v}{\partial z^2} = -f(v-v_g) + if(u-u_g) $$
$$ K\frac{\partial^2W}{\partial z^2} = if(u+iv) - if(u_g+iv_g)$$
$$ \frac{\partial^2W}{\partial z^2} - \frac{if}{K}W + \frac{if}{K}W_g = 0$$
This is an inhomogeneous, second order differential equation, which can be solved in two steps. First, we find a particular integral by noting that one solution to the problem is a $W$ that is independent of height. This solution is given by
$$ \frac{if}{K}W = \frac{if}{K}W_g $$
from which it follows that $ W = W_g $. Then the homogeneous part of the differential equation is solved by substituting a trial solution of the form $W(z) = A \exp(\lambda z)$, which gives:
$$ \frac{\partial^2W}{\partial z^2} = \frac{if}{K}W $$
$$ \lambda^2 A \exp(\lambda z) = \frac{if}{K} A \exp(\lambda z) $$
$$ \lambda^2 = \frac{if}{K} $$
$$ \lambda = \pm \sqrt{\frac{if}{K}} $$
$$ \lambda = \pm (1+i)\sqrt{\frac{f}{2K}} = \pm (1+i) \gamma $$
where $ \sqrt{i} = \frac{1+i}{\sqrt{2}} $ is used in the last step and $\gamma^{-1}$ is known as the 'Ekman depth'. Substituting these roots in the solution gives
$$ W(z) = A \exp((\gamma + i\gamma) z) + B \exp((-\gamma - i\gamma) z) $$
Now, the first term on the right-hand side, grows exponentially with z. This is not a physical solution, since we expect $W$ to converge to $W_g$ at the top of the Ekman layer, and therefore this term must be discarded. This leaves us with the physical solution
$$ W(z) = W_g + B \exp((-\gamma - i\gamma) z) $$
where the particular solution $W = W_g$ has been substituted in the solution of the homogeneous equation. To find B, we insert the boundary condition $ W(0) = 0$, which yields
$$ W_g + B \exp(0) = 0 $$
yielding $B = -W_g$ and the complete solution
$$ W(z) = W_g \left[1 - \exp(-\gamma z)\exp(-i\gamma z) \right]$$
Using Euler's formula $\exp(-ix) = \cos(x) -i \sin(x)$ we can split the complex exponential terms in a real and imaginary part
$$ u(z) + iv(z) = ( u_g + iv_g ) \left[1 - \exp(-\gamma z)\cos(\gamma z) + i\exp(-\gamma z)\sin(\gamma z) \right]$$
$$ u(z) + iv(z) = u_g - u_g \exp(-\gamma z)\cos(\gamma z) + iu_g \exp(-\gamma z)\sin(\gamma z) + iv_g - iv_g\exp(-\gamma z)\cos(\gamma z) - v_g \exp(-\gamma z)\sin(\gamma z)$$
so that we can finally retrieve the equations for u and v separately
$$ u(z) = u_g - u_g \exp(-\gamma z)\cos(\gamma z) - v_g \exp(-\gamma z)\sin(\gamma z) $$
$$ v(z) = v_g - v_g \exp(-\gamma z)\cos(\gamma z) + u_g \exp(-\gamma z)\sin(\gamma z) $$
Let's implement this system in Python.
```python
# Import packages
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Set parameters
ug = 10.; vg = 0.
f = 1e-4; K = 10.
gamma = np.sqrt(f/(2*K))
def uwind(ug,vg,gamma,z):
return ug - ug*np.exp(-gamma*z)*np.cos(gamma*z) - vg*np.exp(-gamma*z)*np.sin(gamma*z)
def vwind(ug,vg,gamma,z):
return vg - vg*np.exp(-gamma*z)*np.cos(gamma*z) + ug*np.exp(-gamma*z)*np.sin(gamma*z)
z = np.arange(2001)
u = uwind(ug,vg,gamma,z)
v = vwind(ug,vg,gamma,z)
fig, ax = plt.subplots(1,2,figsize=(12,5))  # use fig, not f, to avoid shadowing the Coriolis parameter
ax[0].plot(u,z,v,z)
ax[1].plot(u,v)
plt.show()
```
Alternatively, we may solve the system numerically. We can start with an initial guess for the profile and iteratively solve for u and v. Let's keep K constant first, and then try a z-dependent K, using the fixed-K profile as the initial guess.
The expressions for u and v, with the time dependence reintroduced, are:
$$ \frac{\partial u}{\partial t} = f(v-v_g)+\frac{\partial}{\partial z}\left[K \frac{\partial u}{\partial z} \right] $$
$$ \frac{\partial v}{\partial t} = -f(u-u_g)+\frac{\partial}{\partial z}\left[K \frac{\partial v}{\partial z} \right] $$
```python
# Import packages
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Defining functions
def uwind(f,v,vg,K,u,dt):
coriolis = f*(v-vg)
d_u = np.gradient(u)
d_uw = K*np.gradient(d_u)
tendency = coriolis+d_uw
new_u = u + tendency*dt
return new_u
def vwind(f,u,ug,K,v,dt):
coriolis = -f*(u-ug)
d_v = np.gradient(v)
d_vw = np.gradient(K*d_v)
tendency = coriolis+d_vw
new_v = v + tendency*dt
return new_v
# Set parameters
ug = 10.; vg = 0.
f = 1e-4; K = 10.
dt = 0.1
z = np.arange(2000)+1
u = np.log(z/0.002)
v = 0.3*np.log(z/0.002)
plt.plot(u,z,v,z)
for i in range(20000):
u = uwind(f,v,vg,K,u,dt)
v = vwind(f,u,ug,K,v,dt)
plt.plot(u,z,v,z)
plt.show()
```
## Realistic K-profile
The K-profile is not constant in reality. Rather, it has the form
$$ K(z) = \frac{\kappa u_* z}{1+\alpha \frac{z}{L}}\left(1-\frac{z}{h}\right)^2 $$
This looks like:
```python
# Import packages
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
def kprofile(z):
'''
Bert's notes
'''
kappa = 0.4 # Von Karman constant
ust = 0.3 # just a guess
alpha = 5. # stability function
L = 500. # Obukhov length
h = float(len(z))
return kappa*ust*z/(1+alpha*z/L)*(1-z/h)**2
z = np.arange(2001)
K = kprofile(z)
plt.plot(K,z)
plt.show()
```
In general, K varies with height and stability:
$$ K(z) = \frac{\kappa u_* z}{1+\alpha \frac{z}{L}}\left(1-\frac{z}{h}\right)^2 $$
where the Businger-Dyer function for stable stratification is recognizable in the denominator. Our aim is to find a z-averaged K-function that is still stability-dependent. As such, we seek to integrate
$$ \overline{K} = \int_0^1 K\left(\frac{z}{h}\right) \, d\left(\frac{z}{h}\right) $$
which can be written as
$$ \overline{K} = \kappa u_* h \int_0^1 \frac{x \left(1-x\right)^2}{1+\alpha \frac{h}{L}x} \, dx$$
with $x = z/h$. Expanding the numerator and defining $\alpha' = \alpha h/L$ gives
$$ \overline{K} = \kappa u_* h \int_0^1 \frac{x^3-2x^2+x}{1+\alpha'x} \, dx$$
The integral in this equation can be solved by successive partial integration:
$$
\begin{align}
& \int \frac{x^3-2x^2+x}{1+\alpha'x} \, dx \\
&= \frac{(x^3-2x^2+x)}{\alpha'} \ln(1+\alpha'x) - \int (3x^2-4x+1)(1+\alpha'x)^{-1} \,dx \\
&= \frac{(x^3-2x^2+x)}{\alpha'} \ln(1+\alpha'x) - \left[\frac{(3x^2-4x+1)}{\alpha'} \ln(1+\alpha'x) -\int (6x-4)(1+\alpha'x)^{-1} \,dx \right]\\
&= \frac{(x^3-2x^2+x)}{\alpha'} \ln(1+\alpha'x) - \left[\frac{(3x^2-4x+1)}{\alpha'} \ln(1+\alpha'x) - \left[\frac{(6x-4)}{\alpha'} \ln(1+\alpha'x) -\int 6(1+\alpha'x)^{-1} \,dx \right]\right]\\
&= \frac{(x^3-2x^2+x)}{\alpha'} \ln(1+\alpha'x) - \left[\frac{(3x^2-4x+1)}{\alpha'} \ln(1+\alpha'x) - \left[\frac{(6x-4)}{\alpha'} \ln(1+\alpha'x) - \frac{6}{\alpha'} \ln(1+\alpha'x)\right]\right]\\
&= \frac{x^3-5x^2+11x-11}{\alpha'} \ln(1+\alpha'x)
\end{align}
$$
```python
# Import packages
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
def kmean(alpha,x):
return 0.4*0.3*2000.*(x**3-5*x**2+11*x-11)/alpha*np.log(1+alpha*x)
h = 2000.
L = 800.
alpha = 4.7*h/L
z = np.arange(0,1,0.01)
ahh = kmean(alpha,z)
plt.plot(ahh,z)
plt.show()
print('integral value is', kmean(alpha,1.) - kmean(alpha,0.))
```
```python
```
| 1ecea320baf77f59ea639843954a554ce9bf4e5f | 73,285 | ipynb | Jupyter Notebook | ekman.ipynb | Peter9192/Python | 670dc7c01f0f3ba176c6a2e4b848f540c27c1c23 | [
"Apache-2.0"
]
| null | null | null | ekman.ipynb | Peter9192/Python | 670dc7c01f0f3ba176c6a2e4b848f540c27c1c23 | [
"Apache-2.0"
]
| null | null | null | ekman.ipynb | Peter9192/Python | 670dc7c01f0f3ba176c6a2e4b848f540c27c1c23 | [
"Apache-2.0"
]
| null | null | null | 195.426667 | 26,362 | 0.86966 | true | 3,308 | Qwen/Qwen-72B | 1. YES
2. YES | 0.91118 | 0.7773 | 0.70826 | __label__eng_Latn | 0.888701 | 0.483856 |
# Lorenz Equations
A demonstration of reproducible research.
```python
import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.colors import cnames
```
```python
%matplotlib inline
```
## Background
In the early 1960s, [Edward Lorenz](https://en.wikipedia.org/wiki/Edward_Norton_Lorenz), a mathematician and meteorologist, was studying convection. He considered a 2-dimensional flow of fluid of uniform depth with an imposed vertical temperature difference.
Simplifying a more general set of equations for convection, Lorenz derived:
$$
\begin{align}
\dot{x} & = \sigma(y-x) \\
\dot{y} & = \rho x - y - xz \\
\dot{z} & = -\beta z + xy
\end{align}
$$
Where
* x is proportional to the intensity of convective motion
* y is proportional to the temperature difference of ascending and descending currents
* z is proportional to the distortion of the vertical temperature profile
* $\sigma$ is the [Prandtl Number](https://en.wikipedia.org/wiki/Prandtl_number): the ratio of momentum diffusivity (Kinematic viscosity) and thermal diffusivity.
* $\rho$ is the [Rayleigh Number](https://en.wikipedia.org/wiki/Rayleigh_number): ratio of buoyancy and viscosity forces multiplied by the ratio of momentum and thermal diffusivities.
* $\beta$ is a geometric factor.
For more information on the physical meanings, see [this answer](http://physics.stackexchange.com/questions/89880/physical-interpretation-of-the-lorenz-system). Typical values of the three parameters are $\sigma=10, \beta=8/3, \rho=28$
## Define the equations
```python
def dx(x,y, sigma):
return sigma*(y-x)
```
```python
def dy(x, y, z, rho):
return x*(rho-z) - y
```
```python
def dz(x, y, z, beta):
return x*y-beta*z
```
Now create a function which returns the time derivative. To be able to integrate this numerically, it must accept a time argument t0.
```python
def lorenz_deriv(point, t0, sigma=10, beta=2.666, rho=28):
"""Compute the time-derivative of a Lorentz system
Arguments;
point : (x,y,z) values, tuple or list of length 3
t0 : time value
sigma : Prandtl number, default=10
beta : geometric factor, default=2.666
rho : Rayleigh number, default=28
Returns, the derivative (dx, dy, dt) calculated at point"""
x = point[0]
y = point[1]
z = point[2]
return [dx(x, y, sigma), dy(x, y, z, rho), dz(x, y, z, beta)]
```
Create a series of timesteps to integrate over.
```python
max_time = 100
t = np.linspace(0, max_time, int(250*max_time))
```
## Integrate numerically
Lorenz simulated the behaviour of these equations on an [LGP-30](https://en.wikipedia.org/wiki/LGP-30), a "desktop" machine weighing >300 kg and taking tape input. Since simulations took a long time, he would often print out intermediate results and restart the simulations from somewhere in the middle. The intermediate results were truncated to 3 decimal places, which led to his famous discovery...
First simulate the system with a low value of $\rho$, meaning conduction is favoured over convection.
```python
x0 = 3.0
y0 = 15.0
z0 = 0
sigma = 10
beta = 2.666
rho = 10
epsilon = 0.001
```
Here we use the `scipy.integrate.odeint` function, which uses the [LSODA](http://www.oecd-nea.org/tools/abstract/detail/uscd1227) solver.
```python
r1 = integrate.odeint(lorenz_deriv, (x0, y0, z0), t, args=(sigma, beta, rho))
```
And redo the simulation with slightly different initial conditions...
```python
r2 = integrate.odeint(lorenz_deriv, (x0, y0+epsilon, z0), t, args=(sigma, beta, rho))
```
Plot the results, examine intensity of convection over time...
```python
fig = plt.figure(figsize=(15,5))
ax1 = fig.add_subplot(211)
ax2 = fig.add_subplot(212)
ax1.plot(t, r1[:,0], 'r-', linewidth=2)
ax2.plot(t, r2[:,0], 'b-', linewidth=2)
```
Not so interesting. The steady-state solution shows that convection doesn't occur. In this case, making a small change to the initial conditions doesn't matter. Plot a scatter of results from the two runs.
```python
plt.scatter(r1[:,0], r2[:,0], marker='.')
```
Show how x, y, and z evolve over time
```python
fig = plt.figure(figsize=(7,7))
ax = fig.add_axes([0, 0, 1, 1], projection='3d')
x, y, z = r1.T
ax.plot(x, y, z, 'r-', linewidth=1)
x, y, z = r2.T
lines = ax.plot(x, y, z, 'b-', linewidth=1)
```
## Exercise
**Rerun the code with a more realistic value of $\rho=28$.**
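A minimal sketch of this exercise, reusing the objects defined above (`lorenz_deriv`, `t`, the initial conditions and `epsilon`); only `rho` changes:
```python
# Sketch of the exercise: same two runs as before, but with rho = 28.
rho = 28
r1 = integrate.odeint(lorenz_deriv, (x0, y0, z0), t, args=(sigma, beta, rho))
r2 = integrate.odeint(lorenz_deriv, (x0, y0 + epsilon, z0), t, args=(sigma, beta, rho))

fig, ax = plt.subplots(figsize=(15, 5))
ax.plot(t, r1[:, 0], 'r-', linewidth=1, label='y0')
ax.plot(t, r2[:, 0], 'b-', linewidth=1, label='y0 + epsilon')
ax.set_xlabel('t')
ax.set_ylabel('x')
ax.legend()
plt.show()
```
With $\rho=28$ the two trajectories stay close for a while and then diverge completely, even though the initial conditions differ only by `epsilon`.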
## The Lorenz Attractor
We can explore in more detail what happens over different ranges of starting points
```python
N = 5
sigma = 10
beta = 2.666
rho = 28
# generate random initial conditions uniform(-15, 15)
x0 = -15 + 30 * np.random.random((N, 3))
x0
```
array([[ -5.04117765, -11.17065497, -1.36463646],
[ 12.80126419, -12.38458801, -2.72177287],
[-14.99277182, 0.24928451, -9.76713759],
[ -5.45408094, 14.60035494, 10.50024973],
[ 10.92013321, -13.57775883, -13.42161216]])
```python
results = np.asarray([integrate.odeint(lorenz_deriv, x0i, t) for x0i in x0])
fig = plt.figure(figsize=(7,7))
ax = fig.add_axes([0, 0, 1, 1], projection='3d')
# choose a different color for each trajectory
colors = plt.cm.jet(np.linspace(0, 1, N))
for i in range(N):
x, y, z = results[i,:,:].T
ax.plot(x, y, z, '-', c=colors[i])
plt.show()
```
## Conclusions
The Lorenz equations behave *chaotically*, exhibiting extreme *sensitivity to initial conditions* at certain parameter values.
The values of x, y, and z tend toward two regions, representing two semi-stable states. Values can remain for some time in the same region, then suddenly flip into another state.
Broad aspects of the system can be predicted, but exact details such as when convection will begin and end are impossible to predict beyond a certain timescale. Because of the feedbacks between variables, even tiny deviations in initial conditions will grow over time.
| 191d0c2fa623ad034327f414e528f3774c1cdfeb | 380,805 | ipynb | Jupyter Notebook | Lorenz Equations.ipynb | samwisehawkins/teaching | 92ea7a0398111d3cc61afe1a81be08ff6330a8ed | [
"MIT"
]
| null | null | null | Lorenz Equations.ipynb | samwisehawkins/teaching | 92ea7a0398111d3cc61afe1a81be08ff6330a8ed | [
"MIT"
]
| null | null | null | Lorenz Equations.ipynb | samwisehawkins/teaching | 92ea7a0398111d3cc61afe1a81be08ff6330a8ed | [
"MIT"
]
| null | null | null | 803.386076 | 234,278 | 0.942703 | true | 1,723 | Qwen/Qwen-72B | 1. YES
2. YES | 0.908618 | 0.868827 | 0.789432 | __label__eng_Latn | 0.95884 | 0.672447 |
# Linear algebra overview
Linear algebra is the study of **vectors** and **linear transformations**. This notebook introduces concepts form linear algebra in a birds-eye overview. The goal is not to get into the details, but to give the reader a taste of the different types of thinking: computational, geometrical, and theoretical, that are used in linear algebra.
## Chapters overview
- 1/ Math fundamentals
- 2/ Intro to linear algebra
- Vectors
- Matrices
- Matrix-vector product representation of linear transformations
- Linear property: $f(a\mathbf{x} + b\mathbf{y}) = af(\mathbf{x}) + bf(\mathbf{y})$
- 3/ Computational linear algebra
- Gauss-Jordan elimination procedure
    - Augmented matrix representation of systems of linear equations
- Reduced row echelon form
- Matrix equations
- Matrix operations
- Matrix product
- Determinant
- Matrix inverse
- 4/ Geometrical linear algebra
- Points, lines, and planes
- Projection operation
- Coordinates
- Vector spaces
- Vector space techniques
- 5/ Linear transformations
- Vector functions
- Input and output spaces
- Matrix representation of linear transformations
- Column space and row spaces of matrix representations
- Invertible matrix theorem
- 6/ Theoretical linear algebra
- Eigenvalues and eigenvectors
- Special types of matrices
    - Abstract vector spaces
- Abstract inner product spaces
- Gram–Schmidt orthogonalization
- Matrix decompositions
- Linear algebra with complex numbers
- 7/ Applications
- 8/ Probability theory
- 9/ Quantum mechanics
- Notation appendix
```python
# helper code needed for running in colab
if 'google.colab' in str(get_ipython()):
    print('Downloading plot_helpers.py to util/ (only needed for colab)')
!mkdir util; wget https://raw.githubusercontent.com/minireference/noBSLAnotebooks/master/util/plot_helpers.py -P util
```
```python
# setup SymPy
from sympy import *
x, y, z, t = symbols('x y z t')
init_printing()
# a vector is a special type of matrix (an n-vector is either a nx1 or a 1xn matrix)
Vector = Matrix # define alias Vector so I don't have to explain this during video
Point = Vector # define alias Point for Vector since they're the same thing
# setup plotting
%matplotlib inline
import matplotlib.pyplot as mpl
from util.plot_helpers import plot_vec, plot_vecs, plot_line, plot_plane, autoscale_arrows
```
# 1/ Math fundamentals
Linear algebra builds upon high school math concepts like:
- Numbers (integers, rationals, reals, complex numbers)
- Functions ($f(x)$ takes an input $x$ and produces an output $y$)
- Basic rules of algebra
- Geometry (lines, curves, areas, triangles)
- The cartesian plane
```python
```
# 2/ Intro to linear algebra
Linear algebra is the study of vectors and matrices.
## Vectors
```python
# define two vectors
u = Vector([2,3])
v = Vector([3,0])
u
```
```python
v
```
```python
plot_vecs(u, v)
autoscale_arrows()
```
## Vector operations
- Addition (denoted $\vec{u}+\vec{v}$)
- Subtraction, the inverse of addition (denoted $\vec{u}-\vec{v}$)
- Scaling (denoted $\alpha \vec{u}$)
- Dot product (denoted $\vec{u} \cdot \vec{v}$)
- Cross product (denoted $\vec{u} \times \vec{v}$)
### Vector addition
```python
# algebraic
u+v
```
```python
# graphical
plot_vecs(u, v)
plot_vec(v, at=u, color='b')
plot_vec(u+v, color='r')
autoscale_arrows()
```
### Basis
When we describe the vector as the coordinate pair $(4,6)$, we're implicitly using the *standard basis* $B_s = \{ \hat{\imath}, \hat{\jmath} \}$. The vector $\hat{\imath} \equiv (1,0)$ is a unit-length vector in the $x$-direction,
and $\hat{\jmath} \equiv (0,1)$ is a unit-length vector in the $y$-direction.
To be more precise when referring to vectors, we can indicate the basis as a subscript of every coordinate vector $\vec{v}=(4,6)_{B_s}$, which tells us that $\vec{v}= 4\hat{\imath}+6\hat{\jmath}=4(1,0) +6(0,1)$.
```python
# the standard basis
ihat = Vector([1,0])
jhat = Vector([0,1])
v = 4*ihat + 6*jhat
v
```
```python
# geometrically...
plot_vecs(ihat, jhat, 4*ihat, 6*jhat, v)
autoscale_arrows()
```
The same vector $\vec{v}$ will correspond to a different pair of coefficients if a different basis is used.
For example, if we use the basis $B^\prime = \{ (1,1), (1,-1) \}$, the same vector $\vec{v}$ must be expressed as $\vec{v} = 5\vec{b}_1 +(-1)\vec{b}_2=(5,-1)_{B^\prime}$.
```python
# another basis B' = { (1,1), (1,-1) }
b1 = Vector([ 1, 1])
b2 = Vector([ 1, -1])
v = 5*b1 + (-1)*b2
v
# How did I know 5 and -1 are the coefficients w.r.t basis {b1,b2}?
# Matrix([[1,1],[1,-1]]).inv()*Vector([4,6])
```
```python
# geometrically...
plot_vecs(b1, b2, 5*b1, -1*b2, v)
autoscale_arrows()
```
## Matrix operations
- Addition (denoted $A+B$)
- Subtraction, the inverse of addition (denoted $A-B$)
- Scaling by a constant $\alpha$ (denoted $\alpha A$)
- Matrix-vector product (denoted $A\vec{x}$, related to linear transformations)
- Matrix product (denoted $AB$)
- Matrix inverse (denoted $A^{-1}$)
- Trace (denoted $\textrm{Tr}(A)$)
- Determinant (denoted $\textrm{det}(A)$ or $|A|$)
In linear algebra we'll extend the notion of a function $f:\mathbb{R}\to \mathbb{R}$ to functions that act on vectors, called *linear transformations*. We can understand the properties of linear transformations $T$ in analogy with ordinary functions:
\begin{align*}
\textrm{function }
f:\mathbb{R}\to \mathbb{R}
& \ \Leftrightarrow \,
\begin{array}{l}
\textrm{linear transformation }
T:\mathbb{R}^{n}\! \to \mathbb{R}^{m}
\end{array} \\
\textrm{input } x\in \mathbb{R}
& \ \Leftrightarrow \
\textrm{input } \vec{x} \in \mathbb{R}^n \\
\textrm{output } f(x) \in \mathbb{R}
& \ \Leftrightarrow \
\textrm{output } T(\vec{x})\in \mathbb{R}^m \\
g\circ\! f \: (x) = g(f(x))
& \ \Leftrightarrow \
% \textrm{matrix product }
S(T(\vec{x})) \\
\textrm{function inverse } f^{-1}
& \ \Leftrightarrow \
\textrm{inverse transformation } T^{-1} \\
\textrm{zeros of } f
& \ \Leftrightarrow \
\textrm{kernel of } T \\
\textrm{image of } f
& \ \Leftrightarrow \
\begin{array}{l}
\textrm{image of } T
\end{array}
\end{align*}
## Linear property
$$
T(a\mathbf{x}_1 + b\mathbf{x}_2) = aT(\mathbf{x}_1) + bT(\mathbf{x}_2)
$$
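A quick SymPy check of this property for a matrix transformation $T(\vec{x}) = M\vec{x}$ (the matrix, vectors, and scalars below are arbitrary example values):
```python
# Sketch: verify the linear property for T(x) = M*x with arbitrary inputs.
M  = Matrix([[1, 2],
             [3, 4]])
x1 = Vector([1, 0])
x2 = Vector([2, 5])
a, b = 3, -2
M*(a*x1 + b*x2) == a*(M*x1) + b*(M*x2)   # True
```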
## Matrix-vector product representation of linear transformations
Equivalence between linear transformstions $T$ and matrices $M_T$:
$$
T : \mathbb{R}^n \to \mathbb{R}^m
\qquad
\Leftrightarrow
\qquad
M_T \in \mathbb{R}^{m \times n}
$$
$$
\vec{y} = T(\vec{x})
\qquad
\Leftrightarrow
\qquad
\vec{y} = M_T\vec{x}
$$
# 3/ Computational linear algebra
## Gauss-Jordan elimination procedure
Suppose you're asked to solve for $x_1$ and $x_2$ in the following system of equations
\begin{align*}
1x_1 + 2x_2 &= 5 \\
3x_1 + 9x_2 &= 21.
\end{align*}
```python
# represent as an augmented matrix
AUG = Matrix([
[1, 2, 5],
[3, 9, 21]])
AUG
```
```python
# eliminate x_1 in second equation by subtracting 3x times the first equation
AUG[1,:] = AUG[1,:] - 3*AUG[0,:]
AUG
```
```python
# simplify second equation by dividing by 3
AUG[1,:] = AUG[1,:]/3
AUG
```
```python
# eliminate x_2 from first equation by subtracting 2x times the second equation
AUG[0,:] = AUG[0,:] - 2*AUG[1,:]
AUG
```
This augmented matrix is in *reduced row echelon form* (RREF), and corresponds to the system of equations:
\begin{align*}
1x_1 \ \ \qquad &= 1 \\
1x_2 &= 2,
\end{align*}
so the solution is $x_1=1$ and $x_2=2$.
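As a sanity check, SymPy can produce the same reduced row echelon form in one call (a small sketch using `Matrix.rref`):
```python
# Sanity check: rref() reproduces the reduced row echelon form obtained above.
Matrix([
    [1, 2,  5],
    [3, 9, 21]]).rref()   # (Matrix([[1, 0, 1], [0, 1, 2]]), (0, 1))
```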
## Matrix equations
See **page 177** in v2.2 of the book.
## Matrix product
```python
a,b,c,d,e,f, g,h,i,j = symbols('a b c d e f g h i j')
A = Matrix([[a,b],
[c,d],
[e,f]])
B = Matrix([[g,h],
[i,j]])
A, B
```
```python
A*B
```
```python
def mat_prod(A, B):
"""Compute the matrix product of matrices A and B."""
assert A.cols == B.rows, "Error: matrix dimensions not compatible."
m, ell = A.shape # A is a m x ell matrix
ell, n = B.shape # B is a ell x n matrix
C = zeros(m,n)
for i in range(0,m):
for j in range(0,n):
C[i,j] = A[i,:].dot(B[:,j])
return C
mat_prod(A,B)
```
```python
# mat_prod(B,A)
```
## Determinant
```python
a, b, c, d = symbols('a b c d')
A = Matrix([[a,b],
[c,d]])
A.det()
```
```python
# Consider the parallelogram with sides:
u1 = Vector([3,0])
u2 = Vector([2,2])
plot_vecs(u1,u2)
plot_vec(u1, at=u2, color='k')
plot_vec(u2, at=u1, color='b')
autoscale_arrows()
# What is the area of this parallelogram?
```
```python
# base = 3, height = 2, so area is 6
```
```python
# Compute the area of the parallelogram with sides u1 and u2 using the determinant
A = Matrix([[3,0],
[2,2]])
A.det()
```
## Matrix inverse
For an invertible matrix $A$, the matrix inverse $A^{-1}$ acts to undo the effects of $A$:
$$
A^{-1} A \vec{v} = \vec{v}.
$$
The effect applying $A$ followed by $A^{-1}$ (or the other way around) is the identity transformation:
$$
A^{-1}A \ = \ \mathbb{1} \ = \ AA^{-1}.
$$
```python
A = Matrix([[1, 2],
[3, 9]])
A
```
```python
# Compute deteminant to check if inverse matrix exists
A.det()
```
The determinant is non-zero, so the inverse exists.
```python
A.inv()
```
```python
A.inv()*A
```
### Adjugate-matrix formula
The *adjugate matrix* of the matrix $A$ is obtained by replacing each entry of the matrix with a signed partial determinant calculation (a *cofactor*) and transposing the result. The minor $M_{ij}$ is the determinant of $A$ with its $i$th row and $j$th column removed, and the corresponding cofactor is $(-1)^{i+j}M_{ij}$.
```python
A.adjugate() / A.det()
```
### Augmented matrix approach
$$
\left[ \, A \, | \, \mathbb{1} \, \right]
\qquad
-\textrm{Gauss-Jordan elimination}\rightarrow
\qquad
\left[ \, \mathbb{1} \, | \, A^{-1} \, \right]
$$
```python
AUG = A.row_join(eye(2))
AUG
```
```python
# perform row operations until left side of AUG is in RREF
AUG[1,:] = AUG[1,:] - 3*AUG[0,:]
AUG[1,:] = AUG[1,:]/3
AUG[0,:] = AUG[0,:] - 2*AUG[1,:]
AUG
```
```python
# the inverse of A is in the right side of RREF(AUG)
AUG[:,2:5] # == A-inverse
```
```python
# verify A times A-inverse gives the identity matrix...
A*AUG[:,2:5]
```
### Using elementary matrices
Each row operation $\mathcal{R}_i$ can be represented as an elementary matrix $E_i$. The elementary matrix of a given row operation is obtained by performing the row operation on the identity matrix.
```python
E1 = eye(2)
E1[1,:] = E1[1,:] - 3*E1[0,:]
E2 = eye(2)
E2[1,:] = E2[1,:]/3
E3 = eye(2)
E3[0,:] = E3[0,:] - 2*E3[1,:]
E1, E2, E3
```
```python
# the sequence of three row operations transforms the matrix A into RREF
E3*E2*E1*A
```
Recall the definition $A^{-1}A=\mathbb{1}$, and we just observed that $E_3E_2E_1 A =\mathbb{1}$, so it must be that $A^{-1}=E_3E_2E_1$.
```python
E3*E2*E1
```
# 4/ Geometrical linear algebra
Points, lines, and planes are geometrical objects that are conveniently expressed using the language of vectors.
## Points
A point $p=(p_x,p_y,p_z)$ refers to a single location in $\mathbb{R}^3$.
```python
p = Point([2,4,5])
p
```
## Lines
A line is a one dimensional infinite subset of $\mathbb{R}^3$ that can be described as
$$
\ell: \{ p_o + \alpha \vec{v} \ | \ \forall \alpha \in \mathbb{R} \}.
$$
```python
po = Point([1,1,1])
v = Vector([1,1,0])
plot_line(v, po)
```
## Planes
A plane is a two-dimensional infinite subset of $\mathbb{R}^3$ that can be described in one of three ways:
The *general equation*:
$$
P: \left\{ \, Ax+By+Cz=D \, \right\}
$$
The *parametric equation*:
$$
P: \{ p_{\textrm{o}}+s\,\vec{v} + t\,\vec{w}, \ \forall s,t \in \mathbb{R} \},
$$
which defines a plane that contains the point $p_{\textrm{o}}$ and the vectors $\vec{v}$ and $\vec{w}$.
Or the *geometric equation*:
$$
P: \left\{ \vec{n} \cdot [ (x,y,z) - p_{\textrm{o}} ] = 0 \,\right\},
$$
which defines a plane that contains the point $p_{\textrm{o}}$ and has normal vector $\vec{n}$.
```python
# plot plane 2x + 1y + 1z = 5
normal = Vector([2, 1, 1])
D = 5
plot_plane(normal, D)
```
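The geometric and general descriptions are linked through $D = \vec{n}\cdot p_{\textrm{o}}$ for any point $p_{\textrm{o}}$ on the plane. A small sketch (the point used here is just an assumed example):
```python
# Sketch: recover the general-form constant D from a normal vector and a point on the plane.
n  = Vector([2, 1, 1])
po = Point([1, 2, 1])   # satisfies 2x + 1y + 1z = 5
D  = n.dot(po)
D                       # == 5
```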
## Projection operation
The projection of the vector $\vec{v}$ onto the direction $\vec{d}$ is denoted $\Pi_{\vec{d}}(\vec{v})$. The formula for computing the projection uses the dot product operation:
$$
\Pi_{\vec{d}}(\vec{v})
\ \equiv \
(\vec{v} \cdot \hat{d}) \hat{d}
\ = \
\left(\vec{v} \cdot \frac{\vec{d}}{\|\vec{d}\|} \right) \frac{\vec{d}}{\|\vec{d}\|}.
$$
```python
def proj(v, d):
"""Computes the projection of vector `v` onto direction `d`."""
return v.dot( d/d.norm() )*( d/d.norm() )
```
```python
v = Vector([2,2])
d = Vector([3,0])
proj_v_on_d = proj(v,d)
plot_vecs(d, v, proj_v_on_d)
autoscale_arrows()
```
The basic projection operation can be used to compute projections onto planes and distances between geometric objects (page 192).
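For instance, the distance from a point $p$ to the plane through $p_{\textrm{o}}$ with normal $\vec{n}$ is the length of the projection of $p - p_{\textrm{o}}$ onto the normal. A short sketch with assumed example points:
```python
# Sketch: point-to-plane distance as the length of a projection onto the normal.
n  = Vector([2, 1, 1])
po = Point([1, 2, 1])
p  = Point([3, 3, 3])
proj(p - po, n).norm()   # == 7/sqrt(6)
```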
## Bases and coordinate projections
See [page 225](https://minireference.com/static/excerpts/noBSLA_v2_preview.pdf#page=68) in v2.2 of the book:
- Different types of bases
- Orthonormal
- Orthogonal
- Generic
- Change of basis operation
## Vector spaces
See **page 231** in v2.2 of the book.
## Vector space techniques
See **page 244** in the book.
# 5/ Linear transformations
See [page 257](https://minireference.com/static/excerpts/noBSLA_v2_preview.pdf#page=70) in v2.2 of the book.
## Vector functions
Functions that take vectors as inputs and produce vectors as outputs:
$$
T:\mathbb{R}^{n}\! \to \mathbb{R}^{m}
$$
## Matrix representation of linear transformations
$$
T : \mathbb{R}^n \to \mathbb{R}^m
\qquad
\Leftrightarrow
\qquad
M_T \in \mathbb{R}^{m \times n}
$$
## Input and output spaces
We can understand the properties of linear transformations $T$, and their matrix representations $M_T$ in analogy with ordinary functions:
\begin{align*}
\textrm{function }
f:\mathbb{R}\to \mathbb{R}
& \ \Leftrightarrow \,
\begin{array}{l}
\textrm{linear transformation }
T:\mathbb{R}^{n}\! \to \mathbb{R}^{m} \\
\textrm{represented by the matrix } M_T \in \mathbb{R}^{m \times n}
\end{array} \\
%
\textrm{input } x\in \mathbb{R}
& \ \Leftrightarrow \
\textrm{input } \vec{x} \in \mathbb{R}^n \\
%\textrm{compute }
\textrm{output } f(x) \in \mathbb{R}
& \ \Leftrightarrow \
% \textrm{compute matrix-vector product }
\textrm{output } T(\vec{x}) \equiv M_T\vec{x} \in \mathbb{R}^m \\
%\textrm{function composition }
g\circ\! f \: (x) = g(f(x))
& \ \Leftrightarrow \
% \textrm{matrix product }
S(T(\vec{x})) \equiv M_SM_T \vec{x} \\
\textrm{function inverse } f^{-1}
& \ \Leftrightarrow \
\textrm{matrix inverse } M_T^{-1} \\
\textrm{zeros of } f
& \ \Leftrightarrow \
\textrm{kernel of } T \equiv \textrm{null space of } M_T \equiv \mathcal{N}(A) \\
\textrm{image of } f
& \ \Leftrightarrow \
\begin{array}{l}
\textrm{image of } T \equiv \textrm{column space of } M_T \equiv \mathcal{C}(A)
\end{array}
\end{align*}
Observe that we refer to the linear transformation $T$ and its matrix representation $M_T$ interchangeably.
## Finding matrix representations
See [page 269](https://minireference.com/static/excerpts/noBSLA_v2_preview.pdf#page=74) in v2.2 of the book.
## Invertible matrix theorem
See [page 288](https://minireference.com/static/excerpts/noBSLA_v2_preview.pdf#page=78) in the book.
# 6/ Theoretical linear algebra
## Eigenvalues and eigenvectors
An eigenvector of the matrix $A$ is a special input vector, for which the matrix $A$ acts as a scaling:
$$
A\vec{e}_\lambda = \lambda\vec{e}_\lambda,
$$
where $\lambda$ is called the *eigenvalue* and $\vec{e}_\lambda$ is the corresponding eigenvector.
```python
A = Matrix([[1, 5],
[5, 1]])
A
```
```python
A*Vector([1,0])
```
```python
A*Vector([1,1])
```
The *characteristic polynomial* of the matrix $A$ is defined as
$$
p(\lambda) \equiv \det(A-\lambda \mathbb{1}).
$$
```python
l = symbols('lambda')
(A-l*eye(2)).det()
```
```python
# the roots of the characteristic polynomial are the eigenvalues of A
solve( (A-l*eye(2)).det(), l)
```
```python
# or call `eigenvals` method
A.eigenvals()
```
```python
A.eigenvects()
# can also find eigenvects using (A-6*eye(2)).nullspace() and (A+4*eye(2)).nullspace()
```
```python
Q, Lambda = A.diagonalize()
Q, Lambda
```
```python
Q*Lambda*Q.inv() # == eigendecomposition of A
```
## Special types of matrices
See [page 312](https://minireference.com/static/excerpts/noBSLA_v2_preview.pdf#page=83) in v2.2 of the book.
## Abstract vectors paces
Generalizes vector techniques to other vector-like quantities, allowing us to talk about bases, dimension, etc.
See [page 318](https://minireference.com/static/excerpts/noBSLA_v2_preview.pdf#page=84) in the book.
## Abstract inner product spaces
Uses geometrical notions like length and orthogonality for abstract vectors.
See **page 322** in the book.
## Gram–Schmidt orthogonalization
See **page 328**.
## Matrix decompositions
See **page 332**.
## Linear algebra with complex numbers
See [page 339](https://minireference.com/static/excerpts/noBSLA_v2_preview.pdf#page=88) in v2.2 of the book.
# Applications chapters
- Chapter 7: Applications
- Chapter 8: Probability theory
- Chapter 9: Quantum mechanics
# Notation appendix
Check out [page 571](https://minireference.com/static/excerpts/noBSLA_v2_preview.pdf#page=142) in the book.
| 0c5c685e85e3bbb31e1b0bb78d93180024948b8c | 221,561 | ipynb | Jupyter Notebook | Linear_algebra_chapters_overview.ipynb | minireference/noBSLAnotebooks | 3d6acb134266a5e304cb2d51c5ac4dc3eb3949b4 | [
"MIT"
]
| 116 | 2016-04-20T13:56:02.000Z | 2022-03-30T08:55:08.000Z | Linear_algebra_chapters_overview.ipynb | minireference/noBSLAnotebooks | 3d6acb134266a5e304cb2d51c5ac4dc3eb3949b4 | [
"MIT"
]
| 2 | 2021-07-01T17:00:38.000Z | 2021-07-01T19:34:09.000Z | Linear_algebra_chapters_overview.ipynb | minireference/noBSLAnotebooks | 3d6acb134266a5e304cb2d51c5ac4dc3eb3949b4 | [
"MIT"
]
| 29 | 2017-02-04T05:22:23.000Z | 2021-12-28T00:06:50.000Z | 95.377099 | 42,448 | 0.849685 | true | 5,877 | Qwen/Qwen-72B | 1. YES
2. YES | 0.798187 | 0.746139 | 0.595558 | __label__eng_Latn | 0.901683 | 0.222012 |
# Nonlinear regression for KL
```python
import time
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
```
```python
dttm = time.strftime("%Y%m%d-%H%M%S")
dttm
```
```python
np.random.randint(0x7fff_ffff)
```
```python
# random_state = np.random.RandomState(575_727_528)
# random_state = np.random.RandomState()
```
<br>
## Monte Carlo estimate
```python
import tqdm
```
KL divergence for $\mathbb{R}$-Gaussian variational inference:
$$
KL(\mathcal{N}(w\mid \theta, \alpha \theta^2) \|
\tfrac1{\lvert w \rvert})
\propto - \tfrac12 \log \alpha
+ \mathbb{E}_{\xi \sim \mathcal{N}(1, \alpha)}
\log{\lvert \xi \rvert}
% = - \tfrac12 \log \alpha
% + \log{\sqrt{\alpha}}
% + \tfrac12 \mathbb{E}_{\xi \sim \mathcal{N}(0, 1)}
% \log{\bigl\lvert \tfrac1{\sqrt{\alpha}} + \xi \bigr\rvert^2}
= \tfrac12 \mathbb{E}_{\xi \sim \mathcal{N}(0, 1)}
\log{\bigl\lvert \xi + \tfrac1{\sqrt{\alpha}}\bigr\rvert^2}
\,. $$
```python
def kldiv_real_mc(log_alpha, m=1e5):
kld, m = -0.5 * log_alpha, int(m)
for i, la in enumerate(tqdm.tqdm(log_alpha)):
eps = np.random.randn(m) * np.sqrt(np.exp(la))
kld[i] += np.log(abs(eps + 1)).mean(axis=-1)
return kld
def kldiv_real_mc_reduced(log_alpha, m=1e5):
kld, m = np.zeros_like(log_alpha), int(m)
for i, la in enumerate(tqdm.tqdm(log_alpha)):
eps = np.random.randn(m)
kld[i] = np.log(abs(eps + np.exp(-0.5 * la))).mean(axis=-1)
return kld
```
Suppose $q(z) = \mathcal{CN}(\theta, \alpha \lvert\theta\rvert^2, 0)$ and
$p(z) \propto \lvert z\rvert^{-\beta}$. Each per-weight term in the total divergence is
$$
\begin{align}
KL(q\|p)
&= \mathbb{E}_{q(z)} \log \tfrac{q(z)}{p(z)}
= \mathbb{E}_{q(z)} \log q(z) - \mathbb{E}_{q(z)} \log p(z)
% \\
% &= - \log \bigl\lvert \pi e \alpha \lvert\theta\rvert^2 \bigr\rvert
% + \beta \mathbb{E}_{q(z)} \log \lvert z\rvert
% + C
% \\
% &= - \log \pi e
% - \log \alpha \lvert\theta\rvert^2
% + \beta \mathbb{E}_{\varepsilon \sim \mathcal{CN}(1, \alpha, 0)}
% \log \lvert \theta \rvert \lvert \varepsilon\rvert
% + C
% \\
% &= - \log \pi e - \log \alpha
% + \tfrac{\beta - 2}2 \log \lvert \theta \rvert^2
% + \beta \mathbb{E}_{\varepsilon \sim \mathcal{CN}(1, \alpha, 0)}
% \log \lvert \varepsilon\rvert
% + C
% \\
% &= - \log \pi e - \log \alpha
% + \tfrac{\beta - 2}2 \log \lvert \theta \rvert^2
% + \tfrac\beta2 \mathbb{E}_{z \sim \mathcal{CN}(0, \alpha, 0)}
% \log \bigl\lvert z + 1 \bigr\rvert^2
% + C
\\
&= - \log \pi e
+ \tfrac{\beta - 2}2 \log \lvert \theta \rvert^2
+ \tfrac{\beta-2}2 \log\alpha
+ \tfrac\beta2 \mathbb{E}_{z \sim \mathcal{CN}(0, 1, 0)}
\log \bigl\lvert z + \tfrac1{\sqrt{\alpha}} \bigr\rvert^2
+ C
\\
&= - \log \pi e
+ \tfrac{\beta - 2}2 \log \lvert \theta \rvert^2
+ \tfrac{\beta - 2}2 \log\alpha
+ \tfrac\beta2 \mathbb{E}_{\varepsilon \sim \mathcal{N}_2\bigl(0, \tfrac12 I\bigr)}
\log \bigl((\varepsilon_1 + \tfrac1{\sqrt{\alpha}})^2 + \varepsilon_2^2\bigr)
+ C
\end{align}
\,. $$
The KL divergence for the $\mathbb{C}$-Gaussian case with $\beta=2$:
$$
KL(\mathcal{N}^{\mathbb{C}}(w\mid \theta, \alpha \lvert\theta\rvert^2, 0) \|
\tfrac1{\lvert w \rvert})
\propto
- \log\alpha
+ \mathbb{E}_{\xi \sim \mathcal{CN}(1, \alpha, 0)}
\log \lvert \xi \rvert^2
= \mathbb{E}_{z \sim \mathcal{CN}(0, 1, 0)}
\log \bigl\lvert z + \tfrac1{\sqrt{\alpha}} \bigr\rvert^2
\,. $$
```python
def kldiv_cplx_mc(log_alpha, m=1e5):
kld, m = - log_alpha, int(m)
for i, la in enumerate(tqdm.tqdm(log_alpha)):
eps = np.random.randn(m) + 1j * np.random.randn(m)
eps *= np.sqrt(np.exp(la) / 2)
kld[i] += 2 * np.log(abs(eps + 1)).mean(axis=-1)
return kld
def kldiv_cplx_mc_reduced(log_alpha, m=1e5):
kld, m, isq2 = np.zeros_like(log_alpha), int(m), 1/np.sqrt(2)
for i, la in enumerate(tqdm.tqdm(log_alpha)):
eps = isq2 * np.random.randn(m) + 1j * isq2 * np.random.randn(m)
kld[i] = 2 * np.log(abs(eps + np.exp(-0.5 * la))).mean(axis=-1)
return kld
```
In fact there is a "simple" expression for the expectation of $
\log \lvert z + \mu \rvert^2
$ for $z\sim \mathcal{CN}(0, 1, 0)$ found here
[The Expected Logarithm of a Noncentral Chi-Square Random Variable](http://moser-isi.ethz.ch/explog.html)
.
For $u \sim \mathcal{CN}(0, 1, 0)$ and $\mu\in \mathbb{C}$ we have
$$
g(\mu)
= \mathbb{E} \log \lvert u + \mu\rvert^2
= \log \lvert \mu \rvert^2 - \mathop{Ei}{(-\lvert \mu \rvert^2)}
\,, $$
where $
% \mathop{Ei}(x) = - \int_{-x}^\infty \tfrac{e^{-t}}t dt
\mathop{Ei}(x) = \int_{-\infty}^x \tfrac{e^u}u du
$.
Thus for $z \sim \mathcal{CN}(0, \alpha, 0)$, $\alpha > 0$, we get
$$
\mathbb{E} \log \lvert z + 1\rvert^2
% = \mathbb{E} \log \bigl\lvert \sqrt\alpha u + 1\bigr\rvert^2
= \mathbb{E} \log \alpha \bigl\lvert u + \tfrac1{\sqrt\alpha} \bigr\rvert^2
% = \log \alpha + g\bigl(\tfrac1{\sqrt\alpha}\bigr)
% = \log \alpha + \log \tfrac1\alpha - \mathop{Ei}{(-\tfrac1\alpha)}
= - \mathop{Ei}{(-\tfrac1\alpha)}
\,. $$
Therefore
$$
\begin{align}
KL(q\|p)
&= \mathbb{E}_{q(z)} \log \tfrac{q(z)}{p(z)}
= \mathbb{E}_{q(z)} \log q(z) - \mathbb{E}_{q(z)} \log p(z)
\\
&= - \log \pi e - \log \alpha
+ \tfrac{\beta - 2}2 \log \lvert \theta \rvert^2
+ \tfrac\beta2 \mathbb{E}_{z \sim \mathcal{CN}(0, \alpha, 0)}
\log \bigl\lvert z + 1 \bigr\rvert^2
+ C
\\
&= - \log \pi e - \log \alpha
+ (\beta - 2) \log \lvert \theta \rvert
- \tfrac\beta2 \mathop{Ei}{(-\tfrac1\alpha)}
+ C
\end{align}
\,. $$
For $\beta = 2$ we get
$$
KL(q\|p)
= C - \log \pi e - \log \alpha
- \mathop{Ei}{\bigl(-\tfrac1\alpha \bigr)}
\,. $$
```python
def kl_div_exact(log_alpha):
return - log_alpha - expi(- np.exp(- log_alpha)) # - np.euler_gamma
```
Differentiable exponential integral for torch.
```python
import torch
import torch.nn.functional as F
from scipy.special import expi
class ExpiFunction(torch.autograd.Function):
@staticmethod
def forward(ctx, x):
ctx.save_for_backward(x)
x_cpu = x.data.cpu().numpy()
output = expi(x_cpu, dtype=x_cpu.dtype)
return torch.from_numpy(output).to(x.device)
@staticmethod
def backward(ctx, grad_output):
x = ctx.saved_tensors[-1]
return grad_output * torch.exp(x) / x
torch_expi = ExpiFunction.apply
input = torch.randn(20, 20).to(torch.double)
assert torch.autograd.gradcheck(torch_expi, input.requires_grad_(True))
```
<br>
Loky backend seems to correctly deal with `np.random`:
[Random state within joblib.Parallel](https://joblib.readthedocs.io/en/latest/auto_examples/parallel_random_state.html)
```python
import joblib
def par_kldiv_real_mc(log_alpha, m=1e5):
def _kldiv_one(log_alpha):
eps = np.random.randn(int(m)) * np.sqrt(np.exp(log_alpha))
return - 0.5 * log_alpha + np.log(abs(eps + 1)).mean(axis=-1)
kldiv_one = joblib.delayed(_kldiv_one)
par_ = joblib.Parallel(n_jobs=-1, backend="loky", verbose=0)
return np.array(par_(kldiv_one(la) for la in tqdm.tqdm(log_alpha)))
```
Compute (or load from cache) the MC estimate of the **negative** Kullback-Leibler divergence.
* this is a legacy computation: in April 2019 I was estimating the **negative** kl-divergence,
to keep aligned with the approximation of Molchanov et al. (2017).
```python
import os
import gzip
import joblib
filename = "../assets/neg kl-div mc 20190516-134609.gz"
if os.path.exists(filename):
# load from cache
with gzip.open(filename, "rb") as fin:
cache = joblib.load(fin)
## HERE!
neg_kl_real_mc, neg_kl_cplx_mc = cache["real"], cache["cplx"]
alpha = cache["alpha"]
log_alpha = np.log(alpha)
else:
alpha = np.logspace(-8, 8, num=4096)
log_alpha = np.log(alpha)
# get an MC estimate of the negative kl-divergence
neg_kl_real_mc = -kldiv_real_mc(log_alpha, m=1e7)
neg_kl_cplx_mc = -kldiv_cplx_mc(log_alpha, m=1e7)
filename = f"../assets/neg kl-div mc {dttm}.gz"
with gzip.open(filename, "wb", compresslevel=5) as fout:
joblib.dump({
"m" : 1e7,
"alpha" : alpha,
"real": neg_kl_real_mc,
"cplx": neg_kl_cplx_mc,
}, fout)
# end if
print(filename)
```
```text
100%|██████████| 513/513 [16:54<00:00, 1.93s/it]
100%|██████████| 513/513 [39:25<00:00, 4.56s/it]
```
```text
100%|██████████| 4096/4096 [22:08<00:00, 3.25it/s]
100%|██████████| 4096/4096 [56:05<00:00, 1.21it/s]
```
<br>
## Non-linear regression
**Negative** kl-div approximation from [arxiv:1701.05369](https://arxiv.org/pdf/1701.05369.pdf)
$$
- KL(\mathcal{N}(w\mid \theta, \alpha \theta^2) \|
\tfrac1{\lvert w \rvert})
\approx
k_1 \sigma(k_2 + k_3 \log \alpha) + C
- k_4 \log (1 + e^{-\log \alpha})
\bigg\vert_{C,\, k_4 = -k_1,\, \tfrac12}
\,. $$
```python
from scipy.special import expit
def np_neg_kldiv_approx(k, log_alpha):
k1, k2, k3, k4 = k
C = 0 # -k1
sigmoid = expit(k2 + k3 * log_alpha)
softplus = - k4 * np.logaddexp(0, -log_alpha)
return k1 * sigmoid + softplus + C
def tr_neg_kldiv_approx(k, log_alpha):
k1, k2, k3, k4 = k
C = 0 # -k1
sigmoid = torch.sigmoid(k2 + k3 * log_alpha)
softplus = - k4 * F.softplus(- log_alpha)
return k1 * sigmoid + softplus + C
```
$x \mapsto \log(1 + e^x)$ is softplus and needs different
compute paths depending on the sign of $x$:
$$ x\mapsto \log(1+e^{-\lvert x\rvert}) + \max{\{x, 0\}} \,. $$
$$
\log\alpha - \log(1 + e^{\log\alpha})
= \log\tfrac{\alpha}{1 + \alpha}
= - \log\tfrac{1 + \alpha}{\alpha}
= - \log(1 + \tfrac1\alpha)
= - \log(1 + e^{-\log\alpha})
\,. $$
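A minimal numpy sketch of the two-path softplus evaluation described above (the approximation code relies on `np.logaddexp` and `F.softplus`, which handle this stably):
```python
# Sketch: numerically stable softplus via log(1 + exp(x)) = log(1 + exp(-|x|)) + max(x, 0).
def softplus_stable(x):
    return np.logaddexp(0, -np.abs(x)) + np.maximum(x, 0)

x_chk = np.r_[-30.0, -1.0, 0.0, 1.0, 30.0]
np.allclose(softplus_stable(x_chk), np.log1p(np.exp(x_chk)))   # True
```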
Fit a curve to the MC estimate using a fused value-and-gradient ("level-grad") objective.
```python
tr_neg_kl_cplx_mc = torch.from_numpy(neg_kl_cplx_mc)
tr_neg_kl_real_mc = torch.from_numpy(neg_kl_real_mc)
tr_log_alpha = torch.from_numpy(log_alpha)
def fused_mse(k, log_alpha, target): # torch
tr_k = torch.from_numpy(k).requires_grad_(True)
approx = tr_neg_kldiv_approx(tr_k, log_alpha)
loss = F.mse_loss(approx, target, reduction="mean")
loss.backward()
return loss.item(), tr_k.grad.numpy()
```
<br>
Compare the mc estimate against the exact value
```python
resid = (- kl_div_exact(log_alpha)) - neg_kl_cplx_mc
plt.plot(log_alpha, resid)
abs(resid).mean(), resid.std()
```
<br>
Now let's find the optimal nonlinear regression fit using L-BFGS:
$$
\frac12 \sum_i \bigl(
y_i - f(\log \alpha_i, \theta)
\bigr)^2 \longrightarrow \min_\theta
\,. $$
```python
from scipy.optimize.lbfgsb import fmin_l_bfgs_b
k_real = fmin_l_bfgs_b(fused_mse, np.r_[0.5, 0., 1., 0.5],
bounds=((None, None), (None, None), (None, None), (0.5, 0.5)),
args=(tr_log_alpha, tr_neg_kl_real_mc))[0]
k_cplx = fmin_l_bfgs_b(fused_mse, np.r_[0.5, 0., 1., 1.],
bounds=((None, None), (None, None), (None, None), (1.0, 1.0)),
args=(tr_log_alpha, tr_neg_kl_cplx_mc))[0]
```
The fit coefficients from the paper.
```python
k_real_1701_05369 = np.r_[0.63576, 1.87320, 1.48695, 0.5]
```
```python
k_real_1701_05369, k_real, k_cplx
```
```text
(array([0.63576, 1.8732 , 1.48695, 0.5 ]),
array([0.63567313, 1.88114543, 1.49136378, 0.5 ]),
array([0.57810091, 1.45926293, 1.36525956, 1. ]))
```
<br>
```python
neg_kl_real_1701_05369 = np_neg_kldiv_approx(k_real_1701_05369, log_alpha)
neg_kl_real_approx = np_neg_kldiv_approx(k_real, log_alpha)
neg_kl_cplx_approx = np_neg_kldiv_approx(k_cplx, log_alpha)
```
```python
fig = plt.figure(figsize=(12, 5))
ax = fig.add_subplot(111, xlabel=r"$\log\alpha$", ylabel="-kld")
ax.plot(log_alpha, -(neg_kl_real_mc - neg_kl_real_1701_05369),
label=r"$\mathbb{R}$ - arXiv:1701.05369", alpha=0.5)
ax.plot(log_alpha, -(neg_kl_real_mc - neg_kl_real_approx),
label=r"$\mathbb{R}$ - lbfgs", alpha=0.5)
ax.plot(log_alpha, -(neg_kl_cplx_mc - neg_kl_cplx_approx),
label=r"$\mathbb{C}$ - lbfgs")
ax.axhline(0., c="k", zorder=-10)
ax.legend(ncol=3)
ax.set_title("Regression residuals of the MC estimate of the KL-div for VD")
plt.show()
```
```python
fig = plt.figure(figsize=(12, 5))
ax = fig.add_subplot(111, xlabel=r"$\log\alpha$", ylabel="-kld")
ax.plot(log_alpha, kl_div_exact(log_alpha), c="k", label=r"$\mathbb{C}$ - exact", lw=2)
ax.plot(log_alpha, -neg_kl_real_mc, label=r"$\mathbb{R}$")
ax.plot(log_alpha, -neg_kl_cplx_mc, label=r"$\mathbb{C}$")
ax.legend(ncol=3)
ax.set_title("the MC estimate of the KL-div for VD")
plt.show()
```
Indeed, really close in uniform norm ($\|\cdot\|_\infty$ on $C^1(\mathbb{R})$).
```python
abs(neg_kl_real_mc - neg_kl_real_1701_05369).max(), \
abs(neg_kl_real_mc - neg_kl_real_approx).max(), \
abs(neg_kl_cplx_mc - neg_kl_cplx_approx).max()
```
```python
fig = plt.figure(figsize=(12, 5))
ax = fig.add_subplot(111, xlabel=r"$\log\alpha$", ylabel="-kld")
ax.plot(log_alpha, -neg_kl_cplx_mc, label=r"$\mathbb{C}$ - mc")
ax.plot(log_alpha, -neg_kl_cplx_approx, label=r"$\mathbb{C}$ - approx")
ax.plot(log_alpha, -neg_kl_real_mc, label=r"$\mathbb{R}$ - mc")
ax.plot(log_alpha, -neg_kl_real_approx, label=r"$\mathbb{R}$ - approx")
# ax.set_xlim(0, 20)
# ax.set_ylim(0.55, 0.6)
ax.legend()
```
## Exact kl-div grad for $\mathbb{R}$-gaussian (draft)
Note this paper here
[Moments of the log non-central chi-square distribution](https://arxiv.org/pdf/1503.06266.pdf)
correctly notices that on
[p. 2446 of Lapidoth, Moser (2003)](http://moser-isi.ethz.ch/docs/papers/alap-smos-2003-3.pdf)
there is a missing $\log{2}$ term. The following analysis resembles the logic of this paper.
Let $(z_i)_{i=1}^m \sim \mathcal{N}(0, 1)$ iid and $
(\mu_i)_{i=1}^m \in \mathbb{R}
$. The random variable $
W = \sum_i (\mu_i + z_i)^2
$ is said to be $\chi^2_m(\lambda)$ distributed (noncentral $\chi^2$)
with noncentrality parameter $\lambda = \sum_i \mu_i^2$.
Consider the mgf of $W$:
$$
M_W(t)
= \mathbb{E}(e^{Wt})
= \prod_i \mathbb{E}(e^{(\mu_i + z_i)^2 t})
\,, $$
by independence. Now for $z \sim \mathcal{N}(\mu, 1)$
$$
\mathbb{E}(e^{z^2 t})
= \tfrac1{\sqrt{2\pi}}
\int_{-\infty}^{+\infty}
e^{z^2 t} e^{-\tfrac{(z-\mu)^2}2}
dz
\,. $$
Now, for $t < \tfrac12$
$$
z^2 t - \tfrac{(z - \mu)^2}2
= - \tfrac12 (1 - 2t) z^2 + z \mu - \tfrac{\mu^2}2
% = - \tfrac12 (1 - 2t) \bigl(
% z^2 - 2 z \tfrac\mu{1 - 2t} + \tfrac{\mu^2}{1 - 2t}
% \bigr)
% = - \tfrac12 (1 - 2t) \bigl( z - \tfrac\mu{1 - 2t} \bigr)^2
% - \tfrac12 (1 - 2t) \bigl(
% \tfrac{\mu^2}{1 - 2t}
% - \tfrac{\mu^2}{(1 - 2t)^2}
% \bigr)
% = - \tfrac12 (1 - 2t) \bigl( z - \tfrac\mu{1 - 2t} \bigr)^2
% - \tfrac{\mu^2}2 \bigl(
% \tfrac{1 - 2t}{1 - 2t} - \tfrac1{1 - 2t}
% \bigr)
% = - \tfrac12 (1 - 2t) \bigl( z - \tfrac\mu{1 - 2t} \bigr)^2
% + \tfrac{\mu^2}2 \tfrac{2t}{1 - 2t}
= - \tfrac12 (1 - 2t) \bigl( z - \tfrac\mu{1 - 2t} \bigr)^2
+ \mu^2 \tfrac{t}{1 - 2t}
\,, $$
whence
$$
\mathbb{E}(e^{z^2 t})
= \tfrac1{\sqrt{2\pi}}
\int_{-\infty}^{+\infty}
e^{z^2 t} e^{-\tfrac{(z-\mu)^2}2}
dz
% = e^{\mu^2 \tfrac{t}{1 - 2t}}
% \tfrac1{\sqrt{2\pi}}
% \int_{-\infty}^{+\infty}
% e^{- \tfrac12 (1 - 2t) \bigl( z - \tfrac\mu{1 - 2t} \bigr)^2}
% dz
= e^{\mu^2 \tfrac{t}{1 - 2t}}
\tfrac1{\sqrt{2\pi}}
\int_{-\infty}^{+\infty}
e^{- \tfrac12 (1 - 2t) z^2}
dz
% = [u = \sqrt{1 - 2t} z]
% = e^{\mu^2 \tfrac{t}{1 - 2t}}
% \tfrac1{\sqrt{2\pi}}
% \int_{-\infty}^{+\infty}
% e^{- \tfrac12 u^2}
% \tfrac{du}{\sqrt{1 - 2t}}
= \tfrac{
\exp{\{\mu^2 \tfrac{t}{1 - 2t}\}}
}{\sqrt{1 - 2t}}
\,. $$
Therefore
$$
M_W(t)
= \mathbb{E}(e^{Wt})
= \prod_i \tfrac{
e^{\mu_i^2 \tfrac{t}{1 - 2t}}
}{\sqrt{1 - 2t}}
% = e^{\lambda \tfrac{t}{1 - 2t}}
% (1 - 2t)^{-\tfrac{m}2}
% = e^{\lambda \tfrac{- t}{2t - 1}}
% (1 - 2t)^{-\tfrac{m}2}
% = e^{\tfrac\lambda2 \tfrac{1 - 2t - 1}{2t - 1}}
% (1 - 2t)^{-\tfrac{m}2}
% = e^{- \tfrac\lambda2 (1 + \tfrac1{2t - 1})}
% (1 - 2t)^{-\tfrac{m}2}
= e^{- \tfrac\lambda2} e^{\tfrac\lambda2 \tfrac1{1 - 2t}}
(1 - 2t)^{-\tfrac{m}2}
\,. $$
Expanding the exponential as infinte series:
$$
M_W(t)
= e^{- \tfrac\lambda2} e^{\tfrac\lambda2 \tfrac1{1 - 2t}}
(1 - 2t)^{-\tfrac{m}2}
% = e^{- \tfrac\lambda2} (1 - 2t)^{-\tfrac{m}2}
% \sum_{n \geq 0} \tfrac{\bigl(\tfrac\lambda2\bigr)^n}{n! (1 - 2t)^n}
= \sum_{n \geq 0} \tfrac{e^{- \tfrac\lambda2} \bigl(\tfrac\lambda2\bigr)^n}{n!}
(1 - 2t)^{-\tfrac{2n + m}2}
= \sum_{n \geq 0} \tfrac{e^{- \tfrac\lambda2} \bigl(\tfrac\lambda2\bigr)^n}{n!}
\mathbb{E}_{x \sim \chi^2_{m + 2n}}(e^{x t})
\,. $$
<br>
Thus (really? this is how we derive this?!) the density of a non-central $\chi^2_m(\lambda)$ is given by
$$
f_W(x)
= e^{- \tfrac\lambda2} \sum_{n \geq 0} \tfrac{\bigl(\tfrac\lambda2\bigr)^n}{n!}
\tfrac{
x^{n + \tfrac{m}2 - 1} e^{-\tfrac{x}2}
}{
2^{n + \tfrac{m}2} \Gamma(n + \tfrac{m}2)
}
% = e^{- \tfrac\lambda2} \sum_{n \geq 0} \tfrac{\bigl(\tfrac\lambda2\bigr)^n}{n!}
% \tfrac{
% \bigl(\tfrac{x}2\bigr)^{n + \tfrac{m}2 - 1} e^{-\tfrac{x}2}
% }{
% 2 \Gamma(n + \tfrac{m}2)
% }
= \frac12 e^{- \tfrac{x + \lambda}2} \bigl(\tfrac{x}2\bigr)^{\tfrac{m}2 - 1}
\sum_{n \geq 0} \tfrac{
\bigl(\tfrac{x \lambda}4\bigr)^n
}{
n! \Gamma(n + \tfrac{m}2)
}
% = \frac12 e^{- \tfrac{x + \lambda}2}
% \bigl(\tfrac{x}\lambda\bigr)^{\tfrac{m}4 - \tfrac12}
% \bigl(\tfrac{x \lambda}4\bigr)^{\tfrac{m}4 - \tfrac12}
% \sum_{n \geq 0} \tfrac{
% \bigl(\tfrac{x \lambda}4\bigr)^n
% }{
% n! \Gamma(n + \tfrac{m}2)
% }
% = \frac12 e^{- \tfrac{x + \lambda}2}
% \bigl(\tfrac{x}\lambda\bigr)^{\tfrac{m}4 - \tfrac12}
% \bigl(\tfrac{\sqrt{x \lambda}}2\bigr)^{\tfrac{m}2 - 1}
% \sum_{n \geq 0} \tfrac{
% \bigl(\tfrac{x \lambda}4\bigr)^n
% }{
% n! \Gamma(n + \tfrac{m}2)
% }
= \frac12 e^{- \tfrac{x + \lambda}2}
\bigl(\tfrac{x}\lambda\bigr)^{\tfrac{m - 2}4}
I_{\bigl(\tfrac{m}2 - 1\bigr)}(\sqrt{x \lambda})
\,, $$
where $I_k$ is the modified Bessel function of the first kind
$$
I_k(s)
= \Bigl(\frac{s}2\Bigr)^k
\sum_{n \geq 0} \tfrac{
\bigl( \tfrac{s}2 \bigr)^{2n}
}{n! \Gamma(n + k + 1)}
\,. $$
The expected logarithm of $W$ is
$$
\mathbb{E}_{W\sim \chi^2_m(\lambda)} \log W
= \int_0^\infty f_W(x) \log x dx
% = \frac12 e^{- \tfrac\lambda2}
% \int_0^\infty \sum_{n \geq 0} \biggl(
% \tfrac{\bigl(\tfrac\lambda2\bigr)^n}{n!}
% \tfrac{e^{- \tfrac{x}2}}{\Gamma(n + \tfrac{m}2)}
% \bigl(\tfrac{x}2\bigr)^{n + \tfrac{m}2 - 1}
% \biggr) \log x dx
% = [\text{ absolute summability and Fubini, or any other conv. thm for integrals of non-negative integrals}]
% = e^{- \tfrac\lambda2} \sum_{n \geq 0}
% \tfrac{\bigl(\tfrac\lambda2\bigr)^n}{n!}
% \tfrac1{\Gamma(n + \tfrac{m}2)}
% \int_0^\infty
% e^{- \tfrac{x}2} \bigl(\tfrac{x}2\bigr)^{n + \tfrac{m}2 - 1}
% (\log2 + \log \tfrac{x}2) \tfrac{dx}2
% = [u = \tfrac{x}2]
% = e^{- \tfrac\lambda2} \sum_{n \geq 0}
% \tfrac{\bigl(\tfrac\lambda2\bigr)^n}{n!}
% \tfrac1{\Gamma(n + \tfrac{m}2)}
% \int_0^\infty
% e^{- u} u^{n + \tfrac{m}2 - 1}
% (\log2 + \log u) du
% = [\text{definitions:} \Gamma, \psi]
% = e^{- \tfrac\lambda2} \sum_{n \geq 0}
% \tfrac{\bigl(\tfrac\lambda2\bigr)^n}{n!}
% (\psi(n + \tfrac{m}2) + \log2)
= \log{2}
+ e^{- \tfrac\lambda2} \sum_{n \geq 0}
\tfrac{\bigl(\tfrac\lambda2\bigr)^n}{n!}
\psi(n + \tfrac{m}2)
\,, $$
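A rough numerical check of this expansion (the truncation length and the values of $m$ and $\lambda$ below are arbitrary choices):
```python
# Rough check: the truncated series above vs. an MC estimate of E[log W], W ~ chi^2_m(lambda).
from scipy.special import digamma, gammaln
from scipy.stats import ncx2

m_, lam = 3, 1.7
n = np.arange(200)
poisson_w = np.exp(-lam / 2 + n * np.log(lam / 2) - gammaln(n + 1))
series = np.log(2) + np.sum(poisson_w * digamma(n + m_ / 2))
mc = np.log(ncx2.rvs(m_, lam, size=10**6)).mean()
series, mc
```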
<br>
Turns out the expectation $
\mathbb{E}_{z \sim \mathcal{N}(0, 1)}
\log{\bigl\lvert z + \tfrac1{\sqrt{\alpha}} \bigr\rvert^2}
$ is equal to $
\log{2} + g_1(\tfrac1{2\alpha})
$ where
$$
g_m(x)
% = e^{-x} \sum_{n \geq 0} \frac{x^n}{n! \Gamma(n + \tfrac{m}2)}
% \int_0^\infty e^{-t} t^{n + \tfrac{m}2-1} \log{t} dt
% e^{-x} \sum_{n=0}^{\infty} \frac{x^n}{n!} \psi(n + m / 2)
= e^{-x} \sum_{n \geq 0} \frac{x^n}{n!} \psi(n + \tfrac{m}2)
\,, $$
and $\psi(x)$ is the digamma function, i.e. $
\psi(x) = \tfrac{d}{dx} \log \Gamma(x)
$. The digamma function has the following useful properties:
$
\psi(z+1) = \psi(z) + \tfrac1z
$ and $
\psi(\tfrac12) = -\gamma - 2\log 2
$.
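Both identities are easy to verify numerically (a quick sketch with `scipy.special.digamma`):
```python
# Quick check of the two digamma identities quoted above.
from scipy.special import digamma
z_chk = 2.7
assert np.isclose(digamma(z_chk + 1), digamma(z_chk) + 1 / z_chk)
assert np.isclose(digamma(0.5), -np.euler_gamma - 2 * np.log(2))
```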
Differentiating the series within $g_m$ yields:
$$
\frac{d}{d x} g_m(x)
= e^{-x} \sum_{n\geq 1} \frac{x^{n-1}}{(n-1)!} \psi(n + \tfrac{m}2) - g_m(x)
% = e^{-x} \sum_{n\geq 0} \frac{x^n}{n!} \psi(n + \tfrac{m}2 + 1) - g_m(x)
% = e^{-x} \sum_{n\geq 0} \frac{x^n}{n!} (
% \psi(n + \tfrac{m}2 + 1) - \psi(n + \tfrac{m}2)
% )
% = e^{-x} \sum_{n\geq 0} \frac{x^n}{n!} \tfrac1{n + \tfrac{m}2}
= e^{-x} \sum_{n\geq 0} \frac{x^n}{n!} (n + \tfrac{m}2)^{-1}
= \cdots
\,. $$
We can differentiate the series in $g_m(x)$, since the sum converges everywhere
on $\mathbb{R}$. Indeed, it is a power series featuring nonnegative coefficients,
which is dominated by $
\sum_{n \geq 1} \tfrac{x^n}{n!} \log{(n+\tfrac{m}2)}
$, because $\psi(x)\leq \log x - \tfrac1{2x}$. By the ratio test, the dominating
series has infinite radius:
$$
\lim_{n\to\infty}
\biggl\lvert
\frac{
n! x^{n+1} \log{(n + 1 + \tfrac{m}2)}
}{
x^n \log{(n + \tfrac{m}2)} (n+1)!
}
\biggr\rvert
= \lim_{n\to\infty}
\lvert x \rvert
\biggl\lvert
\frac{
\log{(n + 1 + \tfrac{m}2)}
}{
\log{(n + \tfrac{m}2)} (n+1)
}
\biggr\rvert
% = \lim_{n\to\infty}
% \lvert x \rvert
% \biggl\lvert
% \frac{n + \tfrac{m}2}{
% (n + 1 + \tfrac{m}2)((n+1) + (n + \tfrac{m}2) \log{(n + \tfrac{m}2)})
% }
% \biggr\rvert
= 0 < 1
\,. $$
Since
$$
\frac{\log{x + a + 1}}{x \log{x + a}}
\sim \frac{\log{x+1}}{(x-a) \log{x}}
\sim \frac{\log{x+1}}{x \log{x}}
\sim \frac{\tfrac1{x+1}}{1 + \log{x}}
\to 0
\,. $$
A theorem from calculus states that the formal series derivative (integral)
coincides with the derivative (integral) of the function corresponding to
the power series (everywhere on the convergence region), and the convergence
region of the derivative (integral) coincides with the region of the original
power series.
Now, observe that $
e^t = \sum_{n\geq 0} \tfrac{t^n}{n!}
$ on $\mathbb{R}$, for $\alpha\neq 0$ we have $
\int_0^x t^{\alpha-1} dt
= \tfrac{x^\alpha}{\alpha}
$ and that
$$
\sum_{n\geq 0} \int_0^x \frac{t^{n+\alpha-1}}{n!} dt
= \int_0^x \sum_{n\geq 0} \frac{t^{n+\alpha-1}}{n!} dt
= \int_0^x t^{\alpha-1} e^t dt
\,. $$
by (MCT) on $
([0, x], \mathcal{B}([0, x]), dx)
$ whence
$$
\cdots
= x^{-\tfrac{m}2} e^{-x} \sum_{n\geq 0}
\frac{x^{n + \tfrac{m}2}}{n!} (n + \tfrac{m}2)^{-1}
% = x^{-\tfrac{m}2} e^{-x} \sum_{n\geq 0}
% \frac1{n!} \int_0^x t^{n + \tfrac{m}2 - 1} dt
% = x^{-\tfrac{m}2} e^{-x}
% \int_0^x t^{\tfrac{m}2 - 1} \sum_{n\geq 0} \frac{t^n}{n!} dt
= x^{-\tfrac{m}2} e^{-x}
\int_0^x t^{\tfrac{m}2 - 1} e^t dt
= \cdots
\,. $$
Using $u^2 = t$ on $[0, \infty]$ we get $2u du = dt$ and
$$
\int_0^x t^{\tfrac{m}2 - 1} e^t dt
% = \int_0^{\sqrt{x}} u^{m - 2} e^{u^2} 2 u du
= 2 \int_0^{\sqrt{x}} u^{m - 1} e^{u^2} du
\,.$$
Therefore
$$
\frac{d}{d x} g_m(x)
% = x^{-\tfrac{m}2} e^{-x}
% \int_0^x t^{\tfrac{m}2 - 1} e^t dt
= 2 x^{-\tfrac{m}2} e^{-x}
\int_0^{\sqrt{x}} u^{m - 1} e^{u^2} du
\,. $$
For $m=1$
$$
g_1(x)
= - \gamma - 2 \log{2}
+ e^{-x} \sum_{n\geq 0} \frac{x^n}{n!} \sum_{p=1}^n \frac2{2p - 1}
\,, $$
and the derivative can be computed thus
$$
\frac{d}{d x} g_1(x)
= 2 \tfrac{F(\sqrt{x})}{\sqrt{x}}
\,, $$
using Dawson's integral, $
F\colon \mathbb{R} \to \mathbb{R}
\colon x \mapsto e^{-x^2} \int_0^x e^{u^2} du
$, which exists as a special function (in `scipy`).
Now in SGD (and specifically in SGVB) we are concerned more with
the gradient field induced by the potential (which is the loss
function) than with the value itself. Thus, as far as the regularizing
penalty term is concerned, which is used to regularize the loss
objective, we can essentially ignore its forward-pass value (level)
and just compute its gradient (subgradient, normal cone) with respect
to the parameter of interest in SGD (unless it is a part of a constraint,
i.e. a downstream computation).
The derivative wrt $\alpha$ is
$$
\tfrac{d}{d \alpha} g_1(\tfrac1{2\alpha})
= -\tfrac1{2 \alpha^2} g_1'(\tfrac1{2\alpha})
% = -\tfrac1{2 \alpha^2} 2 \tfrac{F(\sqrt{\tfrac1{2\alpha}})}{\sqrt{\tfrac1{2\alpha}}}
% = -\tfrac1{2 \alpha^2} 2 \tfrac{F(\tfrac1{\sqrt{2\alpha}})}{\tfrac1{\sqrt{2\alpha}}}
= -\tfrac1{\alpha} \sqrt{\tfrac2{\alpha}} F(\tfrac1{\sqrt{2\alpha}})
\,. $$
Since $\alpha$ is nonegative, it it typically parametereized through its
logarithm and computed when needed. Thus in particular the gradient of
the divergence penalty w.r.t $\log \alpha$ is
$$
\frac{d}{d\log \alpha}
\tfrac12 \mathbb{E}_{z \sim \mathcal{N}(0,1)}
\log \lvert z + \tfrac1{\sqrt{\alpha}} \rvert^2
= \frac12 \frac{d\alpha}{d\log \alpha}
\frac{d}{d\alpha} \bigl(\mathbb{E}\cdots \bigr)
= - \frac\alpha2 \tfrac1{\alpha}
\sqrt{\tfrac2{\alpha}} F(\tfrac1{\sqrt{2\alpha}})
= - \tfrac1{\sqrt{2\alpha}} F(\tfrac1{\sqrt{2\alpha}})
\,. $$
```python
from scipy.special import dawsn
def kl_real_deriv(log_alpha):
tmp = np.exp(- 0.5 * (log_alpha + np.log(2)))
return -dawsn(tmp) * tmp
```
For $\beta = 2$ we get
$$
KL(q\|p)
= C - \log \pi e - \log \alpha
- \mathop{Ei}{\bigl(-\tfrac1\alpha \bigr)}
\,. $$
```python
def kl_cplx_deriv(log_alpha):
return -1 + np.exp(-np.exp(-log_alpha))
```
$$
\frac{d}{d y} \mathop{Ei}(-e^{-y})
= \frac{d}{d x} \mathop{Ei}(x) \bigg\vert_{x=-e^{-y}}
\frac{d(-e^{-y})}{d y}
= \frac{e^x}{x} \bigg\vert_{x=-e^{-y}} e^{-y}
= - e^{-e^{-y}}
\,. $$
Use autograd to get the derivative of the approximation
```python
def kldiv_approx_deriv(log_alpha, k):
k, x = map(torch.from_numpy, (k, log_alpha))
x.requires_grad_(True)
kldiv = -tr_neg_kldiv_approx(k, x).sum()
grad, = torch.autograd.grad(kldiv, x, grad_outputs=torch.tensor(1.).to(x))
return log_alpha, grad.cpu().numpy()
```
Let's estimate the derivative wrt $\log\alpha$ using symmetric differences.
```python
def symm_diff(x, y):
"""Symmetric difference derivative approximation.
Assumes x_i is sorted (axis=0), y_i = y(x_i).
"""
return x[1:-1], (y[2:] - y[:-2]) /(x[2:] - x[:-2])
```
Darken a given colour.
```python
from matplotlib.colors import to_rgb
from colorsys import rgb_to_hls, hls_to_rgb
def darker(color, a=0.5):
"""Adapted from this stackoverflow question_.
.. _question: https://stackoverflow.com/questions/37765197/
"""
h, l, s = rgb_to_hls(*to_rgb(color))
return hls_to_rgb(h, max(0, min(a * l, 1)), s)
```
Plot the `numerical` derivative of the MC estimate, the exact derivative and
the derivative of the fit approximation.
```python
fig = plt.figure(figsize=(6, 3), dpi=300)
ax = fig.add_subplot(111, xlabel=r"$\log\alpha$", ylabel=r"$\partial KL$")
# plot kl-real
line, = ax.plot(*kldiv_approx_deriv(log_alpha, k_real_1701_05369),
label=r"$\partial {KL}_\mathbb{R}$-approx",
linestyle="-", c="C1")
color, zorder = darker(line.get_color(), .15), line.get_zorder()
ax.plot(log_alpha, kl_real_deriv(log_alpha),
label=r"$\partial {KL}_\mathbb{R}$-exact",
linestyle=":", c=color, zorder=zorder + 5, lw=2, alpha=0.5)
color = darker(line.get_color(), 1.5)
ax.plot(*symm_diff(log_alpha, -neg_kl_real_mc),
label=r"$\Delta {KL}_\mathbb{R}$-MC",
c=color, zorder=zorder - 5, alpha=0.5)
# plot kl-cplx
line, = ax.plot(*kldiv_approx_deriv(log_alpha, k_cplx),
label=r"$\partial {KL}_\mathbb{C}$-approx",
linestyle="-", c="C3")
color, zorder = darker(line.get_color(), .15), line.get_zorder()
ax.plot(log_alpha, kl_cplx_deriv(log_alpha),
label=r"$\partial {KL}_\mathbb{C}$-exact",
linestyle="--", c=color, zorder=zorder + 5, lw=2, alpha=0.5)
color = darker(line.get_color(), 1.5)
ax.plot(*symm_diff(log_alpha, -neg_kl_cplx_mc),
label=r"$\Delta {KL}_\mathbb{C}$-MC",
c=color, zorder=zorder - 5, alpha=0.5)
# ax.axhline(0., c="k", zorder=-10, alpha=0.15)
# ax.axhline(1., c="k", zorder=-10, alpha=0.15)
ax.legend(ncol=2, fontsize='small')
ax.set_xlim(-8, +8)
plt.tight_layout()
fig.savefig("../assets/grad_log.pdf", dpi=300, format="pdf")
plt.show()
```
Absolute difference for the exact and approximation's derivative.
```python
fig = plt.figure(figsize=(12, 5))
ax = fig.add_subplot(111, xlabel=r"$\log\alpha$", ylabel="-kld")
ax.plot(log_alpha, abs(
kldiv_approx_deriv(log_alpha, k_real_1701_05369)[1]
- (kl_real_deriv(log_alpha))
), label=r"$\partial {KL}_\mathbb{R}$: approx - exact")
ax.plot(log_alpha, abs(
kldiv_approx_deriv(log_alpha, k_cplx)[1]
- (kl_cplx_deriv(log_alpha))
), label=r"$\partial {KL}_\mathbb{C}$: approx - exact")
ax.axhline(0., c="k", zorder=-10)
ax.legend(ncol=2)
plt.show()
```
Less cluttered KL-divergence derivative plots
```python
fig = plt.figure(figsize=(12, 5))
ax = fig.add_subplot(111, xlabel=r"$\log\alpha$", ylabel="-kld")
# mid_log_alpha = (log_alpha[1:] + log_alpha[:-1]) / 2
# d_log = log_alpha[1:] - log_alpha[:-1]
ax.plot(*symm_diff(log_alpha, -neg_kl_real_mc), alpha=0.5,
label=r"$\partial {KL}_\mathbb{R}$-MC symm.d")
ax.plot(log_alpha, kl_real_deriv(log_alpha), label=r"$\partial {KL}_\mathbb{R}$-exact")
ax.plot(*symm_diff(log_alpha, -neg_kl_cplx_mc), alpha=0.5,
label=r"$\partial {KL}_\mathbb{C}$-MC symm.d")
ax.plot(log_alpha, kl_cplx_deriv(log_alpha), label=r"$\partial {KL}_\mathbb{C}$-exact")
ax.axhline(0., c="k", zorder=-10)
ax.legend(ncol=2)
plt.show()
```
```python
assert False, """STOP!"""
```
```python
from scipy.special import erf, expit
x = np.linspace(-10, 10, num=513)
plt.plot(x, 1 - expit(x))
plt.plot(x, 1 - (erf(x) + 1) / 2)
```
<br>
## Draft
Consider a complex random vector $z \in \mathbb{C}^d$
$$
z \sim \mathcal{CN}_d \bigl(\theta, K, C \bigr)
\Leftrightarrow
\begin{pmatrix}\Re z \\ \Im z\end{pmatrix}
\sim \mathcal{N}_{2 d} \biggl(
\begin{pmatrix}\Re \theta \\ \Im \theta \end{pmatrix},
\tfrac12 \begin{pmatrix}
\Re (K + C) & - \Im (K - C) \\
\Im (K + C) & \Re (K - C)
\end{pmatrix}
\biggr)
\,. $$
If $x \sim \mathcal{N}_{2n}\Bigl( \mu, \Sigma\Bigr)$ with $x = (x_1, x_2)$, then $z = x_1 + i x_2$
is a complex gaussian random vector, $z \sim \mathcal{CN}_n(\mu_1 + i \mu_2, K, C)$ with
$$
\begin{align}
K &= \Sigma_{11} + \Sigma_{22} + i (\Sigma_{21} - \Sigma_{12})
\,, \\
C &= \Sigma_{11} - \Sigma_{22} + i (\Sigma_{21} + \Sigma_{12})
\,.
\end{align}
$$
Note that $\Sigma_{12} = \Sigma_{21}^\top$, and $\Sigma_{11}, \Sigma_{22} \succeq 0$ imply that
$K\succeq 0$, $K^H = K$, $C = C^\top$, and $\overline{K} \succeq \overline{C} K^{-1} C$.
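A small numerical round-trip check of these relations (an illustrative sketch with an arbitrary SPD covariance, not part of the derivation):
```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
A = rng.standard_normal((2 * d, 2 * d))
Sigma = A @ A.T                                   # arbitrary SPD covariance of (Re z, Im z)
S11, S12 = Sigma[:d, :d], Sigma[:d, d:]
S21, S22 = Sigma[d:, :d], Sigma[d:, d:]
K = S11 + S22 + 1j * (S21 - S12)
C = S11 - S22 + 1j * (S21 + S12)
top = np.hstack([np.real(K + C), -np.imag(K - C)])
bot = np.hstack([np.imag(K + C),  np.real(K - C)])
print(np.allclose(0.5 * np.vstack([top, bot]), Sigma))  # True
```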
If $A\in \mathbb{C}^{n\times d}$ and $b\in \mathbb{C}^n$, then
$$
A z + b \sim \mathcal{CN}_n \bigl(A \theta + b, A K A^H, A C A^\top \bigr)
\,. $$
Indeed,
$$
\mathbb{E} Az + b = A \mathbb{E} z + b = A \theta + b
\,, $$
and for the centered vector $A(z - \theta)$ we have
$$
\mathbb{E} A (z-\theta)(z-\theta)^H A^H = A K A^H
\,,
\mathbb{E} A (z-\theta)(z-\theta)^\top A^\top = A C A^\top
\,. $$
A complex vector is called _proper_ (or circularly-symmetric) iff the pseudo-covariance
matrix $C$ vanishes. This means that the corresponding blocks in the double-vector real
representation obey $\Sigma_{11} = \Sigma_{22}$ and $\Sigma_{21} = - \Sigma_{12}$. This
last condition implies that $\Sigma_{12}$ is skew-symmetric: $\Sigma_{12} = -\Sigma_{12}^\top$.
This also means that the real and imaginary components of each element of $z$ are
uncorrelated, since skew-symmetry means that its main diagonal is zero.
<br>
### Output of a random complex-linear function
Consider $y = Wx = (I \otimes x^\top) \mathop{vec}(W)$ for
$$
\mathop{vec}(W) \sim \mathcal{CN}_{[d_1\times d_0]}
\Bigl(\mathop{vec} \theta, K, C\Bigr)
\,, $$
for $K \in \mathbb{C}^{[d_1\times d_0]\times [d_1\times d_0]}$ diagonal $K = \mathop{diag}(k_\omega)$.
Since $K = K^H$ we must have $k_\omega = \overline{k_\omega}$, whence $k_\omega \in \mathbb{R}$
and $k_\omega \geq 0$. The relation matrix $C$ is also diagonal with $c_\omega \in \mathbb{C}$.
Observe that for $A = I \otimes x^\top$ we have $A^H = I \otimes \bar{x}$
and $A^\top = I \otimes x$ both $[d_1\times d_0]\times [d_1\times 1]$ matrices.
$$
A K A^H
= \sum_{i=1}^{d_1} \sum_{j=1}^{d_0}
A (e_i \otimes e_j) k_{ij} (e_i \otimes e_j)^\top A^H
= \sum_{i=1}^{d_1} \sum_{j=1}^{d_0} e_i e_i^\top x_j k_{ij} \bar{x}_j
= \sum_{i=1}^{d_1} e_i e_i^\top \Bigl\{ \sum_{j=1}^{d_0} k_{ij} x_j \bar{x}_j \Bigr\}
\,, $$
and
$$
A C A^\top
= \sum_{i=1}^{d_1} \sum_{j=1}^{d_0}
A (e_i \otimes e_j) c_{ij} (e_i \otimes e_j)^\top A^\top
= \sum_{i=1}^{d_1} \sum_{j=1}^{d_0} e_i e_i^\top x_j c_{ij} x_j
= \sum_{i=1}^{d_1} e_i e_i^\top \Bigl\{ \sum_{j=1}^{d_0} c_{ij} x_j x_j \Bigr\}
\,, $$
Therefore
$$
y \sim \mathcal{CN}_{d_1}\Bigl(
\theta x, \sum_{i=1}^{d_1} e_i e_i^\top \Bigl\{ \sum_{j=1}^{d_0} k_{ij} x_j \bar{x}_j \Bigr\},
\sum_{i=1}^{d_1} e_i e_i^\top \Bigl\{ \sum_{j=1}^{d_0} c_{ij} x_j x_j \Bigr\}
\Bigr)
= \bigotimes_{i=1}^{d_1}
\mathcal{CN}\Bigl(
\sum_{j=1}^{d_0} \theta_{ij} x_j,
\sum_{j=1}^{d_0} k_{ij} x_j \bar{x}_j,
\sum_{j=1}^{d_0} c_{ij} x_j x_j
\Bigr)
\,. $$
Let's suppose that the complex random vector $\mathop{vec}W$ is proper, i.e. $C = 0$.
Then $y$ is itself proper, has independent components and
$$
y_i \sim \mathcal{CN}\Bigl(
\sum_{j=1}^{d_0} \theta_{ij} x_j,
\sum_{j=1}^{d_0} k_{ij} \lvert x_j \rvert^2, 0
\Bigr)
\,. $$
In a form more aligned with the reparametrization trick, the output can be written as
$$
y_i
= \sum_{j=1}^{d_0} \theta_{ij} x_j
+ \sqrt{\sum_{j=1}^{d_0} k_{ij} \lvert x_j \rvert^2}
\varepsilon_i
\,,
\varepsilon_i \sim \mathcal{CN}\Bigl(
0, 1, 0
\Bigr)
\,. $$
Observe that if $z \sim \mathcal{CN}(\mu, \gamma, 0)$ for $\gamma \in \mathbb{C}$, then
$\gamma = \gamma^H = \bar{\gamma}$, $\gamma\in \mathbb{R}$ and $\gamma \geq 0$:
$$
\begin{pmatrix}\Re z \\ \Im z\end{pmatrix}
= \begin{pmatrix}\Re \mu \\ \Im \mu \end{pmatrix}
+ \sqrt{\gamma} \varepsilon
\,,
\varepsilon \sim \mathcal{N}_{2} \bigl( 0, \tfrac12 I\bigr)
\,,
$$
The reparametrization $z_\omega = \theta_\omega \varepsilon_\omega$ with
$\varepsilon_\omega \sim \mathcal{CN}(1, \alpha_\omega, 0)$ is equivalent to the following:
$$
z_\omega
= \theta_\omega \varepsilon_\omega
\,,
\varepsilon_\omega
\sim \mathcal{CN}(1, \alpha_\omega, 0)
\Leftrightarrow
z_\omega
\sim \mathcal{CN}(\theta_\omega, \alpha_\omega \lvert \theta_\omega\rvert^2, 0)
\Leftrightarrow
z_\omega
= \theta_\omega + \sigma_\omega \varepsilon_\omega
\,, \sigma_\omega^2
= \alpha_\omega \lvert \theta_\omega\rvert^2
\,,
\varepsilon_\omega
\sim \mathcal{CN}(0, 1, 0)
\,. $$
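This reparametrization is straightforward to sample; a minimal PyTorch sketch (assuming a version with complex tensor support; shapes and names here are illustrative, not the paper code):
```python
import torch

def complex_local_reparam(theta, log_alpha):
    # sigma = sqrt(alpha) * |theta|; eps ~ CN(0, 1, 0) drawn as N(0, 1/2) + i N(0, 1/2)
    sigma = torch.exp(0.5 * log_alpha) * theta.abs()
    eps = torch.complex(torch.randn_like(sigma), torch.randn_like(sigma)) * (0.5 ** 0.5)
    return theta + sigma * eps

theta = torch.complex(torch.randn(4), torch.randn(4))
print(complex_local_reparam(theta, torch.zeros(4)))
```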
<br>
### Entropy and divergences
The expected log-density (the negative differential entropy) of a generic gaussian random vector $x\sim q(x) = \mathcal{N}_d(x\mid\mu ,\Sigma)$
is
$$
\mathbb{E}_{x\sim q} \log q(x)
= - \tfrac{d}2\log 2\pi - \tfrac12 \log\det\Sigma
- \tfrac12 \mathbb{E}_{x\sim q}\mathop{tr}{\Sigma^{-1} (x-\mu)(x-\mu)^\top}
= - \tfrac12 \log\det 2\pi e \Sigma
\,. $$
Therefore the expected log-density of a gaussian complex random vector $z\sim \mathcal{CN}_d(\theta, K, C)$
is exactly that of the $2d$ double-real gaussian vector with a special
covariance structure: for $z\sim q(z) = \mathcal{CN}_d(z\mid \theta, K, C)$
$$
\mathbb{E}_{z\sim q} \log q(z)
= - \tfrac12 \log\det \pi e \begin{pmatrix}
\Re (K + C) & \Im (- K + C) \\
\Im (K + C) & \Re (K - C)
\end{pmatrix}
= - \tfrac{2d}2 \log\pi e
- \tfrac12 \log\det \begin{pmatrix}
\Re (K + C) & \Im (- K + C) \\
\Im (K + C) & \Re (K - C)
\end{pmatrix}
\,. $$
If $C=0$, i.e. the complex vector is proper, then
$$
\det \begin{pmatrix}
\Re (K + C) & \Im (- K + C) \\
\Im (K + C) & \Re (K - C)
\end{pmatrix}
= \det \begin{pmatrix}
\Re K & - \Im K \\
\Im K & \Re K
\end{pmatrix}
= \det \hat{K}
= \det K \overline{\det K}
= \lvert \det K \rvert^2
\,, $$
whence the expected log-density (the negative differential entropy) becomes
$$
\mathbb{E}_{z\sim q} \log q(z)
= - \log (\pi e)^d\lvert \det K\rvert
= - \log \lvert \det \pi e K\rvert
\,. $$
<br>
Let $q$ be any distribution on $z$: a product with block terms, or even Dirac $\delta_{z_*}$
distributions (for MLE), a mixture, or anything else. $q$ may depend on anything, even $\theta$!
Then
$$
\log p(D; \theta)
% = \mathbb{E}_{z\sim q} \log p(D; \theta)
% = \mathbb{E}_{z\sim q} \log \tfrac{p(D, z; \theta)}{p(z\mid D; \theta)}
= \underbrace{
\mathbb{E}_{z\sim q} \log \tfrac{p(D, z; \theta)}{q(z)}
}_{\mathfrak{L}(\theta, q)}
+ \underbrace{
\mathbb{E}_{z\sim q} \log \tfrac{q(z)}{p(z\mid D; \theta)}
}_{KL(q \| p(\cdot \mid D; \theta))}
= \mathbb{E}_{z\sim q} \log p(D\mid z; \theta)
- \mathbb{E}_{z\sim q} \log \tfrac{q(z)}{p(z)}
+ \mathbb{E}_{z\sim q} \log \tfrac{q(z)}{p(z\mid D; \theta)}
\,, $$
is constant w.r.t. $q$ at any $\theta$. Since the KL-divergence is nonnegative (by
Jensen's inequality), the Evidence Lower Bound $\mathfrak{L}(\theta, q)$ bounds the
original log-likelihood $\log p(D; \theta)$ from below.
Let's maximize the ELBO with respect to $q$ and $\theta$. Since the left-hand side
$\log p(D; \theta)$ does not depend on $q$, maximization of $\mathfrak{L}(\theta, q)$
w.r.t. $q \in \mathcal{F}$ (holding $\theta$ fixed) is equivalent to minimizing
$KL(q \| p(\cdot \mid D; \theta))$ w.r.t. $q$. This is the E-step of the EM algorithm.
The M-step maximizes the ELBO over $\theta$ holding $q$ fixed. Note that some parts of
$q$ may be offloaded from the E-step to the M-step!
<br>
Suppose that the prior $p(z)$ and the variational distribution are fully factorized:
$p(z) = \otimes_{\omega} p(z_\omega)$, $q(z) = \otimes_{\omega} q(z_\omega)$. Then
$$
\mathbb{E}_{z\sim q} \log \tfrac{q(z)}{p(z)}
= \sum_\omega \mathbb{E}_{z\sim q} \log \tfrac{q(z_\omega)}{p(z_\omega)}
= \sum_\omega \mathbb{E}_{q(z_\omega)} \log \tfrac{q(z_\omega)}{p(z_\omega)}
\,. $$
<span style="color:red">**NOTE**</span> we treat $p$ and $q$ as symbols, representing
the density of the argument random variable w.r.t. some carrier measure.
$$
z_\omega
= \theta_\omega \varepsilon_\omega
\,,
\varepsilon_\omega
\sim \mathcal{CN}(1, \alpha_\omega, 0)
\Leftrightarrow
z_\omega
\sim \mathcal{CN}(\theta_\omega, \alpha_\omega \lvert \theta_\omega\rvert^2, 0)
\,. $$
Suppose $q(z) = \mathcal{CN}(\theta, \alpha \lvert\theta\rvert^2, 0)$ and
$p(z) \propto \lvert z\rvert^{-\beta}$. Each term in the sum
$$
\begin{align}
KL(q\|p)
&= \mathbb{E}_{q(z)} \log \tfrac{q(z)}{p(z)}
= \mathbb{E}_{q(z)} \log q(z) - \mathbb{E}_{q(z)} \log p(z)
\\
&= - \log \bigl\lvert \pi e \alpha \lvert\theta\rvert^2 \bigr\rvert
+ \beta \mathbb{E}_{q(z)} \log \lvert z\rvert
+ C
\\
&= - \log \pi e
- \log \alpha \lvert\theta\rvert^2
+ \beta \mathbb{E}_{\varepsilon \sim \mathcal{CN}(1, \alpha, 0)}
\log \lvert \theta \rvert \lvert \varepsilon\rvert
+ C
\\
&= - \log \pi e - \log \alpha
+ \tfrac{\beta - 2}2 \log \lvert \theta \rvert^2
+ \beta \mathbb{E}_{\varepsilon \sim \mathcal{CN}(1, \alpha, 0)}
\log \lvert \varepsilon\rvert
+ C
\\
&= - \log \pi e - \log \alpha
+ \tfrac{\beta - 2}2 \log \lvert \theta \rvert^2
+ \tfrac\beta2 \mathbb{E}_{z \sim \mathcal{CN}(0, \alpha, 0)}
\log \bigl\lvert z + 1 \bigr\rvert^2
+ C
\\
&= - \log \pi e - \log \alpha
+ \tfrac{\beta - 2}2 \log \lvert \theta \rvert^2
+ \tfrac\beta2 \mathbb{E}_{\varepsilon \sim \mathcal{N}_2\bigl(0, \tfrac\alpha2 I\bigr)}
\log \bigl((\varepsilon_1 + 1)^2 + \varepsilon_2^2\bigr)
+ C
\end{align}
\,. $$
For $\beta = 2$ the parameter $\theta$ vanishes from the divergence,
so the only term that still needs to be evaluated is the expectation.
Compare this expression to the real variational dropout: for
$q(z) = \mathcal{N}(\theta, \alpha \theta^2)$ and $p(z) \propto \lvert z\rvert^{-1}$
we have
$$
\begin{align}
KL(q\|p)
&= \mathbb{E}_{q(z)} \log \tfrac{q(z)}{p(z)}
= \mathbb{E}_{q(z)} \log q(z) - \mathbb{E}_{q(z)} \log p(z)
\\
&= - \tfrac12 \log 2 \pi e - \tfrac12 \log \alpha \theta^2
+ \mathbb{E}_{\xi \sim \mathcal{N}(0, \alpha)}
\log \bigl\lvert \theta (\xi + 1) \bigr\rvert
+ C
\\
&= - \tfrac12 \log 2 \pi e - \tfrac12 \log \alpha
+ \tfrac12 \mathbb{E}_{\xi \sim \mathcal{N}(0, \alpha)}
\log (\xi + 1)^2
+ C
\end{align}
\,. $$
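Both expectation terms are easy to estimate by Monte Carlo; a rough sketch (helper name and sample size are assumptions, not from the paper code):
```python
import numpy as np

def mc_expectation_terms(log_alpha, n_samples=200_000, seed=0):
    """E_{N_2(0, a/2 I)} log((e1+1)^2 + e2^2) and E_{N(0, a)} log (xi+1)^2 on a grid of log-alpha."""
    rng = np.random.default_rng(seed)
    alpha = np.exp(np.atleast_1d(log_alpha))
    e1, e2 = rng.standard_normal((2, n_samples, 1)) * np.sqrt(alpha / 2)
    xi = rng.standard_normal((n_samples, 1)) * np.sqrt(alpha)
    term_cplx = np.log((e1 + 1) ** 2 + e2 ** 2).mean(axis=0)
    term_real = np.log((xi + 1) ** 2).mean(axis=0)
    return term_cplx, term_real

print(mc_expectation_terms(np.array([-2.0, 0.0, 2.0])))
```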
<hr>
<br>
# trunk
[Complex-Valued Random Variable](https://www.mins.ee.ethz.ch/teaching/wirelessIT/handouts/miscmath1.pdf)
[Caution: The Complex Normal Distribution!](http://www.ee.imperial.ac.uk/wei.dai/PaperDiscussion/Material2014/ReadingGroup_May.pdf)
A complex Gaussian random vector $z\in \mathbb{C}^d$ is completely determined by the
distribution of $\begin{pmatrix}\Re z \\ \Im z\end{pmatrix} \in \mathbb{R}^{[2\times d]}$
or equivalently a $\mathbb{C}^{[2\times d]}$ vector:
$$
\begin{pmatrix}z \\ \bar{z} \end{pmatrix}
= \underbrace{\begin{pmatrix}I & iI \\ I & -iI \end{pmatrix}}_{\Xi}
\begin{pmatrix}\Re z \\ \Im z\end{pmatrix}
\,. $$
For a random vector $z$, this is called the augmented vector of $z$.
Now the transjugation (Hermitian transpose) of $\Xi$ is
$$
\Xi^H
= \begin{pmatrix}I & I \\ -iI & iI \end{pmatrix}
\,, $$
and $ \Xi^H \Xi = \Xi \Xi^H = \begin{pmatrix} 2I & 0 \\ 0 & 2I \end{pmatrix}$.
Therefore, $\Xi^{-H} = \tfrac12 \Xi$ and $\Xi^{-1} = \tfrac12 \Xi^H$.
Thus
$$
\begin{pmatrix}\Re z \\ \Im z\end{pmatrix}
= \Xi^{-1} \begin{pmatrix}z \\ \bar{z} \end{pmatrix}
= \tfrac12 \Xi^H \begin{pmatrix}z \\ \bar{z} \end{pmatrix}
\,. $$
Consider the complex covariance of the augmented $z$:
$$
\mathbb{E}
\begin{pmatrix}z \\ \bar{z} \end{pmatrix}
\begin{pmatrix}z \\ \bar{z} \end{pmatrix}^H
= \begin{pmatrix}
\mathbb{E} z \bar{z}^\top & \mathbb{E} z z^\top \\
\mathbb{E} \bar{z} \bar{z}^\top & \mathbb{E} \bar{z} z^\top
\end{pmatrix}
= \begin{pmatrix} K & C \\ \bar{C} & \bar{K} \end{pmatrix}
\,, $$
since effectively $z^H = (\bar{z})^\top = \overline{\bigl(z^\top\bigr)}$.
Since $$
\bigl(z z^\top\bigr)^H
= \overline{\bigl(z z^\top\bigr)}^\top
= \bigl(\bar{z} \bar{z}^\top\bigr)^\top
= \bar{z}^{\top\top} \bar{z}^\top
= \bar{z} \bar{z}^\top
\,, $$
we have $\bar{C} = C^H$. This implies the following constraints on $K$ (the
covariance) and $C$ (the pseudo-covariance): $K \succeq 0$, $K = K^H$ and
$C=C^\top$.
This complex covariance matrix corresponds to the following real-vector covariance:
$$
\mathbb{E}
\begin{pmatrix}\Re z \\ \Im z\end{pmatrix}
\begin{pmatrix}\Re z \\ \Im z\end{pmatrix}^\top
= \mathbb{E}
\begin{pmatrix}\Re z \\ \Im z\end{pmatrix}
\begin{pmatrix}\Re z \\ \Im z\end{pmatrix}^H
= \Xi^{-1}
\begin{pmatrix} K & C \\ \bar{C} & \bar{K} \end{pmatrix}
\Xi^{-H}
= \tfrac14 \Xi^H \begin{pmatrix}
K & C \\ \bar{C} & \bar{K}
\end{pmatrix} \Xi % \begin{pmatrix}I & iI \\ I & -iI \end{pmatrix}
% = \tfrac14 \begin{pmatrix}I & I \\ -iI & iI \end{pmatrix}
% \begin{pmatrix}
% K + C & i (K - C) \\ \bar{C} + \bar{K} & i(\bar{C} - \bar{K})
% \end{pmatrix}
% = \tfrac14 \begin{pmatrix}
% K + \bar{K} + C + \bar{C}
% & i (K - \bar{K} - (C - \bar{C}))
% \\
% - i (K - \bar{K} + C - \bar{C})
% & K + \bar{K} - (C + \bar{C})
% \end{pmatrix}
= \tfrac12 \begin{pmatrix}
\Re (K + C) & - \Im (K - C) \\
\Im (K + C) & \Re (K - C)
\end{pmatrix}
= \tfrac12 \begin{pmatrix}
\Re (K + C) & \Im (\overline{K - C}) \\
\Im (K + C) & \Re (\overline{K - C})
\end{pmatrix}
\,, $$
since $- \Im z = \Im \bar{z}$ and $\Re z = \Re \bar{z}$.
$$ \Im (K+C)^\top
= \Im (K^\top+C^\top)
= \Im (\bar{K} + C)
= \Im \bar{K} + \Im C
= -\Im K + \Im C
= -\Im (K - C)
\,. $$
$$ \Re (K-C)^\top
= \Re (K^\top - C^\top)
= \Re (\bar{K} - C)
= \Re \bar{K} - \Re C
= \Re K - \Re C
= \Re (K - C)
\,. $$
<br>
<span style="color:red;">**TODO**</span>
Furthermore, the whole augmented covariance matrix must be positive semi-definite.
$$
\tfrac12 \begin{pmatrix}a \\ b \end{pmatrix}^\top
\begin{pmatrix}
\Re (K + C) & - \Im (K - C) \\
\Im (K + C) & \Re (K - C)
\end{pmatrix}
\begin{pmatrix}a \\ b \end{pmatrix}
= \tfrac14 \begin{pmatrix}a \\ b \end{pmatrix}^H \Xi^H
\begin{pmatrix}
K & C \\ \bar{C} & \bar{K}
\end{pmatrix}
\Xi \begin{pmatrix}a \\ b \end{pmatrix}
= \tfrac14 \begin{pmatrix}a + i b \\ a - i b \end{pmatrix}^H
\begin{pmatrix}
K & C \\ \bar{C} & \bar{K}
\end{pmatrix}
\begin{pmatrix}a + i b \\ a - i b \end{pmatrix}
= \tfrac14 \begin{pmatrix}z \\ \bar{z} \end{pmatrix}^H
\begin{pmatrix}
K & C \\ \bar{C} & \bar{K}
\end{pmatrix}
\begin{pmatrix}z \\ \bar{z} \end{pmatrix}
% = \tfrac14 \begin{pmatrix}z \\ \bar{z} \end{pmatrix}^H
% \begin{pmatrix}
% K z + C \bar{z} \\ \bar{C} z + \bar{K} \bar{z}
% \end{pmatrix}
% = \tfrac14 \bigl(
% z^H K z + z^H C \bar{z} + \bar{z}^H \bar{C} z + \bar{z}^H \bar{K} \bar{z}
% \bigr)
= \tfrac14 \bigl(
z^H K z + z^H C \bar{z} + \overline{z^H C \bar{z}} + \overline{z^H K z}
\bigr)
= \tfrac12 \Re (z^H K z) + \tfrac12 \Re (z^H C \bar{z})
\,. $$
<br>
Let $A$ and $B$ be conforming $\mathbb{C}$-valued matrices. Let
$$
\hat{\cdot}
\colon \mathbb{C}^{n \times m} \to \mathbb{R}^{2n \times 2m}
\colon A \mapsto \begin{pmatrix}
\Re A & - \Im A \\
\Im A & \Re A \\
\end{pmatrix}
\,. $$
$$
\begin{align}
AB = \Re A \Re B - \Im A \Im B + i (\Re A \Im B + \Im A \Re B)
&\Leftrightarrow
\begin{pmatrix}
\Re A & - \Im A \\
\Im A & \Re A \\
\end{pmatrix}
\begin{pmatrix}
\Re B & - \Im B \\
\Im B & \Re B \\
\end{pmatrix}
= \begin{pmatrix}
\Re A \Re B - \Im A \Im B & - (\Re A \Im B + \Im A \Re B) \\
\Im A \Re B + \Re A \Im B & \Re A \Re B - \Im A \Im B\\
\end{pmatrix}
\,, \\
\widehat{AB} &= \hat{A} \hat{B}
\,, \\
\widehat{A^H} &= \hat{A}^\top
\,, \\
I_{2n}
= \widehat{I}
= \widehat{A A^{-1}}
= \hat{A} \widehat{A^{-1}}
&\Rightarrow
\widehat{A^{-1}} = (\hat{A})^{-1}
\,, \\
\det \hat{A}
= \det \begin{pmatrix}
I & i I \\
0 & I \\
\end{pmatrix}
\begin{pmatrix}
\Re A & - \Im A \\
\Im A & \Re A \\
\end{pmatrix}
\begin{pmatrix}
I & - i I \\
0 & I \\
\end{pmatrix}
% = \det \begin{pmatrix}
% \Re A + i \Im A & - \Im A + i \Re A \\
% \Im A & \Re A \\
% \end{pmatrix}
% \begin{pmatrix}
% I & - i I \\
% 0 & I \\
% \end{pmatrix}
&= \det \begin{pmatrix}
A & i A \\
\Im A & \Re A \\
\end{pmatrix}
\begin{pmatrix}
I & - i I \\
0 & I \\
\end{pmatrix}
= \det \begin{pmatrix}
A & 0 \\
\Im A & \bar{A} \\
\end{pmatrix}
= \det A \det \bar{A}
= \det A \overline{\det A}
\,.
\end{align}
$$
On vectors the hat-operator works by treating vectors as one-column matrices.
$$
\widehat{u^H v}
= \widehat{u^H} \hat{v}
= \hat{u}^\top \hat{v}
= \begin{pmatrix}
\Re u^\top & \Im u^\top \\
- \Im u^\top & \Re u^\top
\end{pmatrix} \begin{pmatrix}
\Re v & - \Im v \\
\Im v & \Re v
\end{pmatrix}
= \begin{pmatrix}
\Re u^\top \Re v + \Im u^\top \Im v
& - (\Re u^\top \Im v - \Im u^\top \Re v) \\
\Re u^\top \Im v - \Im u^\top \Re v
& \Im u^\top \Im v + \Re u^\top \Re v
\end{pmatrix}
\,. $$
If $Q\in \mathbb{C}^{n\times n}$ is positive semidefinite, then $z^H Q z \geq 0$ for all $z\in \mathbb{C}^n$.
Then
$$
0 \leq z^H Q z
= \Re z^H Q z
\equiv \begin{pmatrix}
\Re z^H Q z & 0 \\
0 & \Re z^H Q z
\end{pmatrix}
= \widehat{z^H Q z}
= \widehat{z^H} \widehat{Q z}
= \hat{z}^\top \hat{Q} \hat{z}
\,. $$
<hr>
$$
\mathbb{E}_{\xi \sim \mathcal{N}(0, \alpha)}
\log (\xi + 1)^2
= \log \alpha
+ \mathbb{E}_{\xi \sim \mathcal{N}(0, 1)}
\log (\xi + \tfrac1{\sqrt\alpha})^2
= \log \alpha + g\bigl(\tfrac1{\sqrt\alpha}\bigr)
\,. $$
$$
g(\mu)
= \mathbb{E}_{\xi \sim \mathcal{N}(0, 1)} \log (\xi + \mu)^2
\,. $$
$$\begin{align}
\int_{-\infty}^\infty
\tfrac1{\sqrt{2\pi}} e^{-\tfrac12x^2}
\log \lvert x + \mu\rvert dx
&= \int_{-\infty}^{-\mu}
\tfrac1{\sqrt{2\pi}} e^{-\tfrac12x^2}
\log (- x - \mu) dx
+ \int_{-\mu}^\infty
\tfrac1{\sqrt{2\pi}} e^{-\tfrac12x^2}
\log (x + \mu) dx
\\
&= \int_\mu^\infty
\tfrac1{\sqrt{2\pi}} e^{-\tfrac12x^2}
\log (x - \mu) dx
+ \int_{-\mu}^\infty
\tfrac1{\sqrt{2\pi}} e^{-\tfrac12x^2}
\log (x + \mu) dx
\end{align}$$
Suppose $x \sim \chi^2_k$; the cumulant generating function of $y = \log x$ is
$$
K(t)
= \log \mathbb{E} e^{ty}
= \log \mathbb{E} x^t
\,, $$
which is known to be
$$
K(t)
= t \log2 + \log\Gamma\bigl(\tfrac{k}2 + t\bigr) - \log \Gamma\bigl(\tfrac{k}2\bigr)
\,. $$
Now the cumulants of $y$ follow by differentiating at zero:
$$
\kappa_n
= \tfrac{d^n}{dt^n} K(t) \big\vert_{t=0}
\,. $$
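A quick spot-check of the first cumulant, $K'(0) = \log 2 + \psi(\tfrac{k}2)$, against a Monte Carlo estimate of $\mathbb{E}\log x$ (a sketch with an assumed sample size):
```python
import numpy as np
from scipy.special import digamma

k = 5
x = np.random.default_rng(0).chisquare(k, size=1_000_000)
print(np.log(2) + digamma(k / 2), np.log(x).mean())  # the two numbers should nearly coincide
```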
<br>
| c927391663567ba6d760cbd8e210987339d8f0ac | 79,504 | ipynb | Jupyter Notebook | experiments/kl-div-approx.ipynb | ivannz/complex_paper | 644c39d2573531da2e1a37c0cf1a70dd125d227a | [
"MIT"
]
| 9 | 2020-03-26T07:40:36.000Z | 2021-09-27T07:39:04.000Z | experiments/kl-div-approx.ipynb | ivannz/complex_paper | 644c39d2573531da2e1a37c0cf1a70dd125d227a | [
"MIT"
]
| null | null | null | experiments/kl-div-approx.ipynb | ivannz/complex_paper | 644c39d2573531da2e1a37c0cf1a70dd125d227a | [
"MIT"
]
| 2 | 2020-08-24T02:48:32.000Z | 2021-09-27T07:39:05.000Z | 33.645366 | 137 | 0.431639 | true | 20,553 | Qwen/Qwen-72B | 1. YES
2. YES | 0.847968 | 0.754915 | 0.640144 | __label__kor_Hang | 0.157683 | 0.325599 |
# The Multivariate Gaussian distribution
The density of a multivariate Gaussian with mean vector $\mu$ and covariance matrix $\Sigma$ is given as
\begin{align}
\mathcal{N}(x; \mu, \Sigma) &= |2\pi \Sigma|^{-1/2} \exp\left( -\frac{1}{2} (x-\mu)^\top \Sigma^{-1} (x-\mu) \right) \\
& = \exp\left(-\frac{1}{2} x^\top \Sigma^{-1} x + \mu^\top \Sigma^{-1} x - \frac{1}{2} \mu^\top \Sigma^{-1} \mu -\frac{1}{2}\log \det(2\pi \Sigma) \right) \\
\end{align}
Here, $|X|$ denotes the determinant of a square matrix.
$\newcommand{\trace}{\mathop{Tr}}$
\begin{align}
{\cal N}(s; \mu, P) & = |2\pi P|^{-1/2} \exp\left(-\frac{1}2 (s-\mu)^\top P^{-1} (s-\mu) \right)
\\
& = \exp\left(
-\frac{1}{2}s^\top{P^{-1}}s + \mu^\top P^{-1}s { -\frac{1}{2}\mu^\top P^{-1}\mu -\frac12 \log|2\pi P|}
\right) \\
\log {\cal N}(s; \mu, P) & = -\frac{1}{2}s^\top{P^{-1}}s + \mu^\top P^{-1}s + \text{ const} \\
& = -\frac{1}{2}\trace {P^{-1}} s s^\top + \mu^\top P^{-1}s + \text{ const} \\
\end{align}
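A quick numerical check of this log-density expression against scipy (toy numbers assumed):
```python
import numpy as np
from scipy.stats import multivariate_normal

mu = np.array([0.5, -1.0])
P = np.array([[2.0, 0.3], [0.3, 1.0]])
s = np.array([0.2, 0.7])
manual = -0.5 * (s - mu) @ np.linalg.inv(P) @ (s - mu) - 0.5 * np.log(np.linalg.det(2 * np.pi * P))
print(np.allclose(manual, multivariate_normal(mu, P).logpdf(s)))  # True
```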
## Special Cases
To gain the intuition, we take a look to a few special cases
### Bivariate Gaussian
#### Example 1: Identity covariance matrix
$
x = \left(\begin{array}{c} x_1 \\ x_2 \end{array} \right)
$
$
\mu = \left(\begin{array}{c} 0 \\ 0 \end{array} \right)
$
$
\Sigma = \left(\begin{array}{cc} 1& 0 \\ 0 & 1 \end{array} \right) = I_2
$
\begin{align}
\mathcal{N}(x; \mu, \Sigma) &= |2\pi I_{2}|^{-1/2} \exp\left( -\frac{1}{2} x^\top x \right)
= (2\pi)^{-1} \exp\left( -\frac{1}{2} \left( x_1^2 + x_2^2\right) \right) = (2\pi)^{-1/2} \exp\left( -\frac{1}{2} x_1^2 \right)(2\pi)^{-1/2} \exp\left( -\frac{1}{2} x_2^2 \right)\\
& = \mathcal{N}(x_1; 0, 1) \mathcal{N}(x_2; 0, 1)
\end{align}
#### Example 2: Diagonal covariance
$\newcommand{\diag}{\text{diag}}$
$
x = \left(\begin{array}{c} x_1 \\ x_2 \end{array} \right)
$
$
\mu = \left(\begin{array}{c} \mu_1 \\ \mu_2 \end{array} \right)
$
$
\Sigma = \left(\begin{array}{cc} s_1 & 0 \\ 0 & s_2 \end{array} \right) = \diag(s_1, s_2)
$
\begin{eqnarray}
\mathcal{N}(x; \mu, \Sigma) &=& \left|2\pi \left(\begin{array}{cc} s_1 & 0 \\ 0 & s_2 \end{array} \right)\right|^{-1/2} \exp\left( -\frac{1}{2} \left(\begin{array}{c} x_1 - \mu_1 \\ x_2-\mu_2 \end{array} \right)^\top \left(\begin{array}{cc} 1/s_1 & 0 \\ 0 & 1/s_2 \end{array} \right) \left(\begin{array}{c} x_1 - \mu_1 \\ x_2-\mu_2 \end{array} \right) \right) \\
&=& ((2\pi)^2 s_1 s_2 )^{-1/2} \exp\left( -\frac{1}{2} \left( \frac{(x_1-\mu_1)^2}{s_1} + \frac{(x_2-\mu_2)^2}{s_2}\right) \right) \\
& = &\mathcal{N}(x_1; \mu_1, s_1) \mathcal{N}(x_2; \mu_2, s_2)
\end{eqnarray}
#### Example 3:
$
x = \left(\begin{array}{c} x_1 \\ x_2 \end{array} \right)
$
$
\mu = \left(\begin{array}{c} \mu_1 \\ \mu_2 \end{array} \right)
$
$
\Sigma = \left(\begin{array}{cc} 1 & \rho \\ \rho & 1 \end{array} \right)
$
for $-1<\rho<1$.
Need $K = \Sigma^{-1}$. When $|\Sigma| \neq 0$ we have $\Sigma K = I$.
$
\left(\begin{array}{cc} 1 & \rho \\ \rho & 1 \end{array} \right) \left(\begin{array}{cc} k_{11} & k_{12} \\ k_{21} & k_{22} \end{array} \right) = \left(\begin{array}{cc} 1& 0 \\ 0 & 1 \end{array} \right)
$
\begin{align}
k_{11} &+ \rho k_{21} & & &=1 \\
\rho k_{11} &+ k_{21} & & &=0 \\
&& k_{12} &+ \rho k_{22} &=0 \\
&& \rho k_{12} &+ k_{22} &=1 \\
\end{align}
Solving these equations leads to the solution
$$
\left(\begin{array}{cc} k_{11} & k_{12} \\ k_{21} & k_{22} \end{array} \right) = \frac{1}{1-\rho^2}\left(\begin{array}{cc} 1 & -\rho \\ -\rho & 1 \end{array} \right)
$$
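A quick numerical spot-check of this inverse (with an arbitrary value of $\rho$):
```python
import numpy as np

rho = 0.7
S = np.array([[1.0, rho], [rho, 1.0]])
K = np.array([[1.0, -rho], [-rho, 1.0]]) / (1.0 - rho**2)
print(np.allclose(S @ K, np.eye(2)))  # True
```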
Plotting the equal-probability contours
```python
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from notes_utilities import pnorm_ball_points
RHO = np.arange(-0.9,1,0.3)
plt.figure(figsize=(20,20/len(RHO)))
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
for i,rho in enumerate(RHO):
plt.subplot(1,len(RHO),i+1)
plt.axis('equal')
ax = plt.gca()
ax.set_xlim(-4,4)
ax.set_ylim(-4,4)
S = np.mat([[1, rho],[rho,1]])
A = np.linalg.cholesky(S)
dx,dy = pnorm_ball_points(3*A)
plt.title(r'$\rho =$ '+str(rho if np.abs(rho)>1E-9 else 0), fontsize=16)
ln = plt.Line2D(dx,dy,markeredgecolor='k', linewidth=1, color='b')
ax.add_line(ln)
ax.set_axis_off()
#ax.set_visible(False)
plt.show()
```
```python
from ipywidgets import interact, interactive, fixed
import ipywidgets as widgets
from IPython.display import clear_output, display, HTML
from matplotlib import rc
from notes_utilities import bmatrix, pnorm_ball_line
rc('font',**{'family':'sans-serif','sans-serif':['Helvetica']})
## for Palatino and other serif fonts use:
#rc('font',**{'family':'serif','serif':['Palatino']})
rc('text', usetex=True)
fig = plt.figure(figsize=(5,5))
S = np.array([[1,0],[0,1]])
dx,dy = pnorm_ball_points(S)
ln = plt.Line2D(dx,dy,markeredgecolor='k', linewidth=1, color='b')
dx,dy = pnorm_ball_points(np.eye(2))
ln2 = plt.Line2D(dx,dy,markeredgecolor='k', linewidth=1, color='k',linestyle=':')
plt.xlabel('$x_1$')
plt.ylabel('$x_2$')
ax = fig.gca()
ax.set_xlim((-4,4))
ax.set_ylim((-4,4))
txt = ax.text(-1,-3,'$\left(\right)$',fontsize=15)
ax.add_line(ln)
ax.add_line(ln2)
plt.close(fig)
def set_line(s_1, s_2, rho, p, a, q):
S = np.array([[s_1**2, rho*s_1*s_2],[rho*s_1*s_2, s_2**2]])
A = np.linalg.cholesky(S)
#S = A.dot(A.T)
dx,dy = pnorm_ball_points(A,p=p)
ln.set_xdata(dx)
ln.set_ydata(dy)
dx,dy = pnorm_ball_points(a*np.eye(2),p=q)
ln2.set_xdata(dx)
ln2.set_ydata(dy)
txt.set_text(bmatrix(S))
display(fig)
ax.set_axis_off()
interact(set_line, s_1=(0.1,2,0.01), s_2=(0.1, 2, 0.01), rho=(-0.99, 0.99, 0.01), p=(0.1,4,0.1), a=(0.2,10,0.1), q=(0.1,4,0.1))
```
```python
%run plot_normballs.py
```
```python
%run matrix_norm_sliders.py
```
Exercise:
$
x = \left(\begin{array}{c} x_1 \\ x_2 \end{array} \right)
$
$
\mu = \left(\begin{array}{c} \mu_1 \\ \mu_2 \end{array} \right)
$
$
\Sigma = \left(\begin{array}{cc} s_{11} & s_{12} \\ s_{12} & s_{22} \end{array} \right)
$
Need $K = \Sigma^{-1}$. When $|\Sigma| \neq 0$ we have $\Sigma K = I$.
$
\left(\begin{array}{cc} s_{11} & s_{12} \\ s_{12} & s_{22} \end{array} \right) \left(\begin{array}{cc} k_{11} & k_{12} \\ k_{21} & k_{22} \end{array} \right) = \left(\begin{array}{cc} 1& 0 \\ 0 & 1 \end{array} \right)
$
Derive the result
$$
K = \left(\begin{array}{cc} k_{11} & k_{12} \\ k_{21} & k_{22} \end{array} \right)
$$
Step 1: Verify
$$
\left(\begin{array}{cc} s_{11} & s_{12} \\ s_{21} & s_{22} \end{array} \right) = \left(\begin{array}{cc} 1 & s_{12}/s_{22} \\ 0 & 1 \end{array} \right) \left(\begin{array}{cc} s_{11}-s_{12}^2/s_{22} & 0 \\ 0 & s_{22} \end{array} \right) \left(\begin{array}{cc} 1 & 0 \\ s_{12}/s_{22} & 1 \end{array} \right)
$$
Step 2: Show that
$$
\left(\begin{array}{cc} 1 & a\\ 0 & 1 \end{array} \right)^{-1} = \left(\begin{array}{cc} 1 & -a\\ 0 & 1 \end{array} \right)
$$
and
$$
\left(\begin{array}{cc} 1 & 0\\ b & 1 \end{array} \right)^{-1} = \left(\begin{array}{cc} 1 & 0\\ -b & 1 \end{array} \right)
$$
Step 3: Using the fact $(A B)^{-1} = B^{-1} A^{-1}$ and $s_{12}=s_{21}$, show and simplify the following:
$$
\left(\begin{array}{cc} s_{11} & s_{12} \\ s_{21} & s_{22} \end{array} \right)^{-1} =
\left(\begin{array}{cc} 1 & 0 \\ -s_{12}/s_{22} & 1 \end{array} \right)
\left(\begin{array}{cc} 1/(s_{11}-s_{12}^2/s_{22}) & 0 \\ 0 & 1/s_{22} \end{array} \right) \left(\begin{array}{cc} 1 & -s_{12}/s_{22} \\ 0 & 1 \end{array} \right)
$$
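A numerical sanity check of Steps 1 and 3 (with arbitrary example values; this does not replace the symbolic derivation asked for above):
```python
import numpy as np

s11, s12, s22 = 2.0, 0.6, 1.5
S = np.array([[s11, s12], [s12, s22]])
a = s12 / s22
U = np.array([[1.0, a], [0.0, 1.0]])
D = np.diag([s11 - s12**2 / s22, s22])
L = np.array([[1.0, 0.0], [a, 1.0]])
print(np.allclose(U @ D @ L, S))                      # Step 1: the decomposition holds
K = np.array([[1.0, 0.0], [-a, 1.0]]) @ np.diag([1.0 / (s11 - s12**2 / s22), 1.0 / s22]) @ np.array([[1.0, -a], [0.0, 1.0]])
print(np.allclose(K, np.linalg.inv(S)))               # Step 3: it reproduces the inverse
```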
## Gaussian Processes Regression
In Bayesian machine learning, a frequently encountered problem is regression, where we are given pairs of inputs $x_i \in \mathbb{R}^N$ and associated noisy observations $y_i \in \mathbb{R}$. We assume the following model
\begin{eqnarray*}
y_i &\sim& {\cal N}(y_i; f(x_i), R)
\end{eqnarray*}
The interesting thing about a Gaussian process is that the function $f$ is not specified in close form, but we assume that the function values
\begin{eqnarray*}
f_i & = & f(x_i)
\end{eqnarray*}
are jointly Gaussian distributed as
\begin{eqnarray*}
\left(
\begin{array}{c}
f_1 \\
\vdots \\
f_L \\
\end{array}
\right) & = & f_{1:L} \sim {\cal N}(f_{1:L}; 0, \Sigma(x_{1:L}))
\end{eqnarray*}
Here, we define the entries of the covariance matrix $\Sigma(x_{1:L})$ as
\begin{eqnarray*}
\Sigma_{i,j} & = & K(x_i, x_j)
\end{eqnarray*}
for $i,j \in \{1, \dots, L\}$. Here, $K$ is a given covariance function. Now, if we wish to predict the value of $f$ for a new $x$, we simply form the following joint distribution:
\begin{eqnarray*}
\left(
\begin{array}{c}
f_1 \\
f_2 \\
\vdots \\
f_L \\
f \\
\end{array}
\right) & \sim & {\cal N}\left( \left(\begin{array}{c}
0 \\
0 \\
\vdots \\
0 \\
0 \\
\end{array}\right)
, \left(\begin{array}{cccccc}
K(x_1,x_1) & K(x_1,x_2) & \dots & K(x_1, x_L) & K(x_1, x) \\
K(x_2,x_1) & K(x_2,x_2) & \dots & K(x_2, x_L) & K(x_2, x) \\
\vdots &\\
K(x_L,x_1) & K(x_L,x_2) & \dots & K(x_L, x_L) & K(x_L, x) \\
K(x,x_1) & K(x,x_2) & \dots & K(x, x_L) & K(x, x) \\
\end{array}\right) \right) \\
\left(
\begin{array}{c}
f_{1:L} \\
f
\end{array}
\right) & \sim & {\cal N}\left( \left(\begin{array}{c}
\mathbf{0} \\
0 \\
\end{array}\right)
, \left(\begin{array}{cc}
\Sigma(x_{1:L}) & k(x_{1:L}, x) \\
k(x_{1:L}, x)^\top & K(x, x) \\
\end{array}\right) \right) \\
\end{eqnarray*}
Here, $k(x_{1:L}, x)$ is a $L \times 1$ vector with entries $k_i$ where
\begin{eqnarray*}
k_i = K(x_i, x)
\end{eqnarray*}
Popular choices of covariance functions to generate smooth regression functions include a bell-shaped one
\begin{eqnarray*}
K_1(x_i, x_j) & = & \exp\left(-\frac{1}2 \| x_i - x_j \|^2 \right)
\end{eqnarray*}
and a Laplacian
\begin{eqnarray*}
K_2(x_i, x_j) & = & \exp\left(-\frac{1}2 \| x_i - x_j \| \right)
\end{eqnarray*}
where $\| x \| = \sqrt{x^\top x}$ is the Euclidean norm.
## Part 1
Derive the expressions to compute the predictive density
\begin{eqnarray*}
p(\hat{y}| y_{1:L}, x_{1:L}, \hat{x})
\end{eqnarray*}
\begin{eqnarray*}
p(y | y_{1:L}, x_{1:L}, x) &=& {\cal N}(y; m, S) \\
m & = & \\
S & = &
\end{eqnarray*}
## Part 2
Write a program to compute the mean and covariance of $p(\hat{y}| y_{1:L}, x_{1:L}, \hat{x})$ and generate a prediction plot for the following data:
x = [-2 -1 0 3.5 4]
y = [4.1 0.9 2 12.3 15.8]
Try different covariance functions $K_1$ and $K_2$ and observation noise covariances $R$ and comment on the nature of the approximation.
## Part 3
Suppose we are using a covariance function parameterised by
\begin{eqnarray*}
K_\beta(x_i, x_j) & = & \exp\left(-\frac{1}\beta \| x_i - x_j \|^2 \right)
\end{eqnarray*}
Find the optimum regularisation parameter $\beta^*(R)$ as a function of observation noise variance via maximisation of the marginal likelihood, i.e.
\begin{eqnarray*}
\beta^* & = & \arg\max_{\beta} p(y_{1:N}| x_{1:N}, \beta, R)
\end{eqnarray*}
Generate a plot of $\beta^*(R)$ for $R = 0.01, 0.02, \dots, 1$ for the dataset given in Part 2.
```python
def cov_fun_bell(x1,x2,delta=1):
return np.exp(-0.5*np.abs(x1-x2)**2/delta)
def cov_fun_exp(x1,x2):
return np.exp(-0.5*np.abs(x1-x2))
def cov_fun(x1,x2):
return cov_fun_bell(x1,x2,delta=0.1)
R = 0.05
x = np.array([-2, -1, 0, 3.5, 4]);
y = np.array([4.1, 0.9, 2, 12.3, 15.8]);
Sig = cov_fun(x.reshape((len(x),1)),x.reshape((1,len(x)))) + R*np.eye(len(x))
SigI = np.linalg.inv(Sig)
xx = np.linspace(-10,10,100)
yy = np.zeros_like(xx)
ss = np.zeros_like(xx)
for i in range(len(xx)):
    CrossSig = cov_fun(x,xx[i])              # k(x_{1:L}, x): covariances between training inputs and the new input
    PriorSig = cov_fun(xx[i],xx[i]) + R      # prior variance of the new noisy observation
    yy[i] = np.dot(np.dot(CrossSig, SigI),y)                    # posterior predictive mean
    ss[i] = PriorSig - np.dot(np.dot(CrossSig, SigI),CrossSig)  # posterior predictive variance
plt.plot(x,y,'or')
plt.plot(xx,yy,'b.')
plt.plot(xx,yy+3*np.sqrt(ss),'b:')
plt.plot(xx,yy-3*np.sqrt(ss),'b:')
plt.show()
```
| 1f68767ca35ed5df801b06172173bad1037c5247 | 156,783 | ipynb | Jupyter Notebook | MultivariateGaussian.ipynb | atcemgil/notes | 380d310a87767d9b1fe88229588dfe00a61d2353 | [
"MIT"
]
| 191 | 2016-01-21T19:44:23.000Z | 2022-03-25T20:50:50.000Z | MultivariateGaussian.ipynb | ShakirSofi/notes | d6388ab38c734c341f5916b2d03189dfe4962edb | [
"MIT"
]
| 2 | 2018-02-18T03:41:04.000Z | 2018-11-21T11:08:49.000Z | MultivariateGaussian.ipynb | atcemgil/notes | 380d310a87767d9b1fe88229588dfe00a61d2353 | [
"MIT"
]
| 138 | 2015-10-04T21:57:21.000Z | 2021-06-15T19:35:55.000Z | 223.019915 | 80,002 | 0.8825 | true | 5,221 | Qwen/Qwen-72B | 1. YES
2. YES | 0.928409 | 0.868827 | 0.806626 | __label__eng_Latn | 0.541411 | 0.712396 |
# Hierarchical Model for Abalone Length
Abalone were collected from various sites on the coast of California north of San Francisco. Here I'm going to develop a model to predict abalone lengths based on sites and harvest method - diving or rock-picking. I'm interested in how abalone lengths vary between sites and harvesting methods. This should be a hierarchical model as the abalone at the different sites are from the same population and should exhibit similar effects based on harvesting method. The hierarchical model will be beneficial since some of the sites are missing a harvesting method.
```python
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import sampyl as smp
from sampyl import np
import pandas as pd
```
```python
plt.style.use('seaborn')
plt.rcParams['font.size'] = 14.
plt.rcParams['legend.fontsize'] = 14.0
plt.rcParams['axes.titlesize'] = 16.0
plt.rcParams['axes.labelsize'] = 14.0
plt.rcParams['xtick.labelsize'] = 13.0
plt.rcParams['ytick.labelsize'] = 13.0
```
Load our data here. This is just data collected in 2017.
```python
data = pd.read_csv('Clean2017length.csv')
data.head()
```
|   | data year | full lengths | group_id | site_code | Full_ID | Mode |
|---|-----------|--------------|----------|-----------|---------|------|
| 0 | 2017 | 181 | 1.0 | 5 | 2017_06_24_005_30_01_01 | R |
| 1 | 2017 | 182 | 1.0 | 5 | 2017_06_24_005_30_01_01 | R |
| 2 | 2017 | 183 | 1.0 | 5 | 2017_06_24_005_30_01_01 | R |
| 3 | 2017 | 191 | 1.0 | 5 | 2017_06_24_005_30_01_01 | R |
| 4 | 2017 | 191 | 1.0 | 5 | 2017_06_24_005_30_01_01 | R |
Important columns here are:
* **full lengths:** length of abalone
* **mode:** Harvesting method, R: rock-picking, D: diving
* **site_code:** codes for 15 different sites
First some data preprocessing to get it into the correct format for our model.
```python
# Convert sites from codes into sequential integers starting at 0
unique_sites = data['site_code'].unique()
site_map = dict(zip(unique_sites, np.arange(len(unique_sites))))
data = data.assign(site=data['site_code'].map(site_map))
# Convert modes into integers as well
# Filter out 'R/D' modes, bad data collection
data = data[(data['Mode'] != 'R/D')]
mode_map = {'R':0, 'D':1}
data = data.assign(mode=data['Mode'].map(mode_map))
```
## A Hierarchical Linear Model
Here we'll define our model. We want to make a linear model for each site in the data where we predict the abalone length given the mode of catching and the site.
$$ y_s = \alpha_s + \beta_s * x_s + \epsilon $$
where $y_s$ is the predicted abalone length, $x$ denotes the mode of harvesting, $\alpha_s$ and $\beta_s$ are coefficients for each site $s$, and $\epsilon$ is the model error. We'll use this prediction for our likelihood with data $D_s$, using a normal distribution with mean $y_s$ and variance $ \epsilon^2$ :
$$ \prod_s P(D_s \mid \alpha_s, \beta_s, \epsilon) = \prod_s \mathcal{N}\left(D_s \mid y_s, \epsilon^2\right) $$
The abalone come from the same population just in different locations. We can take these similarities between sites into account by creating a hierarchical model where the coefficients are drawn from a higher-level distribution common to all sites.
$$
\begin{align}
\alpha_s & \sim \mathcal{N}\left(\mu_{\alpha}, \sigma_{\alpha}^2\right) \\
\beta_s & \sim \mathcal{N}\left(\mu_{\beta}, \sigma_{\beta}^2\right) \\
\end{align}
$$
```python
class HLM(smp.Model):
def __init__(self, data=None):
super().__init__()
self.data = data
# Now define the model (log-probability proportional to the posterior)
def logp_(self, μ_α, μ_β, σ_α, σ_β, site_α, site_β, ϵ):
# Population priors - normals for population means and half-Cauchy for population stds
self.add(smp.normal(μ_α, sig=500),
smp.normal(μ_β, sig=500),
smp.half_cauchy(σ_α, beta=5),
smp.half_cauchy(σ_β, beta=0.5))
# Priors for site coefficients, sampled from population distributions
self.add(smp.normal(site_α, mu=μ_α, sig=σ_α),
smp.normal(site_β, mu=μ_β, sig=σ_β))
# Prior for likelihood uncertainty
self.add(smp.half_normal(ϵ))
# Our estimate for abalone length, α + βx
length_est = site_α[self.data['site'].values] + site_β[self.data['site'].values]*self.data['mode']
# Add the log-likelihood
self.add(smp.normal(self.data['full lengths'], mu=length_est, sig=ϵ))
return self()
```
```python
sites = data['site'].values
modes = data['mode'].values
lengths = data['full lengths'].values
# Now define the model (log-probability proportional to the posterior)
def logp(μ_α, μ_β, σ_α, σ_β, site_α, site_β, ϵ):
model = smp.Model()
# Population priors - normals for population means and half-Cauchy for population stds
model.add(smp.normal(μ_α, sig=500),
smp.normal(μ_β, sig=500),
smp.half_cauchy(σ_α, beta=5),
smp.half_cauchy(σ_β, beta=0.5))
# Priors for site coefficients, sampled from population distributions
model.add(smp.normal(site_α, mu=μ_α, sig=σ_α),
smp.normal(site_β, mu=μ_β, sig=σ_β))
# Prior for likelihood uncertainty
model.add(smp.half_normal(ϵ))
# Our estimate for abalone length, α + βx
length_est = site_α[sites] + site_β[sites]*modes
# Add the log-likelihood
model.add(smp.normal(lengths, mu=length_est, sig=ϵ))
return model()
```
```python
model = HLM(data=data)
```
```python
start = {'μ_α': 201., 'μ_β': 5., 'σ_α': 1., 'σ_β': 1.,
'site_α': np.ones(len(site_map))*201,
'site_β': np.zeros(len(site_map)),
'ϵ': 1.}
model.logp_(*start.values())
```
-509218.07501755428
```python
start = {'μ_α': 201., 'μ_β': 5., 'σ_α': 1., 'σ_β': 1.,
'site_α': np.ones(len(site_map))*201,
'site_β': np.zeros(len(site_map)),
'ϵ': 1.}
# Using NUTS is slower per sample, but more likely to give good samples (and converge)
sampler = smp.NUTS(logp, start)
chain = sampler(1100, burn=100, thin=2)
```
/Users/mat/miniconda3/lib/python3.6/site-packages/autograd/tracer.py:14: UserWarning: Output seems independent of input.
warnings.warn("Output seems independent of input.")
/Users/mat/miniconda3/lib/python3.6/site-packages/autograd/tracer.py:48: RuntimeWarning: overflow encountered in exp
return f_raw(*args, **kwargs)
Progress: [##############################] 1100 of 1100 samples
There are some checks for convergence you can do, but they aren't implemented yet. Instead, we can visually inspect the chain. In general, the samples should be stable: the first half should vary around the same point as the second half.
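As one possible convergence check we could compute ourselves, here is a minimal split-R-hat sketch (Gelman-Rubin style); this helper is an assumption on my part and not part of the sampyl library.
```python
import numpy as np

def split_rhat(samples):
    """Split one chain into two halves and compare within- to between-half variance."""
    n = len(samples) // 2
    halves = np.stack([samples[:n], samples[n:2 * n]])
    W = halves.var(axis=1, ddof=1).mean()          # within-half variance
    B = n * halves.mean(axis=1).var(ddof=1)        # between-half variance
    return np.sqrt(((n - 1) / n * W + B / n) / W)  # values close to 1 suggest convergence

print(split_rhat(chain.μ_β))
```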
```python
fig, ax = plt.subplots()
ax.plot(chain.site_α);
```
```python
fig.savefig('/Users/mat/Desktop/chains.png', dpi=150)
```
```python
chain.site_α.T.shape
```
(14, 500)
```python
fig, ax = plt.subplots(figsize=(16,9))
for each in chain.site_α.T:
ax.hist(each, range=(185, 210), bins=60, alpha=0.5)
ax.set_xticklabels('')
ax.set_yticklabels('');
```
```python
fig.savefig('/Users/mat/Desktop/posteriors.png', dpi=300)
```
With the posterior distribution, we can look at many different results. Here I'll make a function that plots the means and 95% credible regions (range that contains central 95% of the probability) for the coefficients $\alpha_s$ and $\beta_s$.
```python
def coeff_plot(coeff, ax=None):
if ax is None:
fig, ax = plt.subplots(figsize=(3,5))
CRs = np.percentile(coeff, [2.5, 97.5], axis=0)
means = coeff.mean(axis=0)
ax.errorbar(means, np.arange(len(means)), xerr=np.abs(means - CRs), fmt='o')
ax.set_yticks(np.arange(len(site_map)))
ax.set_yticklabels(site_map.keys())
ax.set_ylabel('Site')
ax.grid(True, axis='x', color="#CCCCCC")
ax.tick_params(axis='both', length=0)
for each in ['top', 'right', 'left', 'bottom']:
ax.spines[each].set_visible(False)
return ax
```
Now we can look at how abalone lengths vary between sites for the rock-picking method ($\alpha_s$).
```python
ax = coeff_plot(chain.site_α)
ax.set_xlim(175, 225)
ax.set_xlabel('Abalone Length (mm)');
```
Here I'm plotting the mean and 95% credible regions (CR) of $\alpha$ for each site. This coefficient measures the average length of rock-picked abalones. We can see that the average abalone length varies quite a bit between sites. The CRs give a measure of the uncertainty in $\alpha$, wider CRs tend to result from less data at those sites.
Now, let's see how the abalone lengths vary between harvesting methods (the difference for diving is given by $\beta_s$).
```python
ax = coeff_plot(chain.site_β)
#ax.set_xticks([-5, 0, 5, 10, 15])
ax.set_xlabel('Mode effect (mm)');
```
Here I'm plotting the mean and 95% credible regions (CR) of $\beta$ for each site. This coefficient measures the difference in length of dive picked abalones compared to rock picked abalones. Most of the $\beta$ coefficients are above zero which indicates that abalones harvested via diving are larger than ones picked from the shore. For most of the sites, diving results in 5 mm longer abalone, while at site 72, the difference is around 12 mm. Again, wider CRs mean there is less data leading to greater uncertainty.
Next, I'll overlay the model on top of the data and make sure it looks right. We'll also see that some sites don't have data for both harvesting modes but our model still works because it's hierarchical. That is, we can get a posterior distribution for the coefficient from the population distribution even though the actual data is missing.
```python
def model_plot(data, chain, site, ax=None, n_samples=20):
if ax is None:
fig, ax = plt.subplots(figsize=(4,6))
site = site_map[site]
xs = np.linspace(-1, 3)
for ii, (mode, m_data) in enumerate(data[data['site'] == site].groupby('mode')):
a = chain.site_α[:, site]
b = chain.site_β[:, site]
# now sample from the posterior...
idxs = np.random.choice(np.arange(len(a)), size=n_samples, replace=False)
# Draw light lines sampled from the posterior
for idx in idxs:
ax.plot(xs, a[idx] + b[idx]*xs, color='#E74C3C', alpha=0.05)
# Draw the line from the posterior means
ax.plot(xs, a.mean() + b.mean()*xs, color='#E74C3C')
# Plot actual data points with a bit of noise for visibility
mode_label = {0: 'Rock-picking', 1: 'Diving'}
ax.scatter(ii + np.random.randn(len(m_data))*0.04,
m_data['full lengths'], edgecolors='none',
alpha=0.8, marker='.', label=mode_label[mode])
ax.set_xlim(-0.5, 1.5)
ax.set_xticks([0, 1])
ax.set_xticklabels('')
ax.set_ylim(150, 250)
ax.grid(True, axis='y', color="#CCCCCC")
ax.tick_params(axis='both', length=0)
for each in ['top', 'right', 'left', 'bottom']:
ax.spines[each].set_visible(False)
return ax
```
```python
fig, axes = plt.subplots(figsize=(10, 5), ncols=4, sharey=True)
for ax, site in zip(axes, [5, 52, 72, 162]):
ax = model_plot(data, chain, site, ax=ax, n_samples=30)
ax.set_title(site)
first_ax = axes[0]
first_ax.legend(framealpha=1, edgecolor='none')
first_ax.set_ylabel('Abalone length (mm)');
```
For site 5, there are few data points for the diving method so there is a lot of uncertainty in the prediction. The prediction is also pulled lower than the data by the population distribution. Similarly, for site 52 there is no diving data, but we still get a (very uncertain) prediction because it's using the population information.
Finally, we can look at the harvesting mode effect for the population. Here I'm going to print out a few statistics for $\mu_{\beta}$.
```python
fig, ax = plt.subplots()
ax.hist(chain.μ_β, bins=30);
b_mean = chain.μ_β.mean()
b_CRs = np.percentile(chain.μ_β, [2.5, 97.5])
p_gt_0 = (chain.μ_β > 0).mean()
print(
"""Mean: {:.3f}
95% CR: [{:.3f}, {:.3f}]
P(mu_b) > 0: {:.3f}
""".format(b_mean, b_CRs[0], b_CRs[1], p_gt_0))
```
We can also look at the population distribution for $\beta_s$ by sampling from a normal distribution with mean and variance sampled from $\mu_\beta$ and $\sigma_\beta$.
$$
\beta_s \sim \mathcal{N}\left(\mu_{\beta}, \sigma_{\beta}^2\right)
$$
```python
import scipy.stats as stats
```
```python
samples = stats.norm.rvs(loc=chain.μ_β, scale=chain.σ_β)
plt.hist(samples, bins=30);
plt.xlabel('Dive harvesting effect (mm)')
```
It's apparent that dive harvested abalone are roughly 5 mm longer than rock-picked abalone. Maybe this is a bias of the divers to pick larger abalone. Or, it's possible that abalone that stay in the water grow larger.
| de71f232dbf8083e3b70004e13a8a7497a0bd8cc | 534,447 | ipynb | Jupyter Notebook | examples/Abalone Model.ipynb | maedoc/sampyl | 3849fa688d477b4ed723e08b931c73043f8471c5 | [
"MIT"
]
| 308 | 2015-06-30T18:16:04.000Z | 2022-03-14T17:21:59.000Z | examples/Abalone Model.ipynb | maedoc/sampyl | 3849fa688d477b4ed723e08b931c73043f8471c5 | [
"MIT"
]
| 20 | 2015-07-02T06:12:20.000Z | 2020-11-26T16:06:57.000Z | examples/Abalone Model.ipynb | maedoc/sampyl | 3849fa688d477b4ed723e08b931c73043f8471c5 | [
"MIT"
]
| 66 | 2015-07-27T11:19:03.000Z | 2022-03-24T03:35:53.000Z | 667.224719 | 236,178 | 0.94168 | true | 4,004 | Qwen/Qwen-72B | 1. YES
2. YES | 0.855851 | 0.771843 | 0.660583 | __label__eng_Latn | 0.937694 | 0.373087 |
*Images generated by a deep neural network that interprets text to generate images (DALL·E 2)*
# PC lab: intro to Neural networks & PyTorch
Deep learning is the subfield of machine learning that concerns neural networks with representation learning capabilities. In recent years, it has arguably been the most quickly growing field within machine learning, enjoying major breakthroughs every year (to list a few recent ones: GPT-3, AlphaFold v2, DALL·E 2, AlphaZero). Although the popularity of neural nets is a recent phenomenon, they were first described by Warren McCulloch and Walter Pitts in 1943. Early progress in training competitive neural networks was stalled by a multitude of factors, such as limited computing resources, sub-optimal network architectures and the use of smaller datasets. In this PC-lab we will introduce you to the basics of implementing a neural network using contemporary practices.
# 1.1 Background
### From linear models to neurons to neural networks
The core unit of every (artificial) neural network is the neuron. Every neuron can be viewed as computing a linear combination of **one or more inputs** $\mathbf{x}$ with weights $\mathbf{w}$ (optionally adding a **bias** $b$), producing a **single output** $z$:
$z = \sum\limits_{i=1}^{n}(w_ix_i) + b$
equivalently written as dot product:
$z = \mathbf{x} \cdot \mathbf{w} + b$
When multiple neurons are placed next to each other, we get multiple output neurons; this is called a **layer**. Mathematically, our weights and biases (intercepts) now become a matrix and a vector respectively, and we obtain as output a vector $\mathbf{z}$:
$\mathbf{z} = \mathbf{x} \cdot \mathbf{W} + \mathbf{b}$
Multiple **layers** can be stacked sequentially to make up a deep neural network. On its own, however, this still gives us one big linear model, because multiple matrix multiplications performed consecutively can also be written as just one (i.e. $\mathbf{x} \cdot \mathbf{W_1} \cdot \mathbf{W_2} ...$ could also be written as just $\mathbf{x} \cdot \mathbf{W_{all}}$). To obtain the *deep learning magic*, we need to make the whole thing **non-linear**. We do this by adding **activation functions** after every (hidden) layer. The most classical activation is the sigmoid activation $\sigma()$, also used in logistic regression. Nowadays, we usually opt for a simpler activation function: the **ReLU** $ReLU(z) = max\{0,z\}$. This function has the favorable property that its derivative is very efficient to compute (1 when $z$ is positive, 0 when it is negative). It also acts as a switch: a neuron will have a "dead" ($0$) activation whenever $z$ is negative.
For the output layer of a neural network: our activation depends on the task at hand. For binary classification, we use sigmoid to constrain our output between 0 and 1. For multi-class, we use a softmax operation so the output of all neurons sums to 1. For regression, we simply do not use an activation (or a custom one depending on your data: if you already know that your outputs can't be negative but can take all positive numbers ($\mathbb{R}^+$), then maybe a ReLU activation in the output nodes makes sense)
In order to build more intuition for neural networks: consider the following figure where we "visually build up" a neural network starting from Linear regression with four input features (**a**), to Logistic regression (**b**), to an archetypical output neuron with ReLU activation (**c**). For multi-output settings, we visualize multi-output regression (**d**), multi-label classification (more than one class can be 1 in a sample) (**e**), and multi-class classification via softmax (**f**). Finally, a simple neural network with two hidden layers for binary classification (sigmoid output head) is shown under **g**.
**This figure makes it crystal clear that the most simple neural network is just a bunch of linear regressions stacked on top of eachother with non-linearities inbetween.** More advanced neural network architectures exist that modify how we make information flow between inputs. In this example, everything is just connected to everything with linear weights. This type of neural network is what we call an **MLP** or a **multi layer perceptron**.
Keep in mind that all of the above methods usually fit a bias/intercept in addition to weights fitted on the input features.
For an MLP, visually it would look like this:
<div class="alert alert-success">
<b>THOUGHT EXERCISE:</b>
<p> How much weights does the model pictured above have (including biases)? </p>
</div>
Keep in mind that every layer consists of (1) a matrix multiplication of an input $X \in \mathbb{R}^{N, D}$ with a set of weights $W \in \mathbb{R}^{D, M}$ and (2) a non-linearity. With $D$ the number of input features/dimensions/nodes (including one node/feature for the intercept, see the following equation) and $M$ the number of output dimensions/nodes.
\begin{equation}
XW =
\begin{bmatrix}
1 & x_{1,2} & ... & x_{1,D-1} & x_{1,D} \\
1 & x_{2,2} & ... & x_{2,D-1} & x_{2,D} \\
... & ... & ... & ... & ...\\
1 & x_{N-1,2} & ... & x_{N-1,D-1} & x_{N-1,D} \\
1 & x_{N,2} & ... & x_{N,D-1} & x_{N,D} \\
\end{bmatrix}
\begin{bmatrix}
W_{1,1} & W_{1,2} & ... & W_{1,M-1} & W_{1,M} \\
W_{2,1} & W_{2,2} & ... & W_{2,M-1} & W_{2,M} \\
... & ... & ... & ... & ...\\
W_{D-1,1} & W_{D-1,2} & ... & W_{D-1,M-1} & W_{D-1,M} \\
W_{D,1} & W_{D,2} & ... & W_{D,M-1} & W_{D,M} \\
\end{bmatrix}
\end{equation}
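As a small numerical illustration (shapes assumed for the example), prepending an all-ones column to $X$ and a corresponding first row to $W$ is exactly the same as adding a separate bias term:
```python
import numpy as np

rng = np.random.default_rng(0)
N, D, M = 5, 4, 3
X = rng.standard_normal((N, D - 1))          # raw features (without the intercept column)
W = rng.standard_normal((D - 1, M))          # weights on the raw features
b = rng.standard_normal(M)                   # bias / intercept per output node
X_aug = np.hstack([np.ones((N, 1)), X])      # prepend the all-ones column
W_aug = np.vstack([b, W])                    # the first weight row plays the role of the bias
print(np.allclose(X_aug @ W_aug, X @ W + b)) # True
```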
The whole network is trained by first performing a forward pass to get predictions and compute a loss w.r.t. ground truth known labels, then doing backpropagation (essentially applying the chain rule of derivatives) to obtain the gradient of all weights w.r.t. the loss. These gradients are then used by gradient descent (or more modern variants such as Adam) to optimize the neural network.
Training neural networks is (typically) more computationally demanding than more traditional machine learning methods, and we usually use neural networks when we have large datasets. For these two reasons, it is not a good idea to process the whole dataset at once in every training step/iteration. Therefore, to train the network, we process samples in batches, called **(mini-batch) stochastic gradient descent**. This allows for faster training and improved convergence of the loss during gradient descent. Advantages of stochastic gradient descent or other optimization algorithms for loss calculation are not discussed in this PC-lab, but have been [extensively discussed](https://ruder.io/optimizing-gradient-descent/) before.
### Dropout and Normalization layers
Dropout is a popular addition to neural networks. It is a form of regularization during which we stochastically deactivate a percentage of neurons in every training step by putting their activation to zero. This regularization only happens during training, as during testing we (usually) want deterministic outputs. Conceptually, it is similar to other regularization techniques such as ridge regression and subsampling of features in random forest training, in the sense that it will force our model to look at all features, because sometimes one single feature will not be available during training. The difference here is that we do it by stochastically putting nodes to zero during training, and that we can perform it not only on our input features, but also on our hidden nodes.
Mathematically, it can be performed by simply sampling a boolean vector and doing element-wise multiplication.
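As a minimal sketch (using the common "inverted dropout" rescaling, which is an extra detail on top of the plain boolean mask described above):
```python
import torch

def dropout(x, p=0.5, training=True):
    if not training or p == 0.0:
        return x
    mask = (torch.rand_like(x) > p).float()   # keep each node with probability 1 - p
    return x * mask / (1.0 - p)               # rescale so the expected activation is unchanged

print(dropout(torch.ones(2, 6), p=0.5))
```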
Visually, it would look a bit like this, where nodes in cyan are dropped out, and the associated cyan weights do not have any influence on training anymore (in that training step):
Normalization is another operation that we usually put between layers in order to more easily train our networks. In essence, this ensures that our hidden outputs are numerically well behaved, which speeds up training and helps with better convergence. The most popular ones are batch normalization and layer normalization.
More info here: [link](https://www.baeldung.com/cs/normalizing-inputs-artificial-neural-network)
With current techniques, most people often put the order of operations like this:
`Layer -> Activation -> Dropout -> Normalization -> and_repeat ...`
## 1.2 PyTorch
To implement neural networks with more ease, a few high-level python libraries are available: (PyTorch, TensorFlow/keras, JAX, ...). These libraries provide functionality in terms of automatic differentiation (backprop), ready-to-use implementations for various layers, loss functions ...
In this lab, we will use [PyTorch](https://pytorch.org). PyTorch is the most popular library for deep learning in academia as of today. For this course it offers the advantage that it has the most 'pythonic' syntax, to the point where almost all NumPy functions have a PyTorch counterpart.
If you want to run this notebook locally, you can find the installation instructions for PyTorch [here](https://pytorch.org/get-started/locally/). Make sure to select the right installation options depending on your system (if you have a GPU or not).
```python
import torch
import numpy as np
```
### Tensors
Tensors are the fundamental data structures in PyTorch. They are analogous to NumPy arrays. The difference is that tensors can also run on GPU hardware. GPU hardware is optimized for many small computations. Matrix multiplications, the building blocks of all deep learning, run orders-of-magnitude faster on GPU than on CPU. Let's see how tensors are constructed and what we can do with them:
```python
x = [[5,8],[9,8]]
print(torch.tensor(x))
print(np.array(x))
```
```python
x_numpy = np.array(x)
print(torch.from_numpy(x_numpy))
x_torch = torch.tensor(x)
print(x_torch.numpy())
```
```python
print(np.random.randn(8).shape)
print(np.random.randn(8,50).shape)
print(torch.randn(8).shape) # an alternative for .shape in PyTorch is .size()
print(torch.randn(8,50).shape)
```
```python
print(np.zeros((8,50)).shape)
print(torch.zeros(8,50).shape) # works with 'ones' as well
```
In PyTorch, the standard data type for floats is `float32`, which is synonymous with `float` within its framework. `float64` is synonymous with `double`.
This is different from the NumPy defaults and naming conventions: NumPy default data type for float is `float64`. Keep this in mind when converting numpy arrays to tensors and back!
```python
print(np.zeros(8).dtype)
print(torch.zeros(8).dtype)
```
Conversion of data types:
```python
x = torch.randn(8)
print(x.dtype)
x = x.to(torch.float64)
print(x.dtype)
```
`torch.long` is synonymous with `torch.int64`. The only difference between int32 and int64 is the number of bytes with which you store every integer. If you go up to very high numbers, you will hit numerical overflow sooner with the more compressed data types. We recommend always using the defaults: `torch.long` and `torch.float`.
```python
x = torch.randint(low=0, high=8, size=(8,), dtype=torch.int32)
print(x)
print(x.dtype)
x = x.to(torch.long)
print(x.dtype)
```
Indexing and other operations work as in NumPy arrays
```python
x = torch.randn(8,50,60)
print(x.shape)
print(x[:4,10:-10].shape)
x[0,0,:10] = 0
print(x[0,0,:16])
print(torch.min(x), torch.max(x), torch.min(torch.abs(x)))
# most of these functions are also tensor methods:
print(x.min(), x.max(), x.abs().min())
```
Joining tensors via concatenation:
```python
print(x.shape)
x_cat0 = torch.cat([x, x], dim=0)
print(x_cat0.shape)
x_cat1 = torch.cat([x, x, x], dim=1)
print(x_cat1.shape)
```
Matrix multiplication: let's say we have an input `x`, consisting of 8 samples with 26 features, that we linearly combine with weights `w` to get a single output for every sample:
```python
x = torch.randn(8,26)
w = torch.randn(26,1)
y_hat = torch.matmul(x, w) # an alternative and equivalent syntax is x @ w
print(y_hat)
print(y_hat.shape)
```
Note that matrix multiplication is different from element-wise multiplication. For element-wise, `*` is used.
```python
x = torch.ones(8)
print(x)
x = x - 1.5
print(x)
x -= 1.5
print(x)
x += torch.randn(1)
print(x)
x += torch.randn(8)
print(x)
```
Broadcasting works as in NumPy: [link](https://pytorch.org/docs/stable/notes/broadcasting.html)
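For example (shapes are made up purely for illustration), arithmetic between tensors of compatible shapes is automatically broadcast:

```python
x = torch.randn(8, 26)         # 8 samples, 26 features
scale = torch.randn(26)        # one value per feature
print((x * scale).shape)       # (8, 26): the feature dimension matches, samples broadcast
col = torch.randn(8, 1)        # one value per sample
print((x + col).shape)         # (8, 26): the size-1 dimension is stretched to 26
```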
Keep in mind that, just like in NumPy, whatever you want to do with a tensor, there is probably an elegant operation for it implemented somewhere; you just have to look for it (on Google and in the documentation).
# 2 Building a neural network in PyTorch
## 2.1 The building blocks
Neural networks are initialized through the use of class objects. You have encountered class objects already during this course: sklearn models are all class objects. The difference here is that we will code our own class first, before using it.
Many of the functionalities necessary to create [**all types of neural networks**](http://www.asimovinstitute.org/neural-network-zoo/) have [**already been implemented**](http://pytorch.org/docs/master/nn.html).
Let's inspect the most basic building blocks first: the [linear layer](https://pytorch.org/docs/master/generated/torch.nn.Linear.html#torch.nn.Linear) and the [ReLU](https://pytorch.org/docs/master/generated/torch.nn.ReLU.html#torch.nn.ReLU)
A linear layer is an object that will perform a matrix multiplication once called. Here, we instantiate such a layer with 20 input nodes and 40 output nodes:
```python
import torch.nn as nn
nn.Linear(20, 40)
```
Let's simulate some random data for this layer: A data set (or batch) with 16 samples and 20 features:
```python
x = torch.randn(16, 20)
```
Now let's use our linear layer on this data:
```python
layer = nn.Linear(20, 40)
print(x.shape)
z = layer(x)
print(z.shape)
```
What happens when we try to feed our layer an input with a different number of features?
```python
x = torch.randn(16, 30)
layer(x)
```
Let's see the ReLU in action:
```python
relu = nn.ReLU()
x = torch.randn(2, 4)
print(x)
z = relu(x)
print(z)
```
As you may have noticed, `nn.Module`s are class objects, a bit like scikit-learn models, that you instantiate and then call.
You can chain `nn.Module`s with the use of `nn.Sequential`:
```python
linear_and_relu = nn.Sequential(nn.Linear(20, 40), nn.ReLU())
x = torch.randn(16, 20)
z = linear_and_relu(x)
```
Or even longer constructs:
Always keep in mind what happens to the dimensions of your inputs and outputs at every layer: if your first layer outputs 40 features/nodes/hidden dimensions, then logically the next one will have to take 40 as input.
```python
a_whole_damn_network = nn.Sequential(
nn.Linear(128, 64),
nn.ReLU(),
nn.Linear(64, 16),
nn.ReLU(),
nn.Linear(16, 4),
nn.ReLU(),
nn.Linear(4, 1)
)
x = torch.randn(16, 128)
z = a_whole_damn_network(x)
print(z.shape)
print(z)
```
The outputs $z$ that we now obtain from this whole network are called **logits**. They are the real-valued ($\mathbb{R}$) outputs that we obtain at the end of the network, before the last activation function. Depending on the task at hand, this last activation function will be a sigmoid, a softmax, or nothing at all (for regression).
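As a quick sketch (reusing the `z` from the cell above; the 4-class logits are made up for illustration), converting logits to probabilities could look like this:

```python
probs_binary = torch.sigmoid(z)               # binary case: one logit per sample -> probability
print(probs_binary[:3])

multiclass_logits = torch.randn(16, 4)        # made-up logits for a 4-class problem
probs_multi = torch.softmax(multiclass_logits, dim=-1)
print(probs_multi[0], probs_multi[0].sum())   # each row sums to 1
```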
Similar implementations exist for Dropout and Normalization layers in PyTorch; we invite you to look them up.
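As a minimal sketch (the layer sizes are made up), they can be chained in the order mentioned earlier:

```python
block = nn.Sequential(
    nn.Linear(20, 40),       # Layer
    nn.ReLU(),               # Activation
    nn.Dropout(0.2),         # Dropout (only active in training mode)
    nn.BatchNorm1d(40),      # Normalization over the batch dimension
    nn.Linear(40, 1),
)
x = torch.randn(16, 20)
print(block(x).shape)        # torch.Size([16, 1])
```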
## 2.2 Class object neural networks and hyperparameters
We've seen how to implement a neural network using PyTorch `nn.Sequential`. It is more flexible however to write our own model class. This allows us to have more control over which operations we use and define our own hyperparameters. To make a PyTorch model, we specify our class object to be a submodule of `nn.Module` and inherit its methods via `super().__init__()`. Further, we specify all necessary attributes (such as layers) in our `__init__` function (executed upon initialization) and implement a `forward` function which will be executed when we call the object after being initialized.
The following code shows two examples. The first one is a very basic construction of a neural network without hyperparameters. The other one shows the same network, but where we set up our `__init__` function to process hyperparameters as input arguments. We can, for example, specify a hyperparameter that controls whether we want to use dropout or not. (We could go even further and have an extra hyperparameter specifying the probability of dropout, ...).
```python
class BasicModel(nn.Module):
def __init__(self):
super().__init__()
self.layer1 = nn.Linear(50, 40)
self.relu1 = nn.ReLU()
self.dropout1 = nn.Dropout(0.2)
self.layer2 = nn.Linear(40, 20)
self.relu2 = nn.ReLU()
self.dropout2 = nn.Dropout(0.2)
self.layer3 = nn.Linear(20, 10)
self.relu3 = nn.ReLU()
self.dropout3 = nn.Dropout(0.2)
self.layer4 = nn.Linear(10, 5)
# Think again: why do we not want a relu and dropout after our last layer again?
def forward(self, x):
# call them in separate lines:
x = self.layer1(x)
x = self.relu1(x)
x = self.dropout1(x)
# or together:
x = self.dropout2(self.relu2(self.layer2(x)))
x = self.dropout3(self.relu3(self.layer3(x)))
# we could've also wrapped everything in a nn.Sequential ..
x = self.layer4(x)
return x
class HyperparameterModel(nn.Module):
def __init__(self, dimensions_from_input_to_output = [50, 40, 20, 10, 5], dropout = True):
super().__init__()
layers = []
# iterate through all layers:
for i in range(len(dimensions_from_input_to_output) - 2):
layer = nn.Linear(dimensions_from_input_to_output[i], dimensions_from_input_to_output[i + 1])
layers.append(layer)
layers.append(nn.ReLU())
if dropout == True:
layers.append(nn.Dropout(0.2))
# the last layer separate from the loop because we don't want a ReLU and dropout after the last layer
layer = nn.Linear(dimensions_from_input_to_output[i+1], dimensions_from_input_to_output[i + 2])
layers.append(layer)
# wrap the layers in a sequential
self.net = nn.Sequential(*layers)
def forward(self, x):
return self.net(x)
```
Testing out the basic model:
```python
net = BasicModel()
net
```
```python
x = torch.randn(4, 50)
y = net(x)
print(y)
print(y.shape)
```
Testing out the hyperparameter model, when no arguments are specified the default ones are chosen:
```python
net = HyperparameterModel()
net
```
```python
x = torch.randn(4, 50)
y = net(x)
print(y)
print(y.shape)
```
Or by specifying hyperparameters, the following code shows a bit of a deeper model:
```python
net = HyperparameterModel(dimensions_from_input_to_output = [50, 160, 80, 40, 20, 10, 5], dropout = False)
net
```
```python
x = torch.randn(4, 50)
y = net(x)
print(y)
print(y.shape)
```
<div class="alert alert-success">
<b>EXERCISE:</b>
<p> Copy over either of the two networks above (preferably the one you understand best), and modify it to take as input 784 features and as output 10 nodes. We will use this network later in the PC lab. You can test your network for bugs using some randomly generated data (as above).</p>
</div>
```python
######## YOUR CODE HERE #########
#################################
```
## 2.3 Data and training
You may have noticed that the outputs of your model have a `grad_fn` attribute. This grad function will be used by PyTorch's automatic differentiation engine to perform backward passes and compute gradients for every parameter with respect to the loss/cost function.
Now that we have our model and know that PyTorch will automatically compute gradients for us, we can move on to train a model. To do this, we need a couple more things:
The most basic blueprint of PyTorch model training consists of
- Get your data
- Wrap your data splits in a [data loader](https://pytorch.org/docs/stable/data.html)
- Instantiate the model
- Instantiate a [loss function](https://pytorch.org/docs/stable/nn.html#loss-functions)
- Instantiate an [optimizer object](https://pytorch.org/docs/stable/optim.html), to which you pass the parameters you want to optimize
- Iterate through your training data, for every batch:
- reset the gradients
- do forward pass
- compute loss
- backward pass
- update parameters
(Optionally):
- After every full iteration through all training data samples (called an epoch), loop through all batches of validation data:
- forward pass
- compute loss and validation scores
In this way, we can monitor how well our model is performing on left-out validation data during training; this is used in early stopping. Notably, we do not compute and reset gradients or update parameters during our validation iterations, because we do not want to fit our model on this data.
Let's start with step one: get your data and wrap it in data loaders. We will first illustrate, using some random data, how we can convert our usual pandas or NumPy datasets to be compatible with PyTorch:
```python
X_train = np.random.randn(100, 784)
y_train = np.random.randint(low = 0, high = 10, size = (100))
```
```python
X_train = torch.from_numpy(X_train)
print(X_train.dtype)
```
Remember to look at your data types: by default NumPy is `float64`, but if you instantiate a model, by default it will have weights in `float32`. It is therefore advised to convert your data to `float32`. In PyTorch, simply `float` is shorthand for `float32`.
```python
X_train = X_train.float()
# Equivalent: X_train.to(torch.float) or X_train.to(torch.float32)
print(X_train.dtype)
```
```python
y_train = torch.tensor(y_train)
print(y_train.dtype)
```
Now that we have our X_train and y_train as tensors, we can make them PyTorch-ready by wrapping them in a PyTorch dataset, and then by wrapping them in a DataLoader, which will create batches for us:
```python
train_dataset = torch.utils.data.TensorDataset(X_train, y_train)
train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=16, pin_memory=True, shuffle=True)
```
Now we can use our train_dataloader as such:
```python
for batch in train_dataloader:
X, y = batch
print(X.shape, y.shape)
```
As you can see, we can iterate through our batches by use of a for loop, and it will spit out a training batch consisting of a list of X and y tensors. We can also test out code by isolating one training batch like this (only necessary for testing out code):
```python
batch = next(iter(train_dataloader))
batch
```
For the remainder of the PC lab, we will work with the MNIST dataset included in `torchvision`. (If you're running this code locally, you may have to pip install torchvision).
```python
from torchvision import datasets
from torchvision.transforms import ToTensor
train_data = datasets.MNIST(
root = 'data',
train = True,
transform = ToTensor(),
download = True,
)
test_data = datasets.MNIST(
root = 'data',
train = False,
transform = ToTensor()
)
X_train = train_data.data
y_train = train_data.targets
X_test = test_data.data
y_test = test_data.targets
```
```python
print('shapes:')
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
print('first training image tensor:')
print(X_train[0])
print('first five labels:')
print(y_train[:5])
```
Our data consists of images; each data sample has $28 \times 28$ input features, representing the pixels. In order to feed this data to our model, we will need to flatten these features:
```python
X_train = X_train.reshape(-1, 28 * 28)
X_test = X_test.reshape(-1, 28 * 28)
```
In addition, the grayscale values of our images go from 0 to 255. It is good practice to min-max standardize these numbers by dividing them by 255:
```python
X_train = X_train / 255
X_test = X_test / 255
```
Finally, let's check our datatypes to see if everything is looking good to go:
```python
X_train.dtype, X_test.dtype, y_train.dtype, y_test.dtype
```
Let's split up our training set in a training and validation set and finally wrap our data in a data loader:
```python
np.random.seed(42)
train_indices, val_indices = np.split(np.random.permutation(len(X_train)), [int(len(X_train)*0.8)])
X_val = X_train[val_indices]
y_val = y_train[val_indices]
X_train = X_train[train_indices]
y_train = y_train[train_indices]
train_dataset = torch.utils.data.TensorDataset(X_train, y_train)
train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=16, pin_memory=True, shuffle=True)
val_dataset = torch.utils.data.TensorDataset(X_val, y_val)
val_dataloader = torch.utils.data.DataLoader(val_dataset, batch_size=16, pin_memory=True, shuffle=True)
test_dataset = torch.utils.data.TensorDataset(X_test, y_test)
test_dataloader = torch.utils.data.DataLoader(test_dataset, batch_size=16, pin_memory=True, shuffle=True)
```
Let's visualize a random batch:
```python
batch = next(iter(train_dataloader))
X_batch, y_batch = batch
import matplotlib.pyplot as plt
figure = plt.figure(figsize=(10, 8))
cols, rows = 4, 4
for i in range(cols * rows):
img, label = X_batch[i], y_batch[i]
figure.add_subplot(rows, cols, i+1)
plt.title(label.item())
plt.axis("off")
plt.imshow(img.reshape(-1, 28, 28).squeeze(), cmap="gray")
plt.show()
```
Now that we have our data ready, let's reiterate our PyTorch model blueprint:
The most basic blueprint of PyTorch model training consists of
- Get your data
- Wrap your data splits in a [data loader](https://pytorch.org/docs/stable/data.html)
- Instantiate the model
- Instantiate a [loss function](https://pytorch.org/docs/stable/nn.html#loss-functions)
- Instantiate an [optimizer object](https://pytorch.org/docs/stable/optim.html), to which you pass the parameters you want to optimize
- Iterate through your training data, for every batch:
- reset the gradients
- do forward pass
- compute loss
- backward pass
- update parameters
(Optionally):
- After every full iteration through all training data samples (called an epoch), loop through all batches of validation data:
- forward pass
- compute loss and validation scores
In one of the previous exercises, we already implemented a model compatible with MNIST: 784 input features and 10 output nodes (one for each class). Hence, we can move on to the loss function and optimizer. For multi-class classification of the digits, we will need to use the [Cross Entropy Loss](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss). Take a look at what kind of inputs this loss function expects. According to the documentation: `The input is expected to contain raw, unnormalized scores for each class.` This means that we can pass logits directly to this loss function, and we do not have to apply a softmax operation ourselves.
For optimizer choice, we can choose [vanilla gradient descent](https://pytorch.org/docs/stable/generated/torch.optim.SGD.html#torch.optim.SGD) or the nowadays-industry-standard [Adam optimizer](https://pytorch.org/docs/stable/generated/torch.optim.Adam.html#torch.optim.Adam), which is itself a pimped version of stochastic gradient descent (with momentum).
Take note that we can also specify our desired learning rate in the optimizer. This learning rate should almost always be tuned, as it influences both how fast our model trains and whether it converges.
```python
model = # your model from previous exercises here.
loss_function = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.005) # SGD = stochastic gradient descent
```
Now we're ready to perform training. We'll build up the training loop step-wise in the following codeblocks. Let's first start with passing one batch to the model, computing the loss, performing backpropagation and updating the weights:
```python
batch = next(iter(train_dataloader))
X_batch, y_batch = batch
y_hat_batch = model(X_batch)
loss = loss_function(y_hat_batch, y_batch) # Compute loss
loss.backward() # Calculate gradients
optimizer.step() # Update weights using defined optimizer
print(X_batch.shape, y_batch.shape)
print(y_hat_batch.shape)
print("Outputs as logits, first two samples:")
print(y_hat_batch[:2])
print('loss:', loss)
```
Every time we perform a training step, we should reset the gradients so that the gradients computed on the previous batch do not influence the next one. We can do this by calling `.zero_grad()` on our optimizer. In practice, it is safest to call this before every forward pass. So if we train a complete epoch:
```python
all_losses = []
for batch in train_dataloader:
optimizer.zero_grad()
X_batch, y_batch = batch
y_hat_batch = model(X_batch)
loss = loss_function(y_hat_batch, y_batch) # Compute loss
loss.backward() # Calculate gradients
optimizer.step() # Update weights using defined optimizer
all_losses.append(loss.item())
```
Plotting the loss function of every batch during one epoch:
```python
plt.plot(np.arange(len(all_losses)), all_losses)
smoothed_losses = np.convolve(all_losses, np.ones(50)/50, mode = "valid")
plt.plot(np.arange(len(smoothed_losses)), smoothed_losses)
```
We have evaluated the training progress during one epoch on our training set. What if we want to see the performance of the validation set during training, so that we can see if/when the model starts overfitting? It is common practice to perform a pass through the whole validation set after every training epoch. However, we should not compute gradients now, we can block automatic gradient computation using `torch.no_grad()`. Also, we need to tell PyTorch to put our model in evaluation mode (so it does not perform dropout anymore, etc...). After the validation epoch, we should put the model back to training mode again. The code should look something like this:
```python
predictions = []
true_labels = []
losses = []
model.eval()
with torch.no_grad():
for batch in val_dataloader:
X_batch, y_batch = batch
y_hat_batch = model(X_batch)
loss = loss_function(y_hat_batch, y_batch)
losses.append(loss.item())
predictions.append(y_hat_batch)
true_labels.append(y_batch)
model.train()
predictions = torch.cat(predictions)
true_labels = torch.cat(true_labels)
accuracy = (true_labels == predictions.argmax(-1)).sum().item() / len(predictions)
print(accuracy)
print(np.mean(losses))
```
<div class="alert alert-success">
<b>EXERCISE:</b>
<p> Using the code above, put it all together to train a model for multiple epochs. After every epoch, print or save some training and validation statistics.
Monitor how well your model is training. What things could you change? In particular, try the Adam optimizer, or try using gradient descent with the momentum argument. How does this influence training speed? What is it doing? Try tweaking the learning rate.
You should be able to obtain an accuracy of +- 96% </p>
</div>
```python
N_EPOCHS = 20
model = # your model from previous exercises here.
# loss function & optimizer
for i in range(1, N_EPOCHS + 1):
# train loop
# eval loop
# record or print some variables that you want to keep track of during training
```
<div class="alert alert-success">
<b>EXERCISE:</b>
<p> After training, what is your performance on the test dataset? </p>
</div>
```python
### YOUR CODE HERE ###
######################
```
## 2.4 Extra: The autoencoder
An autoencoder tries to reconstruct its own input. By constraining the dimensionality of our hidden layers (the bottleneck), the model is forced to learn a low-dimensional manifold of the data, much like PCA does (in fact, *architecturally* there are only two differences between PCA and an autoencoder: (1) PCA is linear whereas an autoencoder can be arbitrarily complex and (2) the bottleneck in PCA is constrained to be orthogonal, whereas in an autoencoder no such constraints are in place. Other than that, they are also optimized differently.)
Terminology-wise, we can split up our neural network into two parts: the encoder, which encodes the data to a lower-dimensional space, and the decoder, which decodes that encoding back to the original sample space. During training, samples go through both encoder and decoder. After training, we can opt to use different parts of the autoencoder: if we are interested in the lower-dimensional space of samples (for e.g. clustering or visualization), we only use the encoder. If we want to denoise samples (look up denoising autoencoder), we are interested in both encoder and decoder. Finally, if we want to generate samples, we can start from randomly generated vectors in the bottleneck space (e.g. random 10 dimensional vectors) and put them through the decoder to generate new samples (look up variational autoencoders).
It is very easy to implement a simple autoencoder in PyTorch, especially with the code that we have already written. The only things we need to change are:
- The architecture of the network should be changed so that it represents an hourglass with a bottleneck (easiest for visualization is a bottleneck with 2 hidden dimensions, so that you can plot it directly; otherwise, if you have e.g. a bottleneck with 10 hidden dimensions, you need to do an additional PCA or t-SNE on the samples first). A symmetrical network in terms of hidden dimensions is most conventional.
- The loss function should be adjusted to a regression-based loss (such as MSE), since our input/output pixels are continuous as well.
- The loss function should be called on the original input, instead of the target classes.
- Accuracy cannot be computed on reconstructions, only loss.
The following shows how you can easily make an autoencoder:
```python
model = nn.Sequential(
HyperparameterModel(dimensions_from_input_to_output= [784, 256, 128, 64]),
HyperparameterModel(dimensions_from_input_to_output= [64, 128, 256, 784])
)
# Alternatively, we could have made our new class for autoencoders
# or we could have just specified one HyperparameterModel passing [784, 128, 2, 128, 784] as argument
# The construct with the Sequential has the advantage that it allows us to call the encoder and decoder separately
# as model[0](input_data) and model[1](encoded_data)
```
<div class="alert alert-success">
<b>EXTRA EXERCISE:</b>
<p> Train this autoencoder model by reusing the code used in the supervised learning task and adjusting code where necessary, for a to-do list of necessary changes, consult the list above. </p>
</div>
```python
N_EPOCHS = 20
model = # your model from previous exercises here.
# loss function & optimizer
for i in range(1, N_EPOCHS + 1):
# train loop
# eval loop
# record or print some variables that you want to keep track of during training
```
We can now visualize the manifold our model has learned. In order to do so, we encode samples into our $x$-dimensional bottleneck, after which we can use t-SNE to give us a final 2-dimensional space for visualization. Does our model encode digit identity well in its "latent space"?
What about if we fit t-SNE directly on the pixel values? Does our autoencoder noticeably add value? **Note**: t-SNE may take a couple minutes to run.
```python
inputs = []
encoded_inputs = []
classes = []
with torch.no_grad():
model.eval()
for batch in val_dataloader:
X_batch, y_batch = batch
inputs.append(X_batch)
encoded_inputs.append(model[0](X_batch))
classes.append(y_batch)
classes = torch.cat(classes).numpy()
inputs = torch.cat(inputs).numpy()
encoded_inputs = torch.cat(encoded_inputs).numpy()
```
```python
from sklearn.manifold import TSNE
import seaborn as sns
encoded_tsne = TSNE(verbose = 10, init = "pca", learning_rate = "auto").fit_transform(encoded_inputs)
plt.figure(figsize = (10, 10))
sns.scatterplot(x = encoded_tsne[:,0], y = encoded_tsne[:, 1], hue = classes, alpha = .75, palette="deep")
plt.show()
```
```python
encoded_tsne = TSNE(verbose = 10, init = "pca", learning_rate = "auto").fit_transform(inputs)
plt.figure(figsize = (10, 10))
sns.scatterplot(x = encoded_tsne[:,0], y = encoded_tsne[:, 1], hue = classes, alpha = .75, palette="deep")
```
Our autoencoder may not add much value visually on top of just fitting t-SNE directly on pixel values, but the advantages of an autoencoder do not only come from visualization. For example, we can reconstruct samples from the latent space back to the original space with the decoder. As a last step, let's visualize some reconstructions:
```python
with torch.no_grad():
batch = next(iter(val_dataloader))
X_batch, y_batch = batch
model.eval()
y_hat_batch = model(X_batch)
figure = plt.figure(figsize=(10, 8))
cols, rows = 4, 4
for i in range(cols * rows):
img, label = X_batch[i], y_batch[i]
figure.add_subplot(rows, cols, i+1)
plt.title(label.item())
plt.axis("off")
plt.imshow(img.reshape(-1, 28, 28).squeeze(), cmap="gray")
plt.show()
figure = plt.figure(figsize=(10, 8))
cols, rows = 4, 4
for i in range(cols * rows):
img, label = y_hat_batch[i], y_batch[i]
figure.add_subplot(rows, cols, i+1)
plt.title(label.item())
plt.axis("off")
plt.imshow(img.reshape(-1, 28, 28).squeeze(), cmap="gray")
plt.show()
```
These are just samples generated from an encoding of a true original sample. What if we want to generate completely new synthetic digits?
For this we need to sample vectors in our latent space.
Naively, we could just sample vectors from a standard Gaussian.
```python
samples = torch.randn(16, 64)
with torch.no_grad():
model.eval()
decoded_samples = model[1](samples)
figure = plt.figure(figsize=(10, 8))
cols, rows = 4, 4
for i in range(cols * rows):
img = decoded_samples[i]
figure.add_subplot(rows, cols, i+1)
plt.axis("off")
plt.imshow(img.reshape(-1, 28, 28).squeeze(), cmap="gray")
plt.show()
```
The images vaguely look like something that could be digits, but are not convincing to say the least.
One way to make these generations better is to not sample random vectors, but try to inform our sampling. For this dataset, we have class labels, so we can try to see what distribution samples from a certain class (e.g. 7) generally follow in the latent space. We can then sample from that distribution, instead of from a standard gaussian.
To do this: we (1) first encode samples in the latent space, (2) take the samples belonging to the cluster we're interested in, (3) fit a distribution to the latent space: in this case we will simply compute the mean and standard deviation of all hidden dimensions, and (4) finally, sample some vectors from that distribution.
```python
class_ = 7
encoded_inputs = []
classes = []
with torch.no_grad():
model.eval()
for batch in val_dataloader:
X_batch, y_batch = batch
encoded_inputs.append(model[0](X_batch))
classes.append(y_batch)
classes = torch.cat(classes)
encoded_inputs = torch.cat(encoded_inputs)
encoded_inputs_class = encoded_inputs[classes == class_]
mu = encoded_inputs_class.mean(0)
std = encoded_inputs_class.std(0)
print(mu)
print(std)
figure = plt.figure(figsize=(10, 8))
cols, rows = 4, 4
for i in range(cols * rows):
sampled_vector = torch.normal(mu, std).unsqueeze(0)
with torch.no_grad():
model.eval()
decoded_sample = model[1](sampled_vector)
img = decoded_sample
figure.add_subplot(rows, cols, i+1)
plt.axis("off")
plt.imshow(img.reshape(-1, 28, 28).squeeze(), cmap="gray")
plt.show()
```
Looks like conditioning on a class-specific distribution helps the model along a bit, but it is still clear that not every sampled vector taken from the latent space gives us a convincing decoded image. An advanced technique based on autoencoders that gives better image generation quality is the Variational Autoencoder, which is out of scope for this course. Another (non-autoencoder-based) technique is GANs.
### Extra: Using GPUs
Matrix multiplications run orders of magnitude faster on GPU hardware. If you have a local GPU and have installed a PyTorch version with GPU support, you should be able to run this code locally. Otherwise, in Google Colab, you can request access to a GPU via `Runtime > Change runtime type > Hardware accelerator = GPU`.
Briefly, the steps needed to train on GPUs consist of
1. Putting your model on the GPU
2. During your training loop, putting every batch on the GPU before the forward pass
3. If you have a validation loop, doing the same there for every batch.
4. If you have variables that you will use after training (e.g. predictions on the validation set), remember to move these back to the CPU, as GPUs have limited memory.
In PyTorch, we put variables and models on the GPU by specifying their 'device' to be 'cuda' (the parallel computing platform for nvidia GPUs that PyTorch uses).
The following code illustrates how to train your models on GPU hardware:
```python
X, y = next(iter(train_dataloader))
model = model.to('cuda')
print(X.device)
X = X.to('cuda')
print(X.device)
y_hat = model(X)
y_hat
```
We encourage you to try out training on GPUs during the next PC lab(s).
| 6a21b4e234b233ba0240d57b35b872efd072163d | 69,153 | ipynb | Jupyter Notebook | predmod/ANN_intro/PClab12_ANN.ipynb | gdewael/teaching | a78155041918422a843f31c863dd11e8afc5646a | [
"MIT"
]
| null | null | null | predmod/ANN_intro/PClab12_ANN.ipynb | gdewael/teaching | a78155041918422a843f31c863dd11e8afc5646a | [
"MIT"
]
| null | null | null | predmod/ANN_intro/PClab12_ANN.ipynb | gdewael/teaching | a78155041918422a843f31c863dd11e8afc5646a | [
"MIT"
]
| null | null | null | 37.159054 | 976 | 0.55208 | true | 9,897 | Qwen/Qwen-72B | 1. YES
2. YES | 0.92079 | 0.83762 | 0.771272 | __label__eng_Latn | 0.993498 | 0.630255 |
```python
import numpy as np
from numpy.linalg import *
rg = matrix_rank
from IPython.display import display, Math, Latex, Markdown
from sympy import *
from sympy import Matrix, solve_linear_system
```
```python
pr = lambda s: display(Markdown('$'+str(latex(s))+'$'))
def psym_matrix(a, intro='',ending='',row=False):
try:
if row:
return(intro+str(latex(a))+ending)
else:
display(Latex("$$"+intro+str(latex(a))+ending+"$$"))
except TypeError:
display(latex(a)) #TODO MAY BY...
pr = lambda s: display(Markdown('$'+str(latex(s))+'$'))
def pmatrix(a, intro='',ending='',row=False):
if len(a.shape) > 2:
raise ValueError('pmatrix can at most display two dimensions')
lines = str(a).replace('[', '').replace(']', '').splitlines()
rv = [r'\begin{pmatrix}']
rv += [' ' + ' & '.join(l.split()) + r'\\' for l in lines]
rv += [r'\end{pmatrix}']
if row:
return(intro+'\n'.join(rv)+ending)
else:
display(Latex('$$'+intro+'\n'.join(rv)+ending+'$$'))
def crearMatrix(name,shape=(2,2)):
X = []
for i in range(shape[0]):
row = []
for j in range(shape[1]):
row.append(Symbol(name+'_{'+str(i*10+j+11)+'}'))
X.append(row)
return Matrix(X)
from re import sub
def matrix_to_word(A):
replaced = sub(r"begin{matrix}", r"(■( ", psym_matrix(A,row=1))
replaced = sub(r"{", r"", replaced)
replaced = sub(r"}", r"", replaced)
replaced = sub(r"\\\\\\\\", r" @ ", replaced)
replaced = sub(r"\\\\", r" @ ", replaced)
replaced = sub(r"[\[\]]", r"", replaced)
replaced = sub(r"left", r"", replaced)
replaced = sub(r".endmatrix.right", r" ))", replaced)
replaced = sub(r"\\", r"", replaced)
print(replaced )
def isNilpotent(A):
i = 0
for n in range(25):
if A**n == zeros(4,4):
return i;
else:
i+=1
return 0
```
```python
D1= crearMatrix("d^1")
C1= crearMatrix("c^1")
D2= crearMatrix("d^2")
C2= crearMatrix("c^2")
psym_matrix(D1, intro="D^1=")
psym_matrix(C1, intro="C^1=")
psym_matrix(D2, intro="D^2=")
psym_matrix(C2, intro="C^2=")
```
$$D^1=\left[\begin{matrix}d^1_{11} & d^1_{12}\\d^1_{21} & d^1_{22}\end{matrix}\right]$$
$$C^1=\left[\begin{matrix}c^1_{11} & c^1_{12}\\c^1_{21} & c^1_{22}\end{matrix}\right]$$
$$D^2=\left[\begin{matrix}d^2_{11} & d^2_{12}\\d^2_{21} & d^2_{22}\end{matrix}\right]$$
$$C^2=\left[\begin{matrix}c^2_{11} & c^2_{12}\\c^2_{21} & c^2_{22}\end{matrix}\right]$$
```python
D1 = Matrix([[1,1],
[1,2]])
C1 = Matrix([[2,1],
[1,0]])
D2 = Matrix([[1,-1],
[0,1]])
C2 = Matrix([[1,1],
[0,1]])
```
```python
R1 =( Matrix([ # How???
[ C1[0,0], 0 , C1[0,1], 0 ,],
[ 0 , C1[0,0], 0 , C1[0,1],],
[ C1[1,0], 0 , C1[1,1], 0 ,],
[ 0 , C1[1,0], 0 , C1[1,1],]
])).T
T1 = Matrix([ # How???
[D1[0,0],D1[0,1], 0 , 0 ],
[D1[1,0],D1[1,1], 0 , 0 ],
[ 0 , 0 ,D1[0,0],D1[0,1]],
[ 0 , 0 ,D1[1,0],D1[1,1]]
])
R2 = (Matrix([ # How???
[ C2[0,0], 0 , C2[0,1], 0 ,],
[ 0 , C2[0,0], 0 , C2[0,1],],
[ C2[1,0], 0 , C2[1,1], 0 ,],
[ 0 , C2[1,0], 0 , C2[1,1],]
])).T
T2 = (Matrix([ # How???
[D2[0,0],D2[0,1], 0 , 0 ],
[D2[1,0],D2[1,1], 0 , 0 ],
[ 0 , 0 ,D2[0,0],D2[0,1]],
[ 0 , 0 ,D2[1,0],D2[1,1]]
]))
```
```python
psym_matrix(R1, ending=psym_matrix(T1,row=1))
psym_matrix(R2, ending=psym_matrix(T2,row=1))
```
$$\left[\begin{matrix}2 & 0 & 1 & 0\\0 & 2 & 0 & 1\\1 & 0 & 0 & 0\\0 & 1 & 0 & 0\end{matrix}\right]\left[\begin{matrix}1 & 1 & 0 & 0\\1 & 2 & 0 & 0\\0 & 0 & 1 & 1\\0 & 0 & 1 & 2\end{matrix}\right]$$
$$\left[\begin{matrix}1 & 0 & 0 & 0\\0 & 1 & 0 & 0\\1 & 0 & 1 & 0\\0 & 1 & 0 & 1\end{matrix}\right]\left[\begin{matrix}1 & -1 & 0 & 0\\0 & 1 & 0 & 0\\0 & 0 & 1 & -1\\0 & 0 & 0 & 1\end{matrix}\right]$$
```python
psym_matrix(R1*T1+R2*T2)
```
$$\left[\begin{matrix}3 & 1 & 1 & 1\\2 & 5 & 1 & 2\\2 & 0 & 1 & -1\\1 & 3 & 0 & 1\end{matrix}\right]$$
```python
psym_matrix(crearMatrix('g').T)
```
$$\left[\begin{matrix}g_{11} & g_{21}\\g_{12} & g_{22}\end{matrix}\right]$$
```python
```
| 27794297f01ebb83276edff152b040c2bd9954bc | 8,967 | ipynb | Jupyter Notebook | 7ex/6.ipynb | TeamProgramming/ITYM | 1169db0238a74a2b2720b9539cb3bbcd16d0b380 | [
"MIT"
]
| null | null | null | 7ex/6.ipynb | TeamProgramming/ITYM | 1169db0238a74a2b2720b9539cb3bbcd16d0b380 | [
"MIT"
]
| null | null | null | 7ex/6.ipynb | TeamProgramming/ITYM | 1169db0238a74a2b2720b9539cb3bbcd16d0b380 | [
"MIT"
]
| null | null | null | 26.6875 | 229 | 0.427122 | true | 1,752 | Qwen/Qwen-72B | 1. YES
2. YES | 0.879147 | 0.785309 | 0.690401 | __label__eng_Latn | 0.186849 | 0.442365 |
```python
import sympy as sp
```
```python
s = sp.Symbol('s')
```
```python
V = sp.Symbol('V')
```
```python
top = (2*V*s)*(1/s+2*s/(4*s**2+1)) - 0
```
```python
bottom = (1/s+(6*s+3)/(3*s+1))*(1/s+2*s/(4*s**2+1)) - (1/s)**2
```
```python
sp.simplify(top/bottom)
```
2*V*s*(18*s**3 + 6*s**2 + 3*s + 1)/(36*s**3 + 24*s**2 + 8*s + 3)
```python
```
| e178a047d949025d989cb87cbd398e8eacfef633 | 1,692 | ipynb | Jupyter Notebook | notebooks/paper_figures/misc/Untitled2.ipynb | fossabot/PyNumDiff | dccad2ad7a875f2ecccb0db2bb6e2afa392916d1 | [
"MIT"
]
| null | null | null | notebooks/paper_figures/misc/Untitled2.ipynb | fossabot/PyNumDiff | dccad2ad7a875f2ecccb0db2bb6e2afa392916d1 | [
"MIT"
]
| null | null | null | notebooks/paper_figures/misc/Untitled2.ipynb | fossabot/PyNumDiff | dccad2ad7a875f2ecccb0db2bb6e2afa392916d1 | [
"MIT"
]
| null | null | null | 17.265306 | 73 | 0.456265 | true | 166 | Qwen/Qwen-72B | 1. YES
2. YES | 0.937211 | 0.822189 | 0.770565 | __label__yue_Hant | 0.176523 | 0.628612 |
```python
from sympy import *
init_printing()
import pandas as pd
import numpy as np
from myhdl import *
from myhdlpeek import *
import random
```
```python
def TruthTabelGenrator(BoolSymFunc):
"""
Function to generate a truth table from a sympy boolian expression
BoolSymFunc: sympy boolian expression
return TT: a Truth table stored in a pandas dataframe
"""
colsL=sorted([i for i in list(BoolSymFunc.rhs.atoms())], key=lambda x:x.sort_key())
colsR=sorted([i for i in list(BoolSymFunc.lhs.atoms())], key=lambda x:x.sort_key())
bitwidth=len(colsL)
cols=colsL+colsR; cols
TT=pd.DataFrame(columns=cols, index=range(2**bitwidth))
for i in range(2**bitwidth):
inputs=[int(j) for j in list(np.binary_repr(i, bitwidth))]
outputs=BoolSymFunc.rhs.subs({j:v for j, v in zip(colsL, inputs)})
inputs.append(int(bool(outputs)))
TT.iloc[i]=inputs
return TT
```
```python
def TTMinMaxAppender(TruthTable):
"""
Function that takes a Truth Table from "TruthTabelGenrator" function
and appends a columns for the Minterm and Maxterm exspersions for each
TruthTable: Truth table from "TruthTabelGenrator"
return TruthTable: truth table with appened min max term exspersions
return SOPTerms: list of Sum of Poroduct terms
return POSTerms: list of Product of Sum Terms
"""
Mmaster=[]; mmaster=[]; SOPTerms=[]; POSTerms=[]
for index, row in TruthTable.iterrows():
if 'm' not in list(row.index):
rowliterals=list(row[:-1].index)
Mm=list(row[:-1])
Mi=[]; mi=[]
for i in range(len(rowliterals)):
if Mm[i]==0:
Mi.append(rowliterals[i])
mi.append(~rowliterals[i])
                elif Mm[i]==1:
                    Mi.append(~rowliterals[i])
                    mi.append(rowliterals[i])
Mi=Or(*Mi, simplify=False); mi=And(*mi)
Mmaster.append(Mi); mmaster.append(mi)
if row[-1]==0:
POSTerms.append(index)
elif row[-1]==1:
SOPTerms.append(index)
else:
if row[-3]==0:
POSTerms.append(index)
elif row[-3]==1:
SOPTerms.append(index)
if 'm' not in list(TruthTable.columns):
TruthTable['m']=mmaster; TruthTable['M']=Mmaster
return TruthTable, SOPTerms, POSTerms
```
```python
termsetBuilder=lambda literalsList: set(list(range(2**len(literalsList))))
```
```python
def POS_SOPformCalcater(literls, SOPlist, POSlist, DC=None):
"""
Wraper function around sympy's SOPform and POSfrom boolian function
genrator from the SOP, POS, DontCar (DC) list
"""
minterms=[]; maxterms=[]
for i in SOPlist:
minterms.append([int(j) for j in list(np.binary_repr(i, len(literls)))])
for i in POSlist:
maxterms.append([int(j) for j in list(np.binary_repr(i, len(literls)))])
if DC!=None:
dontcares=[]
for i in DC:
dontcares.append([int(j) for j in list(np.binary_repr(i, len(literls)))])
DC=dontcares
return simplify(SOPform(literls, minterms, DC)), simplify(POSform(literls, maxterms, DC))
```
```python
def Combo_TB(inputs=[]):
"""
Basic myHDL test bench for simple compintorial logic testing
"""
#the # of inputs contorls everything
Ninputs=len(inputs)
#genrate sequantil number of inputs for comparsion to known
SequntialInputs=np.arange(2**Ninputs)
#run the test for 2^Ninputs Seq and 2^Ninputs randomly =2*2^Ninputs cycles
for t in range(2*2**Ninputs):
#run sequantial
try:
#genrate binary bit repsersintion of current sequantl input
NextSeqInput=np.binary_repr(SequntialInputs[t], width=Ninputs)
#pass each bit into the inputs
for i in range(Ninputs):
inputs[i].next=bool(int(NextSeqInput[i]))
#run the random to cheack for unexsected behavior
except IndexError:
NextRanInput=[random.randint(0,1) for i in range(Ninputs)]
for i in range(Ninputs):
inputs[i].next=NextRanInput[i]
#virtural clock for combo only
yield delay(1)
now()
```
```python
bool(int('0'))
```
False
```python
def VerilogTextReader(loc, printresult=True):
"""
Function that reads in a Verilog file and can print to screen the file
contant
"""
with open(f'{loc}.v', 'r') as vText:
VerilogText=vText.read()
if printresult:
print(f'***Verilog modual from {loc}.v***\n\n', VerilogText)
return VerilogText
```
```python
def VHDLTextReader(loc, printresult=True):
"""
Function that reads in a vhdl file and can print to screen the file
contant
"""
with open(f'{loc}.vhd', 'r') as vhdText:
VHDLText=vhdText.read()
if printresult:
print(f'***VHDL modual from {loc}.vhd***\n\n', VHDLText)
return VHDLText
```
```python
def MakeDFfromPeeker(data):
"""
Helper function to read the Peeker JSON information from a myHDL test bench
simulationn and move the data into a pands dataframe for easer futer parsing
and snyslsisis
(note need to update functionality to read in numericl )
"""
for i in range(len(data['signal'])):
datainstance=data['signal'][i]['wave']
while True:
ith=datainstance.find('.')
if ith==-1:
break
else:
datainstance=datainstance.replace('.', datainstance[ith-1], 1)
data['signal'][i]['wave']=datainstance
DataDF=pd.DataFrame(columns=[i['name'] for i in data['signal']])
for i in data['signal']:
DataDF[i['name']]=list(i['wave'])
return DataDF
```
```python
```
```python
def shannon_exspanson(f, term):
"""
    function to perform Shannon's expansion theorem around `term`
    f is a boolean expression (not a full Eq)
"""
cof0=simplify(f.subs(term, 0)); cof1=simplify(f.subs(term, 1))
return ((~term & cof0 | (term & cof1))), cof0, cof1
```
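A small usage sketch (the expression below is a made-up example): expanding f = (x & y) | ~z around x should return the expansion together with its two cofactors.

```python
from sympy import symbols
x, y, z = symbols('x, y, z')
f = (x & y) | ~z
expansion, cof0, cof1 = shannon_exspanson(f, x)
print(cof0)        # cofactor with x=0; expected: ~z
print(cof1)        # cofactor with x=1; expected: y | ~z
print(expansion)   # expected: (~x & ~z) | (x & (y | ~z))
```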
## Testing cells have been converted to Markdown so as to not clutter the .py file
x_1in, x_2in, x_3in, y_out=symbols('x_1in, x_2in, x_3in, y_out')
AND3Def1=Eq(y_out, x_1in & x_2in & x_3in)
AND3Def2=Eq(y_out, And(x_1in , x_2in, x_3in))
AND3Def1, AND3Def2
F=AND3Def1; F
list(F.rhs.atoms())
colsL=sorted([i for i in list(F.rhs.atoms())], key=lambda x:x.sort_key())
colsR=sorted([i for i in list(F.lhs.atoms())], key=lambda x:x.sort_key())
bitwidth=len(colsL)
cols=colsL+colsR; cols
TT=pd.DataFrame(columns=cols, index=range(2**bitwidth)); TT
for i in range(2**bitwidth):
print([int(i) for i in list(np.binary_repr(i, bitwidth))])
for i in range(2**bitwidth):
inputs=[int(j) for j in list(np.binary_repr(i, bitwidth))]
outputs=F.rhs.subs({j:v for j, v in zip(colsL, inputs)})
inputs.append(int(bool(outputs)))
TT.iloc[i]=inputs
TT
inputs=[0,0,0]
outputs=F.rhs.subs({j:v for j, v in zip(colsL, inputs)})
outputs
TT=TruthTabelGenrator(AND3Def1)
TT
T0=TT.iloc[0]; T0
POS=[]
T0[-1]
if T0[-1]==0:
POS.append(0)
POS
T0literal=list(T0[:-1].index); T0literal
Mm0=list(T0[:-1]); Mm0
M=[]; m=[]
for i in range(len(T0literal)):
if Mm0[i]==0:
M.append(T0literal[i])
m.append(~T0literal[i])
elif Mm0[i]==0:
M.append(T0literal[i])
m.append(~T0literal[i])
M=Or(*M); m=And(*m)
TT=TruthTabelGenrator(AND3Def1)
TT
Taple, SOP, POS=TTMinMaxAppender(TT)
SOP, POS
TT
F, w, x, y, z=symbols('F, w, x, y, z')
Feq=Eq(F,(y&z)|(z&~w)); Feq
FTT=TruthTabelGenrator(Feq)
FTT
_, SOP, POS=TTMinMaxAppender(FTT)
SOP, POS
FTT
for i in SOP:
print([int(j) for j in list(np.binary_repr(i, 4))])
POS_SOPformCalcater([w, y, z], SOP, POS)
SOP
| b256f808ee6c1f533ab5344d8fc0dade41917e51 | 14,770 | ipynb | Jupyter Notebook | myHDL_DigLogicFundamentals/sympy_myhdl_tools.ipynb | PyLCARS/PythonUberHDL | f7ae2293d6efaca7986d62540798cdf061383d06 | [
"BSD-3-Clause"
]
| 31 | 2017-10-09T12:15:14.000Z | 2022-02-28T09:05:21.000Z | myHDL_DigLogicFundamentals/sympy_myhdl_tools.ipynb | cfelton/PythonUberHDL | f7ae2293d6efaca7986d62540798cdf061383d06 | [
"BSD-3-Clause"
]
| null | null | null | myHDL_DigLogicFundamentals/sympy_myhdl_tools.ipynb | cfelton/PythonUberHDL | f7ae2293d6efaca7986d62540798cdf061383d06 | [
"BSD-3-Clause"
]
| 12 | 2018-02-09T15:36:20.000Z | 2021-04-20T21:39:12.000Z | 25.731707 | 101 | 0.488219 | true | 2,307 | Qwen/Qwen-72B | 1. YES
2. YES | 0.771843 | 0.851953 | 0.657574 | __label__eng_Latn | 0.571936 | 0.366096 |
# Programming Assignment
## Naive Bayes and logistic regression
### Instructions
In this notebook, you will write code to develop a Naive Bayes classifier model for the Iris dataset using Distribution objects from TensorFlow Probability. You will also explore the connection between the Naive Bayes classifier and logistic regression.
Some code cells are provided you in the notebook. You should avoid editing provided code, and make sure to execute the cells in order to avoid unexpected errors. Some cells begin with the line:
`#### GRADED CELL ####`
Don't move or edit this first line - this is what the automatic grader looks for to recognise graded cells. These cells require you to write your own code to complete them, and are automatically graded when you submit the notebook. Don't edit the function name or signature provided in these cells, otherwise the automatic grader might not function properly.
### How to submit
Complete all the tasks you are asked for in the worksheet. When you have finished and are happy with your code, press the **Submit Assignment** button at the top of this notebook.
### Let's get started!
We'll start running some imports, and loading the dataset. Do not edit the existing imports in the following cell. If you would like to make further Tensorflow imports, you should add them here.
```python
#### PACKAGE IMPORTS ####
# Run this cell first to import all required packages. Do not make any imports elsewhere in the notebook
import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import accuracy_score
from sklearn import datasets, model_selection
%matplotlib inline
# If you would like to make further imports from TensorFlow or TensorFlow Probability, add them here
```
#### The Iris dataset
In this assignment, you will use the [Iris dataset](https://scikit-learn.org/stable/auto_examples/datasets/plot_iris_dataset.html). It consists of 50 samples from each of three species of Iris (Iris setosa, Iris virginica and Iris versicolor). Four features were measured from each sample: the length and the width of the sepals and petals, in centimeters. For a reference, see the following papers:
- R. A. Fisher. "The use of multiple measurements in taxonomic problems". Annals of Eugenics. 7 (2): 179–188, 1936.
Your goal is to construct a Naive Bayes classifier model that predicts the correct class from the sepal length and sepal width features. Under certain assumptions about this classifier model, you will explore the relation to logistic regression.
#### Load and prepare the data
We will first read in the Iris dataset, and split the dataset into training and test sets.
```python
# Load the dataset
iris = datasets.load_iris()
```
```python
# Use only the first two features: sepal length and width
data = iris.data[:, :2]
targets = iris.target
```
```python
# Randomly shuffle the data and make train and test splits
x_train, x_test, y_train, y_test = model_selection.train_test_split(data, targets, test_size=0.2)
```
```python
# Plot the training data
labels = {0: 'Iris-Setosa', 1: 'Iris-Versicolour', 2: 'Iris-Virginica'}
label_colours = ['blue', 'orange', 'green']
def plot_data(x, y, labels, colours):
for c in np.unique(y):
inx = np.where(y == c)
plt.scatter(x[inx, 0], x[inx, 1], label=labels[c], c=colours[c])
plt.title("Training set")
plt.xlabel("Sepal length (cm)")
plt.ylabel("Sepal width (cm)")
plt.legend()
plt.figure(figsize=(8, 5))
plot_data(x_train, y_train, labels, label_colours)
plt.show()
```
### Naive Bayes classifier
We will briefly review the Naive Bayes classifier model. The fundamental equation for this classifier is Bayes' rule:
$$
P(Y=y_k | X_1,\ldots,X_d) = \frac{P(X_1,\ldots,X_d | Y=y_k)P(Y=y_k)}{\sum_{k=1}^K P(X_1,\ldots,X_d | Y=y_k)P(Y=y_k)}
$$
In the above, $d$ is the number of features or dimensions in the inputs $X$ (in our case $d=2$), and $K$ is the number of classes (in our case $K=3$). The distribution $P(Y)$ is the class prior distribution, which is a discrete distribution over $K$ classes. The distribution $P(X | Y)$ is the class-conditional distribution over inputs.
The Naive Bayes classifier makes the assumption that the data features $X_i$ are conditionally independent give the class $Y$ (the 'naive' assumption). In this case, the class-conditional distribution decomposes as
$$
\begin{align}
P(X | Y=y_k) &= P(X_1,\ldots,X_d | Y=y_k)\\
&= \prod_{i=1}^d P(X_i | Y=y_k)
\end{align}
$$
This simplifying assumption means that we typically need to estimate far fewer parameters for each of the distributions $P(X_i | Y=y_k)$ instead of the full joint distribution $P(X | Y=y_k)$.
Once the class prior distribution and class-conditional densities are estimated, the Naive Bayes classifier model can then make a class prediction $\hat{Y}$ for a new data input $\tilde{X} := (\tilde{X}_1,\ldots,\tilde{X}_d)$ according to
$$
\begin{align}
\hat{Y} &= \text{argmax}_{y_k} P(Y=y_k | \tilde{X}_1,\ldots,\tilde{X}_d) \\
&= \text{argmax}_{y_k}\frac{P(\tilde{X}_1,\ldots,\tilde{X}_d | Y=y_k)P(Y=y_k)}{\sum_{k=1}^K P(\tilde{X}_1,\ldots,\tilde{X}_d | Y=y_k)P(Y=y_k)}\\
&= \text{argmax}_{y_k} P(\tilde{X}_1,\ldots,\tilde{X}_d | Y=y_k)P(Y=y_k)
\end{align}
$$
#### Define the class prior distribution
We will begin by defining the class prior distribution. To do this we will simply take the maximum likelihood estimate, given by
$$
P(Y=y_k) = \frac{\sum_{n=1}^N \delta(Y^{(n)}=y_k)}{N},
$$
where the superscript $(n)$ indicates the $n$-th dataset example, $\delta(Y^{(n)}=y_k) = 1$ if $Y^{(n)}=y_k$ and 0 otherwise, and $N$ is the total number of examples in the dataset. The above is simply the proportion of data examples belonging to class $k$.
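For intuition only (this is just an illustration of the estimate above with made-up labels, not the graded solution, which must return a Distribution object), the estimate is simply the vector of class proportions:

```python
y_example = np.array([0, 0, 1, 2, 2, 2])        # made-up labels with K=3 classes
class_counts = np.bincount(y_example)            # [2, 1, 3]
class_probs = class_counts / len(y_example)      # maximum likelihood estimates of P(Y=y_k)
print(class_probs)                               # approximately [0.333, 0.167, 0.5]
```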
You should now write a function that builds the prior distribution from the training data, and returns it as a `Categorical` Distribution object.
* The input to your function `y` will be a numpy array of shape `(num_samples,)`
* The entries in `y` will be integer labels $k=0, 1,\ldots, K-1$
* Your function should build and return the prior distribution as a `Categorical` distribution object
* The probabilities for this distribution will be a length-$K$ vector, with entries corresponding to $P(Y = y_k)$ for $k=0,1,\ldots,K-1$
* Your function should work for any value of $K\ge 1$
* This Distribution will have an empty batch shape and empty event shape
```python
#### GRADED CELL ####
# Complete the following function.
# Make sure to not change the function name or arguments.
def get_prior(y):
"""
This function takes training labels as a numpy array y of shape (num_samples,) as an input.
    Your function should build a Categorical Distribution object with empty batch shape
and event shape, with the probability of each class given as above.
Your function should return the Distribution object.
"""
```
```python
# Run your function to get the prior
prior = get_prior(y_train)
# prior.prob([1]) + prior.prob([0]) + prior.prob([2])
```
```python
# Plot the prior distribution
labels = ['Iris-Setosa', 'Iris-Versicolour', 'Iris-Virginica']
plt.bar([0, 1, 2], prior.probs.numpy(), color=label_colours)
plt.xlabel("Class")
plt.ylabel("Prior probability")
plt.title("Class prior distribution")
plt.xticks([0, 1, 2], labels)
plt.show()
```
#### Define the class-conditional densities
We now turn to the definition of the class-conditional distributions $P(X_i | Y=y_k)$ for $i=0, 1$ and $k=0, 1, 2$. In our model, we will assume these distributions to be univariate Gaussian:
$$
\begin{align}
P(X_i | Y=y_k) &= N(X_i | \mu_{ik}, \sigma_{ik})\\
&= \frac{1}{\sqrt{2\pi\sigma_{ik}^2}} \exp\left\{-\frac{1}{2} \left(\frac{x - \mu_{ik}}{\sigma_{ik}}\right)^2\right\}
\end{align}
$$
with mean parameters $\mu_{ik}$ and standard deviation parameters $\sigma_{ik}$, twelve parameters in all. We will again estimate these parameters using maximum likelihood. In this case, the estimates are given by
$$
\begin{align}
\hat{\mu}_{ik} &= \frac{\sum_n X_i^{(n)} \delta(Y^{(n)}=y_k)}{\sum_n \delta(Y^{(n)}=y_k)} \\
\hat{\sigma}^2_{ik} &= \frac{\sum_n (X_i^{(n)} - \hat{\mu}_{ik})^2 \delta(Y^{(n)}=y_k)}{\sum_n \delta(Y^{(n)}=y_k)}
\end{align}
$$
Note that the above are just the means and variances of the sample data points for each class.
You should now write a function that computes the class-conditional Gaussian densities, using the maximum likelihood parameter estimates given above, and returns them in a single, batched `MultivariateNormalDiag` Distribution object.
* The inputs to the function are
* a numpy array `x` of shape `(num_samples, num_features)` for the data inputs
* a numpy array `y` of shape `(num_samples,)` for the target labels
* Your function should work for any number of classes $K\ge 1$ and any number of features $d\ge 1$
```python
#### GRADED CELL ####
# Complete the following function.
# Make sure to not change the function name or arguments.
def get_class_conditionals(x, y):
"""
This function takes training data samples x and labels y as inputs.
This function should build the class-conditional Gaussian distributions above.
It should construct a batch of distributions for each feature and each class, using the
parameter estimates above for the means and standard deviations.
The batch shape of this distribution should be rank 2, where the first dimension corresponds
to the number of classes and the second corresponds to the number of features.
Your function should then return the Distribution object.
"""
```
```python
# Run your function to get the class-conditional distributions
class_conditionals = get_class_conditionals(x_train, y_train)
class_conditionals
```
<tfp.distributions.MultivariateNormalDiag 'MultivariateNormalDiag' batch_shape=[3] event_shape=[2] dtype=float32>
```python
# Random cell to verify some outputs
n_classes = np.unique(y_train)
n_features = x_train.shape[1]
print(n_classes)
means = np.zeros((n_classes.shape[0],n_features))
print(means)
variances = np.zeros((n_classes.shape[0],n_features))
for i, n_class in enumerate(n_classes):
means[i] = x_train[y_train == n_class].mean(axis = 0, keepdims = True)
variances[i] = x_train[y_train == n_class].var(axis = 0, keepdims = True)
print(np.sqrt(variances))
dist = tfd.MultivariateNormalDiag(loc = means.T,
scale_diag = variances.T)
dist
```
[0 1 2]
[[0. 0.]
[0. 0.]
[0. 0.]]
[[0.35776423 0.3668604 ]
[0.51815126 0.29051522]
[0.56943066 0.32364794]]
<tfp.distributions.MultivariateNormalDiag 'MultivariateNormalDiag' batch_shape=[2] event_shape=[3] dtype=float64>
```python
## Random cell to verify some outputs
locs = np.array([
[[2],[3]],
[[3],[4]],
[[5],[6]]
], dtype = 'float32')
scale_diags = np.array([
[[1],[2]],
[[3],[4]],
[[5],[6]]
], dtype = 'float32')
MVND = tfd.MultivariateNormalDiag(loc = locs,
scale_diag = scale_diags)
MVND
```
<tfp.distributions.MultivariateNormalDiag 'MultivariateNormalDiag' batch_shape=[3, 2] event_shape=[1] dtype=float32>
We can visualise the class-conditional densities with contour plots by running the cell below. Notice how the contours of each distribution correspond to a Gaussian distribution with diagonal covariance matrix, since the model assumes that each feature is independent given the class.
```python
# Plot the training data with the class-conditional density contours
def get_meshgrid(x0_range, x1_range, num_points=100):
x0 = np.linspace(x0_range[0], x0_range[1], num_points)
x1 = np.linspace(x1_range[0], x1_range[1], num_points)
return np.meshgrid(x0, x1)
def contour_plot(x0_range, x1_range, prob_fn, batch_shape, colours, levels=None, num_points=100):
X0, X1 = get_meshgrid(x0_range, x1_range, num_points=num_points)
Z = prob_fn(np.expand_dims(np.array([X0.ravel(), X1.ravel()]).T, 1))
Z = np.array(Z).T.reshape(batch_shape, *X0.shape)
for batch in np.arange(batch_shape):
if levels:
plt.contourf(X0, X1, Z[batch], alpha=0.2, colors=colours, levels=levels)
else:
plt.contour(X0, X1, Z[batch], colors=colours[batch], alpha=0.3)
plt.figure(figsize=(10, 6))
plot_data(x_train, y_train, labels, label_colours)
x0_min, x0_max = x_train[:, 0].min(), x_train[:, 0].max()
x1_min, x1_max = x_train[:, 1].min(), x_train[:, 1].max()
contour_plot((x0_min, x0_max), (x1_min, x1_max), class_conditionals.prob, 3, label_colours)
plt.title("Training set with class-conditional density contours")
plt.show()
```
#### Make predictions from the model
Now the prior and class-conditional distributions are defined, you can use them to compute the model's class probability predictions for an unknown test input $\tilde{X} = (\tilde{X}_1,\ldots,\tilde{X}_d)$, according to
$$
P(Y=y_k | \tilde{X}_1,\ldots,\tilde{X}_d) = \frac{P(\tilde{X}_1,\ldots,\tilde{X}_d | Y=y_k)P(Y=y_k)}{\sum_{k=1}^K P(\tilde{X}_1,\ldots,\tilde{X}_d | Y=y_k)P(Y=y_k)}
$$
The class prediction can then be taken as the class with the maximum probability:
$$
\hat{Y} = \text{argmax}_{y_k} P(Y=y_k | \tilde{X}_1,\ldots,\tilde{X}_d)
$$
You should now write a function to return the model's class probabilities for a given batch of test inputs of shape `(batch_shape, 2)`, where the `batch_shape` has rank at least one.
* The inputs to the function are the `prior` and `class_conditionals` distributions, and the inputs `x`
* Your function should use these distributions to compute the probabilities for each class $k$ as above
* As before, your function should work for any number of classes $K\ge 1$
* It should then compute the prediction by taking the class with the highest probability
* The predictions should be returned in a numpy array of shape `(batch_shape)`
```python
#### GRADED CELL ####
# Complete the following function.
# Make sure to not change the function name or arguments.
def predict_class(prior, class_conditionals, x):
    """
    This function takes the prior distribution, class-conditional distribution, and
    a batch of inputs in a numpy array of shape (batch_shape, 2).
    This function should compute the class probabilities for each input in the batch, using
    the prior and class-conditional distributions, according to the above equation.
    Note that the batch_shape of x could have rank higher than one!
    Your function should then return the class predictions by taking the class with the
    maximum probability in a numpy array of shape (batch_shape,).
    """
    # Work in log space: log prior + class-conditional log likelihood for each class
    log_prior = np.log(prior.probs)
    log_likelihood = class_conditionals.log_prob(x[..., np.newaxis, :])
    joint = tf.add(log_prior, log_likelihood)
    # Normalising is not needed for the argmax itself, but yields proper log posteriors
    log_posterior = joint - tf.math.reduce_logsumexp(joint, axis=-1, keepdims=True)
    return tf.argmax(log_posterior, axis=-1).numpy()
```
```python
## Coding cell to test the codes
prior = get_prior(y_train)
prior_logs = np.log(prior.probs)
cond_probs = class_conditionals.log_prob(x_test[:,None,:])
joint_likelihood = tf.add(prior_logs, cond_probs)
norm_factor = tf.math.reduce_logsumexp(joint_likelihood, axis = -1, keepdims = True)
log_prob = joint_likelihood - norm_factor
tf.argmax(log_prob, axis = -1).numpy()
```
array([1, 2, 0, 1, 2, 2, 0, 2, 2, 0, 0, 1, 1, 1, 0, 1, 0, 1, 2, 1, 0, 1,
0, 1, 2, 2, 2, 2, 2, 0])
```python
# Get the class predictions
predictions = predict_class(prior, class_conditionals, x_test)
predictions
```
array([1, 2, 0, 1, 2, 2, 0, 2, 2, 0, 0, 1, 1, 1, 0, 1, 0, 1, 2, 1, 0, 1,
0, 1, 2, 2, 2, 2, 2, 0])
```python
# Evaluate the model accuracy on the test set
accuracy = accuracy_score(y_test, predictions)
print("Test accuracy: {:.4f}".format(accuracy))
```
Test accuracy: 0.6667
```python
# Plot the model's decision regions
plt.figure(figsize=(10, 6))
plot_data(x_train, y_train, labels, label_colours)
x0_min, x0_max = x_train[:, 0].min(), x_train[:, 0].max()
x1_min, x1_max = x_train[:, 1].min(), x_train[:, 1].max()
contour_plot((x0_min, x0_max), (x1_min, x1_max),
lambda x: predict_class(prior, class_conditionals, x),
1, label_colours, levels=[-0.5, 0.5, 1.5, 2.5],
num_points=500)
plt.title("Training set with decision regions")
plt.show()
```
### Binary classifier
We will now draw a connection between the Naive Bayes classifier and logistic regression.
First, we will update our model to be a binary classifier. In particular, the model will output the probability that a given input data sample belongs to the 'Iris-Setosa' class: $P(Y=y_0 | \tilde{X}_1,\ldots,\tilde{X}_d)$. The remaining two classes will be pooled together with the label $y_1$.
```python
# Redefine the dataset to have binary labels
y_train_binary = np.array(y_train)
y_train_binary[np.where(y_train_binary == 2)] = 1
y_test_binary = np.array(y_test)
y_test_binary[np.where(y_test_binary == 2)] = 1
```
```python
# Plot the training data
labels_binary = {0: 'Iris-Setosa', 1: 'Iris-Versicolour / Iris-Virginica'}
label_colours_binary = ['blue', 'red']
plt.figure(figsize=(8, 5))
plot_data(x_train, y_train_binary, labels_binary, label_colours_binary)
plt.show()
```
We will also make an extra modelling assumption that for each class $k$, the class-conditional distribution $P(X_i | Y=y_k)$ for each feature $i=0, 1$, has standard deviation $\sigma_i$, which is the same for each class $k$.
This means there are now six parameters in total: four for the means $\mu_{ik}$ and two for the standard deviations $\sigma_i$ ($i, k=0, 1$).
We will again use maximum likelihood to estimate these parameters. The prior distribution will be as before, with the class prior probabilities given by
$$
P(Y=y_k) = \frac{\sum_{n=1}^N \delta(Y^{(n)}=y_k)}{N},
$$
We will use your previous function `get_prior` to redefine the prior distribution.
```python
# Redefine the prior
prior_binary = get_prior(y_train_binary)
```
```python
# Plot the prior distribution
plt.bar([0, 1], prior_binary.probs.numpy(), color=label_colours_binary)
plt.xlabel("Class")
plt.ylabel("Prior probability")
plt.title("Class prior distribution")
plt.xticks([0, 1], labels_binary)
plt.show()
```
For the class-conditional densities, the maximum likelihood estimate for the means are again given by
$$
\hat{\mu}_{ik} = \frac{\sum_n X_i^{(n)} \delta(Y^{(n)}=y_k)}{\sum_n \delta(Y^{(n)}=y_k)} \\
$$
However, the estimate for the standard deviations $\sigma_i$ is updated. There is also a closed-form solution for the shared standard deviations, but we will instead learn these from the data.
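For reference, a brief aside on the closed-form solution we are skipping: assuming the usual pooled maximum likelihood estimator for a shared diagonal covariance, the estimate pools each feature's squared deviations around its own class mean,
$$
\hat{\sigma}_i^2 = \frac{1}{N}\sum_{n=1}^{N}\left(X_i^{(n)} - \hat{\mu}_{i\,y^{(n)}}\right)^2 .
$$
The gradient-based estimates learned below should converge to approximately these values.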
You should now write a function that takes the training inputs and target labels as input, as well as an optimizer object, number of epochs and a TensorFlow Variable. This function should be written according to the following spec:
* The inputs to the function are:
* a numpy array `x` of shape `(num_samples, num_features)` for the data inputs
* a numpy array `y` of shape `(num_samples,)` for the target labels
* a `tf.Variable` object `scales` of length 2 for the standard deviations $\sigma_i$
* `optimiser`: an optimiser object
* `epochs`: the number of epochs to run the training for
* The function should first compute the means $\mu_{ik}$ of the class-conditional Gaussians according to the above equation
* Then create a batched multivariate Gaussian distribution object using `MultivariateNormalDiag` with the means set to $\mu_{ik}$ and the scales set to `scales`
* Run a custom training loop for `epochs` number of epochs, in which:
* the average per-example negative log likelihood for the whole dataset is computed as the loss
* the gradient of the loss with respect to the `scales` variables is computed
* the `scales` variables are updated by the `optimiser` object
* At each iteration, save the values of the `scales` variable and the loss
* The function should return a tuple of three objects:
* a numpy array of shape `(epochs,)` of loss values
* a numpy array of shape `(epochs, 2)` of values for the `scales` variable at each iteration
* the final learned batched `MultivariateNormalDiag` distribution object
_NB: ideally, we would like to constrain the `scales` variable to have positive values. We are not doing that here, but in later weeks of the course you will learn how this can be implemented._
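As an aside (not required for this assignment), one common way to keep the scales positive is to store an unconstrained variable and read it through a positivity-enforcing bijector. The sketch below is only an illustration of that idea using `tfp.util.TransformedVariable` with a `Softplus` bijector; it is not necessarily the approach used later in the course, and the initial values are arbitrary.
```python
import tensorflow_probability as tfp
tfd = tfp.distributions
tfb = tfp.bijectors

# The raw variable is unconstrained; reading it applies softplus, so the value is always > 0
positive_scales = tfp.util.TransformedVariable(
    initial_value=[1., 1.], bijector=tfb.Softplus(), name='scales')

dist = tfd.MultivariateNormalDiag(loc=[[0., 0.], [1., 1.]],
                                  scale_diag=positive_scales)
print(dist.stddev())  # strictly positive, even if an optimiser pushes the raw variable negative
```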
```python
#### GRADED CELL ####
# Complete the following function.
# Make sure to not change the function name or arguments.
def learn_stdevs(x, y, scales, optimiser, epochs):
    """
    This function takes the data inputs, targets, scales variable, optimiser and number of
    epochs as inputs.
    This function should set up and run a custom training loop according to the above
    specifications, by setting up the class conditional distributions as a MultivariateNormalDiag
    object, and updating the trainable variables (the scales) in a custom training loop.
    Your function should then return a tuple of three elements: a numpy array of loss values
    during training, a numpy array of scales variables during training, and the final learned
    MultivariateNormalDiag distribution object.
    """
    train_loss_results, train_scales_results = [], []
    n_classes = np.unique(y)  # unique class labels present in y
    n_features = x.shape[1]

    # Maximum likelihood estimates of the class-conditional means
    means = np.zeros((n_classes.shape[0], n_features))
    for i, n_class in enumerate(n_classes):
        means[i] = x[y == n_class].mean(axis=0, keepdims=True)
    means = tf.constant(means, dtype=tf.float32)

    # Batched Gaussian with fixed means and trainable (shared) scales
    MVND = tfd.MultivariateNormalDiag(loc=means, scale_diag=scales)
    x = tf.constant(x, dtype=tf.float32)

    def nll(x, y, distribution):
        # Negative log likelihood of each example under its own class distribution
        predictions = - distribution.log_prob(x[:, None, :])
        p1 = tf.reduce_sum(predictions[y == 0][:, 0])
        p2 = tf.reduce_sum(predictions[y == 1][:, 1])
        return p1 + p2

    @tf.function
    def get_loss_and_grads(x, y, distribution):
        with tf.GradientTape() as tape:
            tape.watch(distribution.trainable_variables)  # watch the variables we take gradients with respect to
            loss = nll(x, y, distribution)  # compute the loss
        grads = tape.gradient(loss, distribution.trainable_variables)  # compute the gradient
        return loss, grads

    for i in range(epochs):
        loss, grads = get_loss_and_grads(x, y, MVND)
        optimiser.apply_gradients(zip(grads, MVND.trainable_variables))
        scales_value = scales.value().numpy()
        train_loss_results.append(loss)
        train_scales_results.append(scales_value)
        print(f"Step {i:03d}: Loss: {loss:.3f}: Scales: {scales_value}.")

    return np.array(train_loss_results), np.array(train_scales_results), MVND
```
```python
## Coding cell to see if code works
def nll(x, y, distribution):
predictions = - distribution.log_prob(x[:,None,:])
p1 = tf.reduce_sum(predictions[y==0][:,0])
p2 = tf.reduce_sum(predictions[y==1][:,1])
return p1 + p2
@tf.function
def get_loss_and_grads(x,y, distribution):
with tf.GradientTape() as tape:
tape.watch(distribution.trainable_variables) # watch the variables we take gradients with respect to
loss = nll(x,y, distribution) # Get the loss
grads = tape.gradient(loss, distribution.trainable_variables) # Calculate the gradient
return loss, grads
n_classes = np.unique(y_train_binary)
n_features = x_train.shape[1]
means = np.zeros((n_classes.shape[0],n_features))
for i, n_class in enumerate(n_classes):
means[i] = x_train[y_train_binary == n_class].mean(axis = 0, keepdims = True)
means = tf.constant(means, dtype = tf.float32)
scales = tf.Variable([1., 1.])
distribution = tfd.MultivariateNormalDiag(loc = means, scale_diag = scales)
x = tf.constant(x_train, dtype = tf.float32)
for i in range(500):
loss, grads = get_loss_and_grads(x,y_train_binary, distribution)
opt.apply_gradients(zip(grads, distribution.trainable_variables))
scales_value = scales.value().numpy()
# train_loss_results.append(loss)
# train_scales_results.append(scales_value)
print(f"Step {i:03d}: Loss: {loss:.3f}: Scales: {scales_value}.")
```
```python
# Define the inputs to your function
scales = tf.Variable([1., 1.])
opt = tf.keras.optimizers.Adam(learning_rate=0.01)
epochs = 500
```
```python
# Run your function to learn the class-conditional standard deviations
nlls, scales_arr, class_conditionals_binary = learn_stdevs(x_train, y_train_binary, scales, opt, epochs)
```
Step 000: Loss: 246.750: Scales: [0.99 0.99].
Step 001: Loss: 244.870: Scales: [0.9799999 0.9799982].
Step 002: Loss: 242.982: Scales: [0.9699997 0.96999323].
Step 003: Loss: 241.085: Scales: [0.9599995 0.959984 ].
Step 004: Loss: 239.180: Scales: [0.9499995 0.9499693].
Step 005: Loss: 237.268: Scales: [0.94 0.9399478].
Step 006: Loss: 235.347: Scales: [0.93000144 0.92991835].
Step 007: Loss: 233.418: Scales: [0.9200043 0.9198798].
Step 008: Loss: 231.481: Scales: [0.9100094 0.909831 ].
Step 009: Loss: 229.537: Scales: [0.9000174 0.8997708].
Step 010: Loss: 227.586: Scales: [0.8900294 0.889698 ].
Step 011: Loss: 225.628: Scales: [0.8800465 0.8796117].
Step 012: Loss: 223.663: Scales: [0.87007016 0.86951065].
Step 013: Loss: 221.693: Scales: [0.8601019 0.85939395].
Step 014: Loss: 219.716: Scales: [0.8501435 0.8492606].
Step 015: Loss: 217.735: Scales: [0.840197 0.8391097].
Step 016: Loss: 215.749: Scales: [0.83026475 0.82894033].
Step 017: Loss: 213.759: Scales: [0.82034934 0.8187516 ].
Step 018: Loss: 211.765: Scales: [0.8104538 0.8085427].
Step 019: Loss: 209.770: Scales: [0.8005813 0.798313 ].
Step 020: Loss: 207.773: Scales: [0.79073554 0.7880618 ].
Step 021: Loss: 205.776: Scales: [0.7809207 0.77778846].
Step 022: Loss: 203.779: Scales: [0.77114123 0.7674925 ].
Step 023: Loss: 201.784: Scales: [0.76140225 0.75717336].
Step 024: Loss: 199.793: Scales: [0.75170934 0.7468307 ].
Step 025: Loss: 197.805: Scales: [0.74206877 0.73646426].
Step 026: Loss: 195.824: Scales: [0.7324874 0.7260738].
Step 027: Loss: 193.850: Scales: [0.72297275 0.71565926].
Step 028: Loss: 191.885: Scales: [0.71353316 0.7052206 ].
Step 029: Loss: 189.931: Scales: [0.7041779 0.694758 ].
Step 030: Loss: 187.990: Scales: [0.6949171 0.6842717].
Step 031: Loss: 186.064: Scales: [0.68576175 0.67376214].
Step 032: Loss: 184.155: Scales: [0.676724 0.66322994].
Step 033: Loss: 182.264: Scales: [0.6678172 0.6526758].
Step 034: Loss: 180.395: Scales: [0.6590556 0.64210075].
Step 035: Loss: 178.549: Scales: [0.650455 0.63150597].
Step 036: Loss: 176.729: Scales: [0.6420323 0.6208929].
Step 037: Loss: 174.936: Scales: [0.63380575 0.6102632 ].
Step 038: Loss: 173.173: Scales: [0.62579477 0.5996191 ].
Step 039: Loss: 171.441: Scales: [0.6180201 0.5889628].
Step 040: Loss: 169.743: Scales: [0.6105036 0.5782972].
Step 041: Loss: 168.081: Scales: [0.603268 0.5676255].
Step 042: Loss: 166.455: Scales: [0.5963369 0.5569515].
Step 043: Loss: 164.867: Scales: [0.5897342 0.5462796].
Step 044: Loss: 163.319: Scales: [0.5834838 0.53561485].
Step 045: Loss: 161.810: Scales: [0.5776093 0.524963 ].
Step 046: Loss: 160.342: Scales: [0.5721331 0.5143308].
Step 047: Loss: 158.916: Scales: [0.56707615 0.50372595].
Step 048: Loss: 157.532: Scales: [0.5624568 0.49315727].
Step 049: Loss: 156.191: Scales: [0.55829054 0.482635 ].
Step 050: Loss: 154.895: Scales: [0.5545891 0.47217083].
Step 051: Loss: 153.645: Scales: [0.5513598 0.46177828].
Step 052: Loss: 152.443: Scales: [0.5486052 0.45147288].
Step 053: Loss: 151.294: Scales: [0.5463228 0.44127247].
Step 054: Loss: 150.200: Scales: [0.54450464 0.43119755].
Step 055: Loss: 149.168: Scales: [0.5431374 0.42127168].
Step 056: Loss: 148.203: Scales: [0.5422027 0.4115218].
Step 057: Loss: 147.314: Scales: [0.54167753 0.40197864].
Step 058: Loss: 146.507: Scales: [0.5415346 0.39267722].
Step 059: Loss: 145.789: Scales: [0.5417431 0.38365704].
Step 060: Loss: 145.170: Scales: [0.5422694 0.3749624].
Step 061: Loss: 144.654: Scales: [0.5430778 0.36664233].
Step 062: Loss: 144.245: Scales: [0.54413146 0.35875043].
Step 063: Loss: 143.946: Scales: [0.5453927 0.35134405].
Step 064: Loss: 143.753: Scales: [0.54682416 0.3444829 ].
Step 065: Loss: 143.659: Scales: [0.5483892 0.33822718].
Step 066: Loss: 143.654: Scales: [0.55005246 0.33263466].
Step 067: Loss: 143.719: Scales: [0.55178034 0.3277574 ].
Step 068: Loss: 143.836: Scales: [0.5535415 0.32363778].
Step 069: Loss: 143.980: Scales: [0.5553068 0.32030493].
Step 070: Loss: 144.130: Scales: [0.55705 0.31777135].
Step 071: Loss: 144.264: Scales: [0.5587476 0.3160309].
Step 072: Loss: 144.365: Scales: [0.560379 0.3150581].
Step 073: Loss: 144.425: Scales: [0.56192666 0.31480917].
Step 074: Loss: 144.437: Scales: [0.56337565 0.31522402].
Step 075: Loss: 144.406: Scales: [0.56471413 0.3162297 ].
Step 076: Loss: 144.337: Scales: [0.5659329 0.31774402].
Step 077: Loss: 144.241: Scales: [0.56702536 0.31967944].
Step 078: Loss: 144.128: Scales: [0.56798726 0.3219466 ].
Step 079: Loss: 144.011: Scales: [0.5688167 0.32445747].
Step 080: Loss: 143.897: Scales: [0.5695138 0.32712793].
Step 081: Loss: 143.795: Scales: [0.57008046 0.32987973].
Step 082: Loss: 143.709: Scales: [0.57052034 0.332642 ].
Step 083: Loss: 143.641: Scales: [0.5708386 0.33535203].
Step 084: Loss: 143.591: Scales: [0.5710414 0.33795598].
Step 085: Loss: 143.559: Scales: [0.5711363 0.3404088].
Step 086: Loss: 143.541: Scales: [0.5711315 0.34267417].
Step 087: Loss: 143.536: Scales: [0.57103604 0.34472406].
Step 088: Loss: 143.540: Scales: [0.5708594 0.34653822].
Step 089: Loss: 143.551: Scales: [0.5706113 0.34810352].
Step 090: Loss: 143.564: Scales: [0.57030183 0.34941334].
Step 091: Loss: 143.579: Scales: [0.569941 0.35046673].
Step 092: Loss: 143.592: Scales: [0.56953883 0.35126778].
Step 093: Loss: 143.604: Scales: [0.569105 0.35182494].
Step 094: Loss: 143.612: Scales: [0.56864905 0.35215026].
Step 095: Loss: 143.616: Scales: [0.56817985 0.35225895].
Step 096: Loss: 143.617: Scales: [0.5677058 0.35216865].
Step 097: Loss: 143.615: Scales: [0.5672348 0.351899 ].
Step 098: Loss: 143.609: Scales: [0.566774 0.35147113].
Step 099: Loss: 143.601: Scales: [0.56632984 0.35090715].
Step 100: Loss: 143.591: Scales: [0.5659079 0.35022986].
Step 101: Loss: 143.580: Scales: [0.565513 0.34946218].
Step 102: Loss: 143.569: Scales: [0.56514925 0.34862694].
Step 103: Loss: 143.559: Scales: [0.56481975 0.34774643].
Step 104: Loss: 143.549: Scales: [0.5645269 0.3468421].
Step 105: Loss: 143.541: Scales: [0.5642723 0.34593424].
Step 106: Loss: 143.534: Scales: [0.56405675 0.34504172].
Step 107: Loss: 143.529: Scales: [0.5638804 0.34418175].
Step 108: Loss: 143.526: Scales: [0.56374264 0.34336954].
Step 109: Loss: 143.524: Scales: [0.56364244 0.34261832].
Step 110: Loss: 143.524: Scales: [0.563578 0.34193894].
Step 111: Loss: 143.524: Scales: [0.56354725 0.34134 ].
Step 112: Loss: 143.526: Scales: [0.5635477 0.34082767].
Step 113: Loss: 143.527: Scales: [0.56357634 0.3404058 ].
Step 114: Loss: 143.529: Scales: [0.56363016 0.34007582].
Step 115: Loss: 143.531: Scales: [0.5637059 0.3398371].
Step 116: Loss: 143.532: Scales: [0.5638002 0.33968687].
Step 117: Loss: 143.533: Scales: [0.56390965 0.3396206 ].
Step 118: Loss: 143.533: Scales: [0.5640309 0.33963212].
Step 119: Loss: 143.533: Scales: [0.5641606 0.33971402].
Step 120: Loss: 143.532: Scales: [0.56429565 0.33985785].
Step 121: Loss: 143.531: Scales: [0.5644331 0.34005442].
Step 122: Loss: 143.530: Scales: [0.5645702 0.3402941].
Step 123: Loss: 143.529: Scales: [0.5647045 0.34056705].
Step 124: Loss: 143.527: Scales: [0.5648337 0.3408636].
Step 125: Loss: 143.526: Scales: [0.56495595 0.3411743 ].
Step 126: Loss: 143.525: Scales: [0.5650696 0.34149036].
Step 127: Loss: 143.524: Scales: [0.5651734 0.34180355].
Step 128: Loss: 143.524: Scales: [0.5652662 0.34210652].
Step 129: Loss: 143.523: Scales: [0.5653474 0.34239286].
Step 130: Loss: 143.523: Scales: [0.5654164 0.34265712].
Step 131: Loss: 143.523: Scales: [0.56547314 0.34289494].
Step 132: Loss: 143.523: Scales: [0.56551766 0.343103 ].
Step 133: Loss: 143.523: Scales: [0.56555027 0.34327897].
Step 134: Loss: 143.524: Scales: [0.56557137 0.34342158].
Step 135: Loss: 143.524: Scales: [0.5655817 0.3435304].
Step 136: Loss: 143.524: Scales: [0.56558204 0.3436059 ].
Step 137: Loss: 143.524: Scales: [0.56557333 0.34364936].
Step 138: Loss: 143.524: Scales: [0.5655565 0.34366268].
Step 139: Loss: 143.524: Scales: [0.56553274 0.34364834].
Step 140: Loss: 143.524: Scales: [0.5655031 0.34360927].
Step 141: Loss: 143.524: Scales: [0.5654687 0.34354874].
Step 142: Loss: 143.524: Scales: [0.56543076 0.34347025].
Step 143: Loss: 143.524: Scales: [0.5653902 0.34337747].
Step 144: Loss: 143.524: Scales: [0.5653482 0.34327406].
Step 145: Loss: 143.523: Scales: [0.5653057 0.34316364].
Step 146: Loss: 143.523: Scales: [0.56526357 0.3430497 ].
Step 147: Loss: 143.523: Scales: [0.5652227 0.3429355].
Step 148: Loss: 143.523: Scales: [0.5651837 0.34282398].
Step 149: Loss: 143.523: Scales: [0.5651472 0.34271783].
Step 150: Loss: 143.523: Scales: [0.5651138 0.34261927].
Step 151: Loss: 143.523: Scales: [0.5650838 0.34253022].
Step 152: Loss: 143.523: Scales: [0.5650575 0.3424521].
Step 153: Loss: 143.523: Scales: [0.56503516 0.342386 ].
Step 154: Loss: 143.523: Scales: [0.56501687 0.3423325 ].
Step 155: Loss: 143.523: Scales: [0.56500256 0.3422919 ].
Step 156: Loss: 143.523: Scales: [0.56499225 0.34226406].
Step 157: Loss: 143.523: Scales: [0.5649857 0.34224853].
Step 158: Loss: 143.523: Scales: [0.5649827 0.34224457].
Step 159: Loss: 143.523: Scales: [0.56498307 0.3422512 ].
Step 160: Loss: 143.523: Scales: [0.5649864 0.34226727].
Step 161: Loss: 143.523: Scales: [0.56499237 0.34229144].
Step 162: Loss: 143.523: Scales: [0.5650006 0.3423223].
Step 163: Loss: 143.523: Scales: [0.56501067 0.34235832].
Step 164: Loss: 143.523: Scales: [0.56502223 0.34239808].
Step 165: Loss: 143.523: Scales: [0.56503487 0.3424401 ].
Step 166: Loss: 143.523: Scales: [0.5650482 0.342483 ].
Step 167: Loss: 143.523: Scales: [0.5650619 0.34252554].
Step 168: Loss: 143.523: Scales: [0.5650757 0.34256652].
Step 169: Loss: 143.523: Scales: [0.56508917 0.34260496].
Step 170: Loss: 143.523: Scales: [0.5651021 0.34264 ].
Step 171: Loss: 143.523: Scales: [0.56511426 0.342671 ].
Step 172: Loss: 143.523: Scales: [0.5651254 0.34269747].
Step 173: Loss: 143.523: Scales: [0.5651355 0.34271908].
Step 174: Loss: 143.523: Scales: [0.5651443 0.34273568].
Step 175: Loss: 143.523: Scales: [0.56515175 0.34274727].
Step 176: Loss: 143.523: Scales: [0.5651579 0.34275398].
Step 177: Loss: 143.523: Scales: [0.56516266 0.3427561 ].
Step 178: Loss: 143.523: Scales: [0.56516606 0.34275398].
Step 179: Loss: 143.523: Scales: [0.56516814 0.34274808].
Step 180: Loss: 143.523: Scales: [0.56516904 0.34273896].
Step 181: Loss: 143.523: Scales: [0.5651688 0.34272715].
Step 182: Loss: 143.523: Scales: [0.56516755 0.34271327].
Step 183: Loss: 143.523: Scales: [0.5651654 0.34269792].
Step 184: Loss: 143.523: Scales: [0.5651625 0.34268168].
Step 185: Loss: 143.523: Scales: [0.5651589 0.3426651].
Step 186: Loss: 143.523: Scales: [0.56515485 0.34264874].
Step 187: Loss: 143.523: Scales: [0.5651505 0.34263304].
Step 188: Loss: 143.523: Scales: [0.5651459 0.3426184].
Step 189: Loss: 143.523: Scales: [0.56514126 0.34260514].
Step 190: Loss: 143.523: Scales: [0.5651367 0.34259355].
Step 191: Loss: 143.523: Scales: [0.5651322 0.3425838].
Step 192: Loss: 143.523: Scales: [0.56512797 0.34257603].
Step 193: Loss: 143.523: Scales: [0.56512403 0.34257028].
Step 194: Loss: 143.523: Scales: [0.56512046 0.34256652].
Step 195: Loss: 143.523: Scales: [0.5651173 0.34256467].
Step 196: Loss: 143.523: Scales: [0.5651146 0.3425646].
Step 197: Loss: 143.523: Scales: [0.5651124 0.3425662].
Step 198: Loss: 143.523: Scales: [0.5651107 0.3425692].
Step 199: Loss: 143.523: Scales: [0.56510943 0.3425734 ].
Step 200: Loss: 143.523: Scales: [0.56510866 0.34257856].
Step 201: Loss: 143.523: Scales: [0.5651083 0.3425844].
Step 202: Loss: 143.523: Scales: [0.56510836 0.34259072].
Step 203: Loss: 143.523: Scales: [0.5651088 0.34259725].
Step 204: Loss: 143.523: Scales: [0.56510955 0.34260377].
Step 205: Loss: 143.523: Scales: [0.56511056 0.3426101 ].
Step 206: Loss: 143.523: Scales: [0.5651118 0.34261602].
Step 207: Loss: 143.523: Scales: [0.56511325 0.34262142].
Step 208: Loss: 143.523: Scales: [0.5651148 0.34262615].
Step 209: Loss: 143.523: Scales: [0.5651164 0.34263015].
Step 210: Loss: 143.523: Scales: [0.565118 0.34263334].
Step 211: Loss: 143.523: Scales: [0.5651196 0.34263572].
Step 212: Loss: 143.523: Scales: [0.5651212 0.34263727].
Step 213: Loss: 143.523: Scales: [0.5651226 0.34263805].
Step 214: Loss: 143.523: Scales: [0.5651239 0.34263808].
Step 215: Loss: 143.523: Scales: [0.5651251 0.34263745].
Step 216: Loss: 143.523: Scales: [0.5651261 0.34263623].
Step 217: Loss: 143.523: Scales: [0.565127 0.34263453].
Step 218: Loss: 143.523: Scales: [0.56512773 0.34263244].
Step 219: Loss: 143.523: Scales: [0.56512827 0.34263006].
Step 220: Loss: 143.523: Scales: [0.5651286 0.3426275].
Step 221: Loss: 143.523: Scales: [0.5651288 0.34262484].
Step 222: Loss: 143.523: Scales: [0.56512886 0.34262222].
Step 223: Loss: 143.523: Scales: [0.56512874 0.3426197 ].
Step 224: Loss: 143.523: Scales: [0.5651285 0.34261733].
Step 225: Loss: 143.523: Scales: [0.56512815 0.34261522].
Step 226: Loss: 143.523: Scales: [0.56512773 0.34261337].
Step 227: Loss: 143.523: Scales: [0.56512725 0.34261185].
Step 228: Loss: 143.523: Scales: [0.5651267 0.34261066].
Step 229: Loss: 143.523: Scales: [0.5651261 0.3426098].
Step 230: Loss: 143.523: Scales: [0.5651255 0.3426093].
Step 231: Loss: 143.523: Scales: [0.5651249 0.34260908].
Step 232: Loss: 143.523: Scales: [0.5651244 0.34260917].
Step 233: Loss: 143.523: Scales: [0.56512386 0.34260952].
Step 234: Loss: 143.523: Scales: [0.5651234 0.34261012].
Step 235: Loss: 143.523: Scales: [0.56512296 0.3426109 ].
Step 236: Loss: 143.523: Scales: [0.5651226 0.34261182].
Step 237: Loss: 143.523: Scales: [0.5651223 0.34261283].
Step 238: Loss: 143.523: Scales: [0.56512207 0.3426139 ].
Step 239: Loss: 143.523: Scales: [0.5651219 0.342615 ].
Step 240: Loss: 143.523: Scales: [0.56512177 0.34261608].
Step 241: Loss: 143.523: Scales: [0.5651217 0.3426171].
Step 242: Loss: 143.523: Scales: [0.5651217 0.34261802].
Step 243: Loss: 143.523: Scales: [0.56512177 0.34261882].
Step 244: Loss: 143.523: Scales: [0.5651219 0.3426195].
Step 245: Loss: 143.523: Scales: [0.565122 0.34262004].
Step 246: Loss: 143.523: Scales: [0.5651222 0.34262043].
Step 247: Loss: 143.523: Scales: [0.56512237 0.34262067].
Step 248: Loss: 143.523: Scales: [0.56512254 0.3426208 ].
Step 249: Loss: 143.523: Scales: [0.5651227 0.3426208].
Step 250: Loss: 143.523: Scales: [0.5651229 0.34262067].
Step 251: Loss: 143.523: Scales: [0.5651231 0.34262043].
Step 252: Loss: 143.523: Scales: [0.56512326 0.3426201 ].
Step 253: Loss: 143.523: Scales: [0.56512344 0.34261972].
Step 254: Loss: 143.523: Scales: [0.5651236 0.3426193].
Step 255: Loss: 143.523: Scales: [0.56512374 0.34261885].
Step 256: Loss: 143.523: Scales: [0.56512386 0.3426184 ].
Step 257: Loss: 143.523: Scales: [0.565124 0.34261796].
Step 258: Loss: 143.523: Scales: [0.56512403 0.34261754].
Step 259: Loss: 143.523: Scales: [0.5651241 0.34261715].
Step 260: Loss: 143.523: Scales: [0.56512415 0.3426168 ].
Step 261: Loss: 143.523: Scales: [0.56512415 0.3426165 ].
Step 262: Loss: 143.523: Scales: [0.56512415 0.34261626].
Step 263: Loss: 143.523: Scales: [0.56512415 0.34261608].
Step 264: Loss: 143.523: Scales: [0.5651241 0.342616 ].
Step 265: Loss: 143.523: Scales: [0.56512403 0.34261596].
Step 266: Loss: 143.523: Scales: [0.565124 0.34261596].
Step 267: Loss: 143.523: Scales: [0.5651239 0.34261602].
Step 268: Loss: 143.523: Scales: [0.56512386 0.34261614].
Step 269: Loss: 143.523: Scales: [0.5651238 0.3426163].
Step 270: Loss: 143.523: Scales: [0.56512374 0.34261647].
Step 271: Loss: 143.523: Scales: [0.5651237 0.34261665].
Step 272: Loss: 143.523: Scales: [0.5651236 0.34261686].
Step 273: Loss: 143.523: Scales: [0.56512356 0.34261706].
Step 274: Loss: 143.523: Scales: [0.5651235 0.34261724].
Step 275: Loss: 143.523: Scales: [0.56512344 0.34261742].
Step 276: Loss: 143.523: Scales: [0.5651234 0.34261757].
Step 277: Loss: 143.523: Scales: [0.5651233 0.3426177].
Step 278: Loss: 143.523: Scales: [0.5651233 0.3426178].
Step 279: Loss: 143.523: Scales: [0.5651233 0.3426179].
Step 280: Loss: 143.523: Scales: [0.5651233 0.34261796].
Step 281: Loss: 143.523: Scales: [0.5651233 0.342618 ].
Step 282: Loss: 143.523: Scales: [0.5651233 0.342618 ].
Step 283: Loss: 143.523: Scales: [0.5651233 0.34261796].
Step 284: Loss: 143.523: Scales: [0.5651233 0.3426179].
Step 285: Loss: 143.523: Scales: [0.5651233 0.34261784].
Step 286: Loss: 143.523: Scales: [0.5651233 0.34261775].
Step 287: Loss: 143.523: Scales: [0.5651233 0.34261766].
Step 288: Loss: 143.523: Scales: [0.5651233 0.34261757].
Step 289: Loss: 143.523: Scales: [0.5651233 0.34261748].
Step 290: Loss: 143.523: Scales: [0.5651234 0.3426174].
Step 291: Loss: 143.523: Scales: [0.56512344 0.3426173 ].
Step 292: Loss: 143.523: Scales: [0.5651235 0.34261724].
Step 293: Loss: 143.523: Scales: [0.5651235 0.34261718].
Step 294: Loss: 143.523: Scales: [0.5651235 0.34261715].
Step 295: Loss: 143.523: Scales: [0.5651235 0.34261712].
Step 296: Loss: 143.523: Scales: [0.5651235 0.34261712].
Step 297: Loss: 143.523: Scales: [0.5651235 0.34261712].
Step 298: Loss: 143.523: Scales: [0.5651235 0.34261712].
Step 299: Loss: 143.523: Scales: [0.5651235 0.34261715].
Steps 300-498: Loss: 143.523: Scales: [0.5651235 0.3426174] (unchanged, omitted for brevity).
Step 499: Loss: 143.523: Scales: [0.5651235 0.34261736].
```python
# View the distribution parameters
print("Class conditional means:")
print(class_conditionals_binary.loc.numpy())
print("\nClass conditional standard deviations:")
print(class_conditionals_binary.stddev().numpy())
```
Class conditional means:
[[5.007317 3.4170732]
[6.2544303 2.8607595]]
Class conditional standard deviations:
[[0.5651235 0.34261736]
[0.5651235 0.34261736]]
```python
# Plot the loss and convergence of the standard deviation parameters
fig, ax = plt.subplots(1, 2, figsize=(14, 5))
ax[0].plot(nlls)
ax[0].set_title("Loss vs epoch")
ax[0].set_xlabel("Epoch")
ax[0].set_ylabel("Average negative log-likelihood")
for k in [0, 1]:
ax[1].plot(scales_arr[:, k], color=label_colours_binary[k], label=labels_binary[k])
ax[1].set_title("Standard deviation ML estimates vs epoch")
ax[1].set_xlabel("Epoch")
ax[1].set_ylabel("Standard deviation")
plt.legend()
plt.show()
```
We can also plot the contours of the class-conditional Gaussian distributions as before, this time with just binary labelled data. Notice the contours are the same for each class, just with a different centre location.
```python
# Plot the training data with the class-conditional density contours
plt.figure(figsize=(10, 6))
plot_data(x_train, y_train_binary, labels_binary, label_colours_binary)
x0_min, x0_max = x_train[:, 0].min(), x_train[:, 0].max()
x1_min, x1_max = x_train[:, 1].min(), x_train[:, 1].max()
contour_plot((x0_min, x0_max), (x1_min, x1_max), class_conditionals_binary.prob, 2, label_colours_binary)
plt.title("Training set with class-conditional density contours")
plt.show()
```
We can also plot the decision regions for this binary classifier model; notice that the decision boundary is now linear.
```python
# Plot the model's decision regions
plt.figure(figsize=(10, 6))
plot_data(x_train, y_train_binary, labels_binary, label_colours_binary)
x0_min, x0_max = x_train[:, 0].min(), x_train[:, 0].max()
x1_min, x1_max = x_train[:, 1].min(), x_train[:, 1].max()
contour_plot((x0_min, x0_max), (x1_min, x1_max),
lambda x: predict_class(prior_binary, class_conditionals_binary, x),
1, label_colours_binary, levels=[-0.5, 0.5, 1.5],
num_points=500)
plt.title("Training set with decision regions")
plt.show()
```
#### Link to logistic regression
In fact, we can see that our predictive distribution $P(Y=y_0 | X)$ can be written as follows:
$$
\begin{align}
P(Y=y_0 | X) =& ~\frac{P(X | Y=y_0)P(Y=y_0)}{P(X | Y=y_0)P(Y=y_0) + P(X | Y=y_1)P(Y=y_1)}\\
=& ~\frac{1}{1 + \frac{P(X | Y=y_1)P(Y=y_1)}{P(X | Y=y_0)P(Y=y_0)}}\\
=& ~\sigma(a)
\end{align}
$$
where $\sigma(a) = \frac{1}{1 + e^{-a}}$ is the sigmoid function, and $a = \log\frac{P(X | Y=y_0)P(Y=y_0)}{P(X | Y=y_1)P(Y=y_1)}$ is the _log-odds_.
With our additional modelling assumption of a shared covariance matrix $\Sigma$, it can be shown (using the Gaussian pdf) that $a$ is in fact a linear function of $X$:
$$
a = w^T X + w_0
$$
where
$$
\begin{align}
w =& ~\Sigma^{-1} (\mu_0 - \mu_1)\\
w_0 =& -\frac{1}{2}\mu_0^T \Sigma^{-1}\mu_0 + \frac{1}{2}\mu_1^T\Sigma^{-1}\mu_1 + \log\frac{P(Y=y_0)}{P(Y=y_1)}
\end{align}
$$
The model therefore takes the form $P(Y=y_0 | X) = \sigma(w^T X + w_0)$, with weights $w\in\mathbb{R}^2$ and bias $w_0\in\mathbb{R}$. This is the form used by logistic regression, and explains why the decision boundary above is linear.
In the above we have outlined the derivation of the generative logistic regression model. The parameters are typically estimated with maximum likelihood, as we have done.
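For completeness, here is the short piece of algebra behind the expressions for $w$ and $w_0$ (it only uses the Gaussian log-density and the shared covariance $\Sigma$):
$$
\begin{align}
a =& ~\log\frac{P(X | Y=y_0)P(Y=y_0)}{P(X | Y=y_1)P(Y=y_1)}\\
=& -\tfrac{1}{2}(X-\mu_0)^T\Sigma^{-1}(X-\mu_0) + \tfrac{1}{2}(X-\mu_1)^T\Sigma^{-1}(X-\mu_1) + \log\frac{P(Y=y_0)}{P(Y=y_1)}\\
=& ~(\mu_0-\mu_1)^T\Sigma^{-1}X - \tfrac{1}{2}\mu_0^T\Sigma^{-1}\mu_0 + \tfrac{1}{2}\mu_1^T\Sigma^{-1}\mu_1 + \log\frac{P(Y=y_0)}{P(Y=y_1)}
\end{align}
$$
The quadratic term $-\tfrac{1}{2}X^T\Sigma^{-1}X$ and the normalising constants appear in both class log-densities and cancel because the covariance is shared, which is exactly why $a$ is linear in $X$.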
Finally, we will use the above equations to directly parameterise the output Bernoulli distribution of the generative logistic regression model.
You should now write the following function, according to the following specification:
* The inputs to the function are:
* the prior distribution `prior` over the two classes
* the (batched) class-conditional distribution `class_conditionals`
* The function should use the parameters of the above distributions to compute the weights and bias terms $w$ and $w_0$ as above
* The function should then return a tuple of two numpy arrays for $w$ and $w_0$
```python
#### GRADED CELL ####
# Complete the following function.
# Make sure to not change the function name or arguments.
def get_logistic_regression_params(prior, class_conditionals):
    """
    This function takes the prior distribution and class-conditional distribution as inputs.
    This function should compute the weights and bias terms of the generative logistic
    regression model as above, and return them in a 2-tuple of numpy arrays of shapes
    (2,) and () respectively.
    """
    # The diagonal covariance is shared across both classes, so invert it once
    stddevs = class_conditionals.stddev().numpy()[0]
    sigma_inv = np.diag(1. / stddevs**2)
    # The rows of loc are the class-conditional mean vectors mu_0 and mu_1
    mu_0 = class_conditionals.loc.numpy()[0]
    mu_1 = class_conditionals.loc.numpy()[1]
    probs = prior.probs.numpy()
    w = sigma_inv @ (mu_0 - mu_1)
    w0 = -0.5 * mu_0 @ sigma_inv @ mu_0 + 0.5 * mu_1 @ sigma_inv @ mu_1 + np.log(probs[0] / probs[1])
    return w, w0
```
```python
sigma_inv = np.linalg.inv(class_conditionals_binary.covariance())
print(class_conditionals_binary.loc)
mu_0 = class_conditionals_binary.loc.numpy()[:,0]
mu_1 = class_conditionals_binary.loc.numpy()[:,1]
print(mu_0, mu_1)
print(prior.probs[0])
w = np.dot(sigma_inv,(mu_0-mu_1))
w_0 = -0.5*mu_0.T @ sigma_inv @ mu_0 + 0.5*mu_1.T @ sigma_inv @ mu_1 + np.log(prior.probs[0]/prior.probs[1])
w_0
```
tf.Tensor(
[[5.007317 3.4170732]
[6.2544303 2.8607595]], shape=(2, 2), dtype=float32)
[5.007317 6.2544303] [3.4170732 2.8607595]
tf.Tensor(0.34166667, shape=(), dtype=float32)
-152.75925
```python
# Run your function to get the logistic regression parameters
w, w0 = get_logistic_regression_params(prior_binary, class_conditionals_binary)
```
We can now use these parameters to make a contour plot to display the predictive distribution of our logistic regression model.
```python
# Plot the training data with the logistic regression prediction contours
fig, ax = plt.subplots(1, 1, figsize=(10, 6))
plot_data(x_train, y_train_binary, labels_binary, label_colours_binary)
x0_min, x0_max = x_train[:, 0].min(), x_train[:, 0].max()
x1_min, x1_max = x_train[:, 1].min(), x_train[:, 1].max()
X0, X1 = get_meshgrid((x0_min, x0_max), (x1_min, x1_max))
logits = np.dot(np.array([X0.ravel(), X1.ravel()]).T, w) + w0
Z = tf.math.sigmoid(logits)
lr_contour = ax.contour(X0, X1, np.array(Z).T.reshape(*X0.shape), levels=10)
ax.clabel(lr_contour, inline=True, fontsize=10)
contour_plot((x0_min, x0_max), (x1_min, x1_max),
lambda x: predict_class(prior_binary, class_conditionals_binary, x),
1, label_colours_binary, levels=[-0.5, 0.5, 1.5],
num_points=300)
plt.title("Training set with prediction contours")
plt.show()
```
Congratulations on completing this programming assignment! In the next week of the course we will look at Bayesian neural networks and uncertainty quantification.
# The Solow Model
The Solow model is a simple model that describes the long-run development of a closed economy.
## Production and Output per Worker
The Solow model is based on an aggregate production function $Y$ with the input factors capital $K$ and labour $N$:
\begin{align}
Y=F(K,N)
\end{align}
By assumption, the aggregate production function has the following properties:
* positive but diminishing marginal returns: increasing a single input factor always raises output, but the increase becomes smaller the more of that factor is already in use.
    * mathematically: $\frac{\partial Y}{\partial K}>0$, $\frac{\partial Y}{\partial N}>0$, $\frac{\partial^2 Y}{\partial K^2}<0$, $\frac{\partial^2 Y}{\partial N^2}<0$
* constant returns to scale: doubling both input factors doubles output.
    * mathematically: $F(x\cdot K,x\cdot N)=x\cdot F(K,N)$
A simple production function that satisfies both properties is the Cobb-Douglas production function:
$$Y=F(K,N)=K^\alpha N ^{1-\alpha}$$
The parameter $\alpha\in(0,1)$ is the elasticity of output with respect to capital and tells us by how many percent output rises when capital is increased by one percent.
Usually, however, we consider the economy in the Solow model not in aggregate terms but in per-capita (or per-worker) terms.
To do so we define
* capital per worker: $k=\frac{K}{N}$, also called the capital intensity
* output per worker: $y=\frac{Y}{N}$
Computing output per worker gives
\begin{align}
y &=\frac{Y}{N}&\\
&=\frac{1}{N}F(K,N) & \text{constant returns to scale: }F(x\cdot K,x\cdot N)=x\cdot F(K,N) \text{ for } x=\frac{1}{N}\\
&=F\left(\frac{1}{N}K,\frac{1}{N}N\right) &\\
&=F(k,1) &
\end{align}
Since the second argument $(1)$ is fixed, we simply write $y=F(k,1)=f(k)$.
Output per worker therefore depends only on the capital intensity.
For the example of the Cobb-Douglas function $F(K,N)=K^{\alpha}N^{1-\alpha}$ we obtain
\begin{align}
y=f(k) = F(k,1)= k^\alpha 1^{1-\alpha}=k^\alpha.
\end{align}
To get a feel for the aggregate production function and output per worker, consider the following graphics.
The left panel shows the aggregate production function, the right panel output per worker.
You can change the parameter $\alpha$ with a slider.
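Before turning to the interactive graphic, here is a quick numerical sanity check of the two assumed properties. This is only a minimal sketch; the input levels $K=4$, $N=9$ and $\alpha=0.4$ are arbitrary illustrative values.
```python
import numpy as np

def F(K, N, alpha=0.4):
    """Cobb-Douglas production function Y = K^alpha * N^(1-alpha)."""
    return K**alpha * N**(1 - alpha)

K, N = 4.0, 9.0
print(F(2 * K, 2 * N) / F(K, N))      # -> 2.0: constant returns to scale
print(F(K + 1, N) - F(K, N),          # extra output from the 5th unit of capital ...
      F(K + 2, N) - F(K + 1, N))      # ... exceeds that from the 6th: diminishing returns
```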
```python
%matplotlib inline
from ipywidgets import interactive
import ipywidgets as widgets
import matplotlib.pyplot as plt
import numpy as np
from mpl_toolkits import mplot3d
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
plt.rcParams['figure.figsize'] = [24/2.54, 18/2.54] # Größe der Grafik anpassen
plt.rcParams['font.size']=12 # Schriftgröße
```
```python
def f(alpha):
fig = plt.figure(figsize=plt.figaspect(0.5))
ax = fig.add_subplot(1, 2, 1, projection='3d')
K=np.linspace(start=0, stop=10) # mögliche Werte für K
N=np.linspace(start=0, stop=10) # mögliche Werte für N
K, N = np.meshgrid(K, N)
Y= np.multiply(K**(alpha) , N**(1-alpha)) # Berechne Output, a**b bedeutet a^b
y_int = K[1,:]**(alpha)
cont = ax.plot_wireframe(K, N, Y, rstride=10, cstride=10)
ax.plot3D(K[1,:],np.ones_like(K[1,:]), y_int,'red')
ax.set_xlabel('Kapital')
ax.set_ylabel('Arbeit')
ax.set_zlabel('Output')
ax.view_init(15, -135)
ax = fig.add_subplot(1, 2, 2)
k = np.geomspace(0.0001, 10, num=50) # mögliche Werte für Kapitalintensität
y = k**alpha # Output pro Arbeiter
ax.plot(k, y,'red')
ax.set_xlabel('Kapitalintensität')
ax.set_ylabel('Output pro Arbeiter')
ax.axis(ymin=0, ymax=4)
plt.show()
interactive_plot = interactive(f, # Name der Funktion, die die Grafik erstellt
# Slider für die Inputs der Grafik-Funktion
alpha=widgets.FloatSlider(value=0.4, description='$\\alpha$',
max=1, min=0, step=0.05)
)
output = interactive_plot.children[-1]
interactive_plot
```
interactive(children=(FloatSlider(value=0.4, description='$\\alpha$', max=1.0, step=0.05), Output()), _dom_cla…
For practice, here are some exercises to deepen your understanding.
### Exercises
1. Show that the Cobb-Douglas production function $Y=K^\alpha N^{1-\alpha}$ indeed exhibits positive but diminishing marginal returns and constant returns to scale.
2. How does the (Cobb-Douglas) production function change when $\alpha$ is close to or equal to 0 or 1? What are the marginal products of labour and capital in the two cases? Use the interactive graphics!
3. Show that $\alpha$ is the elasticity of output with respect to capital: $\varepsilon_{Y,K}=\frac{\partial Y}{\partial K}/\frac{Y}{K}=\alpha$.
4. How should the elasticity of output with respect to capital be interpreted? What does a value of $\alpha=0.4$ tell you?
## Changes in the Capital Stock
So far we have looked at how the economy produces in a single period.
At every point in time we can describe the economy by its capital stock and its workers.
Now we want to link different periods with a few simple rules for the evolution of the capital stock and the number of workers.
To make clear which period we are talking about, we use the time index $t$; whether $t$ stands for months or years does not matter. Using time indices,
* the aggregate production function reads $Y_t=K_t^\alpha N_t^{1-\alpha}$
* output per worker reads $y_t=k_t^\alpha$
For the moment we assume that the number of workers is constant, so $N_t=N$.
We relax this assumption later; for a start it makes our life considerably easier.
We assume that a fixed share $\delta\in(0,1)$ (the depreciation rate) of the capital stock wears out each year, i.e. breaks down. In the next year only a share $(1-\delta)$ of the old capital stock therefore remains.
We assume that the economy is closed.
Aggregate saving therefore equals aggregate investment.
We assume that a constant share $s\in(0,1)$ (the saving rate) of income is saved and hence invested.
This means that investment is $I_t=sY_t$; the rest of income is consumed.
The change in the aggregate capital stock $\Delta K_{t+1}=K_{t+1}-K_t$ therefore equals investment minus depreciation:
\begin{align}
\Delta K_{t+1}=sY_t-\delta K_t
\end{align}
We obtain the change in the capital intensity (capital per worker) by dividing by $N$:
\begin{align}
\Delta k_{t+1} &=\frac{\Delta K_{t+1}}{N}\\
&=\frac{sY_t-\delta K_t}{N}\\
&=s\frac{Y_t}{N}-\delta\frac{K_t}{N}\\
&=sy_t-\delta k_t
\end{align}
The change in the capital intensity therefore equals investment per worker minus depreciation.
If we add $k_t$ on both sides, we get a simple equation with which we can compute the capital intensity of the next period:
\begin{align}
\Delta k_{t+1}=k_{t+1}-k_t&=sy_t-\delta k_t & +k_t\\
k_{t+1}&=(1-\delta)k_t+sy_t
\end{align}
We have seen that for the special case of the Cobb-Douglas production function $y_t=f(k_t)=k_t^\alpha$.
This lets us describe $k_{t+1}$ more explicitly:
\begin{align}
k_{t+1}&=(1-\delta)k_t+sk_t^\alpha
\end{align}
With this simple rule we can compute tomorrow's capital intensity $k_{t+1}$ for any given capital intensity $k_t$ today; a small worked example follows below. How the capital intensity evolves for different starting values $k_0$ is shown by the following graphic.
You can change the output elasticity of capital $\alpha$, the depreciation rate $\delta$ and the saving rate $s$ with sliders.
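As a small worked example of this rule (assuming, purely for illustration, the default slider values $\alpha=\delta=s=0.4$ and a current capital intensity of $k_t=4$):
$$
y_t = 4^{0.4} \approx 1.74, \qquad
\Delta k_{t+1} = 0.4\cdot 1.74 - 0.4\cdot 4 \approx -0.90, \qquad
k_{t+1} \approx 3.10
$$
Investment per worker falls short of depreciation per worker, so the capital intensity shrinks towards its long-run value.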
```python
def f(alpha, delta, savr):
max_time=50
t = np.arange(0,max_time,1) # Zeitpunkte
n_paths=20
k = np.zeros((n_paths,max_time))
k[:,0] = np.linspace(start=0, stop=10, num=n_paths) # verschiedene Anfangswerte für Kapitalintensität
k_star=(savr/delta)**(1/(1-alpha))
for tt in t[1:max_time]: # loop über alle Zeitpunkte außer 0
y=k[:,tt-1]**alpha
k[:,tt] = (1-delta)*k[:,tt-1]+savr*y
plt.plot(k.T)
plt.annotate('$k^*$',(max_time-0.5,k_star),)
plt.ylim(0,10)
plt.ylabel('Kapitalintensität')
plt.xlabel('Zeit t')
plt.show()
interactive_plot = interactive(f, # Name der Funktion, die die Grafik erstellt
# Slider für die Inputs der Grafik-Funktion
alpha=widgets.FloatSlider(value=0.4, description='$\\alpha$',max=0.95, min=0.05, step=0.05),
delta=widgets.FloatSlider(value=0.4, description='$\\delta$',max=0.95, min=0.05, step=0.05),
savr=widgets.FloatSlider(value=0.4, description='$s$',max=0.95, min=0.05, step=0.05)
)
output = interactive_plot.children[-1]
interactive_plot
```
interactive(children=(FloatSlider(value=0.4, description='$\\alpha$', max=0.95, min=0.05, step=0.05), FloatSli…
The graphic shows that, regardless of the starting value, the capital intensity converges over time to a particular long-run value $k^*$.
The only exception is the special case $k_0=0$. In this case there is no capital at all to begin with, so nothing can be produced and hence nothing can be saved.
The capital intensity is then stuck at $k_t=0$ forever.
For all other starting values, a particular long-run capital intensity $k^*$ emerges over time.
How we can determine it and how it depends on the parameters is the topic of the next section.
## Steady State
We have seen that in the Solow model a capital intensity $k^*$ emerges in the long run, regardless of the current capital stock $k_t$. We call this value the steady-state capital intensity. The same holds for output per worker, which converges to $y^*$. In the following we want to determine this capital intensity.
In the steady state the capital intensity is constant, so its change is 0 by definition.
\begin{align}
\Delta k_{t+1}=0&= sy^*-\delta k^* & y^*=(k^*)^\alpha\\
&= s(k^*)^\alpha-\delta k^*
\end{align}
This equation contains only one unknown, the steady-state capital intensity $k^*$, for which we solve:
\begin{align}
k^* = \left(\frac{s}{\delta} \right)^\frac{1}{1-\alpha}
\end{align}
For given parameters we can now compute the steady-state capital intensity; a minimal numerical check follows below.
To what extent policy can influence these parameters, and whether there is an "optimal" steady state, is the topic of the next section.
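The formula for $k^*$ can be checked with a few lines of code. This is only a minimal numerical sketch; the parameter values are the defaults of the sliders above and have no special meaning.
```python
import numpy as np

alpha, delta, s = 0.4, 0.4, 0.4          # illustrative parameter values
k_star = (s / delta)**(1 / (1 - alpha))  # steady-state capital intensity
y_star = k_star**alpha                   # steady-state output per worker

print(k_star, y_star)                    # -> 1.0 1.0 for these values
print(s * y_star - delta * k_star)       # change of k in the steady state -> 0.0
```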
To review the contents of this chapter, you can work through the following exercises:
### Exercises
1. Derive the intensity form, i.e. production and the change of the capital stock per worker, from the aggregate production function and the change of the aggregate capital stock.
2. Show that the steady-state capital intensity satisfies $k^*=\left(\frac{s}{\delta}\right)^{\frac{1}{1-\alpha}}$
3. Use the interactive graphic to find out how the saving rate $s$, the output elasticity of capital $\alpha$ and the depreciation rate $\delta$ affect the long-run capital intensity $k^*$.
4. Check your results by showing that the first derivatives of the long-run capital intensity with respect to the parameters are as follows:
    1. Derivative with respect to the saving rate $s$: $$\frac{\partial k^*}{\partial s}=\frac{1}{s}\cdot\frac{1}{1-\alpha}k^*>0$$
    2. Derivative with respect to the depreciation rate $\delta$: $$\frac{\partial k^*}{\partial \delta}=-\frac{1}{\delta}\cdot\frac{1}{1-\alpha}\cdot k^*<0$$
    3. Derivative with respect to the output elasticity of capital $\alpha$: $$\frac{\partial k^*}{\partial \alpha}=\ln\left(\frac{s}{\delta}\right)\cdot \frac{1}{(1-\alpha)^2}\cdot k^*$$
5. Under which condition is the derivative $\frac{\partial k^*}{\partial \alpha}$ positive or negative? Why?
## The Golden Rule
In the last chapter we derived the steady state capital intensity:
$k^* = \left(\frac{s}{\delta} \right)^\frac{1}{1-\alpha}$.
By definition this gives
1. output per worker in the steady state $y^*=f(k^*)=\left(\left(\frac{s}{\delta}\right)^{\frac{1}{1-\alpha}}\right)^{\alpha}=\left(\frac{s}{\delta}\right)^{\frac{\alpha}{1-\alpha}}$, and
2. consumption per worker in the steady state $c^*=(1-s)\cdot y^*=(1-s)\cdot \left(\frac{s}{\delta}\right)^{\frac{\alpha}{1-\alpha}}$.
Steady state consumption per worker therefore depends only on the parameters (and not on the initial capital stock $k_0$).
Among the parameters, the saving rate is of particular interest here, since (by assumption) it can be influenced by policy measures,
e.g. saving incentives.
This raises the question of whether there is an optimal saving rate that maximizes steady state consumption per capita.
Mathematically, this saving rate is the solution of the optimization problem
$$\max_s c^*=(1-s)\cdot \left(\frac{s}{\delta}\right)^{\frac{\alpha}{1-\alpha}}$$
We can also visualize this optimization problem graphically.
The plot shows output per capita $y_t=k_t^{\alpha}$, depreciation per capita $\delta k_t$ and saving per capita $sk_t^\alpha$.
Steady state consumption is drawn in red and equals the difference between output per capita and saving per capita in the steady state.
For the moment only the slider for the saving rate is of interest. When you change it, steady state consumption is recomputed. If you set $s=\alpha$ here, steady state consumption becomes maximal and corresponds to the Golden Rule.
```python
def f(alpha, delta, sav_r): # what to do with the values coming from the widgets
    plt.figure(2)
    k = np.geomspace(0.0001, 100, num=100) # capital intensity
    output = k**(alpha) # output per capita k^alpha
    depreciation = k * delta # depreciation
    saving = output * sav_r # saving
    # Steady state
    ss_k = (sav_r/delta)**(1/(1-alpha)) # capital intensity
    ss_y = ss_k ** alpha # output
    ss_s = ss_y * sav_r # saving
    # Golden rule
    gr_s=alpha # saving rate according to the golden rule
    gr_k=(gr_s/delta)**(1/(1-alpha)) # capital intensity according to the golden rule
    gr_y=gr_k**alpha # output according to the golden rule
    gr_s=gr_s*gr_y # saving/investment according to the golden rule
    plt.plot([ss_k,ss_k],[ss_y, ss_s],marker='o', markersize=1, color="red") # draw steady state consumption
    plt.annotate('Steady state consumption',(ss_k+1,0.5*(ss_y+ss_s)),)
    plt.plot([gr_k,gr_k],[gr_y, gr_s],marker='o', markersize=1, color="black") # draw golden rule consumption
    plt.annotate('Golden rule consumption',(gr_k+1,gr_s-0.125),)
    outp, = plt.plot(k, output, label='Output')
    abnu, = plt.plot(k, depreciation, label='Depreciation')
    savi, = plt.plot(k, saving, label='Saving')
    plt.ylim(0, 5)
    plt.xlim(k[0],k[-1])
    plt.xlabel('Capital intensity')
    plt.ylabel('Euros per capita')
    plt.legend(handles=[outp, abnu, savi], bbox_to_anchor=(0,1.02,1,0.2), loc="lower left",mode="expand",ncol=3,frameon=False)
    # plt.grid()
    plt.show()

interactive_plot = interactive(f, # function that draws the plot
                               # sliders for the inputs of the plotting function
alpha=widgets.FloatSlider(value=0.3, description='$\\alpha$', max=0.4, min=0, step=0.01),
delta=widgets.FloatSlider(value=0.05, description='$\\delta$', max=0.1, min=0.01, step=0.01),
sav_r=widgets.FloatSlider(value=0.5, description='$s$', max=1, min=0, step=0.05),
)
output = interactive_plot.children[-1]
#output.layout.height = '350px'
interactive_plot
```
interactive(children=(FloatSlider(value=0.3, description='$\\alpha$', max=0.4, step=0.01), FloatSlider(value=0…
For every combination of parameters, steady state consumption is the distance between steady state investment/depreciation and output.
The black segment is steady state consumption under the Golden Rule, i.e. the maximal steady state consumption that can be reached by varying the saving rate. It can be shown that this saving rate satisfies
$$s_g=\alpha.$$
## Review exercises
1. Solve the optimization problem $\max_s c^*=(1-s)\cdot\left(\frac{s}{\delta}\right)^{\frac{\alpha}{1-\alpha}}$ and show that $s_{g}=\alpha$ (a symbolic check is sketched after this list).
2. Determine the steady state capital intensity, output and consumption under the Golden Rule.
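A symbolic check of exercise 1 could look like the following sketch (it imports SymPy locally so the cell is self-contained; the last line should print 0, confirming that $s=\alpha$ is a critical point of $c^*$):

```python
import sympy as sp

s, alpha, delta = sp.symbols('s alpha delta', positive=True)
c_star = (1 - s) * (s / delta) ** (alpha / (1 - alpha))  # steady state consumption per worker

foc = sp.diff(c_star, s)                # first-order condition dc*/ds
print(sp.simplify(foc.subs(s, alpha)))  # expected: 0
```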
```python
%load_ext watermark
# python, ipython, packages, and machine characteristics
%watermark -v -m -p ipywidgets,matplotlib,numpy,watermark
# date
print (" ")
%watermark -u -n -t -z
```
Python implementation: CPython
Python version : 3.7.4
IPython version : 7.8.0
ipywidgets : 7.5.1
matplotlib : 3.1.1
numpy : 1.16.5
mpl_toolkits: unknown
watermark : 2.1.0
Compiler : MSC v.1915 64 bit (AMD64)
OS : Windows
Release : 10
Machine : AMD64
Processor : Intel64 Family 6 Model 142 Stepping 12, GenuineIntel
CPU cores : 8
Architecture: 64bit
Last updated: Sun Jan 17 2021 15:28:02Mitteleuropäische Zeit
```python
```
| 78e86dd1cfd3dfb314b5c12375d487d81cf62216 | 23,406 | ipynb | Jupyter Notebook | Solow Steady State.ipynb | knutniemann/solow_steady_state | 86f41f5c60b0fb0d749898889d14ad39e9d04ed1 | [
"MIT"
]
| null | null | null | Solow Steady State.ipynb | knutniemann/solow_steady_state | 86f41f5c60b0fb0d749898889d14ad39e9d04ed1 | [
"MIT"
]
| null | null | null | Solow Steady State.ipynb | knutniemann/solow_steady_state | 86f41f5c60b0fb0d749898889d14ad39e9d04ed1 | [
"MIT"
]
| null | null | null | 48.259794 | 347 | 0.601769 | true | 5,489 | Qwen/Qwen-72B | 1. YES
2. YES | 0.833325 | 0.774583 | 0.645479 | __label__deu_Latn | 0.964808 | 0.337996 |
## Exercise 03.1
Compare the computed values of
$$
d_0 = a \cdot b + a \cdot c
$$
and
$$
d_1 = a \cdot (b + c)
$$
when $a = 100$, $b = 0.1$ and $c = 0.2$. Store $d_{0}$ in the variable `d0` and $d_{1}$ in the variable `d1`.
Try checking for equality, e.g. `print(d0 == d1)`.
```python
# YOUR CODE HERE
raise NotImplementedError()
```
```python
assert d0 == 30.0
assert d1 != 30.0
assert d0 != d1
```
## Exercise 03.2
For the polynomial
\begin{align}
f(x, y) &= (x + y)^{6}
\\
&= x^6 + 6x^{5}y + 15x^{4}y^{2} + 20x^{3}y^{3} + 15x^{2}y^{4} + 6xy^{5} + y^{6}
\end{align}
compute $f$ using: (i) the compact form $(x + y)^{6}$; and (ii) the expanded form for:
(a) $x = 10$ and $y = 10.1$
(b) $x = 10$ and $y = -10.1$
and compare the number of significant digits for which the answers are the same.
Store the answer for the compact version using the variable `f0`, and using the variable `f1` for the expanded version.
For case (b), compare the computed and analytical solutions and consider the relative error.
Which approach would you recommend for computing this expression?
#### (a) $x = 10$ and $y = 10.1$
```python
x = 10.0
y = 10.1
# YOUR CODE HERE
raise NotImplementedError()
```
```python
import math
assert math.isclose(f0, 65944160.60120103, rel_tol=1e-10)
assert math.isclose(f1, 65944160.601201, rel_tol=1e-10)
```
#### (b) $x = 10$ and $y = -10.1$
```python
x = 10.0
y = -10.1
# YOUR CODE HERE
raise NotImplementedError()
```
```python
import math
assert math.isclose(f0, 1.0e-6, rel_tol=1e-10)
assert math.isclose(f1, 1.0e-6, rel_tol=1e-2)
```
## Exercise 03.3
Consider the expression
$$
f = \frac{1}{\sqrt{x^2 - 1} - x}
$$
When $x$ is very large, the denominator approaches zero, which can cause problems.
Try rephrasing the problem and eliminating the fraction by multiplying the numerator and denominator by $\sqrt{x^2 - 1} + x$ and evaluate the two versions of the expression when:
(a) $x = 1 \times 10^{7}$
(b) $x = 1 \times 10^{9}$ (You may get a Python error for this case. Why?)
#### (a) $x = 1 \times 10^{7}$
```python
# YOUR CODE HERE
raise NotImplementedError()
```
#### (b) $x = 1 \times 10^{9}$
```python
# YOUR CODE HERE
raise NotImplementedError()
```
| 5d6e8640e4c12cc895cca1590f8a4f4800307885 | 7,544 | ipynb | Jupyter Notebook | Assignment/03 Exercises.ipynb | reddyprasade/PYTHON-BASIC-FOR-ALL | 4fa4bf850f065e9ac1cea0365b93257e1f04e2cb | [
"MIT"
]
| 21 | 2019-06-28T05:11:17.000Z | 2022-03-16T02:02:28.000Z | Assignment/03 Exercises.ipynb | chandhukogila/Python-Basic-For-All-3.x | f4105833759a271fa0777f3d6fb96db32bbfaaa4 | [
"MIT"
]
| 2 | 2021-12-28T14:15:58.000Z | 2021-12-28T14:16:02.000Z | Assignment/03 Exercises.ipynb | chandhukogila/Python-Basic-For-All-3.x | f4105833759a271fa0777f3d6fb96db32bbfaaa4 | [
"MIT"
]
| 18 | 2019-07-07T03:20:33.000Z | 2021-05-08T10:44:18.000Z | 22.722892 | 188 | 0.519883 | true | 789 | Qwen/Qwen-72B | 1. YES
2. YES | 0.964855 | 0.954647 | 0.921096 | __label__eng_Latn | 0.955869 | 0.97835 |
```python
import matplotlib.pyplot as plt
import numpy as np
import sympy as sym
import pandas as pd
from google.colab import drive
drive.mount('/content/drive')
```
Mounted at /content/drive
In this notebook we numerically compute the fields produced by a moving charge in two reference frames: the rest frame of the charge and that of an observer at rest.
First we construct the Faraday tensor in the charge's rest frame. We can do this directly because we know the form of the electric field produced by a point charge.
```python
def Ex(x,y,z):
return q*x/(np.sqrt(x**2+y**2+z**2)**3)
def Ey(x,y,z):
return q*y/(np.sqrt(x**2+y**2+z**2)**3)
def Ez(x,y,z):
return q*z/(np.sqrt(x**2+y**2+z**2)**3)
c=299792458
q=300
x=1
y=1
z=1
Fprima=np.array([[0, Ex(x,y,z)/c, Ey(x,y,z)/c, Ez(x,y,z)/c],[-Ex(x,y,z)/c, 0, 0, 0],[-Ey(x,y,z)/c, 0, 0, 0], [-Ez(x,y,z)/c, 0, 0, 0]], dtype=np.float128)
print(Fprima)
```
[[ 0.0000000e+00 1.9258332e-07 1.9258332e-07 1.9258332e-07]
[-1.9258332e-07 0.0000000e+00 0.0000000e+00 0.0000000e+00]
[-1.9258332e-07 0.0000000e+00 0.0000000e+00 0.0000000e+00]
[-1.9258332e-07 0.0000000e+00 0.0000000e+00 0.0000000e+00]]
Now that we have the Faraday tensor, we can transform to the observer's rest frame with a Lorentz transformation.
```python
vx = input('Enter the velocity for the boost in x')
vy = input('Enter the velocity for the boost in y')
vz = input('Enter the velocity for the boost in z')
vx = float(vx)
betax=vx/c
vy = float(vy)
betay=vy/c
vz = float(vz)
betaz=vz/c
v2=vx**2+vy**2+vz**2
beta2=betax**2+betay**2+betaz**2
gamma=1/np.sqrt(1-beta2)  # Lorentz factor (defined here; it is needed by the boost matrix below)
if v2>c**2:
    print('This velocity is not possible, it is above c, the calculations are NOT CORRECT')
```
Enter the velocity for the boost in x300
Enter the velocity for the boost in y0
Enter the velocity for the boost in z0
Now the Lorentz transformation is computed directly using its general form.
```python
Mgeneral=np.array([[gamma, -gamma*betax, -gamma*betay, -gamma*betaz],[-gamma*betax, 1+(gamma-1)*betax**2/beta2, (gamma-1)*betax*betay/beta2,(gamma-1)*betax*betaz/beta2],[-gamma*betay, (gamma-1)*betay*betax/beta2 ,1+(gamma-1)*betay**2/beta2, (gamma-1)*betay*betaz/beta2],[-gamma*betaz, (gamma-1)*betaz*betax/beta2, (gamma-1)*betay*betaz/beta2, 1+(gamma-1)*betaz**2/beta2]], dtype=np.float128)
print(Mgeneral)
```
[[ 1.09108945e+00 -1.09184480e-06 -0.00000000e+00 -0.00000000e+00]
[-1.09184480e-06 1.09108945e+00 0.00000000e+00 0.00000000e+00]
[-0.00000000e+00 0.00000000e+00 1.00000000e+00 0.00000000e+00]
[-0.00000000e+00 0.00000000e+00 0.00000000e+00 1.00000000e+00]]
With the Lorentz transformation computed, we can transform to the frame in which we are interested in the motion.
```python
F=Mgeneral.dot(Fprima.dot(Mgeneral))
print(F)
```
[[ 0.00000000e+00 7.64219524e-10 7.00418764e-10 7.00418764e-10]
[-7.64219524e-10 0.00000000e+00 -7.00903653e-16 -7.00903653e-16]
[-7.00418764e-10 7.00903653e-16 0.00000000e+00 0.00000000e+00]
[-7.00418764e-10 7.00903653e-16 0.00000000e+00 0.00000000e+00]]
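As a quick consistency check (not part of the original notebook), one can compute the Lorentz invariant $F_{\mu\nu}F^{\mu\nu}$ in both frames; with the Lorentz factor $\gamma$ defined as above, the two values should agree up to floating-point error. A minimal sketch:

```python
# Contract the field tensor with itself using the Minkowski metric (+,-,-,-)
eta = np.diag([1.0, -1.0, -1.0, -1.0])

def invariant(F_tensor):
    F_lower = eta @ F_tensor @ eta     # lower both indices
    return np.sum(F_lower * F_tensor)  # F_{mu nu} F^{mu nu}

print(invariant(Fprima), invariant(F))  # the two values should coincide up to rounding
```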
| 7dc2614acb688a389ad5ca18e01b48bd243dae75 | 6,356 | ipynb | Jupyter Notebook | datos/colabs/Campo de una carga en movimiento.ipynb | Sekilloda/sekilloda.github.io | 1a272eb607400a71a2971569e6ac2426f81661f7 | [
"MIT"
]
| null | null | null | datos/colabs/Campo de una carga en movimiento.ipynb | Sekilloda/sekilloda.github.io | 1a272eb607400a71a2971569e6ac2426f81661f7 | [
"MIT"
]
| null | null | null | datos/colabs/Campo de una carga en movimiento.ipynb | Sekilloda/sekilloda.github.io | 1a272eb607400a71a2971569e6ac2426f81661f7 | [
"MIT"
]
| null | null | null | 6,356 | 6,356 | 0.702486 | true | 1,270 | Qwen/Qwen-72B | 1. YES
2. YES | 0.851953 | 0.746139 | 0.635675 | __label__spa_Latn | 0.417715 | 0.315217 |
Imagine a rocket resting stationary in intergalactic space, subject to no gravity at all. Due to (many simultaneous) software malfunction(s), one of the thrusters on the side of the rocket turns on, and stays on.
This gives the rocket angular momentum as well as linear momentum, so it gets pushed to one side and then starts spinning too.
The faster it spins, the faster the thruster changes direction, so the direction of the boost changes more and more frequently.
What kind of trajectory will it draw out if the astronauts never manage to turn it off before they black out from the centripetal force?
Let's ignore the fact that we live in a 3-dimensional world (and therefore the implications of the [tennis racket theorem](https://en.wikipedia.org/wiki/Tennis_racket_theorem)), and assume that the malfunction only causes the rocket to spin in a single plane.
The key to solving this problem is to realize that the angular speed of the rocket is a linear function of time: the longer it stays out of control, the faster it spins.
Therefore the angular speed at any one point is $\propto t$.
The acceleration vector $\vec{a}$ is a vector of constant magnitude, whose direction rotates faster and faster.
The velocity vector of the rocket is obtained by integrating the acceleration vector $\vec{a}$ wrt. time.
\begin{equation}
v_x = S(t)
\end{equation}
\begin{equation}
v_y = C(t)
\end{equation}
where $S$ and $C$ are the [Fresnel integrals](https://en.wikipedia.org/wiki/Fresnel_integral) (up to constant scale factors).
We can then obtain the displacement vector by integrating wrt. time again.
The Fresnel integrals, in the unnormalised convention, are defined as
\begin{equation}
S(t) = \int_0^t \sin(x^2)\,dx, \qquad C(t) = \int_0^t \cos(x^2)\,dx.
\end{equation}
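Written out explicitly (a brief sketch of the derivation; here $\alpha$ denotes the constant angular acceleration of the rocket and $a$ the constant magnitude of the thrust acceleration, which correspond to the constants used in the code below only up to the scale factors handled there):

\begin{align}
\theta(t) &= \tfrac{1}{2}\,\alpha\, t^{2}, \qquad
\vec{a}(t) = a\,\big(\sin\theta(t),\ \cos\theta(t)\big),\\
v_x(t) &= a\int_0^{t}\sin\!\Big(\tfrac{\alpha\tau^{2}}{2}\Big)\,\mathrm{d}\tau
        = \frac{a}{\sqrt{\alpha/2}}\;S\!\Big(t\sqrt{\tfrac{\alpha}{2}}\Big),
\qquad
v_y(t) = \frac{a}{\sqrt{\alpha/2}}\;C\!\Big(t\sqrt{\tfrac{\alpha}{2}}\Big).
\end{align}

The code below evaluates $S$ and $C$ via `scipy.special.fresnel`, which uses the normalised convention $S(z)=\int_0^z\sin(\pi t^{2}/2)\,\mathrm{d}t$; the factor `c = np.sqrt(k.pi/2)` converts between the two conventions.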
```python
# imports
import matplotlib.pyplot as plt
import numpy as np
import scipy.special as spc
import scipy.integrate as inte
import scipy.constants as k
```
```python
time_steps = 1001 # number of time steps to integrate over
t = np.linspace(0,10,time_steps)
c = (np.sqrt(k.pi/2)) # correction factor for the fresnel functions
# CHANGE CONSTANTS HERE
FM = 3 # Acceleration exerted by the nozzle, i.e.
       # the ratio of (force exerted by the ejected gas onto the nozzle):(mass of rocket)
       # (in m/s2)
R_K = 1 # The radius of gyration of the rocket
# END OF USER-DEFINED CONSTANTS SECTION
R2K = R_K/2
u = np.sqrt(FM*R2K)
cu= c/u # constants
l = cu*FM # more constants
```
```python
vi = spc.fresnel(t/cu)
vx = l*vi[0] # the x-component of the velocity time series
vy = l*vi[1] # the y-component of the velocity time series
# create function to integrate the velocity over
velox = lambda x : spc.fresnel(x)[0]
veloy = lambda x : spc.fresnel(x)[1]
# calculate the positions by integration
x, y = [], []
for i in range (0, time_steps):
xi = inte.quad(velox, 0, t[i]/cu)[0] # only the first element of the resulting tuple is the integration result
x.append( l*cu*xi )
yi = inte.quad(veloy, 0, t[i]/cu)[0]
y.append( l*cu*yi )
velocity, = plt.plot(vx, vy, label='velocity')
plt.title('Phase plot of the rocket')
plt.legend()
plt.show()
position, = plt.plot(x, y, label='position')
plt.title('Trajectory of the rocket')
plt.legend()
plt.show()
```
Side note: the rocket's velocity trace forms what is called an [Euler spiral](https://en.wikipedia.org/wiki/Euler_spiral).
Exercise for the reader:
What do you think the graph would look like if we add gravity back into the picture? I.e. if this whole imaginary accident happened close to the surface, during a launch.
```python
# run the code below to find out!
g = 0.62 # m s^{-2} # Let's say we're launching from Pluto's surface
vi = spc.fresnel(t/cu)
vx = l*vi[0]
vy = l*vi[1]-g*t
# create function to integrate the velocity over
velox = lambda x : spc.fresnel(x)[0]
veloy = lambda x : spc.fresnel(x)[1]
# calculate the positions by integration
x, y = [], []
for i in range (0, time_steps):
xi = inte.quad(velox, 0, t[i]/cu)[0] # only the first element of the resulting tuple is the integration result
x.append( l*cu*xi )
yi = inte.quad(veloy, 0, t[i]/cu)[0]
y.append( l*cu*yi )
velocity, = plt.plot(vx, vy, label='velocity')
plt.title('Phase plot of the rocket')
plt.legend()
plt.show()
position, = plt.plot(x, y, label='position')
plt.title('Trajectory of the rocket')
plt.legend()
plt.show()
```
| feef2b28dd8a23f9ed0a4a828a172c5d71db6530 | 52,446 | ipynb | Jupyter Notebook | MalfunctioningRocket.ipynb | OceanNuclear/Freebody | d8abcffbffd7a1d76edb187ce03f23c51fec7073 | [
"MIT"
]
| null | null | null | MalfunctioningRocket.ipynb | OceanNuclear/Freebody | d8abcffbffd7a1d76edb187ce03f23c51fec7073 | [
"MIT"
]
| null | null | null | MalfunctioningRocket.ipynb | OceanNuclear/Freebody | d8abcffbffd7a1d76edb187ce03f23c51fec7073 | [
"MIT"
]
| null | null | null | 228.026087 | 30,828 | 0.916276 | true | 1,147 | Qwen/Qwen-72B | 1. YES
2. YES | 0.927363 | 0.746139 | 0.691942 | __label__eng_Latn | 0.985542 | 0.445944 |
<a href="https://colab.research.google.com/github/LuisIMT/ComputacionIII-2021-1/blob/master/LIMT_Comp_II_Sistemas_de_ecuaciones_lineales.ipynb" target="_parent"></a>
# Computación II - Solving systems of linear equations
+ Author: Luis Ignacio Murillo Torres
+ Co-author: Ulises Olivares
+ Date: 17/02/2021
+ Modified: 22/02/2012
```
import numpy as np
import sympy
class GaussJordan:
#Conductor
def intercambiarFilas(self, fila1, fila2,M):
for i in range(len(M[0])):
tmp = M[fila2][i]
M[fila2][i] = M[fila1][i]
M[fila1][i] = tmp
return M
def multiplicarFila(self, k, fila, colInicial, M):
for i in range (colInicial, len(M[0])):
M[fila][i] = k * M[fila][i]
return M
def restarFila(self,fila1,fila2,colInicial,M):
for i in np.arange(colInicial, len(M[0])):
#for i in np.arange(0, len(M[0])):
M[fila1][i] = M[fila2][i]-M[fila1][i]
return M
def buscarPivote(self, filas, col, M):
indiceFila = -1
maxNum = np.inf *-1
for i in range(col+1, filas):
if(M[i][col] > maxNum and M[i][col] != 0):
indiceFila = i
maxNum = M[i][col]
return indiceFila
def eliminicacionGaussiana(self, f, c, M):
indicePiv = -1
for i in range(f):
pivote = M[i][i]
if pivote == 0:
indicePiv = self.buscarPivote(f, i, M)
if indicePiv == -1:
print("El sistema no tiene solución")
exit(0)
else:
M = self.intercambiarFilas(indicePiv, i, M)
pivote = M[i][i]
for j in np.arange(i+1, f):
if M[j][i] !=0:
k = pivote / M[j][i]
M = self.multiplicarFila(k,j,i,M)
M = self.restarFila(j,i,i,M)
print("Matriz resultante : \n", M)
return M
def eliminacionGaussJordan(self,filaa,columanaa,M):
indicePiv = -1
for i in range(filaa):
pivote = M[i][i]
if pivote == 0:
indicePiv = self.buscarPivote(filaa, i, M)
if indicePiv == -1:
print("El sistema no tiene solución")
exit(0)
else:
M = self.intercambiarFilas(indicePiv, i, M)
pivote = M[i][i]
for j in range(filaa):
for n in range(filaa):
if j != n and M[j][n] != 0:
k = pivote / M[j][n]
M = self.multiplicarFila(k,j,n,M)
M = self.restarFila(j,n,n,M)
print("Matriz resultante:\n ",np.round(M, decimals= 1))
M = self.NormalizationGJ(filaa,columanaa,M)
return M
def NormalizationGJ (self,fila,columna,M):
for i in range(fila):
n = 1/M[i][i]
M = self.multiplicarFila(n,i,0,M)
print("Matriz normatizada\n",np.round(M, decimals= 1))
return M
def calculoMatrixInversa(self,f,M):
I = np.identity(f)
print("Matriz indentidad\n", I)
MAum = np.concatenate([M,I],axis = 1)
print("Matriz aumentada\n", MAum)
return MAum
"""def vectorSolution(self, f,c,M):
r = np.zero(len(M[1]), dtype = float)
for i range(f):
n = 1/M[i][i]
r[i] = self.multiplicarFila(n,i,0,M)
print("x%d = %0.5f",(i,x[i]))"""
def AnalisisDatos (self, f, c, M):
M = self.calculoMatrixInversa(f,M)
M = self.eliminacionGaussJordan(f,c,M)
#M = self.NormalizationGJ(f,2*f,M)
return M
def main():
f3 = 3
f4 = 4
f5 = 5
#c = f+1
#M = np.random.randint(10, size = (f,f), dtype = np.int32)
M = np.array([[3,2,1],[0,4,2],[0,0,3],])
print("matrix :\n", M)
N = np.array([[4,3,2,1],[0,6,4,2],[0,0,6,3],[0,0,0,4]])
O = np.array([[5,4,3,2,1],[0,8,6,4,2],[0,0,9,6,3],[0,0,0,8,4],[0,0,0,0,5]])
objG = GaussJordan()
#print("Realizando eliminación Gaussiana")
#print("Realizando eliminación Gauss-Jordan\t\n")
#objG.eliminacionGaussJordan(f,c,M)
#N=objG.calculoMatrixInversa(f,M)
#objG.eliminicacionGaussiana(f,2*f,N)
#print("Valores propios de la matrix\n")
#objG.eliminicacionGaussiana(f,2*f,N)
#objG.eliminacionGaussJordan(f,2*f,N)
objG.AnalisisDatos(f3,2*f3,M)
objG.AnalisisDatos(f4,2*f4,N)
objG.AnalisisDatos(f5,2*f5,O)
if __name__ == "__main__":
main()
```
matrix :
[[3 2 1]
[0 4 2]
[0 0 3]]
Matriz indentidad
[[1. 0. 0.]
[0. 1. 0.]
[0. 0. 1.]]
Matriz aumentada
[[3. 2. 1. 1. 0. 0.]
[0. 4. 2. 0. 1. 0.]
[0. 0. 3. 0. 0. 1.]]
Matriz resultante:
[[ 3. 0. 0. -36. 22.5 -3. ]
[ 0. 4. 0. 0. -1.5 1. ]
[ 0. 0. 3. 0. 0. 1. ]]
Matriz normatizada
[[ 1. 0. 0. -12. 7.5 -1. ]
[ 0. 1. 0. 0. -0.4 0.2]
[ 0. 0. 1. 0. 0. 0.3]]
Matriz indentidad
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
Matriz aumentada
[[4. 3. 2. 1. 1. 0. 0. 0.]
[0. 6. 4. 2. 0. 1. 0. 0.]
[0. 0. 6. 3. 0. 0. 1. 0.]
[0. 0. 0. 4. 0. 0. 0. 1.]]
Matriz resultante:
[[ 4. 0. 0. 0. 72. -48. 10.7 -2. ]
[ 0. 6. 0. 0. 0. -12. 10.7 -2. ]
[ 0. 0. 6. 0. 0. 0. -1.3 1. ]
[ 0. 0. 0. 4. 0. 0. 0. 1. ]]
Matriz normatizada
[[ 1. 0. 0. 0. 18. -12. 2.7 -0.5]
[ 0. 1. 0. 0. 0. -2. 1.8 -0.3]
[ 0. 0. 1. 0. 0. 0. -0.2 0.2]
[ 0. 0. 0. 1. 0. 0. 0. 0.2]]
Matriz indentidad
[[1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 0. 1. 0. 0.]
[0. 0. 0. 1. 0.]
[0. 0. 0. 0. 1.]]
Matriz aumentada
[[5. 4. 3. 2. 1. 1. 0. 0. 0. 0.]
[0. 8. 6. 4. 2. 0. 1. 0. 0. 0.]
[0. 0. 9. 6. 3. 0. 0. 1. 0. 0.]
[0. 0. 0. 8. 4. 0. 0. 0. 1. 0.]
[0. 0. 0. 0. 5. 0. 0. 0. 0. 1.]]
Matriz resultante:
[[ 5.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00 1.3333e+03
-9.1670e+02 2.2590e+02 -6.1100e+01 1.3300e+01]
[ 0.0000e+00 8.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00
-2.5000e+02 2.2590e+02 -6.1100e+01 1.3300e+01]
[ 0.0000e+00 0.0000e+00 9.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00
0.0000e+00 -7.4000e+00 7.6000e+00 -1.7000e+00]
[ 0.0000e+00 0.0000e+00 0.0000e+00 8.0000e+00 0.0000e+00 0.0000e+00
0.0000e+00 0.0000e+00 -1.2000e+00 1.0000e+00]
[ 0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00 5.0000e+00 0.0000e+00
0.0000e+00 0.0000e+00 0.0000e+00 1.0000e+00]]
Matriz normatizada
[[ 1.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 2.667e+02
-1.833e+02 4.520e+01 -1.220e+01 2.700e+00]
[ 0.000e+00 1.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00
-3.120e+01 2.820e+01 -7.600e+00 1.700e+00]
[ 0.000e+00 0.000e+00 1.000e+00 0.000e+00 0.000e+00 0.000e+00
0.000e+00 -8.000e-01 8.000e-01 -2.000e-01]
[ 0.000e+00 0.000e+00 0.000e+00 1.000e+00 0.000e+00 0.000e+00
0.000e+00 0.000e+00 -2.000e-01 1.000e-01]
[ 0.000e+00 0.000e+00 0.000e+00 0.000e+00 1.000e+00 0.000e+00
0.000e+00 0.000e+00 0.000e+00 2.000e-01]]
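As an independent cross-check (not part of the original notebook), NumPy's built-in inverse can be used to validate the Gauss-Jordan routine on the same test matrix:

```python
import numpy as np

# The same upper-triangular test matrix used above
M = np.array([[3., 2., 1.],
              [0., 4., 2.],
              [0., 0., 3.]])

M_inv = np.linalg.inv(M)                  # reference inverse from NumPy
print(M_inv)
print(np.allclose(M_inv @ M, np.eye(3)))  # True: M_inv really is the inverse of M
```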
| 07f4a91b054f58863f121d9857a6a5f25c85b94c | 11,657 | ipynb | Jupyter Notebook | LIMT_Comp_II_Sistemas_de_ecuaciones_lineales.ipynb | LuisIMT/ComputacionIII-2021-1 | 5bbee3ddb3bb6c4d4554ed311da7dc359589e620 | [
"W3C"
]
| 1 | 2020-09-23T21:24:29.000Z | 2020-09-23T21:24:29.000Z | LIMT_Comp_II_Sistemas_de_ecuaciones_lineales.ipynb | LuisIMT/ComputacionIII-2021-1 | 5bbee3ddb3bb6c4d4554ed311da7dc359589e620 | [
"W3C"
]
| null | null | null | LIMT_Comp_II_Sistemas_de_ecuaciones_lineales.ipynb | LuisIMT/ComputacionIII-2021-1 | 5bbee3ddb3bb6c4d4554ed311da7dc359589e620 | [
"W3C"
]
| null | null | null | 41.632143 | 272 | 0.373767 | true | 3,406 | Qwen/Qwen-72B | 1. YES
2. YES | 0.933431 | 0.828939 | 0.773757 | __label__yue_Hant | 0.105685 | 0.636029 |
## Project One: Maximum Sub-Array Summation
### Group 12
#### Group Members
* Kyle Guthrie
* Michael C. Stramel
* Alex Miranda
# Enumeration
## Pseudo-code
The "Enumeration" maximum sub-array algorithm is described by the following pseudo-code:
~~~~
ENUMERATION-MAX-SUBARRAY(A[1,...,N]) {
if N == 0 {
return 0, A
} else {
max_sum = -Infinity
}
for i from 1 to N {
for j from i to N {
current_sum = 0
for k from i to j {
current_sum = current_sum + A[k]
if current_sum > max_sum {
max_sum = current_sum
start_index = i
end_index = j
}
}
}
}
return max_sum, A[start_index, ..., end_index]
}
~~~~
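For reference, a Python sketch of the same brute-force idea is shown below; this is illustrative only and is not necessarily the code used to produce the timings reported later.

```python
def enumeration_max_subarray(a):
    """Brute-force O(n^3) maximum sub-array over a list of numbers."""
    if not a:
        return 0, a
    max_sum = float('-inf')
    start_index = end_index = 0
    n = len(a)
    for i in range(n):
        for j in range(i, n):
            current_sum = 0
            for k in range(i, j + 1):   # re-sum A[i..j] from scratch
                current_sum += a[k]
            if current_sum > max_sum:
                max_sum = current_sum
                start_index, end_index = i, j
    return max_sum, a[start_index:end_index + 1]
```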
## Theoretical Run-time Analysis
The outer $i$ loop runs from $1$ to $N$, the first inner $j$ loop runs from $i$ to $N$, and the second inner loop runs from $i$ to $j$. We can compute the number of iterations as:
* $\sum_{i=1}^N \sum_{j=i}^N \sum_{k=i}^j \Theta(1)$
* $\sum_{i=1}^N \sum_{j=i}^N (j - i + 1) \Theta(1)$
* $\sum_{i=1}^N (\sum_{j=i}^N (1 - i) + \sum_{j=i}^N j) \Theta(1)$
* $\sum_{i=1}^N ((i - 1)(i - N - 1) - \frac{1}{2}(i + N)(i - N - 1)) \Theta(1)$
* $\sum_{i=1}^N ((i^2 - iN - 2i + N + 1) - \frac{1}{2}(i^2 - i - N^2 - N)) \Theta(1)$
* $\sum_{i=1}^N \frac{1}{2}i^2 - iN - \frac{3}{2}i + \frac{1}{2}N^2 + \frac{3}{2}N + 1) \Theta(1)$
* $\sum_{i=1}^N \frac{1}{2}(i^2 - 2iN - 3i) + \sum_{i=1}^N \frac{1}{2}(N^2 + 3N + 2) \Theta(1)$
We can now evaluate the sum term by term: the terms involving $i$ are summed with the standard formulas, while the terms that do not involve $i$ simply contribute $N$ identical copies of a constant:
* $\sum_{i=1}^N \frac{1}{2}(N^2 + 3N + 2) = (\frac{1}{2}N^2 + \frac{3}{2}N + 1)\sum_{i=1}^N 1$
* $(\frac{1}{2}N^2 + \frac{3}{2}N + 1)\sum_{i=1}^N 1 = (\frac{1}{2}N^2 + \frac{3}{2}N + 1) * N$
* $(\frac{1}{2}N^2 + \frac{3}{2}N + 1) * N = \frac{1}{2}N^3 + \frac{3}{2}N^2 + N$
* $\sum_{i=1}^N \frac{1}{2} i^2 = \frac{1}{2}(\frac{1}{6}N(N + 1)(2N + 1))$
* $\frac{1}{2}(\frac{1}{6}N(N + 1)(2N + 1)) = \frac{1}{6}N^3 + \frac{1}{4}N^2 + \frac{1}{12}N$
* $\sum_{i=1}^N \frac{1}{2} (-2iN) = -\frac{1}{2}N(N + 1) * N = -\frac{1}{2}(N^3 + N^2)$
* $\sum_{i=1}^N \frac{1}{2} (-3i) = \frac{1}{2}(-\frac{3}{2}N(N + 1)) = -\frac{3}{4}(N^2 + N)$
After collecting like terms:
* $\frac{1}{6}N^3 + \frac{1}{2}N^2 + \frac{1}{3}N$
* $\sum_{i=1}^N \sum_{j=i}^N \sum_{k=i}^j \Theta(1) = \frac{1}{6}N^3 + \frac{1}{2}N^2 + \frac{1}{3}N \cdot \Theta(1)$
* $\frac{1}{6}N^3 + \frac{1}{2}N^2 + \frac{1}{3}N \cdot \Theta(1) = \Theta(N^3)$ (Because the $N^3$ will dominate)
Thus the runtime of the whole algorithm is equivalent to $\Theta(N^{3})$.
## Experimental Analysis
For a series of array sizes $N$, $10$ random arrays were generated and run through the "Enumeration" algorithm. The CPU clock time was recorded for each of the $10$ random array inputs, and an average run time was computed. Below is the plot of average run time versus $N$ for the "Enumeration" algorithm.
The equation of the best fit curve to the runtime data where $x$ stands in for $N$:
$$y = 2.12118 * 10^{-8} * x^3 + 1.53069 * 10^{-3} * x - 1.77696 * 10^{-1}$$
The best fit curve is cubic and, as shown in the plot above, fits the data points very closely, which corroborates the theoretical run-time of $\Theta(N^{3})$.
Based on the average run time data curve fit, we would expect the "Enumeration" to be able to process the following number of elements in the given amount of time:
| Time | Max Input Size |
|:----------:|:--------------:|
| 10 seconds | 798 |
| 30 seconds | 1150 |
| 60 seconds | 1445 |
# Better Enumeration
## Pseudo-Code
The "Better Enumeration" maximum sub-array algorithm is described by the following pseudo-code:
~~~~
BETTER-ENUMERATION-MAX-SUBARRAY(A[1, ..., N])
maximum sum = -Infinity
for i from 1 to N
current sum = 0
for j from i to N
current sum = current sum + A[j]
if current sum > maximum sum
maximum sum = current sum
start index = i
end index = j
return maximum sum, A[start index, ..., end index]
~~~~
## Theoretical Run-time Analysis
The outer $i$ loop runs from $1$ to $N$, and the inner $j$ loop runs from $i$ to $N$. Inside the inner loop are constant time operations. We can compute the number of iterations of these constant time operations as:
\begin{equation}
\begin{split}
\sum_{i=1}^N \sum_{j=i}^N \Theta(1) & = \sum_{i=1}^N (N + 1 - i)\cdot \Theta(1) =N(N+1)\cdot \Theta(1) -\frac{1}{2}N(N+1)\cdot \Theta(1) \\
& = \frac{1}{2}N(N+1)\cdot \Theta(1) = \Theta(N^2)
\end{split}
\end{equation}
Thus, the theoretical run-time is $\Theta(N^2)$.
## Experimental Analysis
For a series of array sizes $N$, 10 random arrays were generated and run through the "Better Enumeration" algorithm. The CPU clock time was recorded for each of the 10 random array inputs, and an average run time was computed. Below is the plot of average run time versus $N$ for the "Better Enumeration" algorithm.
A curve fit was applied to the average run time data, which resulted in the following fit equation as a function of $x$ standing in for $N$:
$$y = 5.48722 * 10^{-8} * x^{2} + 1.42659 * 10^{-4} * x - 7.776 * 10^{-1}$$
The fit curve for the plotted data has the same degree as the theoretical runtime of $\Theta(N^{2})$, so the experimental results appear to match the theoretical runtime.
Based on the average run time data curve fit, we would expect the "Better Enumeration" to be able to process the following number of elements in the given amount of time:
| Time | Max Input Size |
|:----------:|:--------------:|
| 10 seconds | 12775 |
| 30 seconds | 22419 |
| 60 seconds | 32006 |
# Divide and Conquer
## Pseudo-code
The "Divide and Conquer" maximum sub-array algorithm is described by the following pseudo-code:
~~~~
DIVIDE_AND_CONQUER(A[1,...,N]){
if N == 0 {
return 0, A
} else if N == 1 {
return A[0], A
}
tmp_max = 0
mid_max = 0
mid_start = 0
mid_end = 0
~~~~
~~~~
left_max = 0
right_max = 0
midpoint = N / 2
mid_start = midpoint
mid_end = midpoint
    for i from A[midpoint,...,1] {
tmp_max = tmp_max + A[i]
if tmp_max > left_max {
left_max = tmp_max
mid_start = i
}
}
~~~~
~~~~
tmp_max = 0
for i from A[midpoint,...,N] {
tmp_max = tmp_max + A[i]
if tmp_max > right_max {
right_max = tmp_max
mid_end = i + 1
}
}
~~~~
~~~~
mid_max = left_max + right_max
left_max, left_subarray = DIVIDE_AND_CONQUER(A[0,...,midpoint])
right_max, right_subarray = DIVIDE_AND_CONQUER(A[midpoint,...,N])
if mid_max >= left_max and mid_max >= right_max {
return mid_max, A[mid_start,...,mid_end]
} else if left_max >= right_max and left_max > mid_max {
return left_max, left_subarray
} else if right_max > left_max and right_max > mid_max {
return right_max, right_subarray
}
}
~~~~
## Theoretical Run-time Analysis
In order to determine the run-time of the divide and conquer algorithm we will need to derive its recurrence. We will make a simplifying assumption that the original problem size is a power of two so that all of the subproblem sizes will remain integers. We will denote $T(n)$ as the run-time of "divide and conquer" on a subarray of $n$ elements. The base case is when $n = 0$ or $n = 1$ which will take constant time and as a result:
* $T(1) = \Theta(1)$
The recursive case occurs when $n \gt 1$. The variable initialization prior to the first for loop also takes constant time. The following for loops are used to find the maximum sub-array that crosses the array's midpoint and will add a runtime of $\Theta(n)$. After the loops there are two recursive calls to the main function, where we spend $T(\frac{n}{2})$ solving each of them, so the total contribution to the running time is $2T(\frac{n}{2})$. The remaining if-else block also runs in constant time $(\Theta(1))$. So the recurrence in total for $n \gt 1$ is:
* $T(n) = 2T(\frac{n}{2}) + \Theta(n) + \Theta(1)$
* $T(n) = 2T(\frac{n}{2}) + \Theta(n)$
So the complete recurrence is:
* $T(n) = \Theta(1)$ (when $n = 0$ or $n = 1$)
* $T(n) = 2T(\frac{n}{2}) + \Theta(n)$ (when $n \gt 1$)
The recurrence can be solved using the master method as shown below:
* $T(n) = 2T(\frac{n}{2}) + \Theta(n)$
* $a = 2, b = 2, f(n) = \Theta(n)$
* $n^{log_{b}(a)} = n^{log_{2}(2)} = n^{1}$
* $\Theta(n^{log_{b}(a)}) = \Theta(n)$ (Case 2 applies)
* $T(n) = \Theta(n^{log_{2}(2)} * log_{2}(n)) = \Theta(n * log_2(n))$ (By the master theorem)
So substituting in the runtime we get:
* $T(n) = \Theta(n * log_2(n)) + \Theta(n)$
The $\Theta(n)$ term drops off leaving us with:
* $T(n) = \Theta(n * log_2(n))$
The runtime above is for the divide and conquer algorithm for finding maximum sub arrays.
## Experimental Analysis
For a series of array sizes $N$, $10$ random arrays were generated and run through the "Divide and Conquer" algorithm. The CPU clock time was recorded for each of the $10$ random array inputs, and an average run time was computed. Below is the plot of average run time versus $N$ for the "Divide and Conquer" algorithm.
A linear fit and logarithmic fit were applied to the average run time data, which resulted in the following fit equations as a function of $N$:
The function of the best logarithmic fit curve where $x$ is substituted for $N$:
$$y = 2.57865 * 10^{-7} * x * log(x) + 1.000$$
The function of the best linear fit curve where $x$ is substituted for $N$:
$$y = 6.15402 * 10^{-6} * x - 9.15974 * 10^{-1}$$
This plot shows both a linear fit and a logarithmic fit to illustrate how similar they appear over the plotted range of values. The logarithmic curve fits the data points almost exactly, which shows that the experimental values are strongly aligned with the theoretically derived runtime of $\Theta(n * log_2(n))$.
Based on the average run time data curve fit, we would expect the "Divide and Conquer" algorithm to be able to process the following number of elements in the given amount of time:
| Time | Max Input Size |
|:----------:|:--------------:|
| 10 seconds | 1687210        |
| 30 seconds | 5050390        |
| 60 seconds | 9848770        |
# Linear-time
## Pseudo-Code
The "Linear-time" maximum sub-array algorithm is described by the following pseudo-code:
~~~~
LINEAR-TIME-MAX-SUBARRAY(A[1, ..., N])
    maximum sum = -Infinity
    ending here sum = -Infinity
    for i from 1 to N
        ending here high index = i
        if ending here sum > 0
            ending here sum = ending here sum + A[i]
        else
            ending here low index = i
            ending here sum = A[i]
        if ending here sum > maximum sum
            maximum sum = ending here sum
            start index = ending here low index
            end index = ending here high index
    return maximum sum, A[start index, ..., end index]
~~~~
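For reference, a direct Python translation of this pseudo-code could look as follows (a sketch; not necessarily the exact code used to produce the timings below):

```python
def linear_time_max_subarray(a):
    """Kadane-style O(n) maximum sub-array, mirroring the pseudo-code above."""
    max_sum = float('-inf')
    ending_here_sum = float('-inf')
    ending_here_low = 0
    start_index = end_index = 0
    for i, value in enumerate(a):
        if ending_here_sum > 0:
            ending_here_sum += value
        else:
            ending_here_low = i
            ending_here_sum = value
        if ending_here_sum > max_sum:
            max_sum = ending_here_sum
            start_index, end_index = ending_here_low, i
    return max_sum, a[start_index:end_index + 1]
```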
## Theoretical Run-time Analysis
The $i$ loop runs from $1$ to $N$. Inside the loop are constant time operations. We can compute the number of iterations of these constant time operations as:
\begin{equation}
\begin{split}
\sum_{i=1}^N \Theta(1) & = N\cdot \Theta(1) \\
& = \Theta(N)
\end{split}
\end{equation}
Thus, the theoretical run-time is $\Theta(N)$.
## Experimental Analysis
For a series of array sizes $N$, 10 random arrays were generated and run through the "Linear-time" algorithm. The CPU clock time was recorded for each of the 10 random array inputs, and an average run time was computed. Below is the plot of average run time versus $N$ for the "Linear-time" algorithm.
A curve fit was applied to the average run time data, which resulted in the following fit equation as a function of $x$ standing in for $N$:
$$y = 2.04735 * 10^{-7} * x - 1.4449$$
The best fit curve fits the plotted data extremely well, showing that the runtimes reflect a linear trend. The observed linear trend in the data matches with the theoretically derived runtime of $\Theta(n)$.
Based on the average run time data curve fit, we would expect the "Linear-time" algorithm to be able to process the following number of elements in the given amount of time:
| Time | Max Input Size |
|:----------:|:--------------:|
| 10 seconds | 55901100 |
| 30 seconds | 153588000 |
| 60 seconds | 300119000 |
# Testing
Several sets of test data were used to ensure the accuracy of each of the algorithms. These test data consisted of arrays of values with known maximum sub-array solutions. These test sets were run through each algorithm, and the output was compared to the known solution.
Along with the provided set of test cases, several additional test cases were generated in order to test the algorithm accuracy under very specific conditions. Some examples of these test cases include:
* The trivial case of a single array element
* Arrays with a single positive value as the first or last element (to test the handling of the boundaries)
* Arrays with a single positive value in the middle of the array
* Arrays where the running sum reaches 0 at some point (i.e. multiple maximum sum sub-arrays possible)
All algorithms correctly solved all of the test data sets.
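One lightweight way to reproduce this kind of check is to cross-compare the implementations on random inputs. The sketch below is illustrative only; the function names are hypothetical, and it assumes each implementation returns a `(max_sum, subarray)` pair:

```python
import random

def cross_check(implementations, trials=100, n=50):
    """Assert that all implementations agree on the maximum sum for random arrays."""
    for _ in range(trials):
        a = [random.randint(-100, 100) for _ in range(n)]
        sums = {f.__name__: f(a)[0] for f in implementations}
        assert len(set(sums.values())) == 1, (a, sums)
```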
# Algorithm Comparison Plots
Three plots were generated with various combinations of linear and log scales to show the performance of the various algorithms together.
Linear plot:
Linear Log plot:
Log Log plot:
| 9586778459bb132df15b04b737e0964af5c4667f | 20,438 | ipynb | Jupyter Notebook | report/final_report.ipynb | OSU-CS-325/Project_One_MSS | 73ecbf2eeb1c6b35e3d63f3054649b26cb7c7d0d | [
"MIT"
]
| null | null | null | report/final_report.ipynb | OSU-CS-325/Project_One_MSS | 73ecbf2eeb1c6b35e3d63f3054649b26cb7c7d0d | [
"MIT"
]
| null | null | null | report/final_report.ipynb | OSU-CS-325/Project_One_MSS | 73ecbf2eeb1c6b35e3d63f3054649b26cb7c7d0d | [
"MIT"
]
| null | null | null | 35.482639 | 584 | 0.533956 | true | 4,234 | Qwen/Qwen-72B | 1. YES
2. YES | 0.79053 | 0.928409 | 0.733935 | __label__eng_Latn | 0.984535 | 0.543509 |
# 01 - Introduction to seismic modelling
This notebook is the first in a series of tutorials highlighting various aspects of seismic inversion based on Devito operators. In this first example we aim to highlight the core ideas behind seismic modelling, where we create a numerical model that captures the processes involved in a seismic survey. This forward model will then form the basis for further tutorials on the implementation of inversion processes using Devito operators.
## Modelling workflow
The core process we are aiming to model is a seismic survey, which consists of two main components:
- **Source** - A source is positioned at a single or a few physical locations where artificial pressure is injected into the domain we want to model. In the case of a land survey, it is usually dynamite blowing up at a given location, or a vibroseis (a vibrating engine generating continuous sound waves). For a marine survey, the source is an air gun sending a bubble of compressed air into the water that will expand and generate a seismic wave.
- **Receiver** - A set of microphones or hydrophones are used to measure the resulting wave and create a set of measurements called a *Shot Record*. These measurements are recorded at multiple locations, and usually at the surface of the domain or at the bottom of the ocean in some marine cases.
In order to create a numerical model of a seismic survey, we need to solve the wave equation and implement source and receiver interpolation to inject the source and record the seismic wave at sparse point locations in the grid.
## The acoustic seismic wave equation
The acoustic wave equation for the square slowness $m$, defined as $m=\frac{1}{c^2}$, where $c$ is the speed of sound in the given physical media, and a source $q$ is given by:
\begin{cases}
&m \frac{d^2 u(x,t)}{dt^2} - \nabla^2 u(x,t) = q \ \text{in } \Omega \\
&u(.,t=0) = 0 \\
&\frac{d u(x,t)}{dt}|_{t=0} = 0
\end{cases}
with the zero initial conditions to guarantee unicity of the solution.
The boundary conditions are Dirichlet conditions:
\begin{equation}
u(x,t)|_\delta\Omega = 0
\end{equation}
where $\delta\Omega$ is the surface of the boundary of the model $\Omega$.
# Finite domains
The last piece of the puzzle is the computational limitation. In the field, the seismic wave propagates in every direction to an "infinite" distance. However, solving the wave equation in a mathematically/discretely infinite domain is not feasible. To compensate, Absorbing Boundary Conditions (ABC) or Perfectly Matched Layers (PML) are required to mimic an infinite domain. These two methods make it possible to approximate an infinite medium by damping and absorbing the waves at the edges of the domain to avoid reflections.
The simplest of these methods is the absorbing damping mask. The core idea is to extend the physical domain and to add a Sponge mask in this extension that will absorb the incident waves. The acoustic wave equation with this damping mask can be rewritten as:
\begin{cases}
&m \frac{d^2 u(x,t)}{dt^2} - \nabla^2 u(x,t) + \eta \frac{d u(x,t)}{dt}=q \ \text{in } \Omega \\
&u(.,0) = 0 \\
&\frac{d u(x,t)}{dt}|_{t=0} = 0
\end{cases}
where $\eta$ is the damping mask, equal to $0$ inside the physical domain and increasing inside the sponge layer. Several profiles can be chosen for $\eta$, from linear to exponential; a simple hand-rolled example is sketched below.
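For illustration only, such a damping profile can be built by hand as a simple 1D sketch. This is not Devito's internal implementation; the function name and the value of `eta_max` are arbitrary choices here.

```python
import numpy as np

def damping_profile(nx, nbl, eta_max=0.005, kind="linear"):
    """Sponge mask: zero inside the physical domain, growing towards the outer edges."""
    eta = np.zeros(nx + 2 * nbl)
    ramp = np.linspace(0.0, 1.0, nbl)
    if kind == "exponential":
        ramp = (np.exp(ramp) - 1.0) / (np.e - 1.0)
    eta[:nbl] = eta_max * ramp[::-1]  # left absorbing layer
    eta[-nbl:] = eta_max * ramp       # right absorbing layer
    return eta
```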
# Seismic modelling with devito
We describe here a step by step setup of seismic modelling with Devito in a simple 2D case. We will create a physical model of our domain and define a single source and an according set of receivers to model for the forward model. But first, we initialize some basic utilities.
```python
#NBVAL_IGNORE_OUTPUT
# Adding ignore due to (probably an np notebook magic) bug
import numpy as np
%matplotlib inline
```
## Define the physical problem
The first step is to define the physical model:
- What are the physical dimensions of interest
- What is the velocity profile of this physical domain
We will create a simple velocity model here by hand for demonstration purposes. This model essentially consists of two layers, each with a different velocity: $1.5km/s$ in the top layer and $2.5km/s$ in the bottom layer. We will use this simple model a lot in the following tutorials, so we will rely on a utility function to create it again later.
```python
#NBVAL_IGNORE_OUTPUT
from examples.seismic import Model, plot_velocity
# Define a physical size
shape = (101, 101) # Number of grid point (nx, nz)
spacing = (10., 10.) # Grid spacing in m. The domain size is now 1km by 1km
origin = (0., 0.) # What is the location of the top left corner. This is necessary to define
# the absolute location of the source and receivers
# Define a velocity profile. The velocity is in km/s
v = np.empty(shape, dtype=np.float32)
v[:, :51] = 1.5
v[:, 51:] = 2.5
# With the velocity and model size defined, we can create the seismic model that
# encapsulates this properties. We also define the size of the absorbing layer as 10 grid points
model = Model(vp=v, origin=origin, shape=shape, spacing=spacing,
space_order=2, nbl=10, bcs="damp")
plot_velocity(model)
```
# Acquisition geometry
To fully define our problem setup we also need to define the source that injects the wave we want to model, and the set of receiver locations at which to sample the wavefield. The source time signature will be modelled using a Ricker wavelet defined as
\begin{equation}
q(t) = (1-2\pi^2 f_0^2 (t - \frac{1}{f_0})^2 )e^{- \pi^2 f_0^2 (t - \frac{1}{f_0})^2}
\end{equation}
To fully define the source signature we first need to define the time duration for our model and the timestep size, which is dictated by the CFL condition and our grid spacing. Luckily, our `Model` utility provides us with the critical timestep size, so we can fully discretize our model time axis as an array:
```python
from examples.seismic import TimeAxis
t0 = 0. # Simulation starts a t=0
tn = 1000. # Simulation last 1 second (1000 ms)
dt = model.critical_dt # Time step from model grid spacing
time_range = TimeAxis(start=t0, stop=tn, step=dt)
```
The source is positioned at a $20m$ depth and at the middle of the $x$ axis ($x_{src}=500m$), with a peak wavelet frequency of $10Hz$.
```python
#NBVAL_IGNORE_OUTPUT
from examples.seismic import RickerSource
f0 = 0.010 # Source peak frequency is 10Hz (0.010 kHz)
src = RickerSource(name='src', grid=model.grid, f0=f0,
npoint=1, time_range=time_range)
# First, position source centrally in all dimensions, then set depth
src.coordinates.data[0, :] = np.array(model.domain_size) * .5
src.coordinates.data[0, -1] = 20. # Depth is 20m
# We can plot the time signature to see the wavelet
src.show()
```
Similarly to our source object, we can now define our receiver geometry as a symbol of type `Receiver`. It is worth noting here that both utility classes, `RickerSource` and `Receiver`, are thin wrappers around Devito's `SparseTimeFunction` type, which encapsulates sparse point data and allows us to inject and interpolate values into and out of the computational grid. As we have already seen, both types provide a `.coordinates` property to define the position within the domain of all points encapsulated by that symbol.
In this example we will position receivers at the same depth as the source, every $10m$ along the x axis. The `rec.data` property will be initialized, but left empty, as we will compute the receiver readings during the simulation.
```python
#NBVAL_IGNORE_OUTPUT
from examples.seismic import Receiver
# Create symbol for 101 receivers
rec = Receiver(name='rec', grid=model.grid, npoint=101, time_range=time_range)
# Prescribe even spacing for receivers along the x-axis
rec.coordinates.data[:, 0] = np.linspace(0, model.domain_size[0], num=101)
rec.coordinates.data[:, 1] = 20. # Depth is 20m
# We can now show the source and receivers within our domain:
# Red dot: Source location
# Green dots: Receiver locations (every 4th point)
plot_velocity(model, source=src.coordinates.data,
receiver=rec.coordinates.data[::4, :])
```
# Finite-difference discretization
Devito is a finite-difference DSL that solves the discretized wave-equation on a Cartesian grid. The finite-difference approximation is derived from Taylor expansions of the continuous field after removing the error term.
## Time discretization
We only consider the second order time discretization for now. From the Taylor expansion, the second order discrete approximation of the second order time derivative is:
\begin{equation}
\begin{aligned}
\frac{d^2 u(x,t)}{dt^2} = \frac{\mathbf{u}(\mathbf{x},\mathbf{t+\Delta t}) - 2 \mathbf{u}(\mathbf{x},\mathbf{t}) + \mathbf{u}(\mathbf{x},\mathbf{t-\Delta t})}{\mathbf{\Delta t}^2} + O(\mathbf{\Delta t}^2).
\end{aligned}
\end{equation}
where $\mathbf{u}$ is the discrete wavefield, $\mathbf{\Delta t}$ is the discrete
time-step (distance between two consecutive discrete time points) and $O(\mathbf{\Delta
t}^2)$ is the discretization error term. The discretized approximation of the
second order time derivative is then given by dropping the error term. This derivative is represented in Devito by `u.dt2` where u is a `TimeFunction` object.
## Spatial discretization
We define the discrete Laplacian as the sum of the second order spatial
derivatives in the three dimensions:
\begin{equation}
\begin{aligned}
\Delta \mathbf{u}(\mathbf{x},\mathbf{y},\mathbf{z},\mathbf{t})= \sum_{j=1}^{j=\frac{k}{2}} \Bigg[\alpha_j \Bigg(&
\mathbf{u}(\mathbf{x+jdx},\mathbf{y},\mathbf{z},\mathbf{t})+\mathbf{u}(\mathbf{x-jdx},\mathbf{y},\mathbf{z},\mathbf{t}) + \\
&\mathbf{u}(\mathbf{x},\mathbf{y+jdy},\mathbf{z},\mathbf{t})+\mathbf{u}(\mathbf{x},\mathbf{y-jdy},\mathbf{z}\mathbf{t}) + \\
&\mathbf{u}(\mathbf{x},\mathbf{y},\mathbf{z+jdz},\mathbf{t})+\mathbf{u}(\mathbf{x},\mathbf{y},\mathbf{z-jdz},\mathbf{t})\Bigg) \Bigg] + \\
&3\alpha_0 \mathbf{u}(\mathbf{x},\mathbf{y},\mathbf{z},\mathbf{t}).
\end{aligned}
\end{equation}
This derivative is represented in Devito by `u.laplace` where u is a `TimeFunction` object.
## Wave equation
With the space and time discretization defined, we can fully discretize the wave-equation with the combination of time and space discretizations and obtain the following second order in time and $k^{th}$ order in space discrete stencil to update one grid point at position $\mathbf{x}, \mathbf{y},\mathbf{z}$ at time $\mathbf{t}$, i.e.
\begin{equation}
\begin{aligned}
\mathbf{u}(\mathbf{x},\mathbf{y},\mathbf{z},\mathbf{t+\Delta t}) = &2\mathbf{u}(\mathbf{x},\mathbf{y},\mathbf{z},\mathbf{t}) - \mathbf{u}(\mathbf{x},\mathbf{y}, \mathbf{z},\mathbf{t-\Delta t}) +\\
& \frac{\mathbf{\Delta t}^2}{\mathbf{m(\mathbf{x},\mathbf{y},\mathbf{z})}} \Big(\Delta \mathbf{u}(\mathbf{x},\mathbf{y},\mathbf{z},\mathbf{t}) + \mathbf{q}(\mathbf{x},\mathbf{y},\mathbf{z},\mathbf{t}) \Big).
\end{aligned}
\end{equation}
```python
# In order to represent the wavefield u and the square slowness we need symbolic objects
# corresponding to time-space-varying field (u, TimeFunction) and
# space-varying field (m, Function)
from devito import TimeFunction
# Define the wavefield with the size of the model and the time dimension
u = TimeFunction(name="u", grid=model.grid, time_order=2, space_order=2)
# We can now write the PDE
pde = model.m * u.dt2 - u.laplace + model.damp * u.dt
# The PDE representation is as on paper
pde
```
$\displaystyle \operatorname{damp}{\left(x,y \right)} \frac{\partial}{\partial t} u{\left(t,x,y \right)} - \frac{\partial^{2}}{\partial x^{2}} u{\left(t,x,y \right)} - \frac{\partial^{2}}{\partial y^{2}} u{\left(t,x,y \right)} + \frac{\frac{\partial^{2}}{\partial t^{2}} u{\left(t,x,y \right)}}{\operatorname{vp}^{2}{\left(x,y \right)}}$
```python
# This discrete PDE can be solved in a time-marching way updating u(t+dt) from the previous time step
# Devito as a shortcut for u(t+dt) which is u.forward. We can then rewrite the PDE as
# a time marching updating equation known as a stencil using customized SymPy functions
from devito import Eq, solve
stencil = Eq(u.forward, solve(pde, u.forward))
stencil
```
$\displaystyle u{\left(t + dt,x,y \right)} = \frac{- \frac{- \frac{2.0 u{\left(t,x,y \right)}}{dt^{2}} + \frac{u{\left(t - dt,x,y \right)}}{dt^{2}}}{\operatorname{vp}^{2}{\left(x,y \right)}} + \frac{\partial^{2}}{\partial x^{2}} u{\left(t,x,y \right)} + \frac{\partial^{2}}{\partial y^{2}} u{\left(t,x,y \right)} + \frac{\operatorname{damp}{\left(x,y \right)} u{\left(t,x,y \right)}}{dt}}{\frac{\operatorname{damp}{\left(x,y \right)}}{dt} + \frac{1}{dt^{2} \operatorname{vp}^{2}{\left(x,y \right)}}}$
# Source injection and receiver interpolation
With a numerical scheme for the homogeneous wave equation in place, we still need to add the source that introduces the seismic waves, and to implement the measurement (receiver interpolation) operator. These operations are linked to the discrete scheme and need to be done at the proper time step. The semi-discretized in time wave equation with a source reads:
\begin{equation}
\begin{aligned}
\mathbf{u}(\mathbf{x},\mathbf{y},\mathbf{z},\mathbf{t+\Delta t}) = &2\mathbf{u}(\mathbf{x},\mathbf{y},\mathbf{z},\mathbf{t}) - \mathbf{u}(\mathbf{x},\mathbf{y}, \mathbf{z},\mathbf{t-\Delta t}) +\\
& \frac{\mathbf{\Delta t}^2}{\mathbf{m(\mathbf{x},\mathbf{y},\mathbf{z})}} \Big(\Delta \mathbf{u}(\mathbf{x},\mathbf{y},\mathbf{z},\mathbf{t}) + \mathbf{q}(\mathbf{x},\mathbf{y},\mathbf{z},\mathbf{t}) \Big).
\end{aligned}
\end{equation}
It shows that in order to update $\mathbf{u}$ at time $\mathbf{t+\Delta t}$ we have to inject the value of the source term $\mathbf{q}$ at time $\mathbf{t}$. In Devito, this corresponds to updating $u$ at index $t+1$ (with $t$ being the time index) with the source at time $t$.
On the receiver side, the problem is simpler, as it only requires recording the wavefield at the given time step $t$ at the receiver locations.
```python
# Finally we define the source injection and receiver read function to generate the corresponding code
src_term = src.inject(field=u.forward, expr=src * dt**2 / model.m)
# Create interpolation expression for receivers
rec_term = rec.interpolate(expr=u.forward)
```
# Devito operator and solve
After constructing all the necessary expressions for updating the wavefield, injecting the source term and interpolating onto the receiver points, we can now create the Devito operator that will generate the C code at runtime. When creating the operator, Devito's two optimization engines will log which performance optimizations have been performed:
* **DSE:** The Devito Symbolics Engine will attempt to reduce the number of operations required by the kernel.
* **DLE:** The Devito Loop Engine will perform various loop-level optimizations to improve runtime performance.
**Note**: The argument `subs=model.spacing_map` causes the operator to substitute values for our current grid spacing into the expressions before code generation. This reduces the number of floating point operations executed by the kernel by pre-evaluating certain coefficients.
```python
#NBVAL_IGNORE_OUTPUT
from devito import Operator
op = Operator([stencil] + src_term + rec_term, subs=model.spacing_map)
```
Now we can execute the created operator for a number of timesteps. We specify the number of timesteps to compute with the keyword `time` and the timestep size with `dt`.
```python
#NBVAL_IGNORE_OUTPUT
op(time=time_range.num-1, dt=model.critical_dt)
```
Operator `Kernel` ran in 0.01 s
PerformanceSummary([(PerfKey(name='section0', rank=None),
PerfEntry(time=0.004936999999999981, gflopss=0.0, gpointss=0.0, oi=0.0, ops=0, itershapes=[])),
(PerfKey(name='section1', rank=None),
PerfEntry(time=2.600000000000001e-05, gflopss=0.0, gpointss=0.0, oi=0.0, ops=0, itershapes=[])),
(PerfKey(name='section2', rank=None),
PerfEntry(time=0.0007560000000000057, gflopss=0.0, gpointss=0.0, oi=0.0, ops=0, itershapes=[]))])
After running our operator kernel, the data associated with the receiver symbol `rec.data` has now been populated due to the interpolation expression we inserted into the operator. This allows us to visualize the shot record:
```python
#NBVAL_IGNORE_OUTPUT
from examples.seismic import plot_shotrecord
plot_shotrecord(rec.data, model, t0, tn)
```
```python
assert np.isclose(np.linalg.norm(rec.data), 370, rtol=1)
```
| 9a2c590881b3c48f798c51353a3279126fe51ae3 | 288,901 | ipynb | Jupyter Notebook | examples/seismic/tutorials/01_modelling.ipynb | ofmla/curso_sapct | 9e092d556d5b951474f278031a5f512f633050cc | [
"MIT"
]
| 1 | 2021-05-31T04:56:33.000Z | 2021-05-31T04:56:33.000Z | examples/seismic/tutorials/01_modelling.ipynb | ofmla/curso_sapct | 9e092d556d5b951474f278031a5f512f633050cc | [
"MIT"
]
| null | null | null | examples/seismic/tutorials/01_modelling.ipynb | ofmla/curso_sapct | 9e092d556d5b951474f278031a5f512f633050cc | [
"MIT"
]
| null | null | null | 68.622565 | 34,824 | 0.675519 | true | 4,589 | Qwen/Qwen-72B | 1. YES
2. YES | 0.896251 | 0.894789 | 0.801956 | __label__eng_Latn | 0.987423 | 0.701546 |
# Lecture 5
## Differentiation III:
### Exponentials and Partial Differentiation
```python
import numpy as np
import sympy as sp
sp.init_printing()
##################################################
##### Matplotlib boilerplate for consistency #####
##################################################
from ipywidgets import interact
from ipywidgets import FloatSlider
from matplotlib import pyplot as plt
%matplotlib inline
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('svg')
global_fig_width = 10
global_fig_height = global_fig_width / 1.61803399
font_size = 12
plt.rcParams['axes.axisbelow'] = True
plt.rcParams['axes.edgecolor'] = '0.8'
plt.rcParams['axes.grid'] = True
plt.rcParams['axes.labelpad'] = 8
plt.rcParams['axes.linewidth'] = 2
plt.rcParams['axes.titlepad'] = 16.0
plt.rcParams['axes.titlesize'] = font_size * 1.4
plt.rcParams['figure.figsize'] = (global_fig_width, global_fig_height)
plt.rcParams['font.sans-serif'] = ['Computer Modern Sans Serif', 'DejaVu Sans', 'sans-serif']
plt.rcParams['font.size'] = font_size
plt.rcParams['grid.color'] = '0.8'
plt.rcParams['grid.linestyle'] = 'dashed'
plt.rcParams['grid.linewidth'] = 2
plt.rcParams['lines.dash_capstyle'] = 'round'
plt.rcParams['lines.dashed_pattern'] = [1, 4]
plt.rcParams['xtick.labelsize'] = font_size
plt.rcParams['xtick.major.pad'] = 4
plt.rcParams['xtick.major.size'] = 0
plt.rcParams['ytick.labelsize'] = font_size
plt.rcParams['ytick.major.pad'] = 4
plt.rcParams['ytick.major.size'] = 0
##################################################
```
## Wake Up Exercise
Find $\displaystyle y' = \frac{{\rm d}y}{{\rm d}x}$ when $y$ is given by:
1. $y=5x^2$
2. $y=\root 4\of x$
3. $y=x+{1\over\sqrt{x^3}}$
4. $y=\sqrt{6x^4+2}$
5. $y={x\over 3x+2}$
6. $y=x^2\sin x$
## Examples of applying the chain rule to the exponential function.
1. $y=e^{-ax}$. Let $u=-ax\Rightarrow\frac{{\rm d}u}{{\rm d}x}=-a$. Thus $y=e^u$ and
$$\frac{{\rm d}y}{{\rm d}u}=e^u~~\Rightarrow~~\frac{{\rm d}y}{{\rm d}x}=\frac{{\rm d}y}{{\rm d}u}\times\frac{{\rm d}u}{{\rm d}x}=e^u\times
(-a)=-ae^{-ax}.$$
2. $\displaystyle y = e^{x^2}$. Then, letting $u = x^2$:
$$\frac{{\rm d}}{{\rm d}x}e^{x^2}=\frac{{\rm d}y}{{\rm d}x}=\frac{{\rm d}y}{{\rm d}u}\times\frac{{\rm d}u}{{\rm d}x}=e^u\cdot 2x =
e^{x^2}\cdot 2x.$$
An important generalization:
$\frac{{\rm d}}{{\rm d}x}e^{f(x)}=e^{f(x)}f'(x)$ for any function $f(x)$.
## Example with the natural logarithm.
1. $y=\ln(a-x)^2=2\ln(a-x)=2\ln u$. Let $u=(a-x)$:
$$\Rightarrow {{\rm d}u\over {\rm d}x}=-1~~{\rm and~~~~~}{{\rm d}y\over {\rm d}u}={2\over u}~~~
{\rm Thus~~~~}{{\rm d}y\over {\rm d}x}={2\over u}\times (-1)={-2\over a-x}$$
This also generalises:
$$\frac{{\rm d}}{{\rm d}x}\ln(f(x)) = {f'(x)\over f(x)}$$
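Both of these generalizations can be confirmed symbolically for an arbitrary $f(x)$, in the same spirit as the Sympy checks below (a quick sketch):
```python
x = sp.Symbol('x')
f = sp.Function('f')
# first entry: derivative of exp(f(x)); second entry: derivative of ln(f(x))
sp.diff(sp.exp(f(x)), x), sp.diff(sp.log(f(x)), x)
```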
## The Derivative of $a^x$:
By the properties of logarithms and indices we have
$\displaystyle a^x = \left({e^{\ln a}}\right)^x=e^{\left({x\cdot\ln a}\right)}$.
Thus, as we saw above we have:
$$\frac{{\rm d}}{{\rm d}x}a^x
= \frac{{\rm d}}{{\rm d}x}e^{\left({x\cdot\ln a}\right)}
= e^{\left({x\cdot\ln a}\right)}\frac{{\rm d}}{{\rm d}x}{\left({x\cdot\ln a}\right)}
=a^x\cdot\ln a$$
Similarly, in general:
$$\frac{{\rm d}}{{\rm d}x}a^{f(x)} = a^{f(x)}\cdot \ln a\cdot f'(x)$$
## Sympy Example
Let's try to use Sympy to prove the rule:
$$\frac{{\rm d}}{{\rm d}x}a^{f(x)} = a^{f(x)}\cdot \ln a\cdot f'(x)$$
```python
x, a = sp.symbols('x a') # declare the variables x and a
f = sp.Function('f') # declare a function dependent on another variable
sp.diff(a**f(x),x) # write the expression we wish to evaluate
```
## The Derivative of $\log_a x\,\,$:
Recall the conversion formula $\displaystyle \log_a x = {{\ln x}\over {\ln a}}$
and note that $\ln a$ is a constant. Thus:
$$\frac{{\rm d}}{{\rm d}x}\log_a x
= \frac{{\rm d}}{{\rm d}x}\left({1\over{\ln a}}\cdot\ln x\right)
= \left({1\over{\ln a}}\right)\cdot\frac{{\rm d}}{{\rm d}x}\ln x
= \left({1\over{\ln a}}\right)\cdot{1\over {x}}
= {1\over{x\cdot\ln a}}$$
In general:
$$\displaystyle \frac{{\rm d}}{{\rm d}x}\log_a f(x) = {{f'(x)} \over {f(x){(\ln a)}}}$$
## Sympy Example
Let's try to use Sympy again to prove the rule:
$$\frac{{\rm d}}{{\rm d}x}\log_a f(x) = {{f'(x)} \over {f(x){(\ln a)}}}$$
```python
sp.diff(sp.log(f(x),a),x)
```
## Further examples:
1. Product Rule: Let $\displaystyle y = x^2\,e^x$. Then:
$${{dy\over dx}}={d\over dx}x^2e^x={d\over dx}x^2\cdot e^x+x^2\cdot{d\over dx}e^x = (2x + x^2)e^x$$
2. Quotient Rule: Let $\displaystyle y = {{e^x}\over x}$. Then:
$${{dy\over dx}}={{{{d\over dx}e^x}\cdot x - e^x\cdot {d\over dx}x}\over {x^2}}={{e^x\cdot x - e^x\cdot 1\over {x^2}}}={{x - 1}\over x^2}e^x$$
3. Chain Rule: $\displaystyle y = e^{x^2}$. Then, letting $f(x) = x^2$:
$$\frac{{\rm d}}{{\rm d}x}e^{x^2} = e^{f(x)}f'(x) = e^{x^2}\cdot 2x$$
4. $\displaystyle y=\ln (x^2 + 1)$. Then, letting $f(x) = x^2+1$:
$$\frac{{\rm d}}{{\rm d}x}\ln(x^2+1) = {f'(x)\over f(x)} = {2x\over {x^2+1}}$$
5. $\displaystyle {{\rm d}\over {\rm d}x}2^{x^3}=2^{x^3}\cdot\ln 2\cdot 3x^2$
6. $\displaystyle {{\rm d}\over {\rm d}x}10^{x^2+1}=10^{x^2+1}\cdot\ln 10\cdot 2x$
7. $\displaystyle \frac{{\rm d}}{{\rm d}x}\log_{10}(7x+5)={7\over {(7x+5)\cdot \ln10}}$
8. $\displaystyle \frac{{\rm d}}{{\rm d}x}\log_2(3^x+x^4)={{3^x\cdot\ln3 + 4x^3}\over{\ln 2\cdot(3^x+x^4)}}$
## Functions of several variables: Partial Differentiation
**Definition:** Given a function $z=f(x,y)$ of two variables $x$ and $y$, the **partial derivative of $z$ with respect to $x$** is the function obtained by differentiating $f(x,y)$ with respect to $x$, holding $y$ constant.
We denote this using $\partial$ (the "curly" delta, sometimes pronounced "del") as shown below:
$$\frac{\partial z}{\partial x}=\frac{\partial}{\partial x}f(x,y) = f_x(x,y)$$
## Example 1
$f(x,y)=z=x^2-2y^2$
$$f_x={\partial z\over \partial x}=2x\qquad\mbox{and}\qquad f_y={\partial z\over \partial y}=-4y$$
## Example 2
Let $z=3x^2y+5xy^2$. Then the partial derivative of $z$ with respect to $x$, holding $y$ fixed, is:
\begin{align*}
\frac{\partial z}{\partial x}&=\frac{\partial}{\partial x}\,\left(3x^2y+5xy^2\right) \\
&=3y\cdot 2x + 5y^2\cdot 1 \\
&=6xy+5y^2
\end{align*}
while the partial of $z$ with respect to $y$ holding $x$ fixed is:
\begin{align*}
\frac{\partial z}{\partial y}&=\frac{\partial}{\partial y}\,\left(3x^2y+5xy^2\right)\,
=3x^2\cdot 1 + 5x\cdot 2y = 3x^2+10xy
\end{align*}
## Sympy example
In the previous slide we had:
$$\frac{\partial}{\partial x}\,\left(3x^2y+5xy^2\right)\, = 6xy+5y^2$$
Lets redo this in Sympy:
```python
x, y = sp.symbols('x y')
sp.diff(3*x**2*y + 5*x*y**2,x)
```
## Higher-Order Partial Derivatives:
Given $z = f(x,y)$ there are now four distinct possibilities for the
second-order partial derivatives.
(a) With respect to $x$ twice:
$$\frac{\partial}{\partial x}\left(\frac{\partial z}{\partial x}\right)
=\frac{\partial^2z}{\partial x^2}
=z_{xx}$$
(b) With respect to $y$ twice:
$$\frac{\partial}{\partial y}\left(\frac{\partial z}{\partial y}\right)
=\frac{\partial^2z}{\partial y^2}
=z_{yy}$$
(c) First with respect to $x$, then with respect to $y$:
$$\frac{\partial}{\partial y}\left(\frac{\partial z}{\partial x}\right)
=\frac{\partial^2z}{\partial y\partial x}
=z_{xy}$$
(d) First with respect to $y$, then with respect to $x$:
$$\frac{\partial}{\partial x}\left(\frac{\partial z}{\partial y}\right)
=\frac{\partial^2z}{\partial x\partial y}
=z_{yx}$$
## Example
(Laplace's Equation for Equilibrium Temperature Distribution on a Copper Plate.)
Let $T(x,y)$ give the temperature at the point $(x,y)$.
According to a result of the French mathematician Pierre Laplace (1749 - 1827), at every point $(x,y)$ the second-order partials of $T$ must satisfy the equation
$$T_{xx} + T_{yy} = 0$$
The function $T(x,y)=y^2-x^2$ satisfies Laplace's equation:
First with respect to $x$:
$$T_x(x,y)=0-2x=-2x\qquad\mbox{so}\qquad T_{xx}(x,y)=-2$$
Then with respect to $y$:
$$T_y(x,y)=2y-0=2y\qquad\mbox{so}\qquad T_{yy}(x,y)=2$$
Finally:
$$T_{xx}(x,y)+T_{yy}(x,y) = 2 + (-2) = 0$$
which proves the result.
The function $z=x^2y - xy^2$ does *not* satisfy Laplace's equation (and so
cannot be a model for thermal equilibrium).
First note that
$$z_x = 2xy - y^2$$
$$z_{xx}=2y$$
and
$$z_y = x^2 - 2xy$$
$$z_{yy} =-2x$$
Therefore:
$$z_{xx}+z_{yy}=2y-2x\ne 0$$
We can verify this in Sympy like so:
```python
T1 = y**2 - x**2
sp.diff(T1, x, x) + sp.diff(T1, y, y)
```
and for the second function:
```python
T2 = x**2*y - x*y**2
sp.diff(T2, x, x) + sp.diff(T2, y, y)
```
## A Note on the Mixed Partials $f_{xy}$ and $f_{yx}$:
If all of the partials of $f(x,y)$ exist, then $f_{xy}=f_{yx}$ for all $(x,y)$.
### Example:
Let $z = x^2y^3+3x^2-2y^4$. Then $z_x=2xy^3+6x$ and $z_y = 3x^2y^2-8y^3$.
Taking the partial of $z_x$ with respect to $y$ we get
$$z_{xy}=\frac{\partial}{\partial y}\left(2xy^3+6x\right)=6xy^2$$
Taking the partial of $z_y$ with respect to $x$ we get the same thing:
$$z_{yx}=\frac{\partial}{\partial x}\left(3x^2y^2-8y^3\right)=6xy^2$$
So the operators ${\partial \over \partial x}$ and ${\partial \over \partial y}$ are **commutative**:
$${\rm~i.e.~~~~}~{\partial\over \partial x}\biggr({\partial z\over \partial y}\biggl)~~~~
={\partial\over \partial y}\biggr({\partial z\over \partial
x}\biggl)$$
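We can ask sympy to confirm the commutativity on the example above (a quick check):
```python
x, y = sp.symbols('x y')
z = x**2*y**3 + 3*x**2 - 2*y**4
# both mixed partials come out as 6*x*y**2
sp.diff(z, x, y), sp.diff(z, y, x), sp.diff(z, x, y) == sp.diff(z, y, x)
```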
| 514db20528827296308650b8e9404962734cb773 | 18,113 | ipynb | Jupyter Notebook | lectures/lecture-05-differentiation3.ipynb | SABS-R3/2020-essential-maths | 5a53d60f1e8fdc04b7bb097ec15800a89f67a047 | [
"Apache-2.0"
]
| 1 | 2021-11-27T12:07:13.000Z | 2021-11-27T12:07:13.000Z | lectures/lecture-05-differentiation3.ipynb | SABS-R3/2021-essential-maths | 8a81449928e602b51a4a4172afbcd70a02e468b8 | [
"Apache-2.0"
]
| null | null | null | lectures/lecture-05-differentiation3.ipynb | SABS-R3/2021-essential-maths | 8a81449928e602b51a4a4172afbcd70a02e468b8 | [
"Apache-2.0"
]
| 1 | 2020-10-30T17:34:52.000Z | 2020-10-30T17:34:52.000Z | 25.262204 | 233 | 0.471705 | true | 3,648 | Qwen/Qwen-72B | 1. YES
2. YES | 0.7773 | 0.76908 | 0.597806 | __label__eng_Latn | 0.696672 | 0.227234 |
<a href="https://colab.research.google.com/github/arashash/deep_exercises/blob/main/Ch2_Linear-Algebra/Ch2_Exam1.ipynb" target="_parent"></a>
# Chapter 2 - Linear Algebra
## 2.1 Scalars, Vectors, Matrices and Tensors
### Q1 [10 Points, M]
Denote the set of all n-dimensional binary vectors with Cartesian product notation
### Q2 [10 Points, S]
Given the vector
$
\boldsymbol{x}=\left[\begin{array}{c}
x_{1} \\
x_{2} \\
x_{3} \\
x_{4} \\
\end{array}\right]
$,
and the set
$
S = \{2, 4\}
$,
obtain the vectors $\boldsymbol{x}_{S}$ and $\boldsymbol{x}_{-S}$
### Q3 [20 Points, S]
Evaluate the following expressions with broadcasting rules,
$$
\left[\begin{array}{lll}
0 & 1 & 2
\end{array}\right]+[5]=
$$
$$
\left[\begin{array}{lll}
1 & 1 & 1 \\
1 & 1 & 1 \\
1 & 1 & 1
\end{array}\right]+\left[\begin{array}{lll}
0 & 1 & 1
\end{array}\right]=
$$
$$
\left[\begin{array}{l}
0 \\
1 \\
2
\end{array}\right]+\left[\begin{array}{lll}
0 & 1 & 2
\end{array}\right]=
$$
## 2.2-3 Multiplying Matrices and Vectors and Identity and Inverse Matrices
### Q4 [20 Points, H]
Let $A$ be a $2 \times 2$ matrix, if $A B=B A$ for every $B$ of the size $2 \times 2$, Prove that:
$$
A=\left[\begin{array}{ll}
a & 0 \\
0 & a
\end{array}\right], \
a \in \mathbb{R}
$$
## 2.4 Linear Dependence and Span
### Q5 [10 Points, H]
Prove that if a linear system of equations has two solutions, then it has infinitely many solutions.
### Q6 [5 Points, M]
Given $A x=0$, where $A \in \mathbb{R}^{m \times n}$ is any matrix, and $x \in \mathbb{R}^{n}$ is a vector of unknown variables to be solved, what is the condition such that there is infinitely many solutions?
## 2.5 Norms
### Q7 [15 Points, M]
Prove that **Max Norm** follows these conditions,
$$
\begin{align}
f(\boldsymbol{x})=0 \Rightarrow \boldsymbol{x}=\mathbf{0} \\
f(\boldsymbol{x}+\boldsymbol{y}) \leq f(\boldsymbol{x})+f(\boldsymbol{y}) \\
\forall \alpha \in \mathbb{R}, f(\alpha \boldsymbol{x})=|\alpha| f(\boldsymbol{x})
\end{align}
$$
## 2.6 Special Kinds of Matrices and Vectors
### Q8 [10 Points, M]
Solve the following system of equations,
$$
\frac{1}{2}\left[\begin{array}{cccc}
1 & 1 & 1 & 1 \\
1 & 1 & -1 & -1 \\
1 & -1 & 1 & -1 \\
1 & -1 & -1 & 1
\end{array}\right] \left[\begin{array}{c}
x_{1} \\
x_{2} \\
x_{3} \\
x_{4} \\
\end{array}\right] = \left[\begin{array}{c}
1 \\
2 \\
3 \\
4 \\
\end{array}\right]
$$
| 4a3973de7604531c8a61042c34226df704df2d5c | 6,111 | ipynb | Jupyter Notebook | Ch2_Linear-Algebra/Ch2_Exam1.ipynb | arashash/deep_exercises | 2c40802ee367ba9bf1f6fa5dad96cfa1a74e092b | [
"MIT"
]
| 1 | 2020-12-09T10:27:37.000Z | 2020-12-09T10:27:37.000Z | Ch2_Linear-Algebra/Ch2_Exam1.ipynb | arashash/deep_exercises | 2c40802ee367ba9bf1f6fa5dad96cfa1a74e092b | [
"MIT"
]
| null | null | null | Ch2_Linear-Algebra/Ch2_Exam1.ipynb | arashash/deep_exercises | 2c40802ee367ba9bf1f6fa5dad96cfa1a74e092b | [
"MIT"
]
| null | null | null | 25.676471 | 248 | 0.413189 | true | 869 | Qwen/Qwen-72B | 1. YES
2. YES | 0.949669 | 0.907312 | 0.861647 | __label__eng_Latn | 0.830271 | 0.840227 |
# Orthogonal polynomials
```python
import numpy as np
import numpy.linalg as la
import matplotlib.pyplot as pt
```
## Mini-Introduction to `sympy`
```python
import sympy as sym
# Enable "pretty-printing" in IPython
sym.init_printing()
```
Make a new `Symbol` and work with it:
```python
x = sym.Symbol("x")
myexpr = (x**2-3)**2
myexpr
```
```python
myexpr = (x**2-3)**2
myexpr
myexpr.expand()
```
```python
sym.integrate(myexpr, x)
```
```python
sym.integrate(myexpr, (x, -1, 1))
```
## Orthogonal polynomials
Now write a function `inner_product(f, g)`:
```python
def inner_product(f, g):
return sym.integrate(f*g, (x, -1, 1))
```
Show that it works:
```python
inner_product(1, 1)
```
```python
inner_product(1, x)
```
Next, define a `basis` consisting of a few monomials:
```python
basis = [1, x, x**2, x**3]
#basis = [1, x, x**2, x**3, x**4, x**5]
```
And run Gram-Schmidt on it:
```python
orth_basis = []
for q in basis:
for prev_q in orth_basis:
q = q - inner_product(prev_q, q)*prev_q / inner_product(prev_q,prev_q)
orth_basis.append(q)
legendre_basis = [orth_basis[0],]
#to compute Legendre polynomials need to normalize so that q(1)=1 rather than ||q||=1
for q in orth_basis[1:]:
q = q / q.subs(x,1)
legendre_basis.append(q)
```
```python
legendre_basis
```
These are called the *Legendre polynomials*.
--------------------
What do they look like?
```python
mesh = np.linspace(-1, 1, 100)
pt.figure(figsize=(8,8))
for f in legendre_basis:
f = sym.lambdify(x, f)
pt.plot(mesh, [f(xi) for xi in mesh])
```
-----
These functions are important enough to be included in `scipy.special` as `eval_legendre`:
```python
import scipy.special as sps
for i in range(10):
pt.plot(mesh, sps.eval_legendre(i, mesh))
```
What can we find out about the conditioning of the generalized Vandermonde matrix for Legendre polynomials?
```python
#keep
n = 20
xs = np.linspace(-1, 1, n)
V = np.array([
sps.eval_legendre(i, xs)
for i in range(n)
]).T
la.cond(V)
```
The Chebyshev basis can similarly be defined by Gram-Schmidt, but now with respect to a different inner-product weight function,
$$w(x) = 1/\sqrt{1-x^2}.$$
```python
w = 1 / sym.sqrt(1-x**2)
def cheb_inner_product(f, g):
return sym.integrate(w*f*g, (x, -1, 1))
orth_basis = []
for q in basis:
for prev_q in orth_basis:
q = q - cheb_inner_product(prev_q, q)*prev_q / cheb_inner_product(prev_q,prev_q)
orth_basis.append(q)
cheb_basis = [1,]
#to compute Chebyshev polynomials we normalize so that q(1)=1 rather than ||q||=1
for q in orth_basis[1:]:
q = q / q.subs(x,1)
cheb_basis.append(q)
cheb_basis
```
```python
for i in range(10):
pt.plot(mesh, np.cos(i*np.arccos(mesh)))
```
Chebyshev polynomials achieve similarly good, but imperfect, conditioning on a uniform grid (but perfect conditioning on a grid of Chebyshev nodes).
```python
#keep
n = 20
xs = np.linspace(-1, 1, n)
V = np.array([
np.cos(i*np.arccos(xs))
for i in range(n)
]).T
la.cond(V)
```
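To back up the parenthetical claim, we can repeat the experiment on Chebyshev nodes $x_j=\cos\bigl(\pi(j+\tfrac12)/n\bigr)$. The columns of the generalized Vandermonde matrix are then orthogonal, so the condition number stays near $\sqrt 2$ regardless of $n$ (a quick sketch reusing the imports above):
```python
n = 20
j = np.arange(n)
xs_cheb = np.cos((j + 0.5)*np.pi/n)   # Chebyshev nodes
V_cheb = np.array([
    np.cos(i*np.arccos(xs_cheb))
    for i in range(n)
]).T
la.cond(V_cheb)
```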
```python
```
| 14d10b36ecdbd06408cc2c8cfc59f098e5c9b2be | 283,845 | ipynb | Jupyter Notebook | interpolation/Orthogonal Polynomials.ipynb | JiaheXu/MATH | 9cb2b412ba019794702cacf213471742745d17a6 | [
"MIT"
]
| null | null | null | interpolation/Orthogonal Polynomials.ipynb | JiaheXu/MATH | 9cb2b412ba019794702cacf213471742745d17a6 | [
"MIT"
]
| null | null | null | interpolation/Orthogonal Polynomials.ipynb | JiaheXu/MATH | 9cb2b412ba019794702cacf213471742745d17a6 | [
"MIT"
]
| null | null | null | 512.355596 | 104,482 | 0.937071 | true | 957 | Qwen/Qwen-72B | 1. YES
2. YES | 0.96378 | 0.91118 | 0.878177 | __label__eng_Latn | 0.804833 | 0.878632 |
In the paper, it is stated that the premium of a call option implies a certain fair price for the corresponding put option (same asset, strike price and expiration date). The Put-Call Parity is used to validate option pricing models, as any pricing model that produces option prices violating the parity should be considered flawed.
Note, since American options can be exercised before the expiration date, the Put-Call Parity only applies to European options.
## The Formula
Let $C(t)$ and $P(t)$ be the call and put values at time $t$ for a European option with maturity $T$ and strike $K$ on a non-dividend paying asset with spot price $S(t)$. The Parity states:
$$P(t) + S(t) - C(t) = Ke^{-r(T - t)}$$
Rearranging the above to isolate $P(t)$ and $C(t)$ on the left-hand side for non-dividend paying assets:
$$C(t) = S(t) + P(t) - Ke^{-r(T-t)}$$ $$P(t) = C(t) - S(t) + Ke^{-r(T-t)}$$
Rearranging for continuous dividend paying assets:
$$C(t) = S(t)e^{-q(T-t)} + P(t) - Ke^{-r(T-t)}$$ $$P(t) = C(t) - S(t)e^{-q(T-t)} + Ke^{-r(T-t)}$$
With this, we can find a result for a European put option based on the corresponding call option. From the Black-Scholes formula, we can derive the call option:
$$C(S,t) = S \cdot N(d_1) - Ke^{-r(T - t)} N(d_2)$$
From the Put-Call parity statement, we can arrive at a formula for pricing European put options, knowing that $1 - N(a) = N(-a)$ for any $a \in \mathbb{R}$
$$P(S,t) = Ke^{-r(T-t)} - S + C(S,t)$$ $$= Ke^{-r(T-t)} - S + (S \cdot N(d_1) - Ke^{-r(T-t)} N(d_2))$$ $$= Ke^{-r(T-t)}(1 - N(d_2)) - S(1 - N(d_1))$$
$$ P(S,t) = Ke^{-r(T-t)} N(-d_2) - S \cdot N(-d_1)$$
## Python Implementation
```python
import numpy as np
import scipy as si
import scipy.stats  # ensures si.stats.norm is available below
import sympy as sy
import sympy.stats as systats
from sympy.stats import Normal, cdf  # used directly in the symbolic pricing functions below
```
```python
def put_option_price(S, K, T, r, sigma):
#S: spot price
#K: strike price
#T: time to maturity
#r: interest rate
#sigma: volatility of underlying asset
S = float(S)
d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
d2 = (np.log(S / K) + (r - 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
put_price = K * np.exp(-r * T) * si.stats.norm.cdf(-d2, 0.0, 1.0) - S * si.stats.norm.cdf(-d1, 0.0, 1.0)
return put_price
```
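As a sanity check on both the parity relation and the implementation above, we can price the corresponding call with the standard Black-Scholes formula and confirm that $C - P = S - Ke^{-rT}$. The `call_option_price` helper below is not part of the original derivation; it is only a hypothetical counterpart used for the check, and the sample inputs are arbitrary:
```python
def call_option_price(S, K, T, r, sigma):
    # standard Black-Scholes European call, used here only to test the parity
    S = float(S)
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    d2 = (np.log(S / K) + (r - 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    return S * si.stats.norm.cdf(d1, 0.0, 1.0) - K * np.exp(-r * T) * si.stats.norm.cdf(d2, 0.0, 1.0)

S, K, T, r, sigma = 100.0, 95.0, 1.0, 0.05, 0.25
lhs = call_option_price(S, K, T, r, sigma) - put_option_price(S, K, T, r, sigma)
rhs = S - K * np.exp(-r * T)
lhs, rhs  # both sides of C - P = S - K*exp(-r*T) should agree to machine precision
```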
## Python Implementation with Sympy
```python
def put_option_price_sym(S, K, T, r, sigma):
#S: spot price
#K: strike price
#T: time to maturity
#r: interest rate
#sigma: volatility of underlying asset
N = Normal('x', 0.0, 1.0)
S = float(S)
d1 = (sy.ln(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sy.sqrt(T))
d2 = (sy.ln(S / K) + (r - 0.5 * sigma ** 2) * T) / (sigma * sy.sqrt(T))
put_price = K * sy.exp(-r * T) * cdf(N)(-d2) - S * cdf(N)(-d1)
return put_price
```
For dividend paying assets:
$$C(S,t) = Se^{-q(T - t)} N(d_1) - Ke^{-r(T - t)} N(d_2)$$
Using $1 - N(a) = N(-a)$ for any $a \in \mathbb{R}$:
$$P(S,t) = Ke^{-r(T-t)} - Se^{-q(T-t)} + C(S,t)$$ $$= Ke^{-r(T-t)} - Se^{-q(T-t)} + (Se^{-q(T-t)} N(d_1) - Ke^{-r(T-t)} N(d_2))$$ $$= Ke^{-r(T-t)}(1 - N(d_2)) - Se^{-q(T-t)} (1 - N(d_1))$$
$$ P(S,t) = Ke^{-r(T-t)} N(-d_2) - Se^{-q(T-t)}N(-d_1)$$
## Python Implementation of Put Option Pricing for Dividend Paying Assets
```python
def put_option_price_div(S, K, T, r, sigma, q):
#S: spot price
#K: strike price
#T: time to maturity
#r: interest rate
#sigma: volatility of underlying asset
#q: continuous dividend rate
S = float(S)
d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
d2 = (np.log(S / K) + (r - 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
put_price_dividend = K * np.exp(-r * T) * si.stats.norm.cdf(-d2, 0.0, 1.0) - S * np.exp(-q * T) * si.stats.norm.cdf(-d1, 0.0, 1.0)
return put_price_dividend
```
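As a quick consistency check, setting $q=0$ in the dividend-adjusted formula should reproduce the plain European put price from above (sample inputs are arbitrary):
```python
print(put_option_price(100, 95, 1.0, 0.05, 0.25))
print(put_option_price_div(100, 95, 1.0, 0.05, 0.25, 0.0))  # should match the line above
```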
## Sympy Implementation of Put Option Pricing for Dividend Paying Assets
```python
def put_option_price_div_sym(S, K, T, r, sigma, q):
#S: spot price
#K: strike price
#T: time to maturity
#r: interest rate
#sigma: volatility of underlying asset
#q: continuous dividend rate
N = Normal('x', 0.0, 1.0)
S = float(S)
d1 = (sy.ln(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sy.sqrt(T))
d2 = (sy.ln(S / K) + (r - 0.5 * sigma ** 2) * T) / (sigma * sy.sqrt(T))
put_price_dividend = K * sy.exp(-r * T) * cdf(N)(-d2) - S * sy.exp(-q * T) * cdf(N)(-d1)
return put_price_dividend
```
## References
[Sherbin, A. (2015). How to price and trade options: identify, analyze, and execute the best trade probabilities.
Hoboken, NJ: John Wiley & Sons, Inc.](https://amzn.to/37ajBnM)
[Ursone, P. (2015). How to calculate options prices and their Greeks: exploring the Black Scholes model from Delta
to Vega. Chichester: Wiley.](https://amzn.to/2UzXDrD)
| cecc3620cf03a6ee9981340238b520a688516471 | 8,369 | ipynb | Jupyter Notebook | content/posts/Put-Call Parity of Vanilla European Options.ipynb | aschleg/aaronschlegel.me | 2f2e143218445da0b6298671c67f9c4afa055d59 | [
"MIT"
]
| 2 | 2018-02-19T00:18:17.000Z | 2020-01-17T15:11:31.000Z | content/posts/Put-Call Parity of Vanilla European Options.ipynb | aschleg/aaronschlegel.me | 2f2e143218445da0b6298671c67f9c4afa055d59 | [
"MIT"
]
| 1 | 2018-01-26T00:11:30.000Z | 2018-01-26T00:11:30.000Z | content/posts/Put-Call Parity of Vanilla European Options.ipynb | aschleg/aaronschlegel.me | 2f2e143218445da0b6298671c67f9c4afa055d59 | [
"MIT"
]
| null | null | null | 27.349673 | 336 | 0.483092 | true | 1,736 | Qwen/Qwen-72B | 1. YES
2. YES | 0.948155 | 0.815232 | 0.772966 | __label__eng_Latn | 0.771547 | 0.634192 |
# Chapter 8 (Pine): Curve Fitting Solutions
${\large\bf 1.}$ We linearize the equation $V(t)=V_0e^{-\Gamma t}$ by taking the logarithm: $\ln V = \ln V_0 - \Gamma t$. Comparing with the equation for a straight line $Y = A + BX$, we see that
$$
\begin{align}
Y &= \ln V \;,& X &= t \\
A &= \ln V_0\;,& B &= -\Gamma
\end{align}
$$
$\bf{(a)}$ & $\bf{(c)}$ There are two parts to this problem: (1) writing the fitting function with $\chi^2$ weighting and (2) transforming the data to linear form so that it can be fit to an exponential.
The first part is done with the function ``LineFitWt(x, y, sig)``. There is also an ancillary function ``redchisq(x, y, dy, slope, yint)`` that calculates the reduced chi-squared $\chi_r^2$ for a particular set of data & fitting parameters.
The second part involves transforming the data and its uncertainties. This is done following the procedure described in *Introduction to Python for Science (by Pine)* in $\S 8.1.1$.
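Concretely, the transformed uncertainties follow from propagating the error through $Y=\ln V$:
$$\delta Y = \left|\frac{\partial Y}{\partial V}\right|\delta V = \frac{\delta V}{V}\;,$$
which is what the line ``dY = dV/V`` in the code implements.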
```
import numpy as np
import matplotlib.pyplot as plt
def LineFitWt(x, y, sig):
"""
Returns slope and y-intercept of weighted linear fit to
(x,y) data set.
Inputs: x and y data array and uncertainty array (unc)
for y data set.
Outputs: slope and y-intercept of best fit to data and
uncertainties of slope & y-intercept.
"""
sig2 = sig**2
norm = (1./sig2).sum()
xhat = (x/sig2).sum() / norm
yhat = (y/sig2).sum() / norm
slope = ((x-xhat)*y/sig2).sum()/((x-xhat)*x/sig2).sum()
yint = yhat - slope*xhat
sig2_slope = 1./((x-xhat)*x/sig2).sum()
sig2_yint = sig2_slope * (x*x/sig2).sum() / norm
return slope, yint, np.sqrt(sig2_slope), np.sqrt(sig2_yint)
def redchisq(x, y, dy, slope, yint):
chisq = (((y-yint-slope*x)/dy)**2).sum()
return chisq/float(x.size-2)
# Read data from data file
t, V, dV = np.loadtxt("RLcircuit.txt", skiprows=2, unpack=True)
########## Code to tranform & fit data starts here ##########
# Transform data and parameters from ln V = ln V0 - Gamma t
# to linear form: Y = A + B*X, where Y = ln V, X = t, dY = dV/V
X = t # transform t data for fitting (not needed as X=t)
Y = np.log(V) # transform V data for fitting
dY = dV/V # transform uncertainties for fitting
# Fit transformed data X, Y, dY to obtain fitting parameters
# B & A. Also returns uncertainties dA & dB in B & A
B, A, dB, dA = LineFitWt(X, Y, dY)
# Return reduced chi-squared
redchisqr = redchisq(X, Y, dY, B, A)
# Determine fitting parameters for original exponential function
# V = V0 exp(-Gamma t) ...
V0 = np.exp(A)
Gamma = -B
# ... and their uncertainties
dV0 = V0 * dA
dGamma = dB
###### Code to plot transformed data and fit starts here ######
# Create line corresponding to fit using fitting parameters
# Only two points are needed to specify a straight line
Xext = 0.05*(X.max()-X.min())
Xfit = np.array([X.min()-Xext, X.max()+Xext]) # smallest & largest X points
Yfit = B*Xfit + A # generates Y from X data &
# fitting function
plt.errorbar(X, Y, dY, fmt="b^")
plt.plot(Xfit, Yfit, "c-", zorder=-1)
plt.title(r"$\mathrm{Fit\ to:}\ \ln V = \ln V_0-\Gamma t$ or $Y = A + BX$")
plt.xlabel('time (ns)')
plt.ylabel('ln voltage (volts)')
plt.xlim(-50, 550)
plt.text(210, 1.5, u"A = ln V0 = {0:0.4f} \xb1 {1:0.4f}".format(A, dA))
plt.text(210, 1.1, u"B = -Gamma = {0:0.4f} \xb1 {1:0.4f} /ns".format(B, dB))
plt.text(210, 0.7, "$\chi_r^2$ = {0:0.3f}".format(redchisqr))
plt.text(210, 0.3, u"V0 = {0:0.2f} \xb1 {1:0.2f} V".format(V0, dV0))
plt.text(210, -0.1,u"Gamma = {0:0.4f} \xb1 {1:0.4f} /ns".format(Gamma, dGamma))
plt.savefig("RLcircuit.pdf")  # save before show() so the figure is not already closed
plt.show()
```
$\bf{(b)}$ The value of $\chi_r^2$ returned by the fitting routine is $0.885$, which is near 1, so it seems that the error bars are about right and an exponential is a good model for the data.
${\bf (d)}$ Starting from $\Gamma = R/L$ and assuming negligible uncertainty in $R$, we have
$$\begin{align}
L &= \frac{R}{\Gamma} = \frac{10^4~\Omega}{(0.0121~\text{ns}^{-1})(10^9~\text{ns/s})} = 8.24 \times 10^{-4}~\text{henry}
= 824~\mu\text{H}\\
\delta L &= \left|\frac{\partial L}{\partial \Gamma}\right|\delta\Gamma = \frac{R}{\Gamma^2}\delta\Gamma
= L \frac{\delta\Gamma}{\Gamma} = 1.1 \times 10^{-5}~\text{henry} = 11~\mu\text{H}
\end{align}$$
Here are the calculations:
```
R = 10.e3
Gamma *= 1.e9 # convert Gamma from 1/ns to 1/s
L = R/Gamma
print("L = {0:0.2e} henry".format(L))
dGamma *= 1.e9 # convert dGamma from 1/ns to 1/s
dL = L*(dGamma/Gamma)
print("dL = {0:0.1e} henry".format(dL))
```
L = 8.24e-04 henry
dL = 1.1e-05 henry
${\large\bf 2.}$ Here we want to use a linear fitting routine ($Y = A + BX$) to fit a power law model
$$m = Kn^p\;,$$
where $K$ and $p$ are fitting parameters. We transform the equation by taking the logarithm of both sides, which gives
$$\ln m = \ln K + p\ln n\;.$$
Thus, identifying the transformed variables as
$$y=\ln m\;,\quad x=\ln n\;,$$
and the $y$-intercept and slope and are given by $A=\ln K$ and $B=p$, respectively.
The uncertainties in $y$ are related to those in $m$ by
$$\delta y = \left| \frac{\partial y}{\partial m} \right|\delta m = \frac{\delta m}{m}$$
The uncertainties in the fitting paramters follow from $K=e^A$ and $p=B$:
$$ \delta K = e^A \delta A\;,\quad \delta p = \delta B\;.$$
These transformations are implemented in the code below. We use the same fitting routine used in Problem 1 above.
```
import numpy as np
import matplotlib.pyplot as plt
def LineFitWt(x, y, sig):
"""
Returns slope and y-intercept of weighted linear fit to
(x,y) data set.
Inputs: x and y data array and uncertainty array (unc)
for y data set.
Outputs: slope and y-intercept of best fit to data and
uncertainties of slope & y-intercept.
"""
sig2 = sig**2
norm = (1./sig2).sum()
xhat = (x/sig2).sum() / norm
yhat = (y/sig2).sum() / norm
slope = ((x-xhat)*y/sig2).sum()/((x-xhat)*x/sig2).sum()
yint = yhat - slope*xhat
sig2_slope = 1./((x-xhat)*x/sig2).sum()
sig2_yint = sig2_slope * (x*x/sig2).sum() / norm
return slope, yint, np.sqrt(sig2_slope), np.sqrt(sig2_yint)
def redchisq(x, y, dy, slope, yint):
chisq = (((y-yint-slope*x)/dy)**2).sum()
return chisq/float(x.size-2)
# Read data from data file
n, m, dm = np.loadtxt("Mass.txt", skiprows=4, unpack=True)
########## Code to tranform & fit data starts here ##########
# Transform data and parameters to linear form: Y = A + B*X
X = np.log(n)   # transform n data for fitting (X = ln n)
Y = np.log(m)   # transform m data for fitting (Y = ln m)
dY = dm/m       # transform uncertainties for fitting (dY = dm/m)
# Fit transformed data X, Y, dY to obtain fitting parameters
# B & A. Also returns uncertainties dA & dB in B & A
B, A, dB, dA = LineFitWt(X, Y, dY)
# Return reduced chi-squared
redchisqr = redchisq(X, Y, dY, B, A)
# Determine fitting parameters for the original power-law function
# m = K n^p ...
p = B
K = np.exp(A)
# ... and their uncertainties
dp = dB
dK = np.exp(A)*dA
###### Code to plot transformed data and fit starts here ######
# Create line corresponding to fit using fitting parameters
# Only two points are needed to specify a straight line
Xext = 0.05*(X.max()-X.min())
Xfit = np.array([X.min()-Xext, X.max()+Xext])
Yfit = B*Xfit + A # generates Y from X data &
# fitting function
plt.errorbar(X, Y, dY, fmt="gs")
plt.plot(Xfit, Yfit, "k-", zorder=-1)
plt.title(r"Fit to $\ln m=\ln K + p\, \ln n$ or $Y=A+BX$")
plt.xlabel(r'$\ln n$', fontsize=16)
plt.ylabel(r'$\ln m$', fontsize=16)
plt.text(10, 7.6, u"A = ln K = {0:0.1f} \xb1 {1:0.1f}".format(A, dA))
plt.text(10, 7.3, u"B = p = {0:0.2f} \xb1 {1:0.2f}".format(B, dB))
plt.text(10, 7.0, u"K = {0:0.1e} \xb1 {1:0.1e}".format(K, dK))
plt.text(10, 6.7, "$\chi_r^2$ = {0:0.3f}".format(redchisqr))
plt.savefig("Mass.pdf")  # save before show() so the figure is not already closed
plt.show()
```
${\large\bf 3.}$ (a)
```
import numpy as np
import matplotlib.pyplot as plt
# define function to calculate reduced chi-squared
def RedChiSqr(func, x, y, dy, params):
resids = y - func(x, *params)
chisq = ((resids/dy)**2).sum()
return chisq/float(x.size-params.size)
# define fitting function
def oscDecay(t, A, B, C, tau, omega):
y = A * (1.0 + B*np.cos(omega*t)) * np.exp(-0.5*t*t/(tau*tau)) + C
return y
# read in spectrum from data file
t, decay, unc = np.loadtxt("oscDecayData.txt", skiprows=4, unpack=True)
# initial values for fitting parameters (guesses)
A0 = 15.0
B0 = 0.6
C0 = 1.2*A0
tau0 = 16.0
omega0 = 2.*np.pi/8. # period of oscillations in data is about 8
# plot data and fit with estimated fitting parameters
tFit = np.linspace(0., 49.5, 250)
plt.plot(tFit, oscDecay(tFit, A0, B0, C0, tau0, omega0), 'b-')
plt.errorbar(t, decay, yerr=unc, fmt='or', ecolor='black', ms=4)
plt.show()
```
(b)
```
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec # for unequal plot boxes
import scipy.optimize
# define function to calculate reduced chi-squared
def RedChiSqr(func, x, y, dy, params):
resids = y - func(x, *params)
chisq = ((resids/dy)**2).sum()
return chisq/float(x.size-params.size)
# define fitting function
def oscDecay(t, A, B, C, tau, omega):
y = A * (1.0 + B*np.cos(omega*t)) * np.exp(-0.5*t*t/(tau*tau)) + C
return y
# read in spectrum from data file
t, decay, unc = np.loadtxt("oscDecayData.txt", skiprows=4, unpack=True)
# initial values for fitting parameters (guesses)
A0 = 15.0
B0 = 0.6
C0 = 1.2*A0
tau0 = 16.0
omega0 = 2.*np.pi/8.
# fit data using SciPy's Levenberg-Marquardt method
nlfit, nlpcov = scipy.optimize.curve_fit(oscDecay,
t, decay, p0=[A0, B0, C0, tau0, omega0], sigma=unc)
# calculate reduced chi-squared
rchi = RedChiSqr(oscDecay, t, decay, unc, nlfit)
# create fitting function from fitted parameters
A, B, C, tau, omega = nlfit
t_fit = np.linspace(0.0, 50., 512)
d_fit = oscDecay(t_fit, A, B, C, tau, omega)
# Create figure window to plot data
fig = plt.figure(1, figsize=(8,8)) # extra length for residuals
gs = gridspec.GridSpec(2, 1, height_ratios=[6, 2])
# Top plot: data and fit
ax1 = fig.add_subplot(gs[0])
ax1.plot(t_fit, d_fit)
ax1.errorbar(t, decay, yerr=unc, fmt='or', ecolor='black', ms=4)
ax1.set_xlabel('time (ms)')
ax1.set_ylabel('decay (arb units)')
ax1.text(0.55, 0.8, 'A = {0:0.1f}\nB = {1:0.3f}\nC = {2:0.1f}'.format(A, B, C),
transform = ax1.transAxes)
ax1.text(0.75, 0.8, '$\\tau$ = {0:0.1f}\n$\omega$ = {1:0.3f}\n$\chi^2$ = {2:0.3f}'
.format(tau, omega, rchi), transform = ax1.transAxes)
ax1.set_title('$d(t) = A (1+B\,\cos\,\omega t) e^{-t^2/2\\tau^2} + C$')
# Bottom plot: residuals
resids = decay - oscDecay(t, A, B, C, tau, omega)
ax2 = fig.add_subplot(gs[1])
ax2.axhline(color="gray")
ax2.errorbar(t, resids, yerr = unc, ecolor="black", fmt="ro", ms=4)
ax2.set_xlabel('time (ms)')
ax2.set_ylabel('residuals')
ax2.set_ylim(-5, 5)
yticks = (-5, 0, 5)
ax2.set_yticks(yticks)
plt.savefig('FitOscDecay.pdf')
plt.show()
```
(c)
```
# initial values for fitting parameters (guesses)
A0 = 15.0
B0 = 0.6
C0 = 1.2*A0
tau0 = 16.0
omega0 = 3.0 * 0.781
# fit data using SciPy's Levenberg-Marquardt method
nlfit, nlpcov = scipy.optimize.curve_fit(oscDecay,
t, decay, p0=[A0, B0, C0, tau0, omega0], sigma=unc)
# calculate reduced chi-squared
rchi = RedChiSqr(oscDecay, t, decay, unc, nlfit)
# create fitting function from fitted parameters
A, B, C, tau, omega = nlfit
t_fit = np.linspace(0.0, 50., 512)
d_fit = oscDecay(t_fit, A, B, C, tau, omega)
# Create figure window to plot data
fig = plt.figure(1, figsize=(8,6))
# Top plot: data and fit
ax1 = fig.add_subplot(111)
ax1.plot(t_fit, d_fit)
ax1.errorbar(t, decay, yerr=unc, fmt='or', ecolor='black', ms=4)
ax1.set_xlabel('time (ms)')
ax1.set_ylabel('decay (arb units)')
ax1.text(0.55, 0.8, 'A = {0:0.1f}\nB = {1:0.3f}\nC = {2:0.1f}'.format(A, B, C),
transform = ax1.transAxes)
ax1.text(0.75, 0.8, '$\\tau$ = {0:0.1f}\n$\omega$ = {1:0.3f}\n$\chi^2$ = {2:0.3f}'
.format(tau, omega, rchi), transform = ax1.transAxes)
ax1.set_title('$d(t) = A (1+B\,\cos\,\omega t) e^{-t^2/2\\tau^2} + C$')
plt.show()
```
(d) The program finds the optimal values for all the fitting parameters again
```
# initial values for fitting parameters (guesses)
A0 = 15.0
B0 = 0.6
C0 = 1.2*A0
tau0 = 16.0
omega0 = 2.*np.pi/8.
# fit data using SciPy's Levenberg-Marquardt method
nlfit, nlpcov = scipy.optimize.curve_fit(oscDecay,
t, decay, p0=[A0, B0, C0, tau0, omega0], sigma=unc)
# unpack optimal values of fitting parameters from nlfit
A, B, C, tau, omega = nlfit
# calculate reduced chi square for different values around the optimal
omegaArray = np.linspace(0.05, 2.95, 256)
redchiArray = np.zeros(omegaArray.size)
for i in range(omegaArray.size):
nlfit = np.array([A, B, C, tau, omegaArray[i]])
redchiArray[i] = RedChiSqr(oscDecay, t, decay, unc, nlfit)
plt.figure(figsize=(8,4))
plt.plot(omegaArray, redchiArray)
plt.xlabel('$\omega$')
plt.ylabel('$\chi_r^2$')
plt.savefig('VaryChiSq.pdf')
plt.show()
```
```
```
| e863ad829fbcca8c2b0ac36784d1bbd05878b780 | 180,417 | ipynb | Jupyter Notebook | Book/chap8/Problems/.ipynb_checkpoints/Chap08PineSolns-checkpoint.ipynb | lorenghoh/pyman | 9b4ddd52c5577fc85e2601ae3128f398f0eb673c | [
"CC0-1.0"
]
| 1 | 2020-02-16T16:15:04.000Z | 2020-02-16T16:15:04.000Z | Book/chap8/Problems/.ipynb_checkpoints/Chap08PineSolns-checkpoint.ipynb | lorenghoh/pyman | 9b4ddd52c5577fc85e2601ae3128f398f0eb673c | [
"CC0-1.0"
]
| null | null | null | Book/chap8/Problems/.ipynb_checkpoints/Chap08PineSolns-checkpoint.ipynb | lorenghoh/pyman | 9b4ddd52c5577fc85e2601ae3128f398f0eb673c | [
"CC0-1.0"
]
| 1 | 2020-01-08T23:35:54.000Z | 2020-01-08T23:35:54.000Z | 307.87884 | 43,205 | 0.898169 | true | 4,660 | Qwen/Qwen-72B | 1. YES
2. YES | 0.903294 | 0.839734 | 0.758527 | __label__eng_Latn | 0.732029 | 0.600644 |
This notebook will replicate H2 (minimal basis, B-K) energy evaluation using pytket, pytket_qiskit and pytket_honeywell
python~=3.7
requirements:
pytket==0.4.3
pytket_qiskit>0.3.4
pytket_honeywell==0.0.1
sympy
numpy
```python
import os
from functools import reduce
import operator
import copy
from collections import Counter
import sympy
import numpy as np
from pytket.circuit import Circuit
from pytket.extensions.qiskit import AerBackend, IBMQBackend
from pytket.extensions.honeywell import HoneywellBackend
from pytket.circuit import PauliExpBox, fresh_symbol
from pytket.pauli import Pauli
from pytket.utils.measurements import append_pauli_measurement
from pytket.utils import expectation_from_counts
from pytket.passes import DecomposeBoxes
from h2_hamiltonians import bond_length_hams
```
Generate the ansatz circuit with pytket, using known operators
```python
ansatz = Circuit(4,4)
param = fresh_symbol("t")
coeff = -2*param/sympy.pi
box = PauliExpBox((Pauli.Y, Pauli.Z, Pauli.X, Pauli.Z), coeff)
ansatz.X(0)
ansatz.add_pauliexpbox(box, ansatz.qubits)
DecomposeBoxes().apply(ansatz)
```
True
Set up hamiltonian processing code
```python
contracted_measurement_bases = ('Z', 'X', 'Y')
def get_energy_from_counts(coeff_shots):
return sum(coeff * expectation_from_counts(shots) for coeff, shots in coeff_shots)
def operator_only_counts(counts, operator_qubs):
filterstate = lambda s: tuple(np.array(s)[list(operator_qubs)])
filtereredcounters = (Counter({filterstate(state):count}) for state, count in counts.items())
return reduce(operator.add, filtereredcounters)
def submit_hamiltonian(state_circ, measurementops, n_shots, backend, base_name='ansatz'):
circuits = []
circuit_names = []
for entry, basis in zip(measurementops,contracted_measurement_bases):
meas_circ = state_circ.copy()
append_pauli_measurement(entry, meas_circ)
meas_circ = backend.get_compiled_circuit(meas_circ)
circuit_names += [f'{base_name}_{basis}']
circuits.append(meas_circ)
print(circuit_names)
return backend.process_circuits(circuits, n_shots=n_shots)
def calculate_hamiltonian(measured_counts, hamiltonian):
hamcopy = copy.copy(hamiltonian)
constant_coeff = hamcopy.pop(tuple())
hamoptups, coeff_list = zip(*hamcopy.items())
converted_shot_list = []
measurement_mapper = lambda opdict: opdict[0] if 0 in opdict else 'Z'
hamop2measop = {tp: measurement_mapper(dict(tp)) for tp in hamoptups}
meascounts_map = dict(zip(contracted_measurement_bases, measured_counts))
for (optup, coeff) in zip(hamoptups, coeff_list):
op_qubs, _ = zip(*optup)
selected_meas_counts = meascounts_map[hamop2measop[optup]]
converted_shot_list.append(operator_only_counts(selected_meas_counts, op_qubs))
return get_energy_from_counts (zip(coeff_list, converted_shot_list)) + constant_coeff
```
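For reference, `calculate_hamiltonian` expects the hamiltonian as a dict mapping tuples of `(qubit, pauli-letter)` pairs to real coefficients, with the empty tuple holding the constant (identity) term. A toy example of the expected shape (illustrative values only, not the real H2 coefficients loaded from `h2_hamiltonians`):
```python
toy_hamiltonian = {
    (): -0.3,                                    # identity / constant term
    ((0, 'Z'),): 0.17,                           # single Z on qubit 0
    ((0, 'Z'), (1, 'Z')): 0.12,                  # ZZ term on qubits 0 and 1
    ((0, 'X'), (1, 'Z'), (2, 'X'), (3, 'Z')): 0.04,
}
```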
Set up minimal measurement bases required to measure whole hamiltonian
```python
measurement_ops = [((0, 'Z'), (1, 'Z'), (2, 'Z'), (3, 'Z')),
((0, 'X'), (1, 'Z'), (2, 'X'), (3, 'Z')), ((0, 'Y'), (1, 'Z'), (2, 'Y'), (3, 'Z'))]
```
Set up honeywell API key and machine name
```python
hwell_apikey = '<apikey>'
hwell_machine = "HQS-LT-1.0-APIVAL"
```
```python
bond_length = 0.735
popt = bond_length_hams[bond_length]["optimal_parameter"]
hamopt = bond_length_hams[bond_length]["hamiltonian"]
aerbackend = AerBackend()
backend = HoneywellBackend(hwell_apikey, device_name=hwell_machine, label='h2_exp')
# backend = aerbackend
backend = IBMQBackend('ibmq_burlington', hub='ibmq')
state_circuit = ansatz.copy()
state_circuit.symbol_substitution({param:popt})
sub_jobs = submit_hamiltonian(state_circuit, measurement_ops, 100, backend)
print(sub_jobs)
results = [backend.run_circuit(job).get_counts() for job in sub_jobs]
calculate_hamiltonian(results, hamopt)
```
['ansatz_Z', 'ansatz_X', 'ansatz_Y']
[('5e554bacd8204b0018fd539f', 0), ('5e554bacd8204b0018fd539f', 1), ('5e554bacd8204b0018fd539f', 2)]
Job Status: job has successfully run
Job Status: job has successfully run
Job Status: job has successfully run
-0.6357045412185943
```python
retrieve_ids = ['', '', '']
retrieved_results = [backend.run_circuit(JobHandle(id)).get_counts() for id in retrieve_ids]
calculate_hamiltonian(retrieved_results, hamopt)
```
| 11d57120221b79e8ef48bb659e1a13f34fbe0b63 | 7,373 | ipynb | Jupyter Notebook | modules/pytket-honeywell/examples/h2_energy_evaluation.ipynb | vanyae-cqc/pytket-extensions | 67c630b77926fc9832ee7dfa3261b792cde31ed3 | [
"Apache-2.0"
]
| 22 | 2021-03-02T15:17:22.000Z | 2022-03-20T21:17:10.000Z | modules/pytket-honeywell/examples/h2_energy_evaluation.ipynb | vanyae-cqc/pytket-extensions | 67c630b77926fc9832ee7dfa3261b792cde31ed3 | [
"Apache-2.0"
]
| 132 | 2021-03-03T08:30:05.000Z | 2022-03-31T21:11:58.000Z | modules/pytket-honeywell/examples/h2_energy_evaluation.ipynb | vanyae-cqc/pytket-extensions | 67c630b77926fc9832ee7dfa3261b792cde31ed3 | [
"Apache-2.0"
]
| 13 | 2021-03-05T17:01:19.000Z | 2022-03-17T15:38:02.000Z | 28.800781 | 128 | 0.581717 | true | 1,243 | Qwen/Qwen-72B | 1. YES
2. YES | 0.853913 | 0.718594 | 0.613617 | __label__eng_Latn | 0.586466 | 0.263968 |
# Introduction
Understanding the behavior of neural networks and why they generalize has been a central pursuit of the theoretical deep learning community.
Our paper [*A Fine-Grained Spectral Perspective on Neural Networks*](https://arxiv.org/abs/1907.10599) attacks this problem by looking at eigenvalues and eigenfunctions, as the name suggests.
We will study the spectra of the *Conjugate Kernel* [[Daniely et al. 2017](http://papers.nips.cc/paper/6427-toward-deeper-understanding-of-neural-networks-the-power-of-initialization-and-a-dual-view-on-expressivity.pdf)], or CK (also called the *Neural Network-Gaussian Process Kernel* [[Lee et al. 2018](http://arxiv.org/abs/1711.00165)]), and the *Neural Tangent Kernel*, or NTK [[Jacot et al. 2018](http://arxiv.org/abs/1806.07572)].
Roughly, the CK and the NTK tell us respectively "what a network looks like at initialization" and "what a network looks like during and after training."
Their spectra then encode valuable information about the initial distribution and the training and generalization properties of neural networks.
## Intuition for the utility of the spectral perspective
Let's take the example of the CK.
We know from [Lee et al. (2018)](http://arxiv.org/abs/1711.00165) that a randomly initialized network is distributed as a Gaussian process $\mathcal N(0, K)$, where $K$ is the corresponding CK, in the infinite-width limit.
If we have the eigendecomposition
\begin{equation}
K = \sum_{i \ge 1} \lambda_i u_i\otimes u_i
\label{eqn:eigendecomposition}
\end{equation}
with eigenvalues $\lambda_i$ in decreasing order and corresponding eigenfunctions $u_i$, then each sample from this GP can be obtained as
$$
\sum_{i \ge 1} \sqrt{\lambda_i} \omega_i u_i,\quad
\omega_i \sim \mathcal N(0, 1).
$$
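Over any finite set of inputs this is just the eigendecomposition route to sampling a multivariate Gaussian. A minimal numpy sketch, with a made-up positive semi-definite matrix standing in for the CK Gram matrix:
```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
K = A @ A.T                               # toy PSD stand-in for the CK Gram matrix
lam, U = np.linalg.eigh(K)                # eigenvalues lam[i], eigenvectors U[:, i]
omega = rng.standard_normal(5)            # omega_i ~ N(0, 1)
f = U @ (np.sqrt(np.clip(lam, 0, None)) * omega)   # f = sum_i sqrt(lam_i) * omega_i * u_i
# over many draws of omega, the empirical covariance of f approaches K
```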
Training the last layer of a randomly initialized network via full batch gradient descent for an infinite amount of time corresponds to Gaussian process inference with kernel $K$ [[Lee et al. 2018](http://arxiv.org/abs/1711.00165), [2019](http://arxiv.org/abs/1902.06720)].
Thus, the more the GP prior (governed by the CK) is consistent with the ground truth function $f^*$, the more we expect the Gaussian process inference and GD training to generalize well.
We can measure this consistency in the "alignment" between the eigenvalues $\lambda_i$ and the squared coefficients $a_i^2$ of $f^*$'s expansion in the $\{u_i\}_i$ basis.
The former can be interpreted as the expected magnitude (squared) of the $u_i$-component of a sample $f \sim \mathcal N(0, K)$, and the latter can be interpreted as the actual magnitude squared of such component of $f^*$.
Here and in this paper, we will investigate an even cleaner setting where $f^* = u_i$ is an eigenfunction.
Thus we would hope to use a kernel whose $i$th eigenvalue $\lambda_i$ is as large as possible.
A similar intuition holds for NTK, because training all parameters of the network for an infinite amount of time yields the mean prediction of the GP $\mathcal N(0, \text{NTK})$ in expectation [[Lee et al. 2019](http://arxiv.org/abs/1902.06720)].
## A brief summary of the spectral theory of CK and NTK
Now, if the CK and the NTK have spectra difficult to compute, then this perspective is not so useful.
But in idealized settings, where the data distribution is uniform over the boolean cube, the sphere, or from the standard Gaussian, a complete (or almost complete in the Gaussian case) eigendecomposition of the kernel can be obtained, thanks to the symmetry of the domain.
Here and in the paper, we focus on the boolean cube, since in high dimensions, all three distributions are very similar, and the boolean cube eigenvalues are much easier to compute (see paper for more details).
We briefly summarize the spectral theory of CK and NTK (of multilayer perceptrons, or MLPs) on the boolean cube.
First, these kernels are always diagonalized by the *boolean Fourier basis*, which are just monomial functions like $x_1 x_3 x_{10}$.
These Fourier basis functions are naturally graded by their *degree*, ranging from 0 to the dimension $d$ of the cube.
Then the kernel has $d+1$ unique eigenvalues,
$$\mu_0, \ldots, \mu_d$$
corresponding to each of the degrees, so that the eigenspace associated to $\mu_k$ is a $\binom d k$ dimensional space of monomials with degree $k$.
These eigenvalues are simple linear functions of a small number of the kernel values, and can be easily computed.
Let's try computing them ourselves!
# Computing Eigenvalues over a Grid of Hyperparameters
```python
import numpy as np
import scipy as sp
from scipy.special import erf as erf
import matplotlib.pyplot as plt
from itertools import product
import seaborn as sns
sns.set()
from mpl_toolkits.axes_grid1 import ImageGrid
def tight_layout(plt):
plt.tight_layout(rect=[0, 0.03, 1, 0.95])
```
Our methods for doing the theoretical computations lie in the `theory` module.
```python
from theory import *
```
First, let's compute the eigenvalues of erf CK and NTK over a large range of hyperparameters:
- $\sigma_w^2 \in [1, 5]$
- $\sigma_b^2 \in [0, 4]$
- dimension 128 boolean cube
- depth up to 128
- degree $k \le 8$.
Unless stated otherwise, all plots below use these hyperparameters.
We will do the same for relu kernels later.
```python
# range of $\sigma_b^2$
erfvbrange = np.linspace(0, 4, num=41)
# range of $\sigma_w^2$
erfvwrange = np.linspace(1, 5, num=41)
erfvws, erfvbs = np.meshgrid(erfvwrange, erfvbrange, indexing='ij')
# `dim` = $d$
dim = 128
depth = 128
# we will compute the eigenvalues $\mu_k$ for $k = 0, 1, ..., maxdeg$.
maxdeg = 8
```
As mentioned in the paper, any CK or NTK $K$ of multilayer perceptrons (MLPs) takes the form
$$K(x, y) = \Phi\left(\frac{\langle x, y \rangle}{\|x\|\|y\|}, \frac{\|x\|^2}d, \frac{\|y\|^2}d\right)$$
for some function $\Phi: \mathbb R^3 \to \mathbb R$.
On the boolean cube $\{1, -1\}^d$, $\|x\|^2 = d$ for all $x$, and $\langle x, y \rangle / d$ takes value in a discrete set $\{-1, -1+2/d, \ldots, 1-2/d, 1\}$.
Thus $K(x, y)$ only takes a finite number of different values as well.
We first compute these values (see paper for the precise formulas).
```python
# `erfkervals` has two entries, with keys `cks` and `ntks`.
# Each entry is an array with shape (`depth`, len(erfvwrange), len(erfvbrange), `dim`+1)
# The last dimension carries the entries $\Phi(-1), \Phi(-1 + 2/d), ..., \Phi(1)$
erfkervals = boolcubeFgrid(dim, depth, erfvws, erfvbs, VErf, VDerErf)
```
```python
erfkervals['cks'].shape
```
(129, 41, 41, 129)
The eigenvalues $\mu_k, k = 0, 1, \ldots, d$, can be expressed as a simple linear function of $\Phi$'s values, as hinted before.
However, a naive evaluation would lose too much numerical precision because of the number of alternating terms.
Instead, we do something more clever, resulting in the following algorithm:
- For $\Delta = 2/d$, we first evaluate $\Phi^{(a)}(x) = \frac 1 2 \left(\Phi^{(a-1)}(x) - \Phi^{(a-1)}(x - \Delta)\right)$ with base case $\Phi^{(0)} = \Phi$, for $a = 0, 1, \ldots$, and for various values of $x$.
- Then we just sum a bunch of nonnegative terms to get the eigenvalue $\mu_k$ associated to degree $k$ monomials
$$\mu_k = \frac 1{2^{d-k}} \sum_{r=0}^{d-k}\binom{d-k}r \Phi^{(k)}(1 - r \Delta).$$
We will actually use an even more clever algorithm here, but with the same line of reasoning; see the paper and the `twostep` option in the source code for details.
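For concreteness, here is a direct (unoptimized) numpy sketch of the recursion just described, taking as input the kernel values $\Phi(-1), \Phi(-1+\Delta), \dots, \Phi(1)$. It is only illustrative; the computation below uses the more careful implementation from the `theory` module:
```python
import numpy as np
from scipy.special import binom

def boolcube_eigenvalues_naive(phi_vals, d, maxdeg):
    """phi_vals[j] = Phi(-1 + j*Delta) for j = 0, ..., d, with Delta = 2/d."""
    mu = []
    fd = np.asarray(phi_vals, dtype=float)      # Phi^{(0)} on the grid
    for k in range(maxdeg + 1):
        # mu_k = 2^{-(d-k)} * sum_r binom(d-k, r) * Phi^{(k)}(1 - r*Delta)
        r = np.arange(d - k + 1)
        mu.append((binom(d - k, r) * fd[::-1]).sum() / 2.0**(d - k))
        # Phi^{(k+1)}(x) = (Phi^{(k)}(x) - Phi^{(k)}(x - Delta)) / 2
        fd = (fd[1:] - fd[:-1]) / 2.0
    return mu
```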
Note that, here we will compute *normalized eigenvalues*, normalized by their trace.
So these normalized eigenvalues, with multiplicity, should sum up to 1.
```python
erfeigs = {}
# `erfeigs['ck']` is an array with shape (`maxdeg`, `depth`+1, len(erfvwrange), len(erfvbrange))
# `erfeigs['ck'][k, L] is the matrix of eigenvalue $\mu_k$ for a depth $L$ erf network,
# as a function of the values of $\sigma_w^2, \sigma_b^2$ in `erfvwrange` and `erfvbrange`
# Note that these eigenvalues are normalized by the trace
# (so that all normalized eigenvalues sum up to 1)
erfeigs['ck'] = relu(boolCubeMuAll(dim, maxdeg, erfkervals['cks']))
# similarly for `erfeigs['ntk']`
erfeigs['ntk'] = relu(boolCubeMuAll(dim, maxdeg, erfkervals['ntks']))
```
To perform fine-grained analysis of how hyperparameters affects the performance of the kernel and thus the network itself, we use a heuristic, the *fractional variance*, defined as
$$
\text{degree $k$ fractional variance} = \frac{\binom d k \mu_k}{\sum_{i=0}^d \binom d i \mu_i}.
$$
This terminology comes from the fact that, if we were to sample a function $f$ from a Gaussian process with kernel $K$, then we expect that $r\%$ of the total variance of $f$ comes from degree $k$ components of $f$, where $r\%$ is the degree $k$ fractional variance.
If we were to try to learn a homogeneous degree-$k$ polynomial using a kernel $K$, intuitively we should try to choose $K$ such that its $\mu_k$ is maximized.
In the paper, we present empirical evidence that fractional variance is indeed inversely correlated with test loss.
So let's compute them.
```python
# `erfeigs['ckfracvar']` is an array with shape (`maxdeg`, `depth`+1, len(erfvwrange), len(erfvbrange))
# just like `erfeigs['ck']`
erfeigs['ckfracvar'] = (
sp.special.binom(dim, np.arange(0, maxdeg+1))[:, None, None, None]
* erfeigs['ck']
)
# Same thing here
erfeigs['ntkfracvar'] = (
sp.special.binom(dim, np.arange(0, maxdeg+1))[:, None, None, None]
* erfeigs['ntk']
)
```
```python
erfeigs['ckfracvar'].shape
```
(9, 129, 41, 41)
Similarly, let's compute the eigenvalues of ReLU CK and NTK over a large range of hyperparameters:
- $\sigma_w^2 = 2$
- $\sigma_b^2 \in [0, 4]$
- dimension 128 boolean cube
- depth up to 128
- degree $k \le 8$.
Unless stated otherwise, all plots below use these hyperparameters.
```python
reluvws, reluvbs = np.meshgrid([2], np.linspace(0, 4, num=401), indexing='ij')
dim = 128
depth = 128
maxdeg = 8
relukervals = boolcubeFgrid(dim, depth, reluvws, reluvbs, VReLU, VStep)
relueigs = {}
relueigs['ck'] = relu(boolCubeMuAll(dim, maxdeg, relukervals['cks']))
relueigs['ntk'] = relu(boolCubeMuAll(dim, maxdeg, relukervals['ntks']))
relueigs['ckfracvar'] = (
sp.special.binom(dim, np.arange(0, maxdeg+1))[:, None, None, None]
* relueigs['ck']
)
relueigs['ntkfracvar'] = (
sp.special.binom(dim, np.arange(0, maxdeg+1))[:, None, None, None]
* relueigs['ntk']
)
```
Now we have computed all the eigenvalues, let's take a look at them!
# Deeper Networks Learn More Complex Features --- But Not Too Deep
If $K$ were to be the CK or NTK of a relu or erf MLP, then we find that for higher $k$, depth of the network helps increase $\mu_k$.
```python
maxdeg = 8
plt.figure(figsize=(12, 4))
relueigs['ntkbestdepth'] = np.argmax(np.max(relueigs['ntk'][1:, :, ...], axis=(2, 3)), axis=1).squeeze()
relueigs['ckbestdepth'] = np.argmax(np.max(relueigs['ck'][1:, :, ...], axis=(2, 3)), axis=1).squeeze()
fig = plt.subplot(141)
plt.text(-.2, -.15, '(a)', fontsize=24, transform=fig.axes.transAxes)
plt.plot(np.arange(1, maxdeg+1), relueigs['ntkbestdepth'], label='ntk', markersize=4, marker='o')
plt.plot(np.arange(1, maxdeg+1), relueigs['ckbestdepth'], label='ck', markersize=4, marker='o')
plt.legend()
plt.xlabel('degree')
plt.ylabel('optimal depth')
plt.title('relu kernel optimal depths')
erfeigs['ntkbestdepth'] = np.argmax(np.max(erfeigs['ntk'][1:, :, ...], axis=(2, 3)), axis=1).squeeze()
erfeigs['ckbestdepth'] = np.argmax(np.max(erfeigs['ck'][1:, :, ...], axis=(2, 3)), axis=1).squeeze()
fig = plt.subplot(142)
plt.text(-.2, -.15, '(b)', fontsize=24, transform=fig.axes.transAxes)
plt.plot(np.arange(1, maxdeg+1), erfeigs['ntkbestdepth'], label='ntk', markersize=4, marker='o')
plt.plot(np.arange(1, maxdeg+1), erfeigs['ckbestdepth'], label='ck', markersize=4, marker='o')
plt.legend()
plt.xlabel('degree')
plt.title('erf kernel optimal depths')
fig = plt.subplot(143)
plt.text(-.5, -.15, '(c)', fontsize=24, transform=fig.axes.transAxes)
plt.imshow(relueigs['ntkfracvar'][3:, :20, 0, 0].T, aspect=12/20, origin='lower', extent=[2.5, 8.5, -.5, 20.5])
cb = plt.colorbar()
plt.xticks(range(3, 9, 2))
plt.xlabel('degree')
plt.ylabel('depth')
plt.grid()
plt.title(u'relu ntk, $\sigma_b^2=0$')
fig = plt.subplot(144)
plt.text(-.5, -.15, '(d)', fontsize=24, transform=fig.axes.transAxes)
plt.imshow(erfeigs['ntkfracvar'][3:, :, 0, 0].T, aspect=12/129, origin='lower', extent=[2.5, 8.5, -.5, 128.5], vmin=0, vmax=0.21)
cb = plt.colorbar()
cb.set_label('fractional variance')
plt.xticks(range(3, 9, 2))
plt.xlabel('degree')
plt.grid()
plt.title(u'erf ntk, $\sigma_w^2=1$, $\sigma_b^2=0$')
tight_layout(plt)
```
In **(a)** and **(b)** above, we plot, for each degree $k$, the depth that (with some combination of other hyperparameters like $\sigma_b^2$) maximizes degree $k$ fractional variance, for respectively relu and erf kernels.
Clearly, the maximizing depths are increasing with $k$ for relu, and also for erf when considering either odd $k$ or even $k$ only.
The slightly differing behavior between even and odd $k$ is expected, as seen in the form of Theorem 4.1 in the paper.
Note the different scales of y-axes for relu and erf --- the depth effect is much stronger for erf than relu.
For relu NTK and CK, $\sigma_b^2=0$ maximizes fractional variance in general, and the same holds for erf NTK and CK in the odd degrees (see our other notebook, [TheCompleteHyperparameterPicture]()).
In **(c)** and **(d)**, we give a more fine-grained look at the $\sigma_b^2=0$ slice, via heatmaps of fractional variance against degree and depth.
Brighter color indicates higher variance, and we see the optimal depth for each degree $k$ clearly increases with $k$ for relu NTK, and likewise for odd degrees of erf NTK.
However, note that as $k$ increases, the difference between the maximal fractional variance and those slightly suboptimal becomes smaller and smaller, reflected by suppressed range of color moving to the right.
The heatmaps for relu and erf CKs look similar (compute them yourself, as an exercise!).
In the paper, this trend of increasing optimal depth is backed up empirical data from training neural networks to learn polynomials of various degrees.
Note that implicit in our results here is a highly nontrivial observation:
Past some point (the *optimal depth*), high depth can be detrimental to the performance of the network, beyond just the difficulty to train, and this detriment can already be seen in the corresponding NTK or CK.
In particular, it's *not* true that the optimal depth is infinite.
This adds significant nuance to the folk wisdom that "depth increases expressivity and allows neural networks to learn more complex features."
# NTK Favors More Complex Features Than CK
We generally find the degree $k$ fractional variance of NTK to be higher than that of CK when $k$ is large, and vice versa when $k$ is small.
```python
plt.figure(figsize=(14, 4))
def convert2vb(i):
return i/100
fig = plt.subplot(131)
plt.text(-.15, -.15, '(a)', fontsize=24, transform=fig.axes.transAxes)
cpal = sns.color_palette()
for i, (depth, vbid) in enumerate([(1, 10), (128, 300), (3, 0)]):
color = cpal[i]
plt.plot(relueigs['ntkfracvar'][:, depth, 0, vbid], c=color, label='{} | {}'.format(depth, convert2vb(vbid)), marker='o', markersize=4)
plt.plot(relueigs['ckfracvar'][:, depth, 0, vbid], '--', c=color, marker='o', markersize=4)
plt.plot([], c='black', label='ntk')
plt.plot([], '--', c='black', label='ck')
plt.legend(title=u'depth | $\sigma_b^2$')
plt.xlabel('degree')
plt.ylabel('fractional variance')
plt.title('relu examples')
plt.semilogy()
def convert2vb(i):
return i/10
def convert2vw(i):
return i/10 + 1
cpal = sns.color_palette()
fig = plt.subplot(132)
plt.text(-.15, -.15, '(b)', fontsize=24, transform=fig.axes.transAxes)
for i, (depth, vwid, vbid) in enumerate([(1, 10, 1),(24, 0, 1), (1, 40, 40)]):
color = cpal[i]
plt.plot(erfeigs['ntkfracvar'][:, depth, vwid, vbid], c=color,
label='{} | {} | {}'.format(depth, int(convert2vw(vwid)), convert2vb(vbid)), marker='o', markersize=4)
plt.plot(erfeigs['ckfracvar'][:, depth, vwid, vbid], '--', c=color, marker='o', markersize=4)
plt.plot([], c='black', label='ntk')
plt.plot([], '--', c='black', label='ck')
plt.legend(title=u'depth | $\sigma_w^2$ | $\sigma_b^2$')
plt.xlabel('degree')
plt.title('erf examples')
plt.semilogy()
fig = plt.subplot(133)
plt.text(-.15, -.15, '(c)', fontsize=24, transform=fig.axes.transAxes)
# relu
balance = np.mean((relueigs['ntk'] > relueigs['ck']) & (relueigs['ntk'] > 1e-15), axis=(1, 2, 3))
# needed to filter out all situations where eigenval is so small that it's likely just 0
balance /= np.mean((relueigs['ntk'] > 1e-15), axis=(1, 2, 3))
plt.plot(np.arange(0, maxdeg+1), balance, marker='o', label='relu')
# erf
balance = np.mean((erfeigs['ntk'] > erfeigs['ck']) & (erfeigs['ntk'] > 1e-15), axis=(1, 2, 3))
# needed to filter out all situations where eigenval is so small that it's likely just 0
balance /= np.mean((erfeigs['ntk'] > 1e-15), axis=(1, 2, 3))
plt.plot(np.arange(0, maxdeg+1), balance, marker='o', label='erf')
plt.xlabel('degree')
plt.ylabel('fraction')
plt.legend()
plt.title('fraction of hyperparams where ntk > ck')
plt.suptitle('ntk favors higher degrees compared to ck')
tight_layout(plt)
```
In **(a)**, we give several examples of the fractional variance curves for relu CK and NTK across several representative hyperparameters.
In **(b)**, we do the same for erf CK and NTK.
In both cases, we clearly see that, while for degree 0 or 1, the fractional variance is typically higher for CK, the reverse is true for larger degrees.
In **(c)**, for each degree $k$, we plot the *fraction of hyperparameters* where the degree $k$ fractional variance of NTK is greater than that of CK.
Consistent with previous observations, this fraction increases with the degree.
This means that, if we train only the last layer of a neural network (i.e. CK dynamics), we intuitively should expect to learn *simpler* features faster and generalize better, while, if we train all parameters of the network (i.e. NTK dynamics), we should expect to learn more *complex* features faster and generalize better.
Similarly, if we were to sample a function from a Gaussian process with the CK as kernel (recall this is just the distribution of randomly initialized infinite width MLPs), this function is more likely to be accurately approximated by low degree polynomials than the same with the NTK.
Again, in the paper, we present empirical evidence that, training the last layer is better than training all layers only for degree 0 polynomials, i.e. constant functions, but is worse for all higher degree (homogenous) polynomials.
This corroborates the observations made from the spectra here.
# Conclusion
We have replicated the spectral plots of section 5 and 6 in the paper, concerning the generalization properties of neural networks.
As one can see, the spectral perspective is quite useful for understanding the effects of hyperparameters.
A complete picture of how the combination of $\sigma_w^2, \sigma_b^2$, depth, degree, and nonlinearity affects fractional variance is presented in the notebook *[The Complete Hyperparameter Picture](TheCompleteHyperparameterPicture.ipynb)*.
The fractional variance is, however, by no means a perfect indicator of generalization, and there's plenty of room for improvement, as mentioned in the main text.
We hope a better predictor of test loss can be obtained in future works.
Another interesting topic that spectral analysis can shed light on is the so-called "simplicity bias" of neural networks.
We discuss this in the notebook *[Clarifying Simplicity Bias](ClarifyingSimplicityBias.ipynb)*.
| f913a5c9f3fcd161834d5718635b1602e7ceab0f | 179,267 | ipynb | Jupyter Notebook | NeuralNetworkGeneralization.ipynb | thegregyang/NNspectra | 8c71181e93a46cdcabfafdf71ae5a58830cbb27d | [
"MIT"
]
| 46 | 2019-07-25T01:23:26.000Z | 2022-03-25T13:49:08.000Z | NeuralNetworkGeneralization.ipynb | saarthaks/NNspectra | 8c71181e93a46cdcabfafdf71ae5a58830cbb27d | [
"MIT"
]
| null | null | null | NeuralNetworkGeneralization.ipynb | saarthaks/NNspectra | 8c71181e93a46cdcabfafdf71ae5a58830cbb27d | [
"MIT"
]
| 9 | 2019-07-26T00:06:33.000Z | 2021-07-16T15:49:20.000Z | 253.919263 | 92,036 | 0.90894 | true | 5,646 | Qwen/Qwen-72B | 1. YES
2. YES | 0.92079 | 0.83762 | 0.771272 | __label__eng_Latn | 0.985844 | 0.630255 |
# Model project
```python
import numpy as np
import scipy as sp
from scipy import linalg
from scipy import optimize
from scipy import interpolate
import sympy as sm
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D
```
In this project we analyze a consumer problem with a CES utility function and two goods, x1 and x2.
In the utility function, alpha is a preference parameter and rho is the elasticity parameter.
The income is 30 and the prices of the goods are: \\[ p_1=2, p_2=4\\].
First we investigate the jacobian and hessian matrix and how to use gradient based optimizers. In this section we use the *minimize_gradient_descent()* method.
In the next section the budget constraint is introduced and we try to solve the consumer problem using a *Multi-dimensional constrained solver*.
After this the consumer problem is solved by using the optimizer **SLSQP**.
In the last section we introduce an extension to the problem where we impose a tax on good x2, which increases the price p2 by 0.5 for each unit of x2 consumed.
Consider the **CES utility function**:
\\[ f(\boldsymbol{x}) = u(x_1,x_2) =(\alpha x_{1}^{-\rho}+(1-\alpha) x_{2}^{-\rho})^{-1/\rho} \\]
For this illustration we choose the parameters:
\\[ \alpha=0.2 \\]
\\[ \rho=-2 \\]
so that the exponents in the symbolic expression below are $-\rho=2$ and $-1/\rho=1/2$ (the consumer problem later in the notebook uses $\rho=-0.2$).
```python
sm.init_printing(use_unicode=True)
x1 = sm.symbols('x_1')
x2 = sm.symbols('x_2')
f = (0.2*x1**(2) + (1-0.2)*x2**(2))**(1/2)
```
Find the Jacobian matrix:
```python
Df = sm.Matrix([sm.diff(f,i) for i in [x1,x2]])
Df
```
Find the Hessian matrix:
```python
Hf = sm.Matrix([[sm.diff(f,i,j) for j in [x1,x2]] for i in [x1,x2]])
Hf
```
We now define the function and the matrices as Python functions:
```python
def _ces(x1,x2):
return (0.2*x1**(2) + (1-0.2)*x2**(2))**(1/2)
def ces(x):
return _ces(x[0],x[1])
def ces_jac(x):
return np.array([(0.2*x[0])/(0.2*x[0]**2+0.8*x[1]**2)**0.50,(0.8*x[1])/((0.2*x[0]**2+0.8*x[1]**2)**0.5)])
def ces_hess(x):
return np.array([0.2/(0.2*x[0]**2+0.8*x[1]**2)**0.5-(0.04*x[0]**2)/(0.2*x[0]**2+0.8*x[1]**2)**1.5,-(0.16*x[0]*x[1])/(0.2*x[0]**2+0.8*x[1]**2)**1.5,-(0.16*x[0]*x[1])/(0.2*x[0]**2+0.8*x[1]**2)**1.5,0.8/(0.2*x[0]**2+0.8*x[1]**2)**0.5-(0.64*x[1]**2)/(0.2*x[0]**2+0.8*x[1]**2)**1.5])
```
I can now plot the function in 3D by using the definitions:
```python
# Grids:
x1_vec = np.linspace(-6,6,500)
x2_vec = np.linspace(-4,4,500)
x1_grid,x2_grid = np.meshgrid(x1_vec,x2_vec,indexing='ij')
ces_grid = _ces(x1_grid,x2_grid)
# Figure:
fig = plt.figure()
ax = fig.add_subplot(1,1,1,projection='3d')
cs = ax.plot_surface(x1_grid,x2_grid,ces_grid,cmap=cm.jet)
# Labels:
ax.set_xlabel('$x_1$')
ax.set_ylabel('$x_2$')
ax.set_zlabel('$u$')
ax.invert_xaxis()
ax.xaxis.pane.fill = False
ax.yaxis.pane.fill = False
ax.zaxis.pane.fill = False
#Color:
fig.colorbar(cs);
```
This figure shows the CES utility function. Utility is smallest when x1 and x2 are equal to zero and increases as x1 and x2 increase. It also shows that x2 contributes more to utility than x1.
```python
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
levels = [1e-6,5*1e-6,1e-5,5*1e-5,1e-4,5*1e-4,1e-3,5*1e-3,1e-2,5*1e-2,0.50,1.00,1.50,2.00,2.50,3.00,3.50,4.00,5.00]
cs = ax.contour(x1_grid,x2_grid,ces_grid,levels=levels,cmap=cm.jet)
fig.colorbar(cs);
```
Here we see a different kind of plot. It shows the same information in 2D: utility increases as x1 and x2 increase, and the contours are oval, which shows that x2 gives more utility than x1.
# Optimizing with gradient based optimizer
**Now we wish to optimize using the algorithm:** `minimize_gradient_descent()`
The idea is to make some guesses on x0, compute f(x0), find a step size and update guess. Keep doing this until it converges.
```python
def minimize_gradient_descent(f,x0,jac,alphas=[0.01,0.05,0.1,0.25,0.5,1],max_iter=5000,tol=1e-8):
""" minimize function with gradient descent
Args:
f (callable): function
x0 (float): initial value
jac (callable): jacobian
alpha (list): potential step sizes
max_iter (int): maximum number of iterations
tol (float): tolerance
Returns:
x (float): root
n (int): number of iterations used
"""
# step 1: initialize
x = x0
fx = f(x0)
n = 1
# step 2-6: iteration
while n < max_iter:
x_prev = x
fx_prev = fx
# step 2: evaluate gradient
jacx = jac(x)
# step 3: find good step size
fx_ast = np.inf
alpha_ast = np.nan
for alpha in alphas:
x = x_prev - alpha*jacx
fx = f(x)
if fx < fx_ast:
fx_ast = fx
alpha_ast = alpha
# step 4: update guess
x = x_prev - alpha_ast*jacx
# step 5: check convergence
fx = f(x)
if abs(fx-fx_prev) < tol:
break
# d. update i
n += 1
return x,n
```
```python
x0 = np.array([5,4])
x,n = minimize_gradient_descent(ces,x0,ces_jac,alphas=[0.01,0.05,0.1,0.25,0.5,1])
print(n,x,ces(x))
```
5000 [9.8813129e-324 4.2335895e-003] 0.0037866375632192124
**Solution:** The gradient-descent routine hits the iteration limit without finding a useful solution: minimising the utility simply drives $(x_1,x_2)$ towards the origin, and without a budget constraint there is no interior optimum to find. We therefore now try the SLSQP optimizer, still iterating over gradient evaluations.
```python
def collect(x):
global evals
global x0
global x1s
global x2s
global fs
if evals == 0:
x1s = [x0[0]]
x2s = [x0[1]]
fs = [ces(x0)]
x1s.append(x[0])
x2s.append(x[1])
fs.append(ces(x))
evals += 1
```
```python
def contour():
global evals
global x1s
global x2s
global fs
# Code for plotting the result:
fig = plt.figure(figsize=(10,4))
ax = fig.add_subplot(1,2,1)
levels = [1e-6,5*1e-6,1e-5,5*1e-5,1e-4,5*1e-4,1e-3,5*1e-3,1e-2,5*1e-2,0.50,1.00,1.50,2.00,2.50,3.00,3.50,4.00,5.00]
cs = ax.contour(x1_grid,x2_grid,ces_grid,levels=levels,cmap=cm.jet)
fig.colorbar(cs)
ax.plot(x1s,x2s,'-o',ms=4,color='black')
ax.set_xlabel('$x_1$')
ax.set_ylabel('$x_2$')
ax = fig.add_subplot(1,2,2)
ax.plot(np.arange(evals+1),fs,'-o',ms=4,color='black')
ax.set_xlabel('iteration')
ax.set_ylabel('function value')
```
```python
evals = 0
x0 = [-1.5,-1]
result = optimize.minimize(ces,x0,jac=ces_jac,
method='SLSQP',
bounds=((-2,2),(-2,2)),
callback=collect,
options={'disp':True})
contour()
```
The optimizer now converges, here in 23 iterations.
# Optimization problem
Optimization using the **SLSQP** method.
Consider now the consumer problem given by:
\\[
\begin{eqnarray*}
V(p_{1},p_{2},I) & = & \max_{x_{1},x_{2}}u(x_{1},x_{2})\\
& \text{s.t.}\\
p_{1}x_{1}+p_{2}x_{2} & \leq & I\\
p_{1},p_{2},I>0\\
x_{1},x_{2} & \geq & 0
\end{eqnarray*}
\\]
where the utility is a CES funtion as described earlier.
We choose the parameters:
\\[ \alpha=0.20 \\]
\\[ \rho=-0.20 \\]
```python
def u_func(x1,x2,alpha=0.20,rho=-0.2):
return (alpha*x1**(-rho)+(1-alpha)*x2**(-rho))**(-1/rho)
```
We now try to set a random value for x1 and x2 and calculate the utility:
```python
x1 = 2
x2 = 4
u = u_func(x1,x2)
print(f'x1 = {x1:.1f}, x2 = {x2:.1f} -> u = {u:.4f}')
```
x1 = 2.0, x2 = 4.0 -> u = 3.5083
```python
print(u)
```
3.508328497554137
When the consumer buys 2 units of x1 and 4 units of x2, the utility is about 3.5.
Now we try to increase first x1 and then x2 to see how much it affects the utility:
```python
# Increase x1 to several random numbers:
x1_list = [2,4,6,8,10,12]
x2 = 3
for i,x1 in enumerate(x1_list):
u = u_func(x1,x2,alpha=0.20,rho=-0.2)
print(f'{i:2d}: x1 = {x1:<6.3f} x2 = {x2:<6.3f} -> u = {u:<6.3f}')
```
0: x1 = 2.000 x2 = 3.000 -> u = 2.773
1: x1 = 4.000 x2 = 3.000 -> u = 3.182
2: x1 = 6.000 x2 = 3.000 -> u = 3.473
3: x1 = 8.000 x2 = 3.000 -> u = 3.709
4: x1 = 10.000 x2 = 3.000 -> u = 3.911
5: x1 = 12.000 x2 = 3.000 -> u = 4.089
```python
# Increase x2 to several random numbers:
x2_list = [2,4,6,8,10,12]
x1 = 3
for i,x2 in enumerate(x2_list): # i is a counter
u = u_func(x1,x2,alpha=0.20,rho=-0.2)
print(f'{i:2d}: x1 = {x1:<6.3f} x2 = {x2:<6.3f} -> u = {u:<6.3f}')
```
0: x1 = 3.000 x2 = 2.000 -> u = 2.175
1: x1 = 3.000 x2 = 4.000 -> u = 3.781
2: x1 = 3.000 x2 = 6.000 -> u = 5.262
3: x1 = 3.000 x2 = 8.000 -> u = 6.673
4: x1 = 3.000 x2 = 10.000 -> u = 8.036
5: x1 = 3.000 x2 = 12.000 -> u = 9.362
This shows that, with x2 fixed, the first extra units of x1 raise utility noticeably, but the marginal utility of x1 falls quickly, so beyond roughly 10 units additional x1 adds little. Increasing x2, by contrast, keeps raising utility strongly.
# Solve the consumer problem
We now solve the consumer problem with a **multi-dimensional constrained grid search**.
The idea is to loop through a grid of \\(N_1 \times N_2\\) candidate values for \\(x_1\\) and \\(x_2\\), keep only the combinations that satisfy the budget constraint, and pick the one with the highest utility.
The problem is given by:
\\[
\begin{eqnarray*}
V(p_{1},p_{2},I) & = & \max_{x_{1}\in X_1}(\alpha x_{1}^{-\rho}+(1-\alpha) x_{2}^{-\rho})^{-1/\rho}\\
& \text{s.t.}\\
X_1 & = & \left\{0,\frac{1}{N-1}\frac{I}{p_1},\frac{2}{N-1}\frac{I}{p_1},\dots,\frac{I}{p_1}\right\} \\
x_{2} & = & \frac{I-p_{1}x_{1}}{p_2}\\
\end{eqnarray*}
\\]
We choose Income and prices:
I=30, p1=2, p2=4
```python
def find_best_choice(alpha,rho,I,p1,p2,N1,N2,do_print=True):
shape_tuple = (N1,N2)
x1_values = np.empty(shape_tuple)
x2_values = np.empty(shape_tuple)
u_values = np.empty(shape_tuple)
x1_best = 0
x2_best = 0
u_best = u_func(0,0,alpha=alpha,rho=rho)
for i in range(N1):
for j in range(N2):
x1_values[i,j] = x1 = (i/(N1-1))*I/p1
x2_values[i,j] = x2 = (j/(N2-1))*I/p2
if p1*x1+p2*x2 <= I:
u_values[i,j] = u_func(x1,x2,alpha=alpha,rho=rho)
else:
u_values[i,j] = u_func(0,0,alpha=alpha,rho=rho)
if u_values[i,j] > u_best:
x1_best = x1_values[i,j]
x2_best = x2_values[i,j]
u_best = u_values[i,j]
if do_print:
print_solution(x1_best,x2_best,u_best,I,p1,p2)
return x1_best,x2_best,u_best,x1_values,x2_values,u_values
def print_solution(x1,x2,u,I,p1,p2):
print(f'x1 = {x1:.8f}')
print(f'x2 = {x2:.8f}')
print(f'u = {u:.8f}')
print(f'I-p1*x1-p2*x2 = {I-p1*x1-p2*x2:.8f}')
```
```python
solution = find_best_choice(alpha=0.20,I=30,p1=2,p2=4,N1=200,N2=400,rho=-0.2)
```
x1 = 2.63819095
x2 = 6.16541353
u = 5.26100494
I-p1*x1-p2*x2 = 0.06196396
We want the solution with the highest utility and, ideally, zero left-over income $I-p_1 x_1-p_2 x_2$.
The grid search is only approximate: the left-over income above is still positive, so the budget is not fully spent and this is not the exact optimum.
# Solve by using the SLSQP method
Now we use the SLSQP method to solve the consumer problem with a budget constraint. We use the same income and prices as before.
```python
from scipy import optimize
```
```python
alpha = 0.20 # preference parameter
rho = -0.20 # elasticity parameter
I = 30 # income
p1 = 2 # price on good x1
p2 = 4 # price on good x2
```
```python
def value_of_choice(x,alpha,rho,I,p1,p2):
x1 = x[0]
x2 = x[1]
return -u_func(x1,x2,alpha,rho)
constraints = (
{'type': 'ineq', 'fun': lambda x: I-p1*x[0]-p2*x[1]}
)
bounds = (
(0,I/p1),
(0,I/p2)
)
# Call the SLSQP solver:
initial_guess = [I/p1/2,I/p2/2]
sol_optimize = optimize.minimize(
value_of_choice,initial_guess,args=(alpha,rho,I,p1,p2),
method='SLSQP',bounds=bounds,constraints=constraints)
x1 = sol_optimize.x[0]
x2 = sol_optimize.x[1]
u = u_func(x1,x2,alpha,rho)
print_solution(x1,x2,u,I,p1,p2)
```
x1 = 2.60552384
x2 = 6.19723808
u = 5.27198775
I-p1*x1-p2*x2 = -0.00000000
Now the utility is increased and the left-over income is equal to zero. Therefore this is a better way to solve this consumer problem.
The maximum utility is 5.27 and the optimal consumption is 2.6 of good x1 and 6.2 of good x2.
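As a sanity check that is not part of the original notebook, the same optimum can be derived in closed form from the first-order condition (MRS equal to the price ratio) for this CES specification:
```python
alpha, rho, I, p1, p2 = 0.20, -0.20, 30, 2, 4

# FOC: alpha*x1**(-rho-1) / ((1-alpha)*x2**(-rho-1)) = p1/p2
ratio = ((p1/p2)*(1-alpha)/alpha)**(1.0/(-rho-1.0))   # optimal ratio x1/x2
x2_star = I/(p1*ratio + p2)                            # spend the whole budget
x1_star = ratio*x2_star
print(x1_star, x2_star)                                # ~2.6055 and ~6.1972
```
The values agree with the SLSQP solution above to the displayed precision.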
**Plot the solution**
The plot shows utility as a function of consumed $x_1$ along the budget line; the maximum utility can be read off from it.
```python
# Unpack the grid-search solution so that its values can be plotted
x1_best,x2_best,u_best,x1_values,x2_values,u_values = solution
fig = plt.figure(figsize=(12,4),dpi=100)
ax = fig.add_subplot(1,2,1)
ax.plot(x1_values,u_values, 'bo')
ax.set_title('value of choice, $u(x_1,x_2)$')
ax.set_xlabel('$x_1$')
ax.set_ylabel('$u(x_1,(I-p_1 x_1)/p_2)$')
ax.grid(True)
```
# Extension of the model
Change in budget constraint.
Now we introduce a tax on good x2 that grows with consumption: the effective price of x2 rises by 0.5 for every unit of x2 consumed.
\\[
\begin{eqnarray*}
V(p_{1},p_{2},I) & = & \max_{x_{1}\in X_1}(\alpha x_{1}^{-\rho}+(1-\alpha) x_{2}^{-\rho})^{-1/\rho}\\
& \text{s.t.}\\
p_{1}*x_{1}+(p_{2}+0.5*x_{2})*x_{2} & \leq & I,\,\,\,p_{1},p_{2},I>0\\
x_{1},x_{2} & \geq & 0
\end{eqnarray*}
\\]
```python
alpha = 0.20 # preference parameter
rho = -0.20 # elasticity parameter
I = 30 # income
p1 = 2 # price 1
p2 = 4 # price 2
```
Again we use the SLSQP optimizer as it was the best one.
```python
def value_of_choice(x,alpha,rho,I,p1,p2):
x1 = x[0]
x2 = x[1]
return -u_func(x1,x2,alpha,rho)
# Constraint:
constraints = (
{'type': 'ineq', 'fun': lambda x: I-p1*x[0]-(p2+0.5*x[1])*x[1]}
)
bounds = (
(0,I/p1),
(0,I/p2)
)
# Call the SLSQP optimizer:
initial_guess = [I/p1/2,I/p2/2]
sol_optimize = optimize.minimize(
value_of_choice,initial_guess,args=(alpha,rho,I,p1,p2),
method='SLSQP',bounds=bounds,constraints=constraints)
x1 = sol_optimize.x[0]
x2 = sol_optimize.x[1]
u = u_func(x1,x2,alpha,rho)
print_solution(x1,x2,u,I,p1,p2)
```
x1 = 3.71257356
x2 = 3.81982781
u = 3.79818113
I-p1*x1-p2*x2 = 7.29554164
Note that `print_solution` still reports the left-over income using the untaxed budget $I-p_1x_1-p_2x_2$; under the taxed constraint $p_1x_1+(p_2+0.5x_2)x_2 \leq I$ the budget is in fact fully spent at this optimum.
**Plot the solution**
```python
fig = plt.figure(figsize=(12,3),dpi=100)
ax = fig.add_subplot(1,2,2)
ax.scatter(x1,x2)
ax.set_title('Optimum')
ax.set_xlabel('$x_1$')
ax.set_ylabel('$x_2$')
ax.grid(True)
```
```python
def _objective(x1,x2):
return (0.2*x1**(2) + (1-0.2)*x2**(2))**(1/2)
def objective(x):
return _objective(x[0],x[1])
def ineq_constraint(x):
return I-p1*x[0]-(p2+0.5*x[1])*x[1]
def eq_constraint(x):
income = 30 - p1*x[0]-(p2+0.5*x[1])*x[1]
return income
```
**Solving by SLSQP optimizer and iterating gradient evaluations:**
```python
bound = (1.0,5.0)
bounds = (bound, bound)
ineq_con = {'type': 'ineq', 'fun': ineq_constraint}
eq_con = {'type': 'eq', 'fun': eq_constraint}
x0=(1.0,1.0)
evals = 0    # reset the callback storage so that contour() below shows this run
result = optimize.minimize(objective,x0,
                        method='SLSQP',
                        bounds=bounds,
                        constraints=[ineq_con,eq_con],
                        callback=collect,
                        options={'disp':True})
print('\nx = ',result.x)
contour()
```
We now define the values for x1, x2 and u with the new budget constraint in order to plot the solution:
```python
def values(alpha,rho,I,p1,p2,N1,N2,do_print=True):
shape_tuple = (N1,N2)
x1_values = np.empty(shape_tuple)
x2_values = np.empty(shape_tuple)
u_values = np.empty(shape_tuple)
x1_best = 0
x2_best = 0
u_best = u_func(0,0,alpha=alpha,rho=rho)
for i in range(N1):
for j in range(N2):
x1_values[i,j] = x1 = (i/(N1-1))*I/p1
x2_values[i,j] = x2 = (j/(N2-1))*I/p2
#Budget constraint:
if p1*x1+(p2+0.5*x2)*x2 <= I:
u_values[i,j] = u_func(x1,x2,alpha=alpha,rho=rho)
else:
u_values[i,j] = u_func(0,0,alpha=alpha,rho=rho)
if u_values[i,j] > u_best:
x1_best = x1_values[i,j]
x2_best = x2_values[i,j]
u_best = u_values[i,j]
if do_print:
print_solution(x1_best,x2_best,u_best,I,p1,p2)
return x1_best,x2_best,u_best,x1_values,x2_values,u_values
def print_solution(x1,x2,u,I,p1,p2):
print(f'x1 = {x1:.8f}')
print(f'x2 = {x2:.8f}')
print(f'u = {u:.8f}')
print(f'I-p1*x1-(p2+0.5*x2)*x2 = {I-p1*x1-(p2+0.5*x2)*x2:.8f}')
```
```python
solution2 = values(alpha=0.20,rho=-0.2,I=30,p1=2,p2=4,N1=400,N2=600)
```
x1 = 3.75939850
x2 = 3.80634391
u = 3.79691751
I-p1*x1-(p2+0.5*x2)*x2 = 0.01170041
**Plot the solution**
```python
fig = plt.figure(figsize=(12,4),dpi=100)
#Unpack solution:
x1_best,x2_best,u_best,x1_values,x2_values,u_values = solution2
# Left plot
ax_left = fig.add_subplot(1,2,1)
ax_left.plot(x1_values,u_values, 'ob')
ax_left.set_title('$u(x_1,x_2)$')
ax_left.set_xlabel('$x_1$')
ax_left.set_ylabel('$u(x_1,(I-p_1 x_1)/p_2)$')
ax_left.grid(True)
# Right plot
ax_right = fig.add_subplot(1,2,2)
ax_right.scatter(x1_best,x2_best)
ax_right.set_title('Optimum')
ax_right.set_xlabel('$x_1$')
ax_right.set_ylabel('$x_2$')
ax_right.grid(True)
```
From this we see that the plot has changed and the utility is smaller than before introducing the tax. The consumer now wishes to consume almost as much of x1 as of x2: the optimal amount of x1 increases, while the utility curve is flatter than before.
| a7694c8386fc7e95fe732481f5aebfdff6654c87 | 345,535 | ipynb | Jupyter Notebook | modelproject/model.ipynb | NumEconCopenhagen/projects-2019-group-kvm | f83a32f74a3ba24242aad538633fa5f0c8e39d38 | [
"MIT"
]
| null | null | null | modelproject/model.ipynb | NumEconCopenhagen/projects-2019-group-kvm | f83a32f74a3ba24242aad538633fa5f0c8e39d38 | [
"MIT"
]
| 12 | 2019-04-08T17:03:54.000Z | 2019-05-14T21:52:22.000Z | modelproject/model.ipynb | NumEconCopenhagen/projects-2019-group-kvm | f83a32f74a3ba24242aad538633fa5f0c8e39d38 | [
"MIT"
]
| 2 | 2019-04-03T17:39:36.000Z | 2019-04-12T11:26:37.000Z | 273.150198 | 69,852 | 0.918387 | true | 6,280 | Qwen/Qwen-72B | 1. YES
2. YES | 0.808067 | 0.868827 | 0.70207 | __label__eng_Latn | 0.766299 | 0.469476 |
# Stochastic Differential Equations: Lab 1
```python
from IPython.core.display import HTML
css_file = 'https://raw.githubusercontent.com/ngcm/training-public/master/ipython_notebook_styles/ngcmstyle.css'
HTML(url=css_file)
```
The background for these exercises is the article by D. Higham, [*An Algorithmic Introduction to Numerical Simulation of Stochastic Differential Equations*, SIAM Review 43:525-546 (2001)](http://epubs.siam.org/doi/abs/10.1137/S0036144500378302).
Higham provides Matlab codes illustrating the basic ideas at <http://personal.strath.ac.uk/d.j.higham/algfiles.html>, which are also given in the paper.
For random processes in `python` you should look at the `numpy.random` module. To set the initial seed (which you should *not* do in a real simulation, but allows for reproducible testing), see `numpy.random.seed`.
## Brownian processes
A *random walk* or *Brownian process* or *Wiener process* is a way of modelling error introduced by uncertainty into a differential equation. The random variable representing the walk is denoted $W$. A single realization of the walk is written $W(t)$. We will assume that
1. The walk (value of $W(t)$) is initially (at $t=0$) $0$, so $W(0)=0$, to represent "perfect knowledge" there;
2. The walk is *on average* zero, so $\mathbb{E}[W(t+h) - W(t)] = 0$, where the *expectation value* of the random variable $W(t)$, with probability density $p_t(w)$, is
$$
\mathbb{E}[W(t)] = \int_{-\infty}^{\infty} w \, p_t(w) \, \text{d}w;
$$
3. Any step in the walk is independent of any other step, so $W(t_2) - W(t_1)$ is independent of $W(s_2) - W(s_1)$ for any $s_{1,2} \ne t_{1,2}$.
These requirements lead to a definition of a *discrete* random walk: given the points $\{ t_i \}$ with $i = 0, \dots, N$ separated by a uniform timestep $\delta t$, we have - for a single realization of the walk - the definition
$$
\begin{align}
\text{d}W_i &= \sqrt{\delta t} {\cal N}(0, 1), \\
W_i &= \left( \sum_{j=0}^{i-1} \text{d}W_j \right), \\
W_0 &= 0
\end{align}
$$
Here ${\cal N}(0, 1)$ means a realization of a normally distributed random variable with mean $0$ and standard deviation $1$: programmatically, the output of `numpy.random.randn`.
When working with discrete Brownian processes, there are two things we can do.
1. We can think about a *single realization* at different timescales, by grouping increments together. E.g.
$$
W_i = \sum_{j=0}^{i-1} \sum_{k=0}^{p-1} \text{d}W_{(p j + k)}
$$
is a Brownian process with timestep $p \, \delta t$.
2. We can think about *multiple realizations* by computing a new set of steps $\text{d}W$, whilst at the same timestep.
Both viewpoints are important.
### Tasks
1. Simulate a single realization of a Brownian process over $[0, 1]$ using a step length $\delta t = 1/N$ for $N = 500, 1000, 2000$. Use a fixed seed of `100`. Compare the results.
2. Simulate different realizations of a Brownian process with $\delta t$ of your choice. Again, compare the results.
```python
%matplotlib inline
import numpy
from matplotlib import pyplot
from matplotlib import rcParams
rcParams['font.family'] = 'serif'
rcParams['font.size'] = 16
rcParams['figure.figsize'] = (12,6)
from scipy.integrate import quad
```
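As a minimal sketch for Task 1 above (one possible answer, with the seed fixed at 100 only so the runs are reproducible):
```python
def brownian_path(N, T=1.0, seed=100):
    numpy.random.seed(seed)                             # fixed seed only for reproducible testing
    dt = T / N
    dW = numpy.sqrt(dt) * numpy.random.randn(N)         # increments dW_j = sqrt(dt) * N(0,1)
    W = numpy.concatenate(([0.0], numpy.cumsum(dW)))    # W(0) = 0
    return numpy.linspace(0.0, T, N + 1), W, dW

for N in (500, 1000, 2000):
    t, W, dW = brownian_path(N)
    pyplot.plot(t, W, label='N = {}'.format(N))
pyplot.xlabel('t'); pyplot.ylabel('W(t)'); pyplot.legend();
```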
Evaluate the function $u(W(t)) = \sin^2(t + W(t))$, where $W(t)$ is a Brownian process, on $M$ Brownian paths for $M = 500, 1000, 2000$. Compare the *average* path for each $M$.
The average path at time $t$ should be given by
$$
\begin{equation}
\int_{-\infty}^{\infty} \frac{\sin(t+s)^2 \exp(-s^2 / 2t)}{\sqrt{2 \pi t}} \,\text{d}s.
\end{equation}
$$
```python
# This computes the exact solution!
t_int = numpy.linspace(0.005, numpy.pi, 1000)
def integrand(x,t):
return numpy.sin(t+x)**2*numpy.exp(-x**2/(2.0*t))/numpy.sqrt(2.0*numpy.pi*t)
int_exact = numpy.zeros_like(t_int)
for i, t in enumerate(t_int):
int_exact[i], err = quad(integrand, -numpy.inf, numpy.inf, args=(t,))
```
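A Monte Carlo sketch (not part of the original lab) of the averaged path, on the same interval as `int_exact` above, so the two can be compared:
```python
N = 500
T = numpy.pi
t = numpy.linspace(0.0, T, N + 1)
dt = T / N
for M in (500, 1000, 2000):
    dW = numpy.sqrt(dt) * numpy.random.randn(M, N)
    W = numpy.hstack([numpy.zeros((M, 1)), numpy.cumsum(dW, axis=1)])
    u_mean = numpy.mean(numpy.sin(t + W)**2, axis=0)    # average path of u(W(t))
    pyplot.plot(t, u_mean, label='M = {}'.format(M))
pyplot.plot(t_int, int_exact, 'k--', label='exact average')
pyplot.xlabel('t'); pyplot.legend();
```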
## Stochastic integrals
We have, in eg finite elements or multistep methods for IVPs, written the solution of differential equations in terms of integrals. We're going to do the same again, so we need to integrate random variables. The integral of a random variable *with respect to a Brownian process* is written
$$
\int_0^t G(s) \, \text{d}W_s,
$$
where the notation $\text{d}W_s$ indicates that the step in the Brownian process depends on the (dummy) independent variable $s$.
We'll concentrate on the case $G(s) = W(s)$, so we're trying to integrate the Brownian process itself. If this were a standard, non-random variable, the answer would be
$$
\int_0^t W(s) \, \text{d}W_s = \frac{1}{2} \left( W(t)^2 - W(0)^2 \right).
$$
When we approximate the quadrature numerically, we split the interval $[0, T]$ into strips (subintervals), approximate the integral on each subinterval by picking a point inside the interval, evaluating the integrand at that point, and weighting it by the width of the subinterval. In normal integration it doesn't matter which point within the subinterval we choose.
In the stochastic case that is not true. We pick a specific point $\tau_i = a t_i + (1-a) t_{i-1}$ in the interval $[t_{i-1}, t_i]$. The value $a \in [0, 1]$ is a constant that says where within each interval we are evaluating the integrand. We can then approximate the integral by
\begin{equation}
\int_0^T W(s) \, dW_s = \sum_{i=1}^N W(\tau_i) \left[ W(t_i) - W(t_{i-1}) \right] = S_N.
\end{equation}
Now we can compute (using that the expectation of the products of $W$ terms is the covariance, which is the minimum of the arguments)
\begin{align}
\mathbb{E}(S_N) &= \mathbb{E} \left( \sum_{i=1}^N W(\tau_i) \left[ W(t_i) - W(t_{i-1}) \right] \right) \\
&= \sum_{i=1}^N \mathbb{E} \left( W(\tau_i) W(t_i) \right) - \mathbb{E} \left( W(\tau_i) W(t_{i-1}) \right) \\
&= \sum_{i=1}^N (\min\{\tau_i, t_i\} - \min\{\tau_i, t_{i-1}\}) \\
&= \sum_{i=1}^N (\tau_i - t_{i-1}) \\
&= (t - t_0) a.
\end{align}
The choice of evaluation point **matters**.
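A small numerical illustration of this (a sketch with arbitrary $N$ and $M$, not from the original text): approximating $S_N$ with the left endpoint ($a=0$) versus an endpoint-averaged, midpoint-style rule ($a=1/2$) over many paths on $[0,1]$.
```python
N, M = 200, 20000
dt = 1.0 / N
dW = numpy.sqrt(dt) * numpy.random.randn(M, N)
W = numpy.hstack([numpy.zeros((M, 1)), numpy.cumsum(dW, axis=1)])
S_left = numpy.sum(W[:, :-1] * dW, axis=1)                      # a = 0
S_mid  = numpy.sum(0.5 * (W[:, :-1] + W[:, 1:]) * dW, axis=1)   # a = 1/2 (endpoint average)
print(numpy.mean(S_left), numpy.mean(S_mid))                    # approx 0.0 and approx 0.5 = a*t
```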
So there are multiple different stochastic integrals, each (effectively) corresponding to a different choice of $a$. The two standard choices are:
1. Ito: choose $a=0$.
2. Stratonovich: choose $a=1/2$.
These lead to
$$
\int_0^t G(s) \, \text{d}W_s \simeq_{\text{Ito}} \sum_{j=0}^{N-1} G(s_j, W(s_j)) \left( W(s_{j+1}) - W(s_j) \right) = \sum_{j=0}^{N-1} G(s_j) \text{d}W(s_{j})
$$
for the Ito integral, and
$$
\int_0^t G(s) \, \text{d}W_s \simeq_{\text{Stratonovich}} \sum_{j=0}^{N-1} \frac{1}{2} \left( G(s_j, W(s_j)) + G(s_{j+1}, W(s_{j+1})) \right) \left( W(s_{j+1}) - W(s_j) \right) = \sum_{j=0}^{N-1} \frac{1}{2} \left( G(s_j, W(s_j)) + G(s_{j+1}, W(s_{j+1})) \right) \text{d}W(s_{j}).
$$
for the Stratonovich integral.
### Tasks
Write functions to compute the Itô and Stratonovich integrals of a function $h(t, W(t))$ of a *given* Brownian process $W(t)$ over the interval $[0, 1]$.
```python
def ito(h, trange, dW):
"""Compute the Ito stochastic integral given the range of t.
Parameters
----------
h : function
integrand
trange : list of float
the range of integration
dW : array of float
Brownian increments
seed : integer
optional seed for the Brownian path
Returns
-------
ito : float
the integral
"""
return ito
```
```python
def stratonovich(h, trange, dW):
"""Compute the Stratonovich stochastic integral given the range of t.
Parameters
----------
h : function
integrand
trange : list of float
the range of integration
dW : array of float
the Brownian increments
Returns
-------
stratonovich : float
the integral
"""
return stratonovich
```
Test the functions on $h = W(t)$ for various $N$. Compare the limiting values of the integrals.
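One possible way to fill in the stubs above (a sketch, under the assumptions that `trange` is a uniform grid $t_0,\dots,t_N$, `dW` holds the $N$ increments, and the integrand is called as `h(t, W)`):
```python
def ito_sketch(h, trange, dW):
    t = numpy.asarray(trange)
    W = numpy.concatenate(([0.0], numpy.cumsum(dW)))
    return numpy.sum(h(t[:-1], W[:-1]) * dW)          # evaluate at the left endpoint

def stratonovich_sketch(h, trange, dW):
    t = numpy.asarray(trange)
    W = numpy.concatenate(([0.0], numpy.cumsum(dW)))
    return numpy.sum(0.5 * (h(t[:-1], W[:-1]) + h(t[1:], W[1:])) * dW)

# Test on h(t, W) = W: the Ito sum tends to (W(1)^2 - 1)/2, the Stratonovich sum to W(1)^2/2.
N = 100000
t = numpy.linspace(0.0, 1.0, N + 1)
dW = numpy.sqrt(1.0 / N) * numpy.random.randn(N)
WT = numpy.sum(dW)
print(ito_sketch(lambda s, w: w, t, dW), 0.5 * (WT**2 - 1.0))
print(stratonovich_sketch(lambda s, w: w, t, dW), 0.5 * WT**2)
```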
## Euler-Maruyama's method
Now we can write down a stochastic differential equation.
The differential form of a stochastic differential equation is
$$
\frac{\text{d}X}{\text{d}t} = f(X) + g(X) \frac{\text{d}W}{\text{d}t}
$$
and the comparable (and more useful) *integral form* is
$$
\text{d}X = f(X) \, \text{d}t + g(X) \text{d}W.
$$
This has formal solution
$$
X(t) = X_0 + \int_0^t f(X(s)) \, \text{d}s + \int_0^t g(X(s)) \, \text{d}W_s.
$$
We can use our Ito integral above to write down the *Euler-Maruyama method*
$$
X(t+h) \simeq X(t) + h f(X(t)) + g(X(t)) \left( W(t+h) - W(t) \right) + {\cal{O}}(h^p).
$$
Written in discrete, subscript form we have
$$
X_{n+1} = X_n + h f_n + g_n \, \text{d}W_{n}
$$
The order of convergence $p$ is an interesting and complex question.
### Tasks
Apply the Euler-Maruyama method to the stochastic differential equation
$$
\begin{equation}
dX(t) = \lambda X(t) + \mu X(t) dW(t), \qquad X(0) = X_0.
\end{equation}
$$
Choose any reasonable values of the free parameters $\lambda, \mu, X_0$.
The exact solution to this equation is $X(t) = X(0) \exp \left[ \left( \lambda - \tfrac{1}{2} \mu^2 \right) t  + \mu W(t) \right]$. Fix the timestep and compare your solution to the exact solution.
Vary the timestep of the Brownian path and check how the numerical solution compares to the exact solution.
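A sketch of one possible Euler-Maruyama run for this problem (the parameter values $\lambda=2$, $\mu=1$, $X_0=1$ are arbitrary choices, not prescribed by the lab):
```python
lam, mu, X0, T, N = 2.0, 1.0, 1.0, 1.0, 2**8
dt = T / N
numpy.random.seed(100)
dW = numpy.sqrt(dt) * numpy.random.randn(N)
W = numpy.cumsum(dW)

X = numpy.empty(N + 1); X[0] = X0
for n in range(N):
    X[n + 1] = X[n] + dt * lam * X[n] + mu * X[n] * dW[n]   # X_{n+1} = X_n + h f_n + g_n dW_n

t = numpy.linspace(0.0, T, N + 1)
X_exact = X0 * numpy.exp((lam - 0.5 * mu**2) * t[1:] + mu * W)
pyplot.plot(t, X, label='Euler-Maruyama')
pyplot.plot(t[1:], X_exact, '--', label='exact')
pyplot.xlabel('t'); pyplot.legend();
```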
## Convergence
We have two ways of thinking about Brownian paths or processes.
We can fix the path (ie fix $\text{d}W$) and vary the timescale on which we're looking at it: this gives us a single random path, and we can ask how the numerical method converges for this single realization. This is *strong convergence*.
Alternatively, we can view each path as a single realization of a random process that should average to zero. We can then look at how the method converges as we average over a large number of realizations, *also* looking at how it converges as we vary the timescale. This is *weak convergence*.
Formally, denote the true solution as $X(T)$ and the numerical solution for a given step length $h$ as $X^h(T)$. The order of convergence is denoted $p$.
#### Strong convergence
$$
\mathbb{E} \left| X(T) - X^h(T) \right| \le C h^{p}
$$
For Euler-Maruyama, expect $p=1/2$.
#### Weak convergence
$$
\left| \mathbb{E} \left( \phi( X(T) ) \right) - \mathbb{E} \left( \phi( X^h(T) ) \right) \right| \le C h^{p}
$$
For Euler-Maruyama, expect $p=1$.
### Tasks
Investigate the weak and strong convergence of your method, applied to the problem above.
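A sketch of the strong-convergence part of this task (assumptions: the linear SDE above, $M$ sample paths, and Euler-Maruyama run on coarsened versions of one fine set of Brownian increments):
```python
lam, mu, X0, T = 2.0, 1.0, 1.0, 1.0
N_fine = 2**12
dt_fine = T / N_fine
M = 500
numpy.random.seed(100)
dW = numpy.sqrt(dt_fine) * numpy.random.randn(M, N_fine)
X_true = X0 * numpy.exp((lam - 0.5 * mu**2) * T + mu * numpy.sum(dW, axis=1))

hs, errs = [], []
for R in (1, 2, 4, 8, 16):                      # step length h = R * dt_fine
    h = R * dt_fine
    X = X0 * numpy.ones(M)
    for n in range(N_fine // R):
        dW_coarse = numpy.sum(dW[:, n * R:(n + 1) * R], axis=1)
        X = X + h * lam * X + mu * X * dW_coarse
    hs.append(h)
    errs.append(numpy.mean(numpy.abs(X - X_true)))   # estimate of E|X(T) - X^h(T)|
print('estimated strong order:', numpy.polyfit(numpy.log(hs), numpy.log(errs), 1)[0])  # expect about 0.5
```
The weak order can be estimated the same way by comparing $\mathbb{E}(X^h(T))$ with the known mean $X_0 e^{\lambda T}$.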
```python
```
| c093e96b17de669772f24b364ae5f9a3c204fe59 | 21,347 | ipynb | Jupyter Notebook | FEEG6016 Simulation and Modelling/09-Stochastic-DEs-Lab-1.ipynb | ngcm/training-public | e5a0d8830df4292315c8879c4b571eef722fdefb | [
"MIT"
]
| 7 | 2015-06-23T05:50:49.000Z | 2016-06-22T10:29:53.000Z | FEEG6016 Simulation and Modelling/09-Stochastic-DEs-Lab-1.ipynb | Jhongesell/training-public | e5a0d8830df4292315c8879c4b571eef722fdefb | [
"MIT"
]
| 1 | 2017-11-28T08:29:55.000Z | 2017-11-28T08:29:55.000Z | FEEG6016 Simulation and Modelling/09-Stochastic-DEs-Lab-1.ipynb | Jhongesell/training-public | e5a0d8830df4292315c8879c4b571eef722fdefb | [
"MIT"
]
| 24 | 2015-04-18T21:44:48.000Z | 2019-01-09T17:35:58.000Z | 34.654221 | 386 | 0.519464 | true | 4,208 | Qwen/Qwen-72B | 1. YES
2. YES | 0.629775 | 0.855851 | 0.538993 | __label__eng_Latn | 0.935499 | 0.090592 |
```python
import jax.numpy as jnp
from sympy import *
```
```python
x,y,z=symbols('x y z')
init_printing(use_unicode=True)
```
```python
print("\ncos(x) =",diff(cos(x),x),"\n")
diff(exp(x**2),x)
```
We can compute more involved derivatives.
For example, let us take the seventh mixed derivative of exp(xyz) with respect to its variables:
## $\frac{\partial^7}{\partial x \partial y^2 \partial z^4}\exp(xyz)$
```python
expr= exp(x*y*z)
diff(expr,x,y,y,z,z,z,z)
```
For numerical derivatives:
## $f'(x_i)\approx \frac{\Delta f}{\Delta x}= \frac{f(x_{i+1})-f(x_i)}{x_{i+1}-x_i}$
```python
import numpy as np
import matplotlib.pyplot as plt
def derivative(x,y):
    # forward-difference approximation f'(x_i) ~ (y[i+1]-y[i])/(x[i+1]-x[i])
    xp=np.zeros(len(x))
    yp=np.zeros(len(x))
    for i in range(len(x)-1):
        yp[i]=y[i+1]-y[i]
        xp[i]=x[i+1]-x[i]
    # the last entry of xp stays 0, which triggers the divide-by-zero warning below
    return (yp/xp)
```
```python
x1=np.linspace(-10,10,100)
f1=x1**2*np.sin(x1)
der=derivative(x1,f1)
```
/Users/diegobarbosa/opt/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:10: RuntimeWarning: invalid value encountered in true_divide
# Remove the CWD from sys.path while we load stuff.
```python
plt.figure(figsize=(15,8))
plt.plot(x1,f1,'r*',label='Function')
plt.plot(x1,der,'kd',label='Numerical derivative')
plt.legend(fontsize=15)
plt.xlabel("x",fontsize=15)
plt.ylabel("y",fontsize=15)
plt.legend(fontsize=15)
plt.grid()
```
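A quick check (not in the original notebook): compare the forward differences with the analytic derivative of $x^2\sin(x)$, which is $2x\sin x + x^2\cos x$.
```python
der_exact = 2*x1*np.sin(x1) + x1**2*np.cos(x1)   # analytic derivative of x**2*sin(x)
plt.figure(figsize=(15,8))
plt.plot(x1, der, 'kd', label='Numerical derivative')
plt.plot(x1, der_exact, 'r-', label='Analytic derivative')
plt.xlabel("x", fontsize=15)
plt.ylabel("y'", fontsize=15)
plt.legend(fontsize=15)
plt.grid()
```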
```python
from jax import grad,jit,vmap
import jax.numpy as jnp
x=jnp.linspace(-5,5,100)
grad_f=jit(vmap(grad(jnp.tanh)))
g2=jit(vmap(grad(grad(jnp.tanh))))
g3=jit(vmap(grad(grad(grad(jnp.tanh)))))
plt.figure(figsize=(15,8))
#plt.plot(x,y,'o')
plt.plot(x,np.tanh(x))
plt.plot(x,grad_f(x))
plt.plot(x,g2(x))
plt.plot(x,g3(x))
plt.xlabel("Mediciones",fontsize=15)
plt.ylabel("Observaciones",fontsize=15)
plt.legend(["Datos"],fontsize=15)
plt.grid()
plt.show()
```
```python
```
| 14cc03d7a5d4917b844f91b3d23763f44f3c49b1 | 105,009 | ipynb | Jupyter Notebook | Week3/Session6.ipynb | dabarbosa10/UN_AI | 286839220b845184e9bc551be25ad992964d54a1 | [
"MIT"
]
| null | null | null | Week3/Session6.ipynb | dabarbosa10/UN_AI | 286839220b845184e9bc551be25ad992964d54a1 | [
"MIT"
]
| null | null | null | Week3/Session6.ipynb | dabarbosa10/UN_AI | 286839220b845184e9bc551be25ad992964d54a1 | [
"MIT"
]
| null | null | null | 416.702381 | 61,580 | 0.939053 | true | 602 | Qwen/Qwen-72B | 1. YES
2. YES | 0.913677 | 0.841826 | 0.769156 | __label__eng_Latn | 0.140781 | 0.62534 |
<a href="https://colab.research.google.com/github/julianovale/project_trains/blob/master/Exemplo_04.ipynb" target="_parent"></a>
```
from sympy import I, Matrix, symbols, Symbol, eye
from datetime import datetime
import numpy as np
import pandas as pd
```
```
'''
Routes
'''
R1 = Matrix([[0,"L1_p3",0,0,0,0],[0,0,"L1_v1",0,0,0],[0,0,0,"L1_p4",0,0],[0,0,0,0,"L1_v3",0],[0,0,0,0,0,"L1_v4"],[0,0,0,0,0,0]])
R2 = Matrix([[0,"L2_p3",0,0,0,0],[0,0,"L2_v2",0,0,0],[0,0,0,"L2_p5",0,0],[0,0,0,0,"L2_v3",0],[0,0,0,0,0,"L2_v5"],[0,0,0,0,0,0]])
R3 = Matrix([[0,"L3_p3",0,0,0,0],[0,0,"L3_v5",0,0,0],[0,0,0,"L3_p1",0,0],[0,0,0,0,"L3_v3",0],[0,0,0,0,0,"L3_v1"],[0,0,0,0,0,0]])
```
```
'''
Seções de bloqueio
'''
T1 = Matrix([[0, "p1"],["v1", 0]])
T2 = Matrix([[0, "p2"],["v2", 0]])
T3 = Matrix([[0, "p3"],["v3", 0]])
T4 = Matrix([[0, "p4"],["v4", 0]])
T5 = Matrix([[0, "p5"],["v5", 0]])
```
```
def kronSum(A,B):
m = np.size(A,1)
n = np.size(B,1)
A = np.kron(A, np.eye(n))
B = np.kron(np.eye(m),B)
return A + B
```
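A quick sanity check (not in the original notebook) of the Kronecker-sum property: the eigenvalues of $A \oplus B$ are all pairwise sums of the eigenvalues of $A$ and $B$.
```
A = np.array([[1.0, 0.0], [0.0, 2.0]])
B = np.array([[3.0, 0.0], [0.0, 5.0]])
print(np.sort(np.linalg.eigvals(kronSum(A, B))))              # [4. 5. 6. 7.]
print(np.sort(np.add.outer([1.0, 2.0], [3.0, 5.0]).ravel()))  # pairwise sums of the eigenvalues
```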
```
momento_inicio = datetime.now()
'''
Route algebra
'''
rotas = kronSum(R1,R2)
rotas = kronSum(rotas,R3)
'''
Section algebra
'''
secoes = kronSum(T1,T2)
secoes = kronSum(secoes,T3)
secoes = kronSum(secoes,T4)
secoes = kronSum(secoes,T5)
'''
System algebra
'''
sistema = np.kron(rotas, secoes)
# compute the processing time
tempo_processamento = datetime.now() - momento_inicio
```
```
sistema = pd.DataFrame(data=sistema,index=list(range(1,np.size(sistema,0)+1)), columns=list(range(1,np.size(sistema,1)+1)))
```
```
sistema.shape
```
(6912, 6912)
```
print(tempo_processamento)
```
0:01:14.491733
```
sistema
```
(Output: the symbolic `sistema` matrix rendered as a 6912 x 6912 pandas DataFrame. Almost every entry is 0; the nonzero entries hold symbolic transition products such as `1.0*L3_p3*p5`. Full HTML table omitted.)
```
sistema.loc[4740,4788]
```
1.0*L3_v1*p1
```
momento_inicio = datetime.now()
colunas = ['denode', 'paranode', 'aresta']
grafo = pd.DataFrame(columns=colunas)
r = 1
c = 1
for j in range(np.size(sistema,0)):
for i in range(np.size(sistema,0)):
if sistema.loc[r,c]==0 and c < np.size(sistema,0):
c += 1
elif c < np.size(sistema,0):
grafo.loc[len(grafo)+1] = (r, c, sistema.loc[r,c])
c += 1
else:
c = 1
r += 1
tempo_processamento = datetime.now() - momento_inicio
print(tempo_processamento)
```
0:21:53.389261
```
grafo
```
(Output: `grafo`, a pandas DataFrame with 86385 rows and 3 columns: `denode`, `paranode`, `aresta`; the first row is (1, 34, `1.0*L3_p3*p5`). Full HTML table omitted.)
```
grafo['aresta'] = grafo['aresta'].astype('str')
grafo
```
(Output: `grafo` after the string cast, still 86385 rows and 3 columns. Full HTML table omitted.)
```
new = grafo["aresta"].str.split("*", n = -1, expand = True)
grafo["aresta"]=new[1]
grafo["semaforo_secao"]=new[2]
new = grafo["aresta"].str.split("_", n = -1, expand = True)
grafo["semaforo_trem"]=new[1]
grafo['coincide'] = np.where(grafo['semaforo_secao']==grafo['semaforo_trem'], True, False)
grafo
```
(Output: `grafo` with the split columns, 86385 rows and 6 columns: `denode`, `paranode`, `aresta`, `semaforo_secao`, `semaforo_trem`, `coincide`. Full HTML table omitted.)
```
grafo = pd.DataFrame(data=grafo)
```
```
# Step 1: forward pass collecting the nodes reachable from node 1
alcancavel = [1]
r = 1
for i in range(np.size(grafo,0)):
de = grafo.loc[r]['denode']
para = grafo.loc[r]['paranode']
if (de in alcancavel):
alcancavel.append(para)
r = r + 1
else:
r = r + 1
```
```
alcancavel.sort()
```
```
grafo = grafo[grafo.denode.isin(alcancavel)]
grafo
```
(Output: `grafo` restricted to rows whose origin node is reachable from node 1, 41865 rows and 6 columns. Full HTML table omitted.)
```
grafo.reset_index(drop = True)
grafo.index = np.arange(1, len(grafo) + 1)
grafo
```
(Output: `grafo` re-indexed, 41865 rows and 6 columns. Full HTML table omitted.)
```
# Step 2: backward pass keeping the nodes from which node 6906 can be reached
alcancavel = [6906]
r = np.size(grafo,0)
for i in range(np.size(grafo,0),1,-1):
para = grafo.loc[r]['paranode']
de = grafo.loc[r]['denode']
if (para in alcancavel):
alcancavel.append(de)
r = r - 1
else:
r = r - 1
```
```
grafo = grafo[grafo.paranode.isin(alcancavel)]
grafo
```
(Output: `grafo` restricted to rows whose destination node can reach node 6906, 40560 rows and 6 columns. Full HTML table omitted.)
```
grafo.reset_index(drop = True)
grafo.index = np.arange(1, len(grafo) + 1)
grafo
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>denode</th>
<th>paranode</th>
<th>aresta</th>
<th>semaforo_secao</th>
<th>semaforo_trem</th>
<th>coincide</th>
</tr>
</thead>
<tbody>
<tr>
<th>1</th>
<td>1</td>
<td>34</td>
<td>L3_p3</td>
<td>p5</td>
<td>p3</td>
<td>False</td>
</tr>
<tr>
<th>2</th>
<td>1</td>
<td>35</td>
<td>L3_p3</td>
<td>p4</td>
<td>p3</td>
<td>False</td>
</tr>
<tr>
<th>3</th>
<td>1</td>
<td>37</td>
<td>L3_p3</td>
<td>p3</td>
<td>p3</td>
<td>True</td>
</tr>
<tr>
<th>4</th>
<td>1</td>
<td>41</td>
<td>L3_p3</td>
<td>p2</td>
<td>p3</td>
<td>False</td>
</tr>
<tr>
<th>5</th>
<td>1</td>
<td>49</td>
<td>L3_p3</td>
<td>p1</td>
<td>p3</td>
<td>False</td>
</tr>
<tr>
<th>...</th>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<th>40556</th>
<td>6858</td>
<td>6906</td>
<td>L3_v1</td>
<td>p1</td>
<td>v1</td>
<td>False</td>
</tr>
<tr>
<th>40557</th>
<td>6866</td>
<td>6906</td>
<td>L3_v1</td>
<td>p2</td>
<td>v1</td>
<td>False</td>
</tr>
<tr>
<th>40558</th>
<td>6873</td>
<td>6906</td>
<td>L3_v1</td>
<td>p5</td>
<td>v1</td>
<td>False</td>
</tr>
<tr>
<th>40559</th>
<td>6876</td>
<td>6906</td>
<td>L3_v1</td>
<td>v4</td>
<td>v1</td>
<td>False</td>
</tr>
<tr>
<th>40560</th>
<td>6878</td>
<td>6906</td>
<td>L3_v1</td>
<td>v3</td>
<td>v1</td>
<td>False</td>
</tr>
</tbody>
</table>
<p>40560 rows × 6 columns</p>
</div>
```
grafo.to_csv('grafo.csv', sep=";")
from google.colab import files
files.download('grafo.csv')
```
<IPython.core.display.Javascript object>
<IPython.core.display.Javascript object>
| f35f9ae1613bf4dd45c91d6d3c19af195e7d9c7f | 94,358 | ipynb | Jupyter Notebook | Exemplo_04.ipynb | julianovale/project_trains | 73f698ab9618363b93777ab7337be813bf14d688 | [
"MIT"
]
| null | null | null | Exemplo_04.ipynb | julianovale/project_trains | 73f698ab9618363b93777ab7337be813bf14d688 | [
"MIT"
]
| null | null | null | Exemplo_04.ipynb | julianovale/project_trains | 73f698ab9618363b93777ab7337be813bf14d688 | [
"MIT"
]
| null | null | null | 35.313623 | 235 | 0.238379 | true | 15,582 | Qwen/Qwen-72B | 1. YES
2. YES | 0.882428 | 0.70253 | 0.619932 | __label__cym_Latn | 0.163674 | 0.27864 |
| |Pierre Proulx, ing, professeur|
|:---|:---|
|Département de génie chimique et de génie biotechnologique |** GCH200-Phénomènes d'échanges I **|
### Section 6.3, example 6.3-1
* only in the Stokes regime. Otherwise numpy must be used.
```python
#
# Pierre Proulx
#
# Set up the display and the symbolic computation tools
#
import sympy as sp
from IPython.display import *
sp.init_printing(use_latex=True)
```
```python
# Parameters, variables and functions
rho_s,rho,D,v_inf,mu,g=sp.symbols('rho_s,rho,D,v_inf,mu,g')
```
```python
f=4/3*g*D/v_inf**2*(rho_s-rho)/rho # equation defining the friction factor f
display(f)
Re=rho*v_inf*D/mu
f_v=(24/Re) # equation computing f from the friction law, Stokes regime
display(f_v)
```
```python
# Dictionary containing the parameter values.
# Values chosen so as to be in the Stokes regime
dico={'rho_s':1000,'rho':1.4,'D':50e-6,'mu':1.6e-5,'g':9.81}
#
eq=sp.Eq(f-f_v)
v=sp.solve((eq,0),v_inf)
display(v)
```
```python
v=v_inf.subs(v)
v=v.subs(dico)
display(v)
```
```python
Re=Re.subs(dico)
Re=Re.subs(v_inf,v)
display(Re)
```
Now let us look at a more general case: a falling particle that is not in the Stokes regime.
We now need a different tool, not *sympy* but rather *scipy*, which will perform the root search numerically rather than analytically.
Let us plug in the numerical values of example 6-3.1.
```python
import numpy as np
import math
from scipy.optimize import fsolve, root
#
# define the function whose zeros we are looking for
#
def f(D):
f1=(math.sqrt(24*vis/(rho*vinf*D))+0.5407)**2
f2=4./3.*g*D/(vinf**2)*(rhop-rho)/rho
return f1-f2
#
# parameter values
#
rhop=2620
Mair=28.966
rho=1590
vis=0.00958
g=9.81
vinf=0.65
D=fsolve(f,.1) # root-finding function from scipy.optimize, initial guess of .1
print(D*100, 'cm')
```
[ 2.07031821] cm
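As a quick check (not in the original notebook) that this answer is indeed outside the Stokes regime, the Reynolds number at the computed diameter is roughly

\begin{align}
Re = \frac{\rho\, v_\infty\, D}{\mu} \approx \frac{1590 \times 0.65 \times 0.0207}{0.00958} \approx 2.2\times 10^{3} \gg 1,
\end{align}

so the more general friction-factor correlation used above is appropriate.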
| e2c38174345cff1fcd33a52767fd95e0ee6f8803 | 13,413 | ipynb | Jupyter Notebook | Chap-6-ex-6-3-1.ipynb | pierreproulx/GCH200 | 66786aa96ceb2124b96c93ee3d928a295f8e9a03 | [
"MIT"
]
| 1 | 2018-02-26T16:29:58.000Z | 2018-02-26T16:29:58.000Z | Chap-6-ex-6-3-1.ipynb | pierreproulx/GCH200 | 66786aa96ceb2124b96c93ee3d928a295f8e9a03 | [
"MIT"
]
| null | null | null | Chap-6-ex-6-3-1.ipynb | pierreproulx/GCH200 | 66786aa96ceb2124b96c93ee3d928a295f8e9a03 | [
"MIT"
]
| 2 | 2018-02-27T15:04:33.000Z | 2021-06-03T16:38:07.000Z | 52.190661 | 2,368 | 0.734213 | true | 672 | Qwen/Qwen-72B | 1. YES
2. YES | 0.757794 | 0.785309 | 0.595102 | __label__fra_Latn | 0.782638 | 0.220952 |
# Mixture Models and, specifically, Gaussian Mixture Models
* Thus far, we have primarily discussed relatively simple models consisting of only one peak in the probability distribution function (pdf) when representing data using pdfs.
* For example, when we introduced the probabilistic generative classifier, our examples focused on representing each class using a single Gaussian distribution.
* Consider the following data sets: would a multivariate Gaussian be able to represent each of these clustering data sets well?
```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets
%matplotlib inline
n_samples = 1500
n_clusters = 1;
#generate data
transformation = [[ 0.60834549, -0.63667341], [-0.40887718, 0.85253229]]
X, y = datasets.make_blobs(n_samples=n_samples , centers = n_clusters)
X = np.dot(X, transformation)
#Plot Results
plt.figure(figsize=(12, 12))
plt.subplot(221)
plt.scatter(X[:, 0], X[:, 1])
```
* Would a single multivariate Gaussian be able to represent this data set well?
```python
n_clusters = 10;
#generate data
X, y = datasets.make_blobs(n_samples=n_samples , centers = n_clusters)
#Plot Results
plt.figure(figsize=(12, 12))
plt.subplot(221)
plt.scatter(X[:, 0], X[:, 1])
```
* The second data set would be better represented by a *mixture model*
$p(x) = \sum_{k=1}^K \pi_k f(x | \theta_k)$
where
$0 \le \pi_k \le 1$, $\sum_k \pi_k =1$
* If each $f(x | \theta_k)$ is assumed to be a Gaussian distribution, then the above mixture model would be a *Gaussian Mixture Model*
$p(x) = \sum_{k=1}^K \pi_k N(x | \mu_k, \Sigma_k)$
where
$0 \le \pi_k \le 1$, $\sum_k \pi_k =1$
* *How would you draw samples from a Gaussian Mixture Model? From a mixture model in general?* (A small sampling sketch is given just after this list.)
* Gaussian mixture models (GMMs) can be used to learn a complex distribution that represents a data set. Thus, it can be used within the probabilistic generative classifier framework to model complex classes.
* GMMs are also commonly used for clustering where a GMM is fit to a data set to be clustered and each estimated Gaussian component is a resulting cluster.
* *If you were given a data set, how would you estimate the parameters of a GMM to fit the data?*
* A common approach for estimating the parameters of a GMM given data is *expectation maximization* (EM)
# Expectation Maximization
* EM is a general algorithm that can be applied to a variety of problems (not just mixture model clustering).
* With MLE, we define a likelihood and maximize it to find parameters of interest.
* With MAP, we maximize the posterior to find parameters of interest.
* The goal of EM is to also find the parameters that maximize your likelihood function.
* *The 1st step* is to define your likelihood function (defines your objective)
* Originally introduced by Dempster, Laird, and Rubin in 1977 - ``Maximum Likelihood from Incomplete Data via the EM Algorithm``
* EM is a method to simplify difficult maximum likelihood problems.
* Suppose we observe $\mathbf{x}_1, \ldots, \mathbf{x}_N$ i.i.d. from $g(\mathbf{x}_i | \Theta)$
* We want: $\hat\Theta = argmax L(\Theta|X) = argmax \prod_{i=1}^N g(\mathbf{x}_i | \Theta)$
* But suppose this maximization is very difficult. EM simplifies it by expanding the problem to a bigger easier problem - ``demarginalization``
\begin{equation}
g(x|\Theta) = \int_z f(x, z | \Theta) dz
\end{equation}
Main Idea: Do all of your analysis on $f$ and then integrate over the unknown z's.
### Censored Data Example
* Suppose we observe $\mathbf{y}_1, \ldots, \mathbf{y}_N$ i.i.d. from $f(\mathbf{y} | \Theta)$
* Let's say that we know that values are censored at $\ge a$
* So, we see: $\mathbf{y}_1, \ldots, \mathbf{y}_m$ (less than $a$) and we do not see $\mathbf{y}_{m+1}, \ldots, \mathbf{y}_N$ which are censored and set to $a$.
* Given this censored data, suppose we want to estimate the mean if the data was uncensored.
* Our observed data likelihood in this case would be:
\begin{eqnarray}
L &=& \left[ 1 - F(a |\theta)\right]^{N-m}\prod_{i=1}^m f(\mathbf{y}_i | \theta)\\
&=& \prod_{i=1}^m f(\mathbf{y}_i | \theta) \prod_{j=m+1}^N \int_a^\infty f(\mathbf{y}_j | \theta) dy_j
\end{eqnarray}
where $F(\cdot)$ is the cumulative distribution function and $f(y|\theta) = N(y|\theta)$, for example.
* So, the observed data likelihood would be very difficult to maximize to solve for $\theta$
* In EM, we introduce *latent variables* (i.e., ``hidden variables``) to simplify the problem
* *The second step*: Define the *complete likelihood* by introducing variables that simplify the problem.
* Going back to the censored data example, if we had observed the missing data, the problem would be easy to solve! It would simplify to a standard MLE. For this example, the complete data likelihood is:
\begin{equation}
L^c = \prod_{i=1}^m f(y_i | \theta) \prod_{i=m+1}^N f(z_i | \theta)
\end{equation}
where $z_i$ are the latent, hidden variables.
* Note: you cannot just use $a$ for the censored data, it would skew the results!
* The complete data likelihood would be much much simpler to optimize for $\theta$ if we had the $z$s...
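To make the E- and M-steps concrete for this censored-data example, here is a minimal sketch (not part of the original notebook), assuming for simplicity a normal model with known variance in which only the mean is estimated; the synthetic data, censoring point, and starting value are illustrative only.
```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
a, sigma = 2.0, 1.0                      # censoring point and (assumed known) standard deviation
y = rng.normal(1.5, sigma, size=200)     # synthetic complete data
observed = y[y < a]                      # values we actually see
n_cens = np.sum(y >= a)                  # number of censored observations (only "z >= a" is known)

mu = observed.mean()                     # initial guess for the mean
for _ in range(50):
    # E-step: expected value of a censored observation, E[z | z > a], under the current mu
    alpha = (a - mu) / sigma
    ez = mu + sigma * norm.pdf(alpha) / (1 - norm.cdf(alpha))
    # M-step: maximize the expected complete-data log-likelihood (here, a sample mean)
    mu = (observed.sum() + n_cens * ez) / (len(observed) + n_cens)

print(mu)
```
The E-step uses the conditional mean of a truncated normal in place of each unseen $z_j$, and the M-step is then an ordinary maximum-likelihood update on the "completed" data.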
| 64b5e62a5006e609c73de5fec8d127715446bc63 | 42,746 | ipynb | Jupyter Notebook | 11_GMM/Lecture 11 Gaussian Mixture Models.ipynb | zhengyul9/lecture | 905b93ba713f8467887fe8de5a44a3d8a7cae45c | [
"CC-BY-4.0",
"MIT"
]
| null | null | null | 11_GMM/Lecture 11 Gaussian Mixture Models.ipynb | zhengyul9/lecture | 905b93ba713f8467887fe8de5a44a3d8a7cae45c | [
"CC-BY-4.0",
"MIT"
]
| null | null | null | 11_GMM/Lecture 11 Gaussian Mixture Models.ipynb | zhengyul9/lecture | 905b93ba713f8467887fe8de5a44a3d8a7cae45c | [
"CC-BY-4.0",
"MIT"
]
| null | null | null | 175.909465 | 21,240 | 0.894399 | true | 1,449 | Qwen/Qwen-72B | 1. YES
2. YES | 0.805632 | 0.839734 | 0.676517 | __label__eng_Latn | 0.987433 | 0.410106 |
# Death by Asteroid
### Bennett Taylor and Olivia Seitelman
```python
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
import math as math
import numpy as np
import matplotlib.pyplot as ply
import scipy, pylab
```
## Question
If an asteroid is passing Earth at a set distance, at what velocity would it enter Earth's orbit or collide with Earth?
This question is important to everyone on Earth, as asteroids pose a threat to human life and their impact can be predicted and prevented.
## Model
We chose to model an asteroid that approaches the Earth at a varying velocity and will pass tangentially at a set distance. The asteroid starts at `(3e5 m, 3e5 m)` relative to the Earth (the starting point used in the code below), and the only force acting on the asteroid is Earth's gravity. Our asteroid is the size of the one that killed the dinosaurs, with a radius of `5000 m` and a mass of about `6.1e15 kg`.
To model the asteroid, we created universal-gravitation and slope functions to determine the force on the asteroid, and then used the ODE solver. We then swept over a range of initial velocities and graphed the resulting orbits.
### Schematic
Our model follows a fairly straigh forward phenomenon:the gravitational attraction of the Earth on a passing asteroid.
### Differential Equations
\begin{align}
\frac{dv}{dt} = \frac{F}{m_1}, \quad \text{where } F = G \, \frac{m_1 m_2}{r^2} \\
\frac{dy}{dt} = v
\end{align}
We used the universal gravitation equation, converting force into the change in velocity over time. For the purposes of using run_ode_solver, we then turned this into a derivative of position, as to have it running a first order differential equation.
### Python
Having sketched a schematic and determined our equations, we then went to work writing the model in Python.
```python
#the units we will be using throughout
m = UNITS.meter
s = UNITS.second
kg = UNITS.kilogram
AU = UNITS.astronomical_unit
N = UNITS.newton
```
newton
First, we will define our state. The asteroid starts at `(300000 * m, 300000 * m)` with an initial velocity of `-1000 * m/s` in the y direction.
```python
#asteroid is starting approximately "r" away in the x and y distance
px_0 = 300000 * m
py_0 = 300000 * m
vx_0 = 0 * m/s
vy_0 = -1000 * m/ s
init = State(px=px_0,
py=py_0,
vx=vx_0,
vy=vy_0)
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>values</th>
</tr>
</thead>
<tbody>
<tr>
<th>px</th>
<td>300000 meter</td>
</tr>
<tr>
<th>py</th>
<td>300000 meter</td>
</tr>
<tr>
<th>vx</th>
<td>0.0 meter / second</td>
</tr>
<tr>
<th>vy</th>
<td>-1000.0 meter / second</td>
</tr>
</tbody>
</table>
</div>
Next, we create our system using a make system function from the state variables we have previously defined. We will define the universal gravitation constant, the masses of the bodies, the initial and final times, and the combined radii of the earth and asteroid to help define the event function.
```python
def make_system(px_0, py_0, vx_0, vy_0):
init = State(px=px_0 * m,
py=py_0 * m,
vx=vx_0 * m/s,
vy=vy_0 * m/s)
#universal gravitation value
G = 6.67408e-11*N/(kg**2 * m**2)
#mass of asteroid that killed dinosaurs
m1 = 6.1e15* kg
#earth mass
m2 = 5.972324e24* kg
#intial and final time (0s and 1 year in seconds)
t_0 = 0 * s
t_end = 315360000 * s
#radius of earth plus radius of asteroid that killed the dinosaurs
r_final = 6376000 * m
print(init)
return System(init=init, G=G, m1=m1, m2=m2, t_0=t_0, t_end=t_end, r_final=r_final)
```
We then made the system with our chosen values of our state variables.
```python
#asteroid is starting approximately one moon's distance away in the
#x and y distance (selected for feasability, asteroids would rarely pass closer)
system = make_system(300000, 300000, 0, -1000)
```
px 300000 meter
py 300000 meter
vx 0.0 meter / second
vy -1000.0 meter / second
dtype: object
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>values</th>
</tr>
</thead>
<tbody>
<tr>
<th>init</th>
<td>px 300000 meter
py 3...</td>
</tr>
<tr>
<th>G</th>
<td>6.67408e-11 newton / kilogram ** 2 / meter ** 2</td>
</tr>
<tr>
<th>m1</th>
<td>6100000000000000.0 kilogram</td>
</tr>
<tr>
<th>m2</th>
<td>5.972324e+24 kilogram</td>
</tr>
<tr>
<th>t_0</th>
<td>0 second</td>
</tr>
<tr>
<th>t_end</th>
<td>315360000 second</td>
</tr>
<tr>
<th>r_final</th>
<td>6376000 meter</td>
</tr>
</tbody>
</table>
</div>
We define the universal gravitation function to determine the force on the asteroid caused by the Earth.
```python
def universal_gravitation(state, system):
#position and velocity in x and y directions
px, py, vx, vy = state
unpack(system)
#divide magnitude of position by vector of position to find direction
position = Vector(px, py)
P = sqrt(px**2/m**2 + py**2/m**2)
#Calculate magnitude of gravitational force
F_magnitude = G * m1 * m2/ ((P)**2)
P_direction = position/P
#give direction to force magnitude, make it a force vector
F = P_direction * (-1) * F_magnitude * m
return F
```
```python
universal_gravitation(init, system)
```
[-9.55162141e+18 -9.55162141e+18] newton
We create an event function to stop the ode solver just before the asteroid hits the earth, meaning that the asteroid is at `r_final`, the sum of the radii of the earth and the asteroid.
```python
#this did not end up functioning as we had desired, but is left in incase of use in future model
def event_func(state, t, system):
px, py, vx, vy = state
#find absolute value of position relative to earth
P = abs(sqrt(px**2 + py**2))
#return zero when distance equals distance between the center points of the asteroid and the earth
#(when they are touching)
return P - abs(system.r_final - 1)
```
```python
universal_gravitation(init, system)
```
[-9.55162141e+18 -9.55162141e+18] newton
The slope function returns derivatives that can be processed by the ode solver.
```python
def slope_func(state, t, system):
px, py, vx, vy = state
unpack(system)
#combind x and y components to make one position value
position = Vector(px, py)
#set force using universal ravitation function
F = universal_gravitation(state, system)
#seperate force into x and y components
Fx = F.x
Fy = F.y
#set chain of differentials, so acceleration (Force divided by mass) is equal to dv/dt
#and v is equal to dpdt, each in x and y components
dpxdt = vx
dpydt = vy
dvxdt = Fx/m1
dvydt = Fy/m1
return dpxdt, dpydt, dvxdt, dvydt
```
Calling the slope function should return the x and y velocities we set and the force in the x and y directions, and checking this proves it true
```python
#test slope func
slope_func(init, 0, system)
```
(<Quantity(0.0, 'meter / second')>,
<Quantity(-1000.0, 'meter / second')>,
<Quantity(-1565.839575767627, 'newton / kilogram')>,
<Quantity(-1565.839575767627, 'newton / kilogram')>)
```python
#test gravity value
grav = universal_gravitation(init, system)
```
[-9.55162141e+18 -9.55162141e+18] newton
The ODE solver will return values for x position, y position, x velocity, and y velocity as the asteroid moves through space. The x and y positions are then divided by 1e6 so that they are expressed in thousands of kilometers.
```python
#run ode, with results scaled down to millions
results, details = run_ode_solver(system, slope_func, vectorized = True, events = event_func)
results.px/=1e6
results.py/=1e6
```
```python
#note that success is listed as false
#the ode solver would not allow the asteroid to hit the earth, no matter how we changed the event function or equations
#we later impliment an if then statement into the return to work around this
print(details)
results.tail()
```
sol None
t_events [[]]
nfev 23444
njev 0
nlu 0
status -1
message Required step size is less than spacing betwee...
success False
dtype: object
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>px</th>
<th>py</th>
<th>vx</th>
<th>vy</th>
</tr>
</thead>
<tbody>
<tr>
<th>17.664187</th>
<td>-8.135517e-11</td>
<td>2.326817e-11</td>
<td>6.475701e+08</td>
<td>2.092805e+09</td>
</tr>
<tr>
<th>17.664187</th>
<td>-2.202471e-11</td>
<td>8.062377e-11</td>
<td>2.128075e+09</td>
<td>5.627940e+08</td>
</tr>
<tr>
<th>17.664187</th>
<td>5.772468e-11</td>
<td>6.016299e-11</td>
<td>1.572694e+09</td>
<td>-1.524570e+09</td>
</tr>
<tr>
<th>17.664187</th>
<td>8.081795e-11</td>
<td>-1.841503e-11</td>
<td>-5.172692e+08</td>
<td>-2.125779e+09</td>
</tr>
<tr>
<th>17.664187</th>
<td>2.401861e-11</td>
<td>-7.774800e-11</td>
<td>-2.129400e+09</td>
<td>-5.942638e+08</td>
</tr>
</tbody>
</table>
</div>
Plotting x position and y position separately against time produces two nearly identical lines that shown the asteroid getting closer and closer to the earth and then orbiting it in an off center ellipse until it inevitably collides.
```python
#plot x and y relative to time
results.px.plot()
results.py.plot()
decorate(ylabel='Distance to Earth [thousands of km]',
xlabel='Time [billions of s]',
title='Path of Asteroid')
```
We can also plot the asteroid on x and y axes to see its motion towards the Earth.
```python
#plot x vs y to find full position
plot(results.px,results.py)
decorate(xlabel='Distance [millions of km]',
ylabel='Distance [millions of km]',
title='Path of Asteroid')
```
We can limit the axes to zoom in on its impact with the Earth, which is at the center of the ellipses.
```python
axes = plt.gca()
axes.set_xlim([0, 0.005])
axes.set_ylim([0, 0.005])
plot(results.px,results.py)
decorate(xlabel='Distance [millions of km]',
ylabel='Distance [millions of km]',
title='Path of Asteroid')
```
The event function cannot accurately determine if the asteroid hits the Earth, as the distance between the two gets so small but the event function will never let them collide. Instead of the event function, we can use a function to determine if the last value of the results for x and y are close enough to the Earth such that the asteroid will be guaranteed to hit.
```python
def collision_result(results):
#store and print final x and y position of asteroid relative to earth
colvalx = get_last_value(results.px)
colvaly = get_last_value(results.py)
print('Final X Value =', colvalx)
print('Final Y Value =', colvaly)
#if the asteroid is within 1 meter of the Earth after a year of orbit, it is assumed that it will hit the earth
if -1 < colvalx and colvaly < 1:
print ('Kaboom! The asteroid hit!')
else:
print ('We live to love another day!')
```
```python
#test
collision_result(results)
```
Final X Value = 2.401860681452048e-11
Final Y Value = -7.774799575110359e-11
Kaboom! The asteroid hit!
## Results
Sweeping the velocity can show what happens to the asteroid at varying speeds. In this first sweeep, all 5 of the swept speeds result in the asteroid colliding with the Earth. The velocities vary from `1000 * m/s` and `10000 * m/s` in this sweep. We vary the linspace to narrow into the correct velocity or small range in which the asteroid will go into orbit.
```python
vel_array = linspace(1000, 10000, 5)
#sweep starting velocities between 1000 and 10000 in the y direction
for sweep_vel in vel_array:
system = make_system(300000, 300000, 0, -1*sweep_vel)
results, details = run_ode_solver(system, slope_func, vectorized = True, events = event_func)
collision_result(results)
#scale results to thousands of km and plot
results.px/=1e3
results.py/=1e3
plot(results.px,results.py)
decorate(xlabel='Distance [thousands of km]',
ylabel='Distance [thousands of km]',
title='Path of Asteroid')
```
The second sweep takes velocities between 10,000 m/s and 100,000 m/s. The asteroid hits the Earth at 10,000 m/s and 32,500 m/s, but not at 55,000 m/s, 77,500 m/s, or 100,000 m/s.
```python
vel_array2 = linspace(10000, 100000, 5)
#range of sweep altered to 10000 to 100000
for sweep_vel in vel_array2:
system = make_system(300000, 300000, 0, -1*sweep_vel)
results, details = run_ode_solver(system, slope_func, vectorized = True, events = event_func)
#results scaled to millions of km, then plotted
results.px/=1e6
results.py/=1e6
collision_result(results)
plot(results.px,results.py)
decorate(xlabel='Distance [millions of km]',
ylabel='Distance [millions of km]',
title='Path of Asteroid')
```
The next sweep takes a much narrower range to determine the exact point at which the asteroid narrowly misses colliding with the Earth. That velocity is shown to be somewhere in between 42098.3 m/s and 42106.0 m/s.
```python
vel_array4 = linspace(42083.0, 42152, 10)
#range narrowed from 42083 to 42152
for sweep_vel in vel_array4:
system = make_system(300000, 300000, 0, -1*sweep_vel)
results, details = run_ode_solver(system, slope_func, vectorized = True, events = event_func)
collision_result(results)
#results scaled to millions of km, then plotted
results.px/=1e6
results.py/=1e6
plot(results.px,results.py)
decorate(xlabel='Distance [millions of km]',
ylabel='Distance [millions of km]',
title='Path of Asteroid')
```
Narrowing even further, the asteroid collides when it it travelling between 42099.0 m/s and 42100.0 m/s.
```python
vel_array5 = linspace(42098.0, 42106, 9)
#resuts narrowed from 42098 to 42106, all integers tested
for sweep_vel in vel_array5:
system = make_system(300000, 300000, 0, -1*sweep_vel)
results, details = run_ode_solver(system, slope_func, vectorized = True, events = event_func)
collision_result(results)
#results scaled to millions of km, then plotted
results.px/=1e6
results.py/=1e6
plot(results.px,results.py)
decorate(xlabel='Distance [millions of km]',
ylabel='Distance [millions of km]',
title='Path of Asteroid')
```
## Interpretation
The asteroid collides with the Earth at velocities of about 42099.0 m/s and below. It slowly converges on the Earth until it eventually hits. By sweeping the individual starting velocities, we were able to determine, to the nearest integer, the velocity at which an asteroid would be pulled in from a given starting point.
At a speed somewhere between 42099.0 m/s and 42100.0 m/s, our model predicts that the asteroid goes into orbit.
At velocities of about 42100.0 m/s and above, the asteroid is travelling fast enough to escape Earth's gravity and does not collide.
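As a rough cross-check (not part of the original notebook), the threshold found by the sweep is close to the two-body escape speed at the starting radius used in the code; the values of `G` and the Earth mass below mirror those in `make_system`:
```python
import numpy as np

G = 6.67408e-11            # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972324e24      # Earth mass used in the system above, kg
r0 = np.hypot(300000, 300000)   # starting distance used in the code, m

v_escape = np.sqrt(2 * G * M_earth / r0)
print(v_escape)   # ~4.3e4 m/s, within a few percent of the ~42,100 m/s threshold from the sweep
```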
Our model did work as expected, although we initially intended to use only the event function and then let the Earth revolve around the Sun; when the event function did not work, we redesigned the model using an if-then statement and a logical assumption (that an asteroid within 1 m of the Earth will collide with it).
Our model also has further application, as starting position and velocity can both be altered, with a readable and clear output of both visual and verbal confirmation. To take the model further, we would have introduced the sun and its gravity, placing the earth into orbit to make the model fully accurate to our star system, but the current iteration is sufficient for calculating rough velocities for impact.
| 7e6f3209291fdc23e2b4bb2cf5b654a7221a306a | 405,489 | ipynb | Jupyter Notebook | code/Asteroid Final.ipynb | BennettCTaylor/ModSimPy | a91704f90892e25e1f5dd8beb279ee8b33432829 | [
"MIT"
]
| null | null | null | code/Asteroid Final.ipynb | BennettCTaylor/ModSimPy | a91704f90892e25e1f5dd8beb279ee8b33432829 | [
"MIT"
]
| null | null | null | code/Asteroid Final.ipynb | BennettCTaylor/ModSimPy | a91704f90892e25e1f5dd8beb279ee8b33432829 | [
"MIT"
]
| null | null | null | 307.65478 | 83,188 | 0.920215 | true | 4,844 | Qwen/Qwen-72B | 1. YES
2. YES | 0.919643 | 0.803174 | 0.738633 | __label__eng_Latn | 0.979502 | 0.554423 |
# Week 3 worksheet 1: Root finding methods
This notebook is modified from one created by Charlotte Desvages.
This notebook investigates the first root finding algorithms to solve nonlinear equations: the **bisection method** and **fixed-point iteration**.
The best way to learn programming is to write code. Don't hesitate to edit the code in the example cells, or add your own code, to test your understanding. You will find practice exercises throughout the notebook, denoted by 🚩 ***Exercise $x$:***.
#### Displaying solutions
Solutions will be released one week after the worksheets are released, as a new `.txt` file in the same GitHub repository. After pulling the file to your workspace, run the following cell to create clickable buttons under each exercise, which will allow you to reveal the solutions.
```python
%run scripts/create_widgets.py W03-W1
```
---
### 📚 Book sections
- **ASC**: sections 5.1, **5.2**
- **PCP**: sections 7.1.3, 7.4
## Bracketing methods: bisection and regula falsi
**Bracketing methods** seek to find smaller and smaller **intervals** $[a, b]$ which contain a root.
They rely on the Intermediate Value Theorem: for a continuous function $F(x)$, if $F(a)$ has different sign than $F(b)$, then $F(x)$ has a root $x_\ast \in [a, b]$.
### Bisection
Key idea: start with an interval $[a, b]$ such that $F(a)$ and $F(b)$ have different signs.
Then, split the interval in two, and evaluate $F(\frac{a+b}{2})$ at the midpoint. Compare the sign of $F(\frac{a+b}{2})$ with the sign of $F(a)$ and $F(b)$, and choose the half-interval which still contains the root. Repeat the procedure with the new, smaller interval, until convergence.
### Regula falsi
Similar to bisection, but instead of choosing the midpoint, we choose the point of intersection between the x-axis and a straight line passing through the points $(a, F(a))$ and $(b, F(b))$.
---
## Solving nonlinear equations with root-finding algorithms
In this section, we consider equations of the form
$$
F(x) = 0,
$$
where $F(x)$ is a **nonlinear** function of $x$. Solving this equation is equivalent to finding the *root(s)* $x_\ast$ of the function $F$.
There are direct methods we can use to solve *linear* equations, even linear systems of equations as we saw previously; however, a nonlinear equation of the form given above doesn't always have a solution which can be found analytically.
The methods we will discuss to solve nonlinear equations are all **iterative**:
1. we start with a guess for where the root may be;
2. if we are close enough to the solution, for instance if the function is sufficiently close to zero for the current guess, we stop;
3. if not, we refine our guess using information we have about the function;
4. we go back to step 1 with our new guess.
Step 3 is what differentiates the methods we will use, as they each use a different process to refine the current best guess. For all these methods, the key idea is to reduce the nonlinear problem to smaller, simpler problems, which we solve repeatedly (iteratively).
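To make this generic loop concrete, here is a small sketch (using fixed-point iteration, mentioned in the introduction, rather than any of the methods you will implement in the exercises below) that solves $x = \cos(x)$; the starting guess and tolerance are arbitrary illustrative choices.
```python
import numpy as np

def fixed_point(g, x0, tol=1e-10, max_iter=100):
    x = x0
    for k in range(max_iter):
        x_new = g(x)                  # step 3: refine the current guess
        if abs(x_new - x) < tol:      # step 2: close enough to stop?
            return x_new, k + 1
        x = x_new                     # step 4: go back to step 1 with the new guess
    return x, max_iter

root, iterations = fixed_point(np.cos, 1.0)
print(root, iterations)   # converges to ~0.739085, the solution of x = cos(x)
```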
### The bisection method
Given a continuous function $F \left( x \right)$, if $F \left( a \right) \le 0$ and $F \left( b \right) \ge 0$, the Intermediate Value Theorem tells us that there must be a root in the closed interval $\left[ a, b \right].$ The bisection method proceeds by testing the **sign** of $F \left( c \right)$ where $c$ is the **mid-point** of the interval $\left[ a, b \right]$, and uses this to halve the size of the interval in which a root is sought. The process is repeated with the new half-interval, until the interval is small enough to approximate the root with a given tolerance.
The next few exercises will guide you through implementing the bisection method yourself.
---
**📚 Learn more:**
- **ASC**: section 5.2
- **PCP**: section 7.4
---
🚩 ***Exercise 1:***
Consider the function
\begin{equation}\label{eqn:F}
F \left( x \right) = \sin \left( 2 \pi x \right) e^{4 x} + x.
\end{equation}
Plot this function in the interval $x \in \left[ 0, 1 \right]$ and identify the three roots in this interval. Check that $F \left( 0.2 \right)$ and $F \left( 0.6 \right)$ have opposite signs.
You may find it convenient to create a function `F`.
```python
```
```python
%run scripts/show_solutions.py W03-W1_ex1
```
---
🚩 ***Exercise 2:*** Define variables corresponding to $a = 0.2$ and $b = 0.6$. Define $c$ to be the midpoint $c = (a + b) / 2$.
Then, start a loop, which should iterate until the root is found. At each iteration:
- Depending on the sign of $F \left( a \right) F \left( c \right)$, decide whether to set $a$ or $b$ to be equal to $c$, so that there is a root of $F \left( x \right)$ in the *new* interval $\left[ a, b \right]$ (with half the width of the previous interval).
- Define $c$ to be the new midpoint $c = (a + b) / 2$.
- The loop should stop when you have found the root $x_\ast$ within an error of $10^{-10}$.
A possible convergence criterion is $|F(c)| < \varepsilon$, where $\varepsilon$ is the tolerance -- here, $10^{-10}$.
How many iterations are needed to find the root to within this error?
```python
```
```python
%run scripts/show_solutions.py W03-W1_ex2
```
---
🚩 ***Exercise 3:*** Choose different $a$ and $b$ values in order to find the root near $x = 1$, to within an error of $10^{-10}$.
You may wish to write your code from Exercise 2 into a function `bisection(F, a, b, tol)`, which finds the root of a function `F` in the interval `[a, b]` to within an error of `tol`, and returns the value of the roots and the number of iterations.
```python
```
```python
%run scripts/show_solutions.py W03-W1_ex3
```
---
### Regula falsi
The "regula falsi" method is similar to the bisection method, with an important difference: instead of selecting the midpoint of the interval $[a, b]$ at each iteration, we trace a straight line between the points $(a, F(a))$ and $(b, F(b))$, and select the point $c$ where this line intersects the x-axis. In other words, we interpolate $F$ linearly between $a$ and $b$, and we find the root of this interpolating polynomial (of degree 1) at each iteration.
---
🚩 ***Exercise 4:*** Show that $c$ is given by
$$
c = \frac{a F(b) - b F(a)}{F(b) - F(a)}.
$$
*Hint: a line with slope $\alpha$ which passes through the point $(x_0, y_0)$ has equation*
$$
y - y_0 = \alpha (x - x_0).
$$
```python
%run scripts/show_solutions.py W03-W1_ex4
```
---
🚩 ***Exercise 5:*** Consider the same function $F$ as in section 2.1.
Define variables corresponding to $a = 0.2$ and $b = 0.6$.
Then, proceed as you did for the bisection method, but instead of defining $c$ to be the midpoint of $[a, b]$, define $c$ as above.
How many iterations are needed to find the root to within a tolerance of $10^{-10}$?
You may wish to define a function `regula_falsi(F, a, b, tol)` to find a root of a function `F` within an interval `[a, b]` to within an error `tol`, which returns the computed root and the number of iterations.
```python
```
```python
%run scripts/show_solutions.py W03-W1_ex5
```
---
## Convergence of root-finding methods
The bisection and regula falsi methods are guaranteed to converge to a root, provided $F$ is sufficiently smooth and the starting interval $[a, b]$ is chosen appropriately.
But different methods may converge to a root at different *speeds*. The **order of convergence** for root-finding algorithms is defined in terms of successive values of the error $e_k := x_k - x_\ast$ between the true solution $x_\ast$ and the guess $x_k$ obtained at the $k$th iteration.
---
#### 🚩 Definition: Order of convergence of root-finding methods
A convergent root-finding algorithm converges **at $p$th order** if
$$
\lim_{k \to \infty} \frac{|e_{k+1}|}{|e_k|^p} = \alpha,
$$
where $\alpha \in \mathbb{R}$ is a constant.
---
For a $p$th order convergent method, we expect the error at the $k+1$th iteration to be roughly proportional to the $p$th power of the error at the $k$th iteration, for sufficiently large $k$ -- that is, when we are in a close enough neighbourhood of $x_\ast$.
Note that $p$ is not always an integer.
---
🚩 ***Exercise 6:*** Modify your code from Exercise 5 so that all the successive guesses $x_k$ are stored in a Numpy array. Perform the task from Exercise 5 again -- use the regula falsi method to find the same root of `F`, using the same starting interval and tolerance. You should obtain the same result, but now you should also have a vector `x` with length $k_\max + 1$ containing all the guesses.
Consider $x_{k_\max}$, the last guess obtained by the method, to be the "true solution". Compute the magnitude of the error $e_k$ between each of the previous guesses $x_k$ and the true solution.
For $p=1, 1.5, 2, 2.5$, compute the ratio $\frac{|e_{k+1}|}{|e_k|^p}$ for $k=0, 1, \dots, k_\max - 1$, and plot it against $k$. Set your y-axis limits to $[0, 1000]$ to start with, and reduce the range as necessary.
Given the definition above, what do you expect is the order of convergence of regula falsi? How do you explain the appearance of the graph?
```python
```
```python
%run scripts/show_solutions.py W03-W1_ex6
```
| a85b80eb87e2253dd0b3761dddf137e0566e87d5 | 13,258 | ipynb | Jupyter Notebook | Workshops/W03-W1_NMfCE_Root_finding.ipynb | DrFriedrich/nmfce-2021-22 | 2ccee5a97b24bd5c1e80e531957240ffb7163897 | [
"MIT"
]
| null | null | null | Workshops/W03-W1_NMfCE_Root_finding.ipynb | DrFriedrich/nmfce-2021-22 | 2ccee5a97b24bd5c1e80e531957240ffb7163897 | [
"MIT"
]
| null | null | null | Workshops/W03-W1_NMfCE_Root_finding.ipynb | DrFriedrich/nmfce-2021-22 | 2ccee5a97b24bd5c1e80e531957240ffb7163897 | [
"MIT"
]
| null | null | null | 37.241573 | 604 | 0.593528 | true | 2,537 | Qwen/Qwen-72B | 1. YES
2. YES | 0.901921 | 0.853913 | 0.770162 | __label__eng_Latn | 0.999221 | 0.627676 |
```python
import sys
sys.path.append('..')
from autodiff.structures import Number
from autodiff.structures import Array
from autodiff.optimizations import bfgs_symbolic
from autodiff.optimizations import bfgs
from autodiff.optimizations import steepest_descent
import timeit
import numpy as np
```
We implemented three optimization methods in our optimization module, steepest descent, AD BFGS and symbolic BFGS. The former two use automatic differentiation to obtain gradient information and the last one requires user input of the expression of the gradient.
In this example, we use the classic rosenbrock function to benchmark each optimization method. We time each method to compare efficiency.
Let's first take a look at the AD bfgs method: Use Array([Number(2),Number(1)]) as initial guess
```python
initial_guess = Array([Number(2),Number(1)])
def rosenbrock(x0):
return (1-x0[0])**2+100*(x0[1]-x0[0]**2)**2
initial_time2 = timeit.timeit()
results = bfgs(rosenbrock,initial_guess)
print("Xstar:",results[0])
print("Minimum:",results[1])
print("Jacobian at each step:",results[2])
final_time2 = timeit.timeit()
time_for_optimization = initial_time2-final_time2
print('\n\n\n')
print("Time for symbolic bfgs to perform optimization",time_for_optimization,'total time taken is',time_for_optimization)
```
Xstar: [Number(val=1.0000000000025382) Number(val=1.0000000000050797)]
Minimum: Number(val=6.4435273497518935e-24)
Jacobian at each step: [array([2402, -600]), array([-5.52902304e+12, -1.15187980e+09]), array([-474645.77945484, 127109.93018289]), array([ 1.62663315e+09, -2.70845802e+07]), array([-8619.39185109, 2161.73842208]), array([-144.79886656, 36.76686628]), array([1.99114433e+00, 3.55840767e-04]), array([1.99094760e+00, 4.05114252e-04]), array([1.96305014, 0.00739234]), array([1.9349555 , 0.01442883]), array([1.87896168, 0.02845251]), array([1.79486979, 0.04951253]), array([1.65477644, 0.08459553]), array([1.43057943, 0.14073564]), array([1.06628965, 0.23194665]), array([0.47793023, 0.37924961]), array([-0.47390953, 0.61758141]), array([-2.01036637, 1.00259974]), array([-4.48450951, 1.62438092]), array([-8.45367483, 2.63103416]), array([-14.76319038, 4.28099834]), array([-4.62373441, 1.75866303]), array([141.6035648 , -30.46772268]), array([ 7.1036742 , -1.70213995]), array([10.42215839, -2.72209527]), array([13.5437883 , -3.72650641]), array([16.21168154, -4.6779326 ]), array([16.2320604 , -4.88405766]), array([10.21216085, -3.12699453]), array([ 5.13098972, -1.52792632]), array([14.78025278, -5.5124494 ]), array([-1.52082729, 0.8129434 ]), array([ 3.23558391, -1.1679951 ]), array([ 7.90182702, -3.37442389]), array([ 2.23734678, -0.8351361 ]), array([ 0.98042779, -0.31955033]), array([ 2.87258631, -1.30884655]), array([ 0.71918344, -0.28586232]), array([ 0.36444279, -0.14819236]), array([ 0.46808941, -0.22547022]), array([-0.01787606, 0.01191596]), array([ 0.01227375, -0.00612718]), array([ 0.00044299, -0.00020912]), array([ 6.03745724e-08, -1.87348359e-08]), array([3.74411613e-12, 6.66133815e-13])]
Time for symbolic bfgs to perform optimization 0.0005920969999999581 total time taken is 0.0005920969999999581
The total time needed for the entire optimization process is around 0.0006 s, as printed above.
Then, let's compare with the traditional symbolic optimization using bfgs
First, the user needs to calculate the derivative either by hand or through sympy. Here we use sympy.
```python
from sympy import *
import sympy
initial_time = timeit.timeit()
x, y = symbols('x y')
rb = (1-x)**2+100*(y-x**2)**2
print("Function to be derivatized : {}".format(rb))
# Use sympy.diff() method
par1 = diff(rb, x)
par2 = diff(rb,y)
print("After Differentiation : {}".format(par1))
print("After Differentiation : {}".format(par2))
def gradientRosenbrock(x0):
x=x0[0]
y=x0[1]
drdx = -2*(1 - x) - 400*x*(-x**2 + y)
drdy = 200 *(-x**2 + y)
return drdx,drdy
final_time=timeit.timeit()
time_for_sympy = initial_time-final_time
print("Time for sympy to find derivative expression",time_for_sympy)
```
Function to be derivatized : (1 - x)**2 + 100*(-x**2 + y)**2
After Differentiation : -400*x*(-x**2 + y) + 2*x - 2
After Differentiation : -200*x**2 + 200*y
Time for sympy to find derivative expression 0.0014529649999914795
The time taken for sympy to find the derivative expressions is around 0.0015 seconds, and this can be a lot more if the user calculates derivatives by hand.
Second, use symbolic bfgs to perform optimization. Use [2,1] as the initial guess.
```python
initial_time1 = timeit.timeit()
results = bfgs_symbolic(rosenbrock,gradientRosenbrock,[2,1])
print("Xstar:",results[0])
print("Minimum:",results[1])
print("Jacobian at each step:",results[2])
final_time1 = timeit.timeit()
time_for_optimization_symbolic = initial_time1-final_time1
print('\n\n\n')
print("Time for symbolic bfgs to perform optimization",time_for_optimization_symbolic,'total time taken is',time_for_optimization_symbolic+time_for_sympy)
```
Xstar: [1. 1.]
Minimum: 6.4435273497518935e-24
Jacobian at each step: [array([2402, -600]), array([-5.52902304e+12, -1.15187980e+09]), array([-474645.77945484, 127109.93018289]), array([ 1.62663315e+09, -2.70845802e+07]), array([-8619.39185109, 2161.73842208]), array([-144.79886656, 36.76686628]), array([1.99114433e+00, 3.55840767e-04]), array([1.99094760e+00, 4.05114252e-04]), array([1.96305014, 0.00739234]), array([1.9349555 , 0.01442883]), array([1.87896168, 0.02845251]), array([1.79486979, 0.04951253]), array([1.65477644, 0.08459553]), array([1.43057943, 0.14073564]), array([1.06628965, 0.23194665]), array([0.47793023, 0.37924961]), array([-0.47390953, 0.61758141]), array([-2.01036637, 1.00259974]), array([-4.48450951, 1.62438092]), array([-8.45367483, 2.63103416]), array([-14.76319038, 4.28099834]), array([-4.62373441, 1.75866303]), array([141.6035648 , -30.46772268]), array([ 7.1036742 , -1.70213995]), array([10.42215839, -2.72209527]), array([13.5437883 , -3.72650641]), array([16.21168154, -4.6779326 ]), array([16.2320604 , -4.88405766]), array([10.21216085, -3.12699453]), array([ 5.13098972, -1.52792632]), array([14.78025278, -5.51244941]), array([-1.52082729, 0.8129434 ]), array([ 3.23558391, -1.16799509]), array([ 7.90182703, -3.3744239 ]), array([ 2.23734678, -0.8351361 ]), array([ 0.9804278 , -0.31955033]), array([ 2.87258632, -1.30884655]), array([ 0.71918344, -0.28586232]), array([ 0.36444279, -0.14819236]), array([ 0.46808941, -0.22547022]), array([-0.01787606, 0.01191596]), array([ 0.01227375, -0.00612718]), array([ 0.00044299, -0.00020912]), array([ 6.03746022e-08, -1.87348803e-08]), array([3.74411613e-12, 6.66133815e-13])]
Time for symbolic bfgs to perform optimization 0.005963177000012365 total time taken is 0.007416142000003845
The total time taken for the symbolic route is 0.007416142000003845 s (including the sympy differentiation), while the total time taken for the AD-based BFGS above is 0.0005920969999999581 s.
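For reference, a minimal sketch (not in the original notebook) of an alternative timing pattern that wraps the call itself with `time.perf_counter`; it reuses `bfgs`, `rosenbrock`, `Array`, and `Number` as defined earlier in this notebook:
```python
import time

start = time.perf_counter()
results = bfgs(rosenbrock, Array([Number(2), Number(1)]))
elapsed = time.perf_counter() - start
print(f"AD-based BFGS took {elapsed:.4f} s")
```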
| 7154ba46e5175be89a1dfb101e0731d3b1042ce8 | 9,984 | ipynb | Jupyter Notebook | docs/optimization_example.ipynb | rocketscience0/cs207-FinalProject | bb2a38bc2ca341c55cf544d316318798b42efde7 | [
"MIT"
]
| 1 | 2019-11-12T18:03:52.000Z | 2019-11-12T18:03:52.000Z | docs/optimization_example.ipynb | rocketscience0/cs207-FinalProject | bb2a38bc2ca341c55cf544d316318798b42efde7 | [
"MIT"
]
| 3 | 2019-11-19T20:45:05.000Z | 2019-12-10T14:33:21.000Z | docs/optimization_example.ipynb | rocketscience0/cs207-FinalProject | bb2a38bc2ca341c55cf544d316318798b42efde7 | [
"MIT"
]
| null | null | null | 40.258065 | 1,645 | 0.598758 | true | 2,582 | Qwen/Qwen-72B | 1. YES
2. YES | 0.721743 | 0.812867 | 0.586681 | __label__eng_Latn | 0.404068 | 0.201388 |
# PRAKTIKUM 14
`Solutions of Partial Differential Equations (PDEs)`
1. The Wave Equation
2. The Heat Equation
3. The Laplace Equation
    - with Dirichlet boundary conditions
    - with Neumann boundary conditions
<hr style="border:2px solid black"> </hr>
# 1 The Wave Equation
A wave equation has the general form
$\begin{align}
u_{tt}(x,t)=c^2u_{xx}(x,t)\ \ \ \text{ for } 0<x<a \text{ and } 0<t<b
\end{align}$
with boundary conditions
$\begin{align}\label{eq:14 gelombang batas}
\begin{split}
&u(0,t)=0\ \text{ and }\ u(a,t)=0\\
&u(x,0)=f(x)\\
&u_t(x,0)=g(x)
\end{split}
\hskip1cm
\begin{split}
&\text{ for } 0\le t\le b\\
&\text{ for } 0\le x\le a\\
&\text{ for } 0< x< a\\
\end{split}
\end{align}$
## Solution of the Wave Equation
Partition the rectangle $ R=\{(x,t):x\in[0,a],t\in[0,b]\} $ into a grid containing $ n-1 \times m-1 $ cells with side lengths $ \Delta x=h $ and $ \Delta t=k $. The numerical solution starts from
$ t_1=0 $, where the boundary condition $ u(x_i,0)=f(x_i) $ is used,
and
$ t_2 $, where the equation $u_{i,2}=(1-r^2)f_i+kg_i+\dfrac{r^2}{2}(f_{i+1}+f_{i-1})$ is used.
Next, the finite-difference method is used to obtain an approximation $ u_{i,j} \approx u(x_i,t_j) $ to the solution of the differential equation.
The central-difference formulas approximating $ u_{tt}(x,t) $ and $ u_{xx}(x,t) $ are
$$
u_{tt}(x,t)=\dfrac{u(x,t+k)-2u(x,t)+u(x,t-k)}{k^2}+O(k^2)
$$
and
\begin{align}
u_{xx}(x,t)=\dfrac{u(x+h,t)-2u(x,t)+u(x-h,t)}{h^2}+O(h^2)
\end{align}
Next, dropping the $ O(k^2) $ and $ O(h^2) $ terms and using $ u_{i,j} $ to approximate $ u(x_i,t_j) $, substituting both formulas into the wave equation gives
\begin{equation}
\dfrac{u_{i,j+1}-2u_{i,j}+u_{i,j-1}}{k^2}=c^2\dfrac{u_{i+1,j}-2u_{i,j}+u_{i-1,j}}{h^2}
\end{equation}
For convenience, substitute $ r=ck/h $, which gives the equation
\begin{align}
u_{i,j+1}=(2-2r^2)u_{i,j}+r^2(u_{i+1,j}+u_{i-1,j})-u_{i,j-1}
\end{align}
for $ i=2,3,\dots,n-1 $.
<p style="text-align: center"><i>Computational stencil for the wave equation</i></p>
Note that when the finite-difference method is used to approximate the solution of the wave equation, the resulting solution is not always stable. The condition required for a stable solution is $ r=ck/h\le1 $.
```julia
#%%CENTRAL-DIFFERENCE METHOD FOR THE WAVE EQUATION
#%
#% Used to find the solution of a partial differential
#% equation, namely the wave equation
#%
#% U = gelombang(f,g,a,b,c,m,n)
#% Input : f,g -> boundary/initial-condition functions
#% a,b -> upper limits of the x and t domains
#% c -> coefficient of the wave equation
#% m,n -> number of partitions in x and t
#% Output : U -> PDE solution, U(t,x)
#%
#% Used as a guide for the Numerical Methods practical
#%
#% See also : panas, laplace
function gelombang(f,g,a,b,c,m,n)
h = a/(m-1);
x = 0:h:1;
k = b/(n-1);
r = c*k/h;
U = zeros(m,n);
for i = 2:m-1
U[i,1] = f(x[i]);
U[i,2] = (1-r^2)*f(x[i]) + k*g(x[i]) + r^2/2*(f(x[i+1])+f(x[i-1]));
end
for j = 3:n
for i = 2:(m-1)
U[i,j] = (2-2*r^2)*U[i,j-1] + r^2*(U[i-1,j-1] + U[i+1,j-1])-U[i,j-2];
end
end
U = U';
end
```
### Example 1
Use the finite-difference method to find the solution of the wave equation for a vibrating string.
\begin{align*}
u_{tt}(x,t)=4u_{xx}(x,t)\ \ \ \text{ for } 0<x<1 \text{ and } 0<t<0.5
\end{align*}
with boundary conditions
\begin{align*}
\begin{split}
&u(0,t)=0\ \text{ and }\ u(1,t)=0\\
&u(x,0)=f(x)=\sin(\pi x)+\sin(2\pi x)\\
&u_t(x,0)=g(x)=0
\end{split}
\hskip1cm
\begin{split}
&\text{ for } 0\le t\le 0.5\\
&\text{ for } 0\le x\le 1\\
&\text{ for } 0\le x\le 1\\
\end{split}
\end{align*}
and step sizes $ h=0.1 $ and $ k=0.05 $.
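As a quick check of the stability condition stated above (this remark is not in the original text), the parameters of this example give

\begin{align}
r = \frac{ck}{h} = \frac{2\times 0.05}{0.1} = 1 \le 1,
\end{align}

so the scheme sits exactly at the stability limit.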
```julia
# Solusi U(x,t)
f(x) = sin(pi*x)+sin(2*pi*x);
g(x) = 0;
a = 1;
b = 0.5;
c = 2;
m = 11;
t = 0:0.05:b;
n = length(t);
U = gelombang(f,g,a,b,c,m,n)
```
```julia
using Plots
```
```julia
# Plot Solusi
x = 0:0.1:a;
t = 0:0.05:b;
gr()
plot(x,t,U,st=:wireframe,camera=(50,45))
xlabel!("x")
ylabel!("t")
```
# 2 The Heat Equation
A heat equation has the general form
$\begin{align}
u_{t}(x,t)=c^2u_{xx}(x,t)\ \ \ \text{ for } 0<x<a \text{ and } 0<t<b
\end{align}$
with boundary conditions
$\begin{align}
\begin{split}
&u(0,t)=c_1\ \text{ and }\ u(a,t)=c_2\\
&u(x,0)=f(x)
\end{split}
\hskip1cm
\begin{split}
&\text{ for } 0\le t\le b\\
&\text{ for } 0\le x\le a\\
\end{split}
\end{align}$
## Solution of the Heat Equation
Partition the rectangle $ R=\{(x,t):x\in[0,a],t\in[0,b]\} $ into a grid containing $ n-1 \times m-1 $ cells with side lengths $ \Delta x=h $ and $ \Delta t=k $. The numerical solution starts from
$ t_1=0 $, where the boundary condition $ u(x_i,t_1)=f(x_i) $ is used.
Next, the finite-difference method is used to find an approximation to the solution of the differential equation for
\begin{align*}
\{ u_{i,j}:i=1,2,\dots,n \} \ \ \ \text{ for } j=2,3,\dots,m
\end{align*}
The forward-difference formula can be used to approximate $ u_t(x,t) $ and the central-difference formula to approximate $ u_{xx}(x,t) $:
\begin{align}\label{eq:14 panas 1}
u_t(x,t)=\dfrac{u(x,t+k)-u(x,t)}{k}+O(k)
\end{align}
and
\begin{align}\label{eq:14 panas 2}
u_{xx}(x,t)=\dfrac{u(x-h,t)-2u(x,t)+u(x+h,t)}{h^2}+O(h^2)
\end{align}
Next, dropping the $ O(k) $ and $ O(h^2) $ terms and using $ u_{i,j} $ to approximate $ u(x_i,t_j) $, the substitution gives
\begin{equation}\label{eq:14 panas 3}
\dfrac{u_{i,j+1}-u_{i,j}}{k}=c^2\dfrac{u_{i-1,j}-2u_{i,j}+u_{i+1,j}}{h^2}
\end{equation}
For convenience, substitute $ r=c^2k/h^2 $, which gives the equation
\begin{align}\label{eq:14 panas numerik}
u_{i,j+1}=(1-2r)u_{i,j}+r(u_{i-1,j}+u_{i+1,j})
\end{align}
for $ i=2,3,\dots,n-1 $.
<p style="text-align: center"><i>Computational stencil for the heat equation</i></p>
Note that when the finite-difference method is used to approximate the solution of the heat equation, the resulting solution is not always stable. The condition required for a stable solution is $ 0\le c^2k/h^2\le\frac{1}{2} $.
```julia
#%%FORWARD-DIFFERENCE METHOD FOR THE HEAT EQUATION
#%
#% Used to find the solution of a partial differential
#% equation, namely the heat equation
#%
#% [U,r] = panas(f,c1,c2,a,b,c,m,n)
#% Input : f -> initial-value function u(x,0)
#% c1,c2-> boundary values u(0,t) and u(a,t)
#% a,b -> limits of the x and t domains
#% c -> coefficient of the heat equation
#% m,n -> number of points in x and t
#% Output : U -> PDE solution, U(t,x)
#%
#% Used as a guide for the Numerical Methods practical
#%
#% See also : gelombang, laplace
function panas(f,c1,c2,a,b,c,m,n)
h = a/(m-1);
k = b/(n-1);
r = c^2*k/h^2;
U = zeros(m,n);
U[1,:] .= c1;
U[m,:] .= c2;
U[2:m-1,1] = f.(h:h:(m-2)*h)';
for j = 2:n
for i = 2:m-1
U[i,j]=(1-2*r)*U[i,j-1]+ r*(U[i-1,j-1]+U[i+1,j-1]);
end
end
U=U';
return U,r
end
```
### Example 2
Use the forward-difference method to find the solution of the heat equation
\begin{align}
u_{t}(x,t)=u_{xx}(x,t)\ \ \ \text{ for } 0<x<1 \text{ and } 0<t<0.2
\end{align}
with boundary conditions
\begin{align}
\begin{split}
&u(0,t)=0\ \text{ and }\ u(1,t)=0\\
&u(x,0)=4x-4x^2
\end{split}
\hskip1cm
\begin{split}
&\text{ for } 0\le t\le 0.2\\
&\text{ for } 0\le x\le 1\\
\end{split}
\end{align}
with step sizes $ h=0.2 $ and $ k=0.02 $.
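Again as a quick check (this remark is not in the original text), the step sizes of this example give

\begin{align}
r = \frac{c^2 k}{h^2} = \frac{1^2\times 0.02}{0.2^2} = 0.5,
\end{align}

which is exactly the upper end of the stability condition $0\le r\le \tfrac{1}{2}$; the `panas` function below also returns this value of `r`.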
```julia
f(x) = 4*x - 4*x.^2;
c1 = 0;
c2 = 0;
a = 1;
b = 0.2;
c = 1 ;
m = 6;
n = 11;
U,r = panas(f,c1,c2,a,b,c,m,n)
@show r
U
```
```julia
x = 0:0.2:a
t = 0:0.02:b
plot(x,t,U,st=:wireframe,camera=(50,45))
xlabel!("x")
ylabel!("t")
```
# 3 The Laplace Equation
Some examples of elliptic equations are the Laplace, Poisson, and Helmholtz equations. The Laplacian of a function $ u(x,y) $ is defined as
\begin{equation}\label{eq:14 eliptik}
\nabla^2u=u_{xx}+u_{yy}
\end{equation}
With this notation, the Laplace, Poisson, and Helmholtz equations can be written in the form
\begin{align}
\begin{split}
&\nabla^2u=0 \\
&\nabla^2u=g(x,y) \\
&\nabla^2u+f(x,y)u=g(x,y)
\end{split}
\begin{split}
&\text{Laplace equation} \\
&\text{Poisson equation} \\
&\text{Helmholtz equation}
\end{split}
\end{align}
## Solution of the Laplace Equation
Applying finite-difference approximations as for the heat and wave equations yields the equation
\begin{align}\label{eq:14 laplace}
u_{i+1,j}+u_{i-1,j}+u_{i,j+1}+u_{i,j-1}-4u_{i,j}=0
\end{align}
for $ i=2,3,\dots,n-1 $.
<p style="text-align: center"><i>Computational stencil for the Laplace equation</i></p>
If the grid is taken to be of size $5\times5$, we obtain
<p style="text-align: center"><i>Grid partition for Dirichlet boundary conditions</i></p>
Applying the Laplace stencil with $ p_1 $, i.e. $ u_{2,2} $, as the center of the stencil gives the equation $ p_2+u_{1,2}+p_4+u_{2,1}-4p_1=0 $. Applying the Laplace stencil with the center placed in turn at $ p_1 $, $ p_2 $, up to $ p_9 $ then gives the following system of equations.
\begin{align}
p_2+u_{1,2}+p_4+u_{2,1}-4p_1&=0\\
p_3+p_1+p_5+u_{3,1}-4p_2&=0\\
u_{5,2}+p_2+p_6+u_{4,1}-4p_3&=0\\
p_5+u_{1,3}+p_7+p_1-4p_4&=0\\
p_6+p_4+p_8+p_2-4p_5&=0\\
u_{5,3}+p_5+p_9+p_3-4p_6&=0\\
p_8+u_{1,4}+u_{2,5}+p_4-4p_7&=0\\
p_9+p_7+u_{3,5}+p_5-4p_8&=0\\
u_{5,4}+p_8+u_{4,5}+p_6-4p_9&=0
\end{align}
In matrix notation, this system of linear equations can be written as
$\begin{align}
\begin{bmatrix}
-4 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
1 &-4 & 1 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 1 &-4 & 0 & 0 & 1 & 0 & 0 & 0 \\
1 & 0 & 0 &-4 & 1 & 0 & 1 & 0 & 0 \\
0 & 1 & 0 & 1 &-4 & 1 & 0 & 1 & 0 \\
0 & 0 & 1 & 0 & 1 &-4 & 0 & 0 & 1 \\
0 & 0 & 0 & 1 & 0 & 0 &-4 & 1 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 1 &-4 & 1 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 &-4 \\
\end{bmatrix}
\begin{bmatrix}
p_1\\p_2\\p_3\\p_4\\p_5\\p_6\\p_7\\p_8\\p_9
\end{bmatrix}
=
\begin{bmatrix}
-u_{2,1}-u_{1,2}\\
-u_{3,1}\\
-u_{4,1}-u_{5,2}\\
-u_{1,3}\\
0\\
-u_{5,3}\\
-u_{2,5}-u_{1,4}\\
-u_{3,5}\\
-u_{4,5}-u_{5,4}\\
\end{bmatrix}
\end{align}$
The interior solution $ p_1 $, $ p_2 $, up to $ p_9 $ can be obtained by solving the linear system above.
```julia
function dirichlet(f1,f2,f3,f4,a,b,h)
maxi = 100;
tol = 1e-7;
n = Int(a/h)+1;
m = Int(b/h)+1;
U = (a*(f1(0)+f2(0))+b*(f3(0)+f4(0)))/(2*a+2*b).+ones(n,m);
# Masalah nilai batas
U[1:n,1]=f1.(0:h:(n-1)*h);
U[1:n,m]=f2.(0:h:(n-1)*h);
U[1,1:m]=f3.(0:h:(m-1)*h);
U[n,1:m]=f4.(0:h:(m-1)*h);
U[1,1]=(U[1,2]+U[2,1])/2;
U[1,m]=(U[1,m-1]+U[2,m])/2;
U[n,1]=(U[n-1,1]+U[n,2])/2;
U[n,m]=(U[n-1,m]+U[n,m-1])/2;
w = 4/(2+sqrt(4-(cos(pi/(n-1))+cos(pi/(m-1)))^2));
err=1; iter=0;
while (err>tol)&&(iter<=maxi)
err = 0;
for j = 2:m-1
for i = 2:n-1
relx = w*(U[i,j+1]+U[i,j-1]+U[i+1,j]+U[i-1,j]-4*U[i,j])/4;
U[i,j]=U[i,j]+relx;
if err<=abs(relx)
err=abs(relx);
end
end
end
iter=iter+1;
end
U = U';
end
```
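A brief note on the implementation above (added here for clarity, not part of the original text): the update applied inside the double loop is successive over-relaxation (SOR) of the Laplace stencil, and the weight `w` computed before the loop is the standard optimal relaxation parameter for a rectangular grid,

\begin{align}
u_{i,j} \leftarrow u_{i,j} + \frac{\omega}{4}\left(u_{i,j+1}+u_{i,j-1}+u_{i+1,j}+u_{i-1,j}-4u_{i,j}\right),
\qquad
\omega = \frac{4}{2+\sqrt{4-\left(\cos\frac{\pi}{n-1}+\cos\frac{\pi}{m-1}\right)^{2}}}.
\end{align}

The iteration stops when the largest correction `relx` falls below the tolerance or the maximum number of sweeps is reached.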
### Example 3 - Dirichlet Boundary Conditions
Given the Laplace equation $\nabla^2 u=0$ on the square region $R=\{(x,y)│0\le x\le 4, 0\le y\le 4\}$,
where $u(x,y)$ denotes the temperature at the point $(x,y)$, with the following Dirichlet boundary values:
$u(x,0)=20$ for $0<x<4$,
$u(x,4)=180$ for $0<x<4$,
$u(0,y)=80$ for $0\le y<4$,
$u(4,y)=0$ for $0\le y<4$.
The following steps compute the numerical solution of the Laplace problem above using
1. the linear system formed from the Laplace stencil with $ h=1 $
2. the Dirichlet method in the `dirichlet` function with $ h=1 $ and $ h=0.1 $
together with plots of the numerical solution as 3-dimensional graphs.
**Step 1** Define the linear system based on the stencil obtained with $ h=1 $.
Since the value $ h=1 $ is used, the grid formed is $ 5 \times 5 $, partitioned as in the figure below.
<p style="text-align: center"><i>Grid partition for the Dirichlet boundary conditions of Example 3</i></p>
Thus, based on the Laplace computational stencil, the following linear system is obtained.
$\begin{align}
\begin{bmatrix}
-4 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
1 &-4 & 1 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 1 &-4 & 0 & 0 & 1 & 0 & 0 & 0 \\
1 & 0 & 0 &-4 & 1 & 0 & 1 & 0 & 0 \\
0 & 1 & 0 & 1 &-4 & 1 & 0 & 1 & 0 \\
0 & 0 & 1 & 0 & 1 &-4 & 0 & 0 & 1 \\
0 & 0 & 0 & 1 & 0 & 0 &-4 & 1 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 1 &-4 & 1 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 &-4 \\
\end{bmatrix}
\begin{bmatrix}
p_1\\p_2\\p_3\\p_4\\p_5\\p_6\\p_7\\p_8\\p_9
\end{bmatrix}
=
\begin{bmatrix}
-100\\
-20 \\
-20\\
-80\\
0\\
0\\
-260\\
-180\\
-180\\
\end{bmatrix}
\end{align}$
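For instance (a short check, added here for clarity), each right-hand-side entry is just the negative sum of the boundary values adjacent to that interior node, e.g.
\begin{align}
-u_{2,1}-u_{1,2} &= -(20)-(80) = -100, \\
-u_{1,4}-u_{2,5} &= -(80)-(180) = -260.
\end{align}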
**Step 2** Solve the linear system above to obtain the interior solution of the given Laplace problem.
```julia
A = [ -4 1 0 1 0 0 0 0 0
1 -4 1 0 1 0 0 0 0
0 1 -4 0 0 1 0 0 0
1 0 0 -4 1 0 1 0 0
0 1 0 1 -4 1 0 1 0
0 0 1 0 1 -4 0 0 1
0 0 0 1 0 0 -4 1 0
0 0 0 0 1 0 1 -4 1
0 0 0 0 0 1 0 1 -4]
B = [-100;-20;-20;-80;0;0;-260;-180;-180]
P = A\B
```
**Step 3** Reshape the interior solution into matrix form and insert the boundary conditions on each side
```julia
reshape(P,3,3)'
```
```julia
U = zeros(5,5)
# interior
U[2:end-1,2:end-1] = reshape(P,3,3)'
# along the edges (boundary values)
U[1,2:end-1] .= 20
U[end,2:end-1] .= 180
U[2:end-1,1] .= 80
U[2:end-1,end] .= 0
# corner points
U[1,1] = (U[1,2]+U[2,1])/2
U[1,end] = (U[1,end-1]+U[2,end])/2
U[end,1] = (U[end-1,1]+U[end,2])/2
U[end,end] = (U[end-1,end]+U[end,end-1])/2
U
```
**Step 4** Solve the Laplace problem above using the Dirichlet method in the `dirichlet` function with step size $ h=1 $.
```julia
f1(x)= 20+0*x;
f2(x)= 180+0*x;
f3(y)= 80+0*y;
f4(y)= 0*y;
a=4;
b=4;
h = 1;
U = dirichlet(f1,f2,f3,f4,a,b,h)
```
```julia
x = 0:4
y = 0:4
plot(x,y,U,st=:wireframe,camera=(50,45))
xlabel!("x")
ylabel!("y")
```
**Step 5** Solve the Laplace problem above using the Dirichlet method in the `dirichlet` function with step size $ h=0.1 $.
```julia
f1(x)= 20+0*x;
f2(x)= 180+0*x;
f3(y)= 80+0*y;
f4(y)= 0*y;
a=4;
b=4;
h = 0.1;
U = dirichlet(f1,f2,f3,f4,a,b,h)
```
```julia
x = 0:h:4
y = 0:h:4
plot(x,y,U,st=:wireframe,camera=(50,45))
xlabel!("x")
ylabel!("y")
```
### Example 4 - Neumann Boundary Conditions
Consider the Laplace equation $\nabla^2 u=0$ on the square region $R=\{(x,y)│0\le x\le 4, 0\le y\le 4\}$,
where $u(x,y)$ denotes the temperature at the point $(x,y)$, with the following boundary values (Neumann on the bottom edge):
$u_y(x,0)=0$ for $0<x<4$,
$u(x,4)=180$ for $0<x<4$,
$u(0,y)=80$ for $0\le y <4$,
$u(4,y)=0$ for $0\le y<4$.
Below are the steps to compute the numerical solution of the Laplace problem above using
the linear system formed from the Laplace and Neumann stencils with $ h=1 $,
together with a 3-D plot of the numerical solution.
**Step 1** Define the linear system based on the stencil obtained with $ h=1 $.
Since $ h=1 $ is used, the resulting grid is $ 5 \times 5 $, partitioned as in the figure below.
<p style="text-align: center"><i>Grid partition for the mixed Dirichlet and Neumann boundary conditions of Example 4</i></p>
and the corresponding system of linear equations is
$\begin{align}
\begin{bmatrix}
-4 & 1 & 0 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
1 &-4 & 1 & 0 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 1 &-4 & 0 & 0 & 2 & 0 & 0 & 0 & 0 & 0 & 0\\
1 & 0 & 0 &-4 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\
0 & 1 & 0 & 1 &-4 & 1 & 0 & 1 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 1 &-4 & 0 & 0 & 1 & 0 & 0 & 0\\
0 & 0 & 0 & 1 & 0 & 0 &-4 & 1 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & 0 & 1 &-4 & 1 & 0 & 1 & 0\\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 &-4 & 0 & 0 & 1\\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 &-4 & 1 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 &-4 & 1\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 &-4\\
\end{bmatrix}
\begin{bmatrix}
q_1\\q_2\\q_3\\q_4\\q_5\\q_6\\q_7\\q_8\\q_9\\q_{10}\\q_{11}\\q_{12}\\
\end{bmatrix}=
\begin{bmatrix}
-80\\0\\0\\-80\\0\\0\\-80\\0\\0\\-260\\-180\\-180\\
\end{bmatrix}
\end{align}$
Consider the node $q_1$. The Neumann condition $u_y(x,0)=0$ applies there. Using the central difference formula, introduce a dummy (ghost) variable $q_{-1}$ below $q_1$, so that
\begin{align}
u_y(x,0)&=0\\
\dfrac{q_4-q_{-1}}{2\Delta y}&=0\\
q_{-1} &= q_{4}
\end{align}
Therefore the equation with $q_1$ as the stencil center is
$80-4q_1+q_2+2q_4=0,$
which, after moving the known boundary value $80$ to the right-hand side, gives the entry $-80$ in the vector above; the same applies to $q_2$ and $q_3$.
**Step 2** Solve the linear system above to obtain the interior solution together with the values on the Neumann boundary of the given Laplace problem.
```julia
A = [ -4 1 0 2 0 0 0 0 0 0 0 0
1 -4 1 0 2 0 0 0 0 0 0 0
0 1 -4 0 0 2 0 0 0 0 0 0
1 0 0 -4 1 0 1 0 0 0 0 0
0 1 0 1 -4 1 0 1 0 0 0 0
0 0 1 0 1 -4 0 0 1 0 0 0
0 0 0 1 0 0 -4 1 0 1 0 0
0 0 0 0 1 0 1 -4 1 0 1 0
0 0 0 0 0 1 0 1 -4 0 0 1
0 0 0 0 0 0 1 0 0 -4 1 0
0 0 0 0 0 0 0 1 0 1 -4 1
0 0 0 0 0 0 0 0 1 0 1 -4]
B = [-80;0;0;-80;0;0;-80;0;0;-260;-180;-180]
q = A\B
```
**Step 3** Reshape the solution into matrix form and insert the boundary conditions on each side
```julia
reshape(q,3,4)'
```
```julia
U = zeros(5,5)
# interior
U[1:end-1,2:end-1] = reshape(q,3,4)'
# along the edges (boundary values)
U[end,2:end-1] .= 180
U[2:end-1,1] .= 80
U[2:end-1,end] .= 0
# corner points
U[1,1] = (U[1,2]+U[2,1])/2
U[1,end] = (U[1,end-1]+U[2,end])/2
U[end,1] = (U[end-1,1]+U[end,2])/2
U[end,end] = (U[end-1,end]+U[end,end-1])/2
U
```
```julia
x = 0:4
y = 0:4
plot(x,y,U,st=:wireframe,camera=(50,45))
xlabel!("x")
ylabel!("y")
```
<hr style="border:2px solid black"> </hr>
# Practice Problems
Work on the following problems during the lab session.
`Name: ________`
`Student ID (NIM): ________`
### Problem 1
Consider the following wave equation for a vibrating string.
$u_{tt}(x,t)=4u_{xx}(x,t)$ with $0<x<2$ and $0<t<1$
with the conditions
$\begin{align}
\begin{split}
&u(0,t)=0\ \text{ and }\ u(2,t)=0\\
&u(x,0)=f(x)=-\sin(\pi x)/(x+1)\\
&u_t(x,0)=g(x)=0
\end{split}
\hskip1cm
\begin{split}
&\text{ for } 0\le t\le 1\\
&\text{ for } 0\le x\le 2\\
&\text{ for } 0\le x\le 2\\
\end{split}
\end{align}$
Compute the numerical solution of the wave-equation problem above using step sizes $ h=0.1 $ and $ k=0.05 $, then plot the numerical solution in a 3-D graph, following the steps in **Example 1**.
```julia
```
### Problem 2
Consider the following heat equation.
$u_{t}(x,t)=u_{xx}(x,t)$ for $0<x<1$ and $0<t<0.1$
with the initial condition
and boundary conditions
\begin{align}
\begin{split}
&u(x,0)=f(x)=-\sin(2\pi x)\\
&u(0,t)=0\ \text{ and }\ u(1,t)=-1\\
\end{split}
\hskip1cm
\begin{split}
&\text{ for } 0\le x\le 1\\
&\text{ for } 0\le t\le 0.1\\
\end{split}
\end{align}
Compute the numerical solution of the heat-equation problem above using step sizes $ h=0.1 $ and $ k=0.005 $, then plot the numerical solution in a 3-D graph, following the steps in **Example 2**.
```julia
```
### Problem 3
Consider the following Laplace equation.
$$u_{xx}+u_{yy}=0$$
where $ R=\left\{\left(x,y\right)\ |\ 0\le x\le 3,\ 0\le y\le 3\right\} $ with the boundary values
$u(x,0)=10$ and
$u(x,3)=90$ for $0<x<3$
$u(0,y)=70$ and
$u(3,y)= 0$ for $0<y<3$
Compute the numerical solution of the Laplace problem above using
1. the linear system formed from the stencil with $ h=1 $
2. the Dirichlet method in the `dirichlet` function with $ h=1 $ and $ h=0.1 $
together with a 3-D plot of the numerical solution, following the steps in **Example 3**.
```julia
```
### Problem 4
Consider the following Laplace equation.
$$u_{xx}+u_{yy}=0$$
where $ R=\left\{\left(x,y\right)\ |\ 0\le x\le 3,\ 0\le y\le 3\right\} $ with the boundary values
$u(x,0)=10$ and
$u_y(x,3)=0$ for $0<x<3$
$u(0,y)=70$ and
$u_x(3,y)= 0$ for $0<y<3$
Compute the numerical solution of the Laplace problem above using the system of linear equations formed from the Laplace and Neumann stencils with $ h=1 $, together with a 3-D plot of the numerical solution, following the steps in **Example 4**.
```julia
```
<a href="https://colab.research.google.com/github/fabxy/course-content-dl/blob/main/tutorials/W1D2_LinearDeepLearning/student/W1D2_Tutorial1.ipynb" target="_parent"></a>
# Tutorial 1: Gradient Descent and AutoGrad
**Week 1, Day 2: Linear Deep Learning**
**By Neuromatch Academy**
__Content creators:__ Saeed Salehi, Vladimir Haltakov, Andrew Saxe
__Content reviewers:__ Polina Turishcheva, Antoine De Comite, Kelson Shilling-Scrivo
__Content editors:__ Anoop Kulkarni, Spiros Chavlis
__Production editors:__ Khalid Almubarak, Spiros Chavlis
**Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs**
<p align='center'></p>
---
#Tutorial Objectives
Day 2 Tutorial 1 will continue building your PyTorch skill set and motivate its core functionality, Autograd. In this notebook, we will cover the key concepts and ideas of:
* Gradient descent
* PyTorch Autograd
* PyTorch nn module
```python
# @title Tutorial slides
# @markdown These are the slides for the videos in this tutorial
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/3qevp/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
```
---
# Setup
This a GPU-Free tutorial!
```python
# @title Install dependencies
!pip install git+https://github.com/NeuromatchAcademy/evaltools --quiet
from evaltools.airtable import AirtableForm
# init airtable form
atform = AirtableForm('appn7VdPRseSoMXEG','W1D2_T1','https://portal.neuromatchacademy.org/api/redirect/to/9c55f6cb-cdf9-4429-ac1c-ec44fe64c303')
```
```python
# Imports
import torch
import numpy as np
from torch import nn
from math import pi
import matplotlib.pyplot as plt
```
```python
# @title Figure settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/content-creation/main/nma.mplstyle")
```
```python
# @title Plotting functions
from mpl_toolkits.axes_grid1 import make_axes_locatable
def ex3_plot(model, x, y, ep, lss):
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
ax1.set_title("Regression")
ax1.plot(x, model(x).detach().numpy(), color='r', label='prediction')
ax1.scatter(x, y, c='c', label='targets')
ax1.set_xlabel('x')
ax1.set_ylabel('y')
ax1.legend()
ax2.set_title("Training loss")
ax2.plot(np.linspace(1, ep, ep), lss, color='y')
ax2.set_xlabel("Epoch")
ax2.set_ylabel("MSE")
plt.show()
def ex1_plot(fun_z, fun_dz):
"""Plots the function and gradient vectors
"""
x, y = np.arange(-3, 3.01, 0.02), np.arange(-3, 3.01, 0.02)
xx, yy = np.meshgrid(x, y, sparse=True)
zz = fun_z(xx, yy)
xg, yg = np.arange(-2.5, 2.6, 0.5), np.arange(-2.5, 2.6, 0.5)
xxg, yyg = np.meshgrid(xg, yg, sparse=True)
zxg, zyg = fun_dz(xxg, yyg)
plt.figure(figsize=(8, 7))
plt.title("Gradient vectors point towards steepest ascent")
contplt = plt.contourf(x, y, zz, levels=20)
plt.quiver(xxg, yyg, zxg, zyg, scale=50, color='r', )
plt.xlabel('$x$')
plt.ylabel('$y$')
ax = plt.gca()
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
cbar = plt.colorbar(contplt, cax=cax)
cbar.set_label('$z = h(x, y)$')
plt.show()
```
```python
# @title Set random seed
# @markdown Executing `set_seed(seed=seed)` you are setting the seed
# for DL it's critical to set the random seed so that students can have a
# baseline to compare their results to expected results.
# Read more here: https://pytorch.org/docs/stable/notes/randomness.html
# Call `set_seed` function in the exercises to ensure reproducibility.
import random
import torch
def set_seed(seed=None, seed_torch=True):
if seed is None:
seed = np.random.choice(2 ** 32)
random.seed(seed)
np.random.seed(seed)
if seed_torch:
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
print(f'Random seed {seed} has been set.')
# In case that `DataLoader` is used
def seed_worker(worker_id):
worker_seed = torch.initial_seed() % 2**32
np.random.seed(worker_seed)
random.seed(worker_seed)
```
```python
# @title Set device (GPU or CPU). Execute `set_device()`
# especially if torch modules used.
# inform the user if the notebook uses GPU or CPU.
def set_device():
device = "cuda" if torch.cuda.is_available() else "cpu"
if device != "cuda":
print("GPU is not enabled in this notebook. \n"
"If you want to enable it, in the menu under `Runtime` -> \n"
"`Hardware accelerator.` and select `GPU` from the dropdown menu")
else:
print("GPU is enabled in this notebook. \n"
"If you want to disable it, in the menu under `Runtime` -> \n"
"`Hardware accelerator.` and select `None` from the dropdown menu")
return device
```
```python
SEED = 2021
set_seed(seed=SEED)
DEVICE = set_device()
```
Random seed 2021 has been set.
GPU is not enabled in this notebook.
If you want to enable it, in the menu under `Runtime` ->
`Hardware accelerator.` and select `GPU` from the dropdown menu
---
# Section 0: Introduction
Today, we will go through 3 tutorials, starting with Gradient Descent, the workhorse of deep learning algorithms, in this tutorial. The second tutorial will help us build a better intuition about neural networks and basic hyper-parameters. Finally, in tutorial 3, we learn about the learning dynamics: what a (good) deep network is learning, and why it may sometimes perform poorly.
```python
# @title Video 0: Introduction
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Qf4y1578t", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"i7djAv2jnzY", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event('Video 0:Introduction')
display(out)
```
Tab(children=(Output(), Output()), _titles={'0': 'Youtube', '1': 'Bilibili'})
---
# Section 1: Gradient Descent Algorithm
*Time estimate: ~30-45 mins*
Since the goal of most learning algorithms is **minimizing the risk (also known as the cost or loss) function**, optimization is often the core of most machine learning techniques! The gradient descent algorithm, along with its variations such as stochastic gradient descent, is one of the most powerful and popular optimization methods used for deep learning. Today we will introduce the basics, but you will learn much more about Optimization in the coming days (Week 1 Day 4).
## Section 1.1: Gradients & Steepest Ascent
```python
# @title Video 1: Gradient Descent
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Pq4y1p7em", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"UwgA_SgG0TM", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event('Video 1: Gradient Descent')
display(out)
```
Tab(children=(Output(), Output()), _titles={'0': 'Youtube', '1': 'Bilibili'})
Before introducing the gradient descent algorithm, let's review a very important property of gradients. The gradient of a function always points in the direction of the steepest ascent. The following exercise will help clarify this.
### Analytical Exercise 1.1: Gradient vector (Optional)
Given the following function:
\begin{equation}
z = h(x, y) = \sin(x^2 + y^2)
\end{equation}
find the gradient vector:
\begin{equation}
\begin{bmatrix}
\dfrac{\partial z}{\partial x} \\ \\ \dfrac{\partial z}{\partial y}
\end{bmatrix}
\end{equation}
*hint: use the chain rule!*
**Chain rule**: For a composite function $F(x) = g(h(x)) \equiv (g \circ h)(x)$:
\begin{equation}
F'(x) = g'(h(x)) \cdot h'(x)
\end{equation}
or differently denoted:
\begin{equation}
\frac{dF}{dx} = \frac{dg}{dh} ~ \frac{dh}{dx}
\end{equation}
---
#### Solution:
We can rewrite the function as a composite function:
\begin{equation}
z = f\left( g(x,y) \right), ~~ f(u) = \sin(u), ~~ g(x, y) = x^2 + y^2
\end{equation}
Using chain rule:
\begin{align}
\dfrac{\partial z}{\partial x} &= \dfrac{\partial f}{\partial g} \dfrac{\partial g}{\partial x} = \cos(g(x,y)) ~ (2x) = \cos(x^2 + y^2) \cdot 2x \\ \\
\dfrac{\partial z}{\partial y} &= \dfrac{\partial f}{\partial g} \dfrac{\partial g}{\partial y} = \cos(g(x,y)) ~ (2y) = \cos(x^2 + y^2) \cdot 2y
\end{align}
### Coding Exercise 1.1: Gradient Vector
Implement (complete) the function which returns the gradient vector for $z=\sin(x^2 + y^2)$.
```python
def fun_z(x, y):
  """Function sin(x^2 + y^2)

  Args:
    x (float, np.ndarray): variable x
    y (float, np.ndarray): variable y

  Return:
    z (float, np.ndarray): sin(x^2 + y^2)
  """
  z = np.sin(x**2 + y**2)
  return z


def fun_dz(x, y):
  """Gradient of sin(x^2 + y^2)

  Args:
    x (float, np.ndarray): variable x
    y (float, np.ndarray): variable y

  Return:
    (tuple): gradient vector for sin(x^2 + y^2)
  """
  #################################################
  ## Implement the function which returns gradient vector
  ## Complete the partial derivatives dz_dx and dz_dy
  # Complete the function and remove or comment the line below
  # raise NotImplementedError("Gradient function `fun_dz`")
  #################################################
  dz_dx = np.cos(x**2 + y**2) * 2 * x
  dz_dy = np.cos(x**2 + y**2) * 2 * y
  return (dz_dx, dz_dy)
#add event to airtable
atform.add_event('Coding Exercise 1.1: Gradient Vector')
## Uncomment to run
ex1_plot(fun_z, fun_dz)
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D2_LinearDeepLearning/solutions/W1D2_Tutorial1_Solution_0c8e3872.py)
*Example output:*
We can see from the plot that for any given $x_0$ and $y_0$, the gradient vector $\left[ \dfrac{\partial z}{\partial x}, \dfrac{\partial z}{\partial y}\right]^{\top}_{(x_0, y_0)}$ points in the direction of $x$ and $y$ for which $z$ increases the most. It is important to note that gradient vectors only see their local values, not the whole landscape! Also, length (size) of each vector, which indicates the steepness of the function, can be very small near local plateaus (i.e. minima or maxima).
Thus, we can simply use the aforementioned formula to find the local minima.
In 1847, Augustin-Louis Cauchy used **negative of gradients** to develop the Gradient Descent algorithm as an **iterative** method to **minimize** a **continuous** and (ideally) **differentiable function** of **many variables**.
```python
# @title Video 2: Gradient Descent - Discussion
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Rf4y157bw", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"8s22ffAfGwI", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event('Video 2: Gradient Descent ')
display(out)
```
Tab(children=(Output(), Output()), _titles={'0': 'Youtube', '1': 'Bilibili'})
## Section 1.2: Gradient Descent Algorithm
Let $f(\mathbf{w}): \mathbb{R}^d \rightarrow \mathbb{R}$ be a differentiable function. Gradient Descent is an iterative algorithm for minimizing the function $f$, starting with an initial value for variables $\mathbf{w}$, taking steps of size $\eta$ (learning rate) in the direction of the negative gradient at the current point to update the variables $\mathbf{w}$.
\begin{equation}
\mathbf{w}^{(t+1)} = \mathbf{w}^{(t)} - \eta \nabla f (\mathbf{w}^{(t)})
\end{equation}
where $\eta > 0$ and $\nabla f (\mathbf{w})= \left( \frac{\partial f(\mathbf{w})}{\partial w_1}, ..., \frac{\partial f(\mathbf{w})}{\partial w_d} \right)$. Since negative gradients always point locally in the direction of steepest descent, the algorithm makes small steps at each point **towards** the minimum.
<br/>
**Vanilla Algorithm**
---
> **inputs**: initial guess $\mathbf{w}^{(0)}$, step size $\eta > 0$, number of steps $T$
> *For* $t = 0, 1, \dots , T-1$ *do* \
$\qquad$ $\mathbf{w}^{(t+1)} = \mathbf{w}^{(t)} - \eta \nabla f (\mathbf{w}^{(t)})$\
*end*
> *return*: $\mathbf{w}^{(t+1)}$
---
<br/>
Hence, all we need is to calculate the gradient of the loss function with respect to the learnable parameters (i.e. weights):
\begin{equation}
\dfrac{\partial Loss}{\partial \mathbf{w}} = \left[ \dfrac{\partial Loss}{\partial w_1}, \dfrac{\partial Loss}{\partial w_2} , ..., \dfrac{\partial Loss}{\partial w_d} \right]^{\top}
\end{equation}
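As a concrete illustration (this sketch is an addition, not part of the original exercise set), the vanilla algorithm can be written in a few lines of NumPy for a toy quadratic loss $f(w) = (w-3)^2$ with gradient $\nabla f(w) = 2(w-3)$:

```python
import numpy as np

def grad_f(w):
  # gradient of the toy loss f(w) = (w - 3)^2
  return 2 * (w - 3.0)

w = np.array([0.0])  # initial guess w^(0)
eta = 0.1            # step size (learning rate)
T = 50               # number of steps
for t in range(T):
  w = w - eta * grad_f(w)  # w^(t+1) = w^(t) - eta * grad f(w^(t))

print(w)  # approaches the minimizer w* = 3
```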
### Analytical Exercise 1.2: Gradients
Given $f(x, y, z) = \tanh \left( \ln \left[1 + z \frac{2x}{\sin(y)} \right] \right)$, how easy is it to derive $\dfrac{\partial f}{\partial x}$, $\dfrac{\partial f}{\partial y}$ and $\dfrac{\partial f}{\partial z}$? (*hint: you don't have to actually calculate them!*)
## Section 1.3: Computational Graphs and Backprop
```python
# @title Video 3: Computational Graph
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1c64y1B7ZG", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"2z1YX5PonV4", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event('Video 3: Computational Graph ')
display(out)
```
Tab(children=(Output(), Output()), _titles={'0': 'Youtube', '1': 'Bilibili'})
*Exercise 1.2* is an example of how overwhelming the derivation of gradients can get, as the number of variables and nested functions increases. This function is still extraordinarily simple compared to the loss functions of modern neural networks. So how can we (as well as PyTorch and similar frameworks) approach such beasts?
Let’s look at the function again:
\begin{equation}
f(x, y, z) = \tanh \left(\ln \left[1 + z \frac{2x}{\sin(y)} \right] \right)
\end{equation}
We can build a so-called computational graph (shown below) to break the original function into smaller and more approachable expressions.
<center></center>
Starting from $x$, $y$, and $z$ and following the arrows and expressions, you would see that our graph returns the same function as $f$. It does so by calculating intermediate variables $a,b,c,d,$ and $e$. This is called the **forward pass**.
Now, let’s start from $f$, and work our way against the arrows while calculating the gradient of each expression as we go. This is called the **backward pass**, from which the **backpropagation of errors** algorithm gets its name.
<center></center>
By breaking the computation into simple operations on intermediate variables, we can use the chain rule to calculate any gradient:
\begin{equation}
\dfrac{\partial f}{\partial x} = \dfrac{\partial f}{\partial e}~\dfrac{\partial e}{\partial d}~\dfrac{\partial d}{\partial c}~\dfrac{\partial c}{\partial a}~\dfrac{\partial a}{\partial x} = \left( 1-\tanh^2(e) \right) \cdot \frac{1}{d+1}\cdot z \cdot \frac{1}{b} \cdot 2
\end{equation}
Conveniently, the values for $e$, $b$, and $d$ are available to us from when we did the forward pass through the graph. That is, the partial derivatives have simple expressions in terms of the intermediate variables $a,b,c,d,e$ that we calculated and stored during the forward pass.
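As a quick numerical sanity check (added here; the values of $x$, $y$, and $z$ are arbitrary), the forward and backward passes of this graph can be written out directly and compared against the chain-rule expression above:

```python
import numpy as np

x, y, z = 1.0, 2.0, 3.0

# forward pass: compute and store the intermediate variables
a = 2 * x
b = np.sin(y)
c = a / b
d = z * c
e = np.log(1 + d)
f = np.tanh(e)

# backward pass: reuse b, d and e from the forward pass
df_dx = (1 - np.tanh(e)**2) * (1 / (d + 1)) * z * (1 / b) * 2
print(f, df_dx)
```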
### Analytical Exercise 1.3: Chain Rule (Optional)
For the function above, calculate the $\dfrac{\partial f}{\partial y}$ using the computational graph and chain rule.
---
#### Solution:
\begin{equation}
\dfrac{\partial f}{\partial y} = \dfrac{\partial f}{\partial e}~\dfrac{\partial e}{\partial d}~\dfrac{\partial d}{\partial c}~\dfrac{\partial c}{\partial b}~\dfrac{\partial b}{\partial y} = \left( 1-\tanh^2(e) \right) \cdot \frac{1}{d+1}\cdot z \cdot \frac{-a}{b^2} \cdot \cos(y)
\end{equation}
For more: [Calculus on Computational Graphs: Backpropagation](https://colah.github.io/posts/2015-08-Backprop/)
---
# Section 2: PyTorch AutoGrad
*Time estimate: ~30-45 mins*
```python
# @title Video 4: Auto-Differentiation
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1UP4y1s7gv", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"IBYFCNyBcF8", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event('Video 4: Auto-Differentiation ')
display(out)
```
Tab(children=(Output(), Output()), _titles={'0': 'Youtube', '1': 'Bilibili'})
Deep learning frameworks such as PyTorch, JAX, and TensorFlow come with a very efficient and sophisticated set of algorithms, commonly known as Automatic Differentiation. AutoGrad is PyTorch's automatic differentiation engine. Here we start by covering the essentials of AutoGrad, and you will learn more in the coming days.
## Section 2.1: Forward Propagation
Everything starts with the forward propagation (pass). PyTorch tracks all the instructions, as we declare the variables and operations, and it builds the graph when we call the `.backward()` pass. PyTorch rebuilds the graph every time we iterate or change it (or simply put, PyTorch uses a dynamic graph).
For gradient descent, it is only required to have the gradients of the cost function with respect to the variables we wish to learn. These variables are often called "learnable / trainable parameters" or simply "parameters" in PyTorch. In neural nets, weights and biases are often the learnable parameters.
### Coding Exercise 2.1: Buiding a Computational Graph
In PyTorch, to indicate that a certain tensor contains learnable parameters, we can set the optional argument `requires_grad` to `True`. PyTorch will then track every operation using this tensor while configuring the computational graph. For this exercise, use the provided tensors to build the following graph, which implements a single neuron with scalar input and output.
<br/>
<center></center>
```python
#add event to airtable
atform.add_event('Coding Exercise 2.1: Computational Graph ')
class SimpleGraph:
  def __init__(self, w, b):
    """Initializing the SimpleGraph

    Args:
      w (float): initial value for weight
      b (float): initial value for bias
    """
    assert isinstance(w, float)
    assert isinstance(b, float)
    self.w = torch.tensor([w], requires_grad=True)
    self.b = torch.tensor([b], requires_grad=True)

  def forward(self, x):
    """Forward pass

    Args:
      x (torch.Tensor): 1D tensor of features

    Returns:
      torch.Tensor: model predictions
    """
    assert isinstance(x, torch.Tensor)
    #################################################
    ## Implement the forward pass to calculate prediction
    ## Note that prediction is not the loss, but the value after `tanh`
    # Complete the function and remove or comment the line below
    # raise NotImplementedError("Forward Pass `forward`")
    #################################################
    prediction = torch.tanh((x * self.w) + self.b)
    return prediction


def sq_loss(y_true, y_prediction):
  """L2 loss function

  Args:
    y_true (torch.Tensor): 1D tensor of target labels
    y_prediction (torch.Tensor): 1D tensor of predictions

  Returns:
    torch.Tensor: L2-loss (squared error)
  """
  assert isinstance(y_true, torch.Tensor)
  assert isinstance(y_prediction, torch.Tensor)
  #################################################
  ## Implement the L2-loss (squared error) given true label and prediction
  # Complete the function and remove or comment the line below
  # raise NotImplementedError("Loss function `sq_loss`")
  #################################################
  loss = (y_true - y_prediction)**2
  return loss
feature = torch.tensor([1]) # input tensor
target = torch.tensor([7]) # target tensor
## Uncomment to run
simple_graph = SimpleGraph(-0.5, 0.5)
print(f"initial weight = {simple_graph.w.item()}, "
f"\ninitial bias = {simple_graph.b.item()}")
prediction = simple_graph.forward(feature)
square_loss = sq_loss(target, prediction)
print(f"for x={feature.item()} and y={target.item()}, "
f"prediction={prediction.item()}, and L2 Loss = {square_loss.item()}")
```
initial weight = -0.5,
initial bias = 0.5
for x=1 and y=7, prediction=0.0, and L2 Loss = 49.0
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D2_LinearDeepLearning/solutions/W1D2_Tutorial1_Solution_6668feea.py)
It is important to appreciate the fact that PyTorch can follow our operations as we arbitrarily go through classes and functions.
## Section 2.2: Backward Propagation
Here is where all the magic lies. In PyTorch, `Tensor` and `Function` are interconnected and build up an acyclic graph that encodes a complete history of computation. Each variable has a `grad_fn` attribute that references a function that has created the Tensor (except for Tensors created by the user - these have `None` as `grad_fn`). The example below shows that the tensor `c = a + b` is created by the `Add` operation and the gradient function is the object `<AddBackward...>`. Replace `+` with other single operations (e.g., `c = a * b` or `c = torch.sin(a)`) and examine the results.
```python
a = torch.tensor([1.0], requires_grad=True)
b = torch.tensor([-1.0], requires_grad=True)
c = a * b
print(f'Gradient function = {c.grad_fn}')
```
Gradient function = <MulBackward0 object at 0x7ff2de0a0bd0>
For more complex functions, printing the `grad_fn` would only show the last operation, even though the object tracks all the operations up to that point:
```python
print(f'Gradient function for prediction = {prediction.grad_fn}')
print(f'Gradient function for loss = {square_loss.grad_fn}')
```
Gradient function for prediction = <TanhBackward object at 0x7ff2de0a0550>
Gradient function for loss = <PowBackward0 object at 0x7ff2de0a0610>
Now let's kick off the backward pass to calculate the gradients by calling `.backward()` on the tensor we wish to initiate the backpropagation from. Often, `.backward()` is called on the loss, which is the last node on the graph. Before doing that, let's calculate the loss gradients by hand:
$$\frac{\partial{loss}}{\partial{w}} = - 2 x (y_t - y_p)(1 - y_p^2)$$
$$\frac{\partial{loss}}{\partial{b}} = - 2 (y_t - y_p)(1 - y_p^2)$$
Where $y_t$ is the target (true label), and $y_p$ is the prediction (model output). We can then compare it to PyTorch gradients, which can be obtained by calling `.grad` on the relevant tensors.
**Important Notes**
* Learnable parameters (i.e. `requires_grad` tensors) are "contagious". Let's look at a simple example: `Y = W @ X`, where `X` is the feature tensor and `W` is the weight tensor (learnable parameters, `requires_grad`); the newly generated output tensor `Y` will also be `requires_grad`. So any operation that is applied to `Y` will be part of the computational graph. Therefore, if we need to plot or store a tensor that is `requires_grad`, we must first detach it from the graph by calling the `.detach()` method on that tensor.
* `.backward()` accumulates gradients in the leaf nodes (i.e., the input nodes to the node of interest). We can call `.zero_grad()` on the optimizer (or zero the `.grad` attributes directly) to reset all accumulated gradients (see [autograd.backward](https://pytorch.org/docs/stable/autograd.html#torch.autograd.backward) for more).
* Recall that in python we can access variables and associated methods with `.method_name`. You can use the command `dir(my_object)` to observe all variables and associated methods to your object, e.g., `dir(simple_graph.w)`.
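For example (a small demonstration added here), gradients really do accumulate across `.backward()` calls until they are explicitly zeroed:

```python
w = torch.tensor([1.0], requires_grad=True)
for _ in range(2):
  loss = (3 * w)**2
  loss.backward()
print(w.grad)   # tensor([36.]): 18 + 18, accumulated over the two calls

w.grad.zero_()  # reset the accumulated gradient in place
print(w.grad)   # tensor([0.])
```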
```python
# analytical gradients (remember detaching)
ana_dloss_dw = - 2 * feature * (target - prediction.detach())*(1 - prediction.detach()**2)
ana_dloss_db = - 2 * (target - prediction.detach())*(1 - prediction.detach()**2)
if simple_graph.w.grad is not None:
  simple_graph.w.grad.data.zero_()
  simple_graph.b.grad.data.zero_()
prediction = simple_graph.forward(feature)
square_loss = sq_loss(target, prediction)
square_loss.backward() # first we should call the backward to build the graph
autograd_dloss_dw = simple_graph.w.grad # we calculate the derivative w.r.t weights
autograd_dloss_db = simple_graph.b.grad # we calculate the derivative w.r.t bias
print(ana_dloss_dw == autograd_dloss_dw)
print(ana_dloss_db == autograd_dloss_db)
```
tensor([True])
tensor([True])
References and more:
* [A GENTLE INTRODUCTION TO TORCH.AUTOGRAD](https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html)
* [AUTOMATIC DIFFERENTIATION PACKAGE - TORCH.AUTOGRAD](https://pytorch.org/docs/stable/autograd.html)
* [AUTOGRAD MECHANICS](https://pytorch.org/docs/stable/notes/autograd.html)
* [AUTOMATIC DIFFERENTIATION WITH TORCH.AUTOGRAD](https://pytorch.org/tutorials/beginner/basics/autogradqs_tutorial.html)
---
# Section 3: PyTorch's Neural Net module (`nn.Module`)
*Time estimate: ~30 mins*
```python
# @title Video 5: PyTorch `nn` module
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1MU4y1H7WH", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"jzTbQACq7KE", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event('Video 5: PyTorch `nn` module')
display(out)
```
Tab(children=(Output(), Output()), _titles={'0': 'Youtube', '1': 'Bilibili'})
PyTorch provides us with ready-to-use neural network building blocks, such as layers (e.g. linear, recurrent, ...), different activation and loss functions, and much more, packed in the [`torch.nn`](https://pytorch.org/docs/stable/nn.html) module. If we build a neural network using `torch.nn` layers, the weights and biases are already in `requires_grad` mode and will be registered as model parameters.
For training, we need three things:
* **Model parameters** - Model parameters refer to all the learnable parameters of the model, which are accessible by calling `.parameters()` on the model. Please note that NOT all the `requires_grad` tensors are seen as model parameters. To create a custom model parameter, we can use [`nn.Parameter`](https://pytorch.org/docs/stable/generated/torch.nn.parameter.Parameter.html) (*A kind of Tensor that is to be considered a module parameter*).
* **Loss function** - The loss that we are going to be optimizing, which is often combined with regularization terms (coming up in a few days).
* **Optimizer** - PyTorch provides us with many optimization methods (different versions of gradient descent). The optimizer holds the current state of the model, and calling the `step()` method updates the parameters based on the computed gradients.
You will learn more details about choosing the right model architecture, loss function, and optimizer later in the course.
## Section 3.1: Training loop in PyTorch
We use a regression problem to study the training loop in PyTorch.
The task is to train a wide nonlinear (using $\tanh$ activation function) neural net for a simple $\sin$ regression task. Wide neural networks are thought to be really good at generalization.
```python
# @markdown #### Generate the sample dataset
set_seed(seed=SEED)
n_samples = 32
inputs = torch.linspace(-1.0, 1.0, n_samples).reshape(n_samples, 1)
noise = torch.randn(n_samples, 1) / 4
targets = torch.sin(pi * inputs) + noise
plt.figure(figsize=(8, 5))
plt.scatter(inputs, targets, c='c')
plt.xlabel('x (inputs)')
plt.ylabel('y (targets)')
plt.show()
```
Let's define a very wide (512 neurons) neural net with one hidden layer and `Tanh` activation function.
```python
## A Wide neural network with a single hidden layer
class WideNet(nn.Module):
  def __init__(self):
    """Initializing the WideNet
    """
    n_cells = 512
    super().__init__()
    self.layers = nn.Sequential(
        nn.Linear(1, n_cells),
        nn.Tanh(),
        nn.Linear(n_cells, 1),
        )

  def forward(self, x):
    """Forward pass

    Args:
      x (torch.Tensor): 2D tensor of features

    Returns:
      torch.Tensor: model predictions
    """
    return self.layers(x)
```
We can now create an instance of our neural net and print its parameters.
```python
# creating an instance
set_seed(seed=SEED)
wide_net = WideNet()
print(wide_net)
```
Random seed 2021 has been set.
WideNet(
(layers): Sequential(
(0): Linear(in_features=1, out_features=512, bias=True)
(1): Tanh()
(2): Linear(in_features=512, out_features=1, bias=True)
)
)
```python
# Create a mse loss function
loss_function = nn.MSELoss()
# Stochastic Gradient Descent optimizer (you will learn about momentum soon)
lr = 0.003 # learning rate
sgd_optimizer = torch.optim.SGD(wide_net.parameters(), lr=lr, momentum=0.9)
```
The training process in PyTorch is interactive - you can perform training iterations as you wish and inspect the results after each iteration.
Let's perform one training iteration. You can run the cell multiple times and see how the parameters are being updated and the loss is reducing. This code block is the core of everything to come: please make sure you go line-by-line through all the commands and discuss their purpose with the pod.
```python
# Reset all gradients to zero
sgd_optimizer.zero_grad()
# Forward pass (Compute the output of the model on the features (inputs))
prediction = wide_net(inputs)
# Compute the loss
loss = loss_function(prediction, targets)
print(f'Loss: {loss.item()}')
# Perform backpropagation to build the graph and compute the gradients
loss.backward()
# Optimizer takes a tiny step in the steepest direction (negative of gradient)
# and "updates" the weights and biases of the network
sgd_optimizer.step()
```
Loss: 0.4475176930427551
### Coding Exercise 3.1: Training Loop
Using everything we've learned so far, we ask you to complete the `train` function below.
```python
def train(features, labels, model, loss_fun, optimizer, n_epochs):
  """Training function

  Args:
    features (torch.Tensor): features (input) with shape torch.Size([n_samples, 1])
    labels (torch.Tensor): labels (targets) with shape torch.Size([n_samples, 1])
    model (torch nn.Module): the neural network
    loss_fun (function): loss function
    optimizer(function): optimizer
    n_epochs (int): number of training iterations

  Returns:
    list: record (evolution) of training losses
  """
  loss_record = []  # keep a record of the loss at every epoch
  for i in range(n_epochs):
    #################################################
    ## Implement the missing parts of the training loop
    # Complete the function and remove or comment the line below
    # raise NotImplementedError("Training loop `train`")
    #################################################
    optimizer.zero_grad()  # set gradients to 0
    predictions = model(features)  # Compute model prediction (output)
    loss = loss_fun(predictions, labels)  # Compute the loss
    loss.backward()  # Compute gradients (backward pass)
    optimizer.step()  # update parameters (optimizer takes a step)

    loss_record.append(loss.item())
  return loss_record
#add event to airtable
atform.add_event('Coding Exercise 3.1: Training Loop')
set_seed(seed=2021)
epochs = 1847 # Cauchy, Exercices d'analyse et de physique mathematique (1847)
## Uncomment to run
losses = train(inputs, targets, wide_net, loss_function, sgd_optimizer, epochs)
ex3_plot(wide_net, inputs, targets, epochs, losses)
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D2_LinearDeepLearning/solutions/W1D2_Tutorial1_Solution_5204c053.py)
*Example output:*
---
# Summary
In this tutorial we covered one of the most basic concepts of deep learning; the computational graph and how a network learns via gradient descent and the backpropagation algorithm. We have seen all of these using PyTorch modules and we compared the analytical solutions with the ones provided directly by the PyTorch module.
```python
# @title Video 6: Tutorial 1 Wrap-up
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Pg41177VU", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"TvZURbcnXc4", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event('Video 6: Tutorial 1 Wrap-up')
display(out)
```
Tab(children=(Output(), Output()), _titles={'0': 'Youtube', '1': 'Bilibili'})
```python
# @title Airtable Submission Link
from IPython import display
display.HTML(
f"""
<div>
<a href= "{atform.url()}" target="_blank">
</a>
</div>""" )
```
```python
```
# Chapter 8 Matrices
There are only a few comments in the matrices chapter. (~~Honestly, rather than using SymPy's Matrix, it would probably be more efficient to use NumPy~~.)
```python
from sympy import *
init_printing(use_unicode=True)
```
To create a matrix in `Sympy`, use the `Matrix` object. For example,
```python
Matrix([
[1, -1],
[3, 4],
[0, 2] ]) # the extra [] brackets make it explicit that this is a combination of row vectors
```
This is how a matrix can be built. A column vector is
```python
Matrix([1,2,3]) # ([]) gives a (column) vector
```
#### Matrix multiplication
```python
M = Matrix([
[1, 2, 3],
[3, 2, 1] ])
```
```python
type(M)
```
sympy.matrices.dense.MutableDenseMatrix
```python
N = Matrix([0, 1, 1])
```
```python
M*N
```
**Note**: `Matrix` objects are `mutable`.
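For instance (example added here; `M2`/`N2` are throwaway names so the matrices used later are not affected), entries can be changed in place, and `ImmutableMatrix` can be used when that should be forbidden:

```python
M2 = Matrix([[1, 2], [3, 4]])
M2[0, 0] = 100           # in-place modification works on a (mutable) Matrix
N2 = ImmutableMatrix([[1, 2], [3, 4]])
# N2[0, 0] = 100         # would raise TypeError, since N2 is immutable
```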
## 8.1 Basic Operations
### 8.1.1 Shape
```python
M = Matrix([
[1, 2, 3],
[-2, 0, 4]
])
```
```python
M
```
```python
M.shape # as in NumPy, no parentheses are needed
```
(rows, columns)
### 8.1.2 Accessing rows and columns
```python
M.row(0) # first row; indexing starts at 0
```
```python
M.col(-1) # third column (a negative index refers to the last column)
```
### 8.1.3 Deleting / inserting rows and columns
#### Deleting a row / column
```python
M.col_del(0) # delete the first column
```
```python
M
```
```python
M.row_del(1) # delete the second row
```
```python
M
```
#### Inserting a row / column
```python
M
```
```python
M = M.row_insert(1, Matrix([[0, 4]])) # insert the row (0, 4) as the second row
```
```python
M
```
```python
M = M.col_insert(0, Matrix([1, -2])) # a column vector is inserted, so only one pair of brackets is needed
```
```python
M
```
## 8.2 Basic Methods
```python
M = Matrix([
[1, 3],
[-2,3]
])
```
```python
N = Matrix([
[0,3],
[0,7]
])
```
```python
M + N # addition
```
```python
M*N # matrix product
```
```python
3*M # scalar multiplication
```
```python
M**2 # matrix power
```
```python
M**-1 # inverse matrix
```
```python
N**-1 # the determinant is zero, so the inverse does not exist
```
```python
M = Matrix([
[1, 2, 3],
[4, 5, 6]
])
```
```python
M
```
```python
M.T # transpose
```
## 8.3 Constructing Matrices
### 8.3.1 Identity matrix
```python
eye(3)
```
```python
eye(4)
```
### 8.3.2 Zero matrix
```python
zeros(2,3)
```
```python
zeros(4)
```
### 8.3.3 Matrix of all ones
```python
ones(2,3)
```
### 8.3.4 Diagonal matrix
```python
diag(1, 2, 3)
```
```python
diag(-1, ones(2, 2), Matrix([5, 7, 5])) # several blocks can be combined
```
## 8.4 Advanced Methods
### 8.4.1 Determinant
```python
M = Matrix([
[1, 0, 1],
[2, -1, 3],
[4, 3, 2]
])
```
```python
M
```
```python
M.det()
```
### 8.4.2 Row reduction (RREF)
```python
M = Matrix([
[1, 0, 1, 3],
[2, 3, 4, 7],
[-1, -3, -3, -4]
])
```
```python
M.rref()
```
The first element is the row-reduced (RREF) matrix, and the second is a tuple of the pivot column indices. The rank of this matrix is 2.
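As a quick cross-check (added), the number of pivot columns equals the rank:

```python
M.rank()  # 2, matching the two pivot columns returned by rref()
```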
### 8.4.3 Nullspace
```python
M = Matrix([
[1, 2, 3, 0, 0],
[4, 10, 0, 0, 1]
])
```
```python
M.nullspace()
```
---> These are the solutions of the equation M*x = 0. The nullspace is 3-dimensional.
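A quick verification (added): each returned vector $v$ satisfies $Mv = 0$:

```python
[M * v for v in M.nullspace()]  # every product is the 2x1 zero vector
```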
### 8.4.4 Column space
```python
M = Matrix([
[1, 1, 2],
[2, 1, 3],
[3, 1, 4]
])
```
```python
M.columnspace()
```
### 8.4.5 Eigenvalues, eigenvectors, and diagonalization
```python
M = Matrix([
[3, -2, 4, -2],
[5, 3, -3, -2],
[5, -2, 2, -2],
[5, -2, -3 ,3]
])
```
```python
M.eigenvals()
```
---> Eigenvalues -2 and 3 each have multiplicity 1; eigenvalue 5 has multiplicity 2.
```python
M.eigenvects()
```
---> The eigenvectors are displayed together with the eigenvalues. You could always use this method, but computing eigenvectors takes time, so if you only need the eigenvalues it is better to use eigenvals().
```python
P, D = M.diagonalize()
```
```python
P # matrix whose columns are eigenvectors of M
```
```python
D # diagonal matrix
```
```python
P*D*P**-1
```
This equals M.
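A quick check (added):

```python
P*D*P**-1 == M  # True
```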
#### Characteristic polynomial
```python
lamda = symbols('lamda') # 'lambda' is a Python reserved word
```
```python
p = M.charpoly(lamda)
```
```python
factor(p) # the characteristic polynomial of M
```
```julia
using Symbolics
using LinearAlgebra
using StaticArrays
```
```julia
n = 3
@variables a[1:n, 1:n] x[1:n]
```
2-element Vector{Symbolics.Arr{Num, N} where N}:
a[1:3,1:3]
x[1:3]
```julia
A = collect(a)
```
\begin{equation}
\left[
\begin{array}{ccc}
a{{_1}}ˏ{_1} & a{{_1}}ˏ{_2} & a{{_1}}ˏ{_3} \\
a{{_2}}ˏ{_1} & a{{_2}}ˏ{_2} & a{{_2}}ˏ{_3} \\
a{{_3}}ˏ{_1} & a{{_3}}ˏ{_2} & a{{_3}}ˏ{_3} \\
\end{array}
\right]
\end{equation}
```julia
as = unique(vec(A))
```
\begin{equation}
\left[
\begin{array}{c}
a{{_1}}ˏ{_1} \\
a{{_2}}ˏ{_1} \\
a{{_3}}ˏ{_1} \\
a{{_1}}ˏ{_2} \\
a{{_2}}ˏ{_2} \\
a{{_3}}ˏ{_2} \\
a{{_1}}ˏ{_3} \\
a{{_2}}ˏ{_3} \\
a{{_3}}ˏ{_3} \\
\end{array}
\right]
\end{equation}
```julia
X = collect(x)
```
\begin{equation}
\left[
\begin{array}{c}
x{_1} \\
x{_2} \\
x{_3} \\
\end{array}
\right]
\end{equation}
```julia
X'A*X/2
```
\begin{equation}
\frac{1}{2} \left( x{_1} a{{_1}}ˏ{_1} + x{_2} a{{_2}}ˏ{_1} + x{_3} a{{_3}}ˏ{_1} \right) x{_1} + \frac{1}{2} \left( x{_1} a{{_1}}ˏ{_2} + x{_2} a{{_2}}ˏ{_2} + x{_3} a{{_3}}ˏ{_2} \right) x{_2} + \frac{1}{2} \left( x{_1} a{{_1}}ˏ{_3} + x{_2} a{{_2}}ˏ{_3} + x{_3} a{{_3}}ˏ{_3} \right) x{_3}
\end{equation}
```julia
X'A*X/2 |> expand
```
\begin{equation}
\frac{1}{2} x{_1}^{2} a{{_1}}ˏ{_1} + \frac{1}{2} x{_2}^{2} a{{_2}}ˏ{_2} + \frac{1}{2} x{_3}^{2} a{{_3}}ˏ{_3} + \frac{1}{2} x{_1} x{_2} a{{_1}}ˏ{_2} + \frac{1}{2} x{_1} x{_2} a{{_2}}ˏ{_1} + \frac{1}{2} x{_1} x{_3} a{{_1}}ˏ{_3} + \frac{1}{2} x{_1} x{_3} a{{_3}}ˏ{_1} + \frac{1}{2} x{_2} x{_3} a{{_2}}ˏ{_3} + \frac{1}{2} x{_2} x{_3} a{{_3}}ˏ{_2}
\end{equation}
```julia
Symbolics.gradient(X'A*X/2, X)
```
\begin{equation}
\left[
\begin{array}{c}
x{_1} a{{_1}}ˏ{_1} + \frac{1}{2} x{_2} a{{_1}}ˏ{_2} + \frac{1}{2} x{_2} a{{_2}}ˏ{_1} + \frac{1}{2} x{_3} a{{_1}}ˏ{_3} + \frac{1}{2} x{_3} a{{_3}}ˏ{_1} \\
x{_2} a{{_2}}ˏ{_2} + \frac{1}{2} x{_1} a{{_1}}ˏ{_2} + \frac{1}{2} x{_1} a{{_2}}ˏ{_1} + \frac{1}{2} x{_3} a{{_2}}ˏ{_3} + \frac{1}{2} x{_3} a{{_3}}ˏ{_2} \\
x{_3} a{{_3}}ˏ{_3} + \frac{1}{2} x{_1} a{{_1}}ˏ{_3} + \frac{1}{2} x{_1} a{{_3}}ˏ{_1} + \frac{1}{2} x{_2} a{{_2}}ˏ{_3} + \frac{1}{2} x{_2} a{{_3}}ˏ{_2} \\
\end{array}
\right]
\end{equation}
```julia
f_expr = build_function(Symbolics.gradient(X'A*X/2, X), A, X)
```
(:(function (ˍ₋arg1, ˍ₋arg2)
#= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:282 =#
#= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:283 =#
let var"a[1, 1]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[1]), var"a[2, 1]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[2]), var"a[3, 1]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[3]), var"a[1, 2]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[4]), var"a[2, 2]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[5]), var"a[3, 2]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[6]), var"a[1, 3]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[7]), var"a[2, 3]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[8]), var"a[3, 3]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[9]), var"x[1]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg2[1]), var"x[2]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg2[2]), var"x[3]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg2[3])
#= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:375 =#
(SymbolicUtils.Code.create_array)(typeof(ˍ₋arg1), nothing, Val{1}(), Val{(3,)}(), (+)((+)((*)(var"x[1]", var"a[1, 1]"), (*)(1//2, var"x[2]", var"a[1, 2]"), (*)(1//2, var"x[2]", var"a[2, 1]"), (*)(1//2, var"x[3]", var"a[1, 3]")), (+)((*)(1//2, var"x[3]", var"a[3, 1]"))), (+)((+)((*)(var"x[2]", var"a[2, 2]"), (*)(1//2, var"x[1]", var"a[1, 2]"), (*)(1//2, var"x[1]", var"a[2, 1]"), (*)(1//2, var"x[3]", var"a[2, 3]")), (+)((*)(1//2, var"x[3]", var"a[3, 2]"))), (+)((+)((*)(var"x[3]", var"a[3, 3]"), (*)(1//2, var"x[1]", var"a[1, 3]"), (*)(1//2, var"x[1]", var"a[3, 1]"), (*)(1//2, var"x[2]", var"a[2, 3]")), (+)((*)(1//2, var"x[2]", var"a[3, 2]"))))
end
end), :(function (ˍ₋out, ˍ₋arg1, ˍ₋arg2)
#= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:282 =#
#= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:283 =#
let var"a[1, 1]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[1]), var"a[2, 1]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[2]), var"a[3, 1]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[3]), var"a[1, 2]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[4]), var"a[2, 2]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[5]), var"a[3, 2]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[6]), var"a[1, 3]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[7]), var"a[2, 3]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[8]), var"a[3, 3]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[9]), var"x[1]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg2[1]), var"x[2]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg2[2]), var"x[3]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg2[3])
#= D:\.julia\packages\Symbolics\fd3w9\src\build_function.jl:373 =#
#= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:329 =# @inbounds begin
#= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:325 =#
ˍ₋out[1] = (+)((+)((*)(var"x[1]", var"a[1, 1]"), (*)(1//2, var"x[2]", var"a[1, 2]"), (*)(1//2, var"x[2]", var"a[2, 1]"), (*)(1//2, var"x[3]", var"a[1, 3]")), (+)((*)(1//2, var"x[3]", var"a[3, 1]")))
ˍ₋out[2] = (+)((+)((*)(var"x[2]", var"a[2, 2]"), (*)(1//2, var"x[1]", var"a[1, 2]"), (*)(1//2, var"x[1]", var"a[2, 1]"), (*)(1//2, var"x[3]", var"a[2, 3]")), (+)((*)(1//2, var"x[3]", var"a[3, 2]")))
ˍ₋out[3] = (+)((+)((*)(var"x[3]", var"a[3, 3]"), (*)(1//2, var"x[1]", var"a[1, 3]"), (*)(1//2, var"x[1]", var"a[3, 1]"), (*)(1//2, var"x[2]", var"a[2, 3]")), (+)((*)(1//2, var"x[2]", var"a[3, 2]")))
#= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:327 =#
nothing
end
end
end))
```julia
f_expr[1] |> Base.remove_linenums!
```
:(function (ˍ₋arg1, ˍ₋arg2)
let var"a[1, 1]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[1]), var"a[2, 1]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[2]), var"a[3, 1]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[3]), var"a[1, 2]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[4]), var"a[2, 2]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[5]), var"a[3, 2]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[6]), var"a[1, 3]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[7]), var"a[2, 3]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[8]), var"a[3, 3]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[9]), var"x[1]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg2[1]), var"x[2]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg2[2]), var"x[3]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg2[3])
(SymbolicUtils.Code.create_array)(typeof(ˍ₋arg1), nothing, Val{1}(), Val{(3,)}(), (+)((+)((*)(var"x[1]", var"a[1, 1]"), (*)(1//2, var"x[2]", var"a[1, 2]"), (*)(1//2, var"x[2]", var"a[2, 1]"), (*)(1//2, var"x[3]", var"a[1, 3]")), (+)((*)(1//2, var"x[3]", var"a[3, 1]"))), (+)((+)((*)(var"x[2]", var"a[2, 2]"), (*)(1//2, var"x[1]", var"a[1, 2]"), (*)(1//2, var"x[1]", var"a[2, 1]"), (*)(1//2, var"x[3]", var"a[2, 3]")), (+)((*)(1//2, var"x[3]", var"a[3, 2]"))), (+)((+)((*)(var"x[3]", var"a[3, 3]"), (*)(1//2, var"x[1]", var"a[1, 3]"), (*)(1//2, var"x[1]", var"a[3, 1]"), (*)(1//2, var"x[2]", var"a[2, 3]")), (+)((*)(1//2, var"x[2]", var"a[3, 2]"))))
end
end)
```julia
f = eval(f_expr[1])
```
#1 (generic function with 1 method)
```julia
AA = SA[
2 -1 0
-1 2 -1
0 -1 2
]
```
3×3 SMatrix{3, 3, Int64, 9} with indices SOneTo(3)×SOneTo(3):
2 -1 0
-1 2 -1
0 -1 2
```julia
XX = SVector(X...)
```
\begin{equation}
\left[
\begin{array}{c}
x{_1} \\
x{_2} \\
x{_3} \\
\end{array}
\right]
\end{equation}
```julia
f(AA, XX) |> typeof
```
SVector{3, Num} (alias for SArray{Tuple{3}, Num, 1, 3})
```julia
string(f_expr[1]) |> print
```
function (ˍ₋arg1, ˍ₋arg2)
let var"a[1, 1]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[1]), var"a[2, 1]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[2]), var"a[3, 1]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[3]), var"a[1, 2]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[4]), var"a[2, 2]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[5]), var"a[3, 2]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[6]), var"a[1, 3]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[7]), var"a[2, 3]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[8]), var"a[3, 3]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[9]), var"x[1]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg2[1]), var"x[2]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg2[2]), var"x[3]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg2[3])
(SymbolicUtils.Code.create_array)(typeof(ˍ₋arg1), nothing, Val{1}(), Val{(3,)}(), (+)((+)((*)(var"x[1]", var"a[1, 1]"), (*)(1//2, var"x[2]", var"a[1, 2]"), (*)(1//2, var"x[2]", var"a[2, 1]"), (*)(1//2, var"x[3]", var"a[1, 3]")), (+)((*)(1//2, var"x[3]", var"a[3, 1]"))), (+)((+)((*)(var"x[2]", var"a[2, 2]"), (*)(1//2, var"x[1]", var"a[1, 2]"), (*)(1//2, var"x[1]", var"a[2, 1]"), (*)(1//2, var"x[3]", var"a[2, 3]")), (+)((*)(1//2, var"x[3]", var"a[3, 2]"))), (+)((+)((*)(var"x[3]", var"a[3, 3]"), (*)(1//2, var"x[1]", var"a[1, 3]"), (*)(1//2, var"x[1]", var"a[3, 1]"), (*)(1//2, var"x[2]", var"a[2, 3]")), (+)((*)(1//2, var"x[2]", var"a[3, 2]"))))
end
end
```julia
g_expr = build_function(Symbolics.gradient(X'A*X/2, X), A, X; expression=Val(false))
```
(RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:ˍ₋arg1, :ˍ₋arg2), Symbolics.var"#_RGF_ModTag", Symbolics.var"#_RGF_ModTag", (0x9a8745f3, 0xc6f732c3, 0x31df4e8c, 0x4a5420e1, 0xa9b39910)}(quote
#= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:282 =#
#= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:283 =#
let var"a[1, 1]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[1]), var"a[2, 1]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[2]), var"a[3, 1]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[3]), var"a[1, 2]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[4]), var"a[2, 2]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[5]), var"a[3, 2]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[6]), var"a[1, 3]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[7]), var"a[2, 3]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[8]), var"a[3, 3]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[9]), var"x[1]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg2[1]), var"x[2]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg2[2]), var"x[3]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg2[3])
#= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:375 =#
(SymbolicUtils.Code.create_array)(typeof(ˍ₋arg1), nothing, Val{1}(), Val{(3,)}(), (+)((+)((*)(var"x[1]", var"a[1, 1]"), (*)(1//2, var"x[2]", var"a[1, 2]"), (*)(1//2, var"x[2]", var"a[2, 1]"), (*)(1//2, var"x[3]", var"a[1, 3]")), (+)((*)(1//2, var"x[3]", var"a[3, 1]"))), (+)((+)((*)(var"x[2]", var"a[2, 2]"), (*)(1//2, var"x[1]", var"a[1, 2]"), (*)(1//2, var"x[1]", var"a[2, 1]"), (*)(1//2, var"x[3]", var"a[2, 3]")), (+)((*)(1//2, var"x[3]", var"a[3, 2]"))), (+)((+)((*)(var"x[3]", var"a[3, 3]"), (*)(1//2, var"x[1]", var"a[1, 3]"), (*)(1//2, var"x[1]", var"a[3, 1]"), (*)(1//2, var"x[2]", var"a[2, 3]")), (+)((*)(1//2, var"x[2]", var"a[3, 2]"))))
end
end), RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:ˍ₋out, :ˍ₋arg1, :ˍ₋arg2), Symbolics.var"#_RGF_ModTag", Symbolics.var"#_RGF_ModTag", (0xc5f304e4, 0xcebd6a78, 0x1617d93f, 0x8bbba735, 0x070674d5)}(quote
#= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:282 =#
#= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:283 =#
let var"a[1, 1]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[1]), var"a[2, 1]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[2]), var"a[3, 1]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[3]), var"a[1, 2]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[4]), var"a[2, 2]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[5]), var"a[3, 2]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[6]), var"a[1, 3]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[7]), var"a[2, 3]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[8]), var"a[3, 3]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[9]), var"x[1]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg2[1]), var"x[2]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg2[2]), var"x[3]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg2[3])
#= D:\.julia\packages\Symbolics\fd3w9\src\build_function.jl:373 =#
#= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:329 =# @inbounds begin
#= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:325 =#
ˍ₋out[1] = (+)((+)((*)(var"x[1]", var"a[1, 1]"), (*)(1//2, var"x[2]", var"a[1, 2]"), (*)(1//2, var"x[2]", var"a[2, 1]"), (*)(1//2, var"x[3]", var"a[1, 3]")), (+)((*)(1//2, var"x[3]", var"a[3, 1]")))
ˍ₋out[2] = (+)((+)((*)(var"x[2]", var"a[2, 2]"), (*)(1//2, var"x[1]", var"a[1, 2]"), (*)(1//2, var"x[1]", var"a[2, 1]"), (*)(1//2, var"x[3]", var"a[2, 3]")), (+)((*)(1//2, var"x[3]", var"a[3, 2]")))
ˍ₋out[3] = (+)((+)((*)(var"x[3]", var"a[3, 3]"), (*)(1//2, var"x[1]", var"a[1, 3]"), (*)(1//2, var"x[1]", var"a[3, 1]"), (*)(1//2, var"x[2]", var"a[2, 3]")), (+)((*)(1//2, var"x[2]", var"a[3, 2]")))
#= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:327 =#
nothing
end
end
end))
```julia
g_expr[1] |> Base.remove_linenums!
```
RuntimeGeneratedFunction(#=in Symbolics=#, #=using Symbolics=#, :((ˍ₋arg1, ˍ₋arg2)->begin
#= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:282 =#
#= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:283 =#
let var"a[1, 1]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[1]), var"a[2, 1]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[2]), var"a[3, 1]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[3]), var"a[1, 2]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[4]), var"a[2, 2]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[5]), var"a[3, 2]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[6]), var"a[1, 3]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[7]), var"a[2, 3]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[8]), var"a[3, 3]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[9]), var"x[1]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg2[1]), var"x[2]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg2[2]), var"x[3]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg2[3])
#= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:375 =#
(SymbolicUtils.Code.create_array)(typeof(ˍ₋arg1), nothing, Val{1}(), Val{(3,)}(), (+)((+)((*)(var"x[1]", var"a[1, 1]"), (*)(1//2, var"x[2]", var"a[1, 2]"), (*)(1//2, var"x[2]", var"a[2, 1]"), (*)(1//2, var"x[3]", var"a[1, 3]")), (+)((*)(1//2, var"x[3]", var"a[3, 1]"))), (+)((+)((*)(var"x[2]", var"a[2, 2]"), (*)(1//2, var"x[1]", var"a[1, 2]"), (*)(1//2, var"x[1]", var"a[2, 1]"), (*)(1//2, var"x[3]", var"a[2, 3]")), (+)((*)(1//2, var"x[3]", var"a[3, 2]"))), (+)((+)((*)(var"x[3]", var"a[3, 3]"), (*)(1//2, var"x[1]", var"a[1, 3]"), (*)(1//2, var"x[1]", var"a[3, 1]"), (*)(1//2, var"x[2]", var"a[2, 3]")), (+)((*)(1//2, var"x[2]", var"a[3, 2]"))))
end
end))
```julia
typeof(g_expr[1])
```
RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:ˍ₋arg1, :ˍ₋arg2), Symbolics.var"#_RGF_ModTag", Symbolics.var"#_RGF_ModTag", (0x9a8745f3, 0xc6f732c3, 0x31df4e8c, 0x4a5420e1, 0xa9b39910)}
```julia
g = g_expr[1]
```
RuntimeGeneratedFunction(#=in Symbolics=#, #=using Symbolics=#, :((ˍ₋arg1, ˍ₋arg2)->begin
#= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:282 =#
#= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:283 =#
let var"a[1, 1]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[1]), var"a[2, 1]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[2]), var"a[3, 1]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[3]), var"a[1, 2]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[4]), var"a[2, 2]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[5]), var"a[3, 2]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[6]), var"a[1, 3]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[7]), var"a[2, 3]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[8]), var"a[3, 3]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg1[9]), var"x[1]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg2[1]), var"x[2]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg2[2]), var"x[3]" = #= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:169 =# @inbounds(ˍ₋arg2[3])
#= D:\.julia\packages\SymbolicUtils\Hwe4r\src\code.jl:375 =#
(SymbolicUtils.Code.create_array)(typeof(ˍ₋arg1), nothing, Val{1}(), Val{(3,)}(), (+)((+)((*)(var"x[1]", var"a[1, 1]"), (*)(1//2, var"x[2]", var"a[1, 2]"), (*)(1//2, var"x[2]", var"a[2, 1]"), (*)(1//2, var"x[3]", var"a[1, 3]")), (+)((*)(1//2, var"x[3]", var"a[3, 1]"))), (+)((+)((*)(var"x[2]", var"a[2, 2]"), (*)(1//2, var"x[1]", var"a[1, 2]"), (*)(1//2, var"x[1]", var"a[2, 1]"), (*)(1//2, var"x[3]", var"a[2, 3]")), (+)((*)(1//2, var"x[3]", var"a[3, 2]"))), (+)((+)((*)(var"x[3]", var"a[3, 3]"), (*)(1//2, var"x[1]", var"a[1, 3]"), (*)(1//2, var"x[1]", var"a[3, 1]"), (*)(1//2, var"x[2]", var"a[2, 3]")), (+)((*)(1//2, var"x[2]", var"a[3, 2]"))))
end
end))
```julia
g(AA, XX) |> typeof
```
SVector{3, Num} (alias for SArray{Tuple{3}, Num, 1, 3})
```julia
```
| b7d7fb98f1d1e7185aeff81631ff2381d765d268 | 31,848 | ipynb | Jupyter Notebook | 0020/Symbolics example.ipynb | genkuroki/public | 339ea5dfd424492a6b21d1df299e52d48902de18 | [
"MIT"
]
| 10 | 2021-06-06T00:33:49.000Z | 2022-01-24T06:56:08.000Z | 0020/Symbolics example.ipynb | genkuroki/public | 339ea5dfd424492a6b21d1df299e52d48902de18 | [
"MIT"
]
| null | null | null | 0020/Symbolics example.ipynb | genkuroki/public | 339ea5dfd424492a6b21d1df299e52d48902de18 | [
"MIT"
]
| 3 | 2021-08-02T11:58:34.000Z | 2021-12-11T11:46:05.000Z | 52.903654 | 1,287 | 0.474315 | true | 10,191 | Qwen/Qwen-72B | 1. YES
2. YES | 0.891811 | 0.757794 | 0.675809 | __label__swe_Latn | 0.059362 | 0.408463 |
# Tutorial Overview
### This tutorial is divided into 4 parts; they are:
- Expected Value
- Variance
- Covariance
- Covariance Matrix
# Expected Value
## Finite case
Let $X$ be a random variable with a finite number of finite outcomes $x_1, x_2, \ldots, x_k$ occurring with probabilities $p_1, p_2, \ldots, p_k,$ respectively. The 'expectation' of $X$ is defined as
$\operatorname{E}[X] =\sum_{i=1}^k x_i\,p_i=x_1p_1 + x_2p_2 + \cdots + x_kp_k.$
Since all probabilities $p_i$ add up to 1 ($p_1 + p_2 + \cdots + p_k = 1$), the expected value is the weighted average, with $p_i$’s being the weights.
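As a quick numerical check of the finite-case formula, the weighted sum reproduces the familiar value 3.5 for a fair six-sided die (a minimal sketch; the fair-die probabilities are an illustrative assumption):
```python
import numpy as np

outcomes = np.arange(1, 7)         # x_1, ..., x_6
probs = np.full(6, 1 / 6)          # p_1, ..., p_6 for a fair die

expected_value = np.sum(outcomes * probs)   # sum_i x_i * p_i
print(expected_value)              # 3.5
```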
## Dice Roll Example
```python
import numpy
import matplotlib.pyplot as plt
```
```python
N = 100000
```
```python
roll = numpy.zeros(N, dtype=int)
```
```python
expectation = numpy.zeros(N)
```
```python
for i in range(N):
roll[i] = numpy.random.randint(1, 7)
```
```python
for i in range(1, N):
expectation[i] = numpy.mean(roll[0:i])
```
```python
plt.plot(expectation)
plt.title("Expectation of a dice roll");
```
# Variance
## Definition
The variance of a random variable $X$ is the expected value of the squared deviation from the mean of $X$, $\mu = \operatorname{E}[X]$:
$$ \operatorname{Var}(X) = \operatorname{E}\left[(X - \mu)^2 \right]. $$
This definition encompasses random variables generated by processes that are discrete, continuous, neither (e.g., Cantor-distributed), or mixed. The variance can also be thought of as the covariance of a random variable with itself:
$$\operatorname{Var}(X) = \operatorname{Cov}(X, X).$$
The variance is also equivalent to the second cumulant of a probability distribution that generates $X$. The variance is typically designated as $\operatorname{Var}(X)$, $\sigma^2_X$, or simply $\sigma^2$ (pronounced "sigma squared"). The expression for the variance can be expanded:
$$\begin{align}
\operatorname{Var}(X) &= \operatorname{E}\left[(X - \operatorname{E}[X])^2\right] \\[4pt]
&= \operatorname{E}\left[X^2 - 2X\operatorname{E}[X] + \operatorname{E}[X]^2\right] \\[4pt]
&= \operatorname{E}\left[X^2\right] - 2\operatorname{E}[X]\operatorname{E}[X] + \operatorname{E}[X]^2 \\[4pt]
&= \operatorname{E}\left[X^2 \right] - \operatorname{E}[X]^2
\end{align}$$
In other words, the variance of $X$ is equal to the mean of the square of $X$ minus the square of the mean of $X$. This equation should not be used for computations using floating point arithmetic, because it suffers from catastrophic cancellation if the two components are similar in magnitude. Numerically stable alternatives exist (see algorithms for calculating variance).
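As a small sketch (again assuming a fair six-sided die), the defining formula and the shortcut give the same variance when evaluated directly:
```python
import numpy as np

outcomes = np.arange(1, 7)
probs = np.full(6, 1 / 6)

mu = np.sum(outcomes * probs)
var_definition = np.sum(probs * (outcomes - mu) ** 2)     # E[(X - mu)^2]
var_shortcut = np.sum(probs * outcomes ** 2) - mu ** 2    # E[X^2] - E[X]^2
print(var_definition, var_shortcut)                       # both ~2.9167
```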
## Dice Roll Example again
```python
N = 1000
```
```python
roll = numpy.zeros(N, dtype=int)
```
```python
variance = numpy.zeros(N)
```
```python
for i in range(N):
roll[i] = numpy.random.randint(1, 7)
```
```python
for i in range(1, N):
variance[i] = numpy.var(roll[0:i])
```
```python
plt.plot(variance)
plt.title("Variance of a dice roll");
```
# Covariance
## Definition
For two jointly distributed real real-valued random variables $X$ and $Y$ with finite second moments, the covariance is defined as the expected value (or mean) of the product of their deviations from their individual expected values
$$\operatorname{cov}(X,Y) = \operatorname{E}{\big[(X - \operatorname{E}[X])(Y - \operatorname{E}[Y])\big]}$$
where $\operatorname{E}[X]$ is the expected value of $X$, also known as the mean of $X$. The covariance is also sometimes denoted $\sigma_{XY}$ or $\sigma(X,Y)$, in analogy to variance. By using the linearity property of expectations, this can be simplified to the expected value of their product minus the product of their expected values:
$$
\begin{align}
\operatorname{cov}(X, Y)
&= \operatorname{E}\left[\left(X - \operatorname{E}\left[X\right]\right) \left(Y - \operatorname{E}\left[Y\right]\right)\right] \\
&= \operatorname{E}\left[X Y - X \operatorname{E}\left[Y\right] - \operatorname{E}\left[X\right] Y + \operatorname{E}\left[X\right] \operatorname{E}\left[Y\right]\right] \\
&= \operatorname{E}\left[X Y\right] - \operatorname{E}\left[X\right] \operatorname{E}\left[Y\right] - \operatorname{E}\left[X\right] \operatorname{E}\left[Y\right] + \operatorname{E}\left[X\right] \operatorname{E}\left[Y\right] \\
&= \operatorname{E}\left[X Y\right] - \operatorname{E}\left[X\right] \operatorname{E}\left[Y\right],
\end{align}
$$
but this equation is susceptible to catastrophic cancellation (loss of significance).
The units of measurement of the covariance $\operatorname{cov}(X,Y)$ are those of $X$ times those of $Y$. By contrast, correlation coefficients, which depend on the covariance, are a dimensionless measure of linear dependence. (In fact, correlation coefficients can simply be understood as a normalized version of covariance.)
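A minimal sketch of that normalization on synthetic data (the variables below are made up purely for illustration):
```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = 2 * x + rng.normal(size=1000)    # y moves with x, so the covariance is positive

cov_xy = np.cov(x, y)[0, 1]
corr_xy = cov_xy / (np.std(x, ddof=1) * np.std(y, ddof=1))   # covariance normalized by the standard deviations
print(cov_xy, corr_xy, np.corrcoef(x, y)[0, 1])              # corr_xy matches np.corrcoef
```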
## Iris Dataset Example
```python
import pandas as pd
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
import seaborn as sns  # seaborn.apionly was removed in newer seaborn versions
from sklearn import datasets
%matplotlib inline
```
```python
iris_data = datasets.load_iris()
```
```python
iris_data.keys()
```
dict_keys(['data', 'target', 'target_names', 'DESCR', 'feature_names', 'filename'])
```python
df = pd.concat(
[
pd.DataFrame(iris_data["data"], columns=iris_data["feature_names"]),
pd.DataFrame(iris_data["target"], columns=["target"]),
],
axis=1,
)
```
```python
df.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>sepal length (cm)</th>
<th>sepal width (cm)</th>
<th>petal length (cm)</th>
<th>petal width (cm)</th>
<th>target</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>5.1</td>
<td>3.5</td>
<td>1.4</td>
<td>0.2</td>
<td>0</td>
</tr>
<tr>
<td>1</td>
<td>4.9</td>
<td>3.0</td>
<td>1.4</td>
<td>0.2</td>
<td>0</td>
</tr>
<tr>
<td>2</td>
<td>4.7</td>
<td>3.2</td>
<td>1.3</td>
<td>0.2</td>
<td>0</td>
</tr>
<tr>
<td>3</td>
<td>4.6</td>
<td>3.1</td>
<td>1.5</td>
<td>0.2</td>
<td>0</td>
</tr>
<tr>
<td>4</td>
<td>5.0</td>
<td>3.6</td>
<td>1.4</td>
<td>0.2</td>
<td>0</td>
</tr>
</tbody>
</table>
</div>
```python
iris_data["target_names"]
```
array(['setosa', 'versicolor', 'virginica'], dtype='<U10')
```python
df.target = df.target.apply(lambda x: iris_data["target_names"][x])
```
```python
df.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>sepal length (cm)</th>
<th>sepal width (cm)</th>
<th>petal length (cm)</th>
<th>petal width (cm)</th>
<th>target</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>5.1</td>
<td>3.5</td>
<td>1.4</td>
<td>0.2</td>
<td>setosa</td>
</tr>
<tr>
<td>1</td>
<td>4.9</td>
<td>3.0</td>
<td>1.4</td>
<td>0.2</td>
<td>setosa</td>
</tr>
<tr>
<td>2</td>
<td>4.7</td>
<td>3.2</td>
<td>1.3</td>
<td>0.2</td>
<td>setosa</td>
</tr>
<tr>
<td>3</td>
<td>4.6</td>
<td>3.1</td>
<td>1.5</td>
<td>0.2</td>
<td>setosa</td>
</tr>
<tr>
<td>4</td>
<td>5.0</td>
<td>3.6</td>
<td>1.4</td>
<td>0.2</td>
<td>setosa</td>
</tr>
</tbody>
</table>
</div>
### create scatterplot matrix
```python
fig = sns.pairplot(data=df, hue="target")
plt.show()
```
```python
X = df[df.columns[:-1]].values
X.shape
```
(150, 4)
## Sample Covariance
measures how two variables vary jointly about their means
positive covariance: the two variables tend to be both above or both below their respective means at the same time
variables with a positive covariance are positively "correlated" -- they go up or down together
negative covariance: values of one variable tend to be above its mean while the other is below its mean
in other words, negative covariance means that if one variable goes up, the other variable tends to go down
$$\sigma_{x,y} = \frac{1}{n-1} \sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})$$
note that, similar to variance, covariance is not dimensionless: its units are the units of $x$ times the units of $y$
covariance can be understood as the "variability due to codependence" whereas the variance is the "independent variability"
```python
x_mean, y_mean = np.mean(X[:, 2:4], axis=0)
```
```python
sum([(x - x_mean) * (y - y_mean) for x, y in zip(X[:, 2], X[:, 3])]) / (X.shape[0] - 1)
```
1.2956093959731545
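The same number can be cross-checked against NumPy's built-in estimator (a quick sanity check using the feature matrix `X` defined above):
```python
np.cov(X[:, 2], X[:, 3])[0, 1]   # ~1.2956, matching the manual computation
```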
# Covariance Matrix
## Definition
In what follows, boldfaced unsubscripted $\mathbf{X}$ and $\mathbf{Y}$ are used to refer to random vectors, and unboldfaced subscripted $X_i$ and $Y_i$ are used to refer to scalar random variables.
If the entries in the column vector
$\mathbf{X}=(X_1, X_2, ... , X_n)^{\mathrm T}$
are random variables, each with finite variance and expected value, then the covariance matrix $\operatorname{K}_{\mathbf{X}\mathbf{X}}$ is the matrix whose $(i,j)$ entry is the covariance
$\operatorname{K}_{X_i X_j} = \operatorname{cov}[X_i, X_j] = \operatorname{E}[(X_i - \operatorname{E}[X_i])(X_j - \operatorname{E}[X_j])]$
where the operator $\operatorname{E}$ denotes the expected value (mean) of its argument.
In other words,
$
\operatorname{K}_{\mathbf{X}\mathbf{X}}=
\begin{bmatrix}
\mathrm{E}[(X_1 - \operatorname{E}[X_1])(X_1 - \operatorname{E}[X_1])] & \mathrm{E}[(X_1 - \operatorname{E}[X_1])(X_2 - \operatorname{E}[X_2])] & \cdots & \mathrm{E}[(X_1 - \operatorname{E}[X_1])(X_n - \operatorname{E}[X_n])] \\ \\
\mathrm{E}[(X_2 - \operatorname{E}[X_2])(X_1 - \operatorname{E}[X_1])] & \mathrm{E}[(X_2 - \operatorname{E}[X_2])(X_2 - \operatorname{E}[X_2])] & \cdots & \mathrm{E}[(X_2 - \operatorname{E}[X_2])(X_n - \operatorname{E}[X_n])] \\ \\
\vdots & \vdots & \ddots & \vdots \\ \\
\mathrm{E}[(X_n - \operatorname{E}[X_n])(X_1 - \operatorname{E}[X_1])] & \mathrm{E}[(X_n - \operatorname{E}[X_n])(X_2 - \operatorname{E}[X_2])] & \cdots & \mathrm{E}[(X_n - \operatorname{E}[X_n])(X_n - \operatorname{E}[X_n])]
\end{bmatrix}
$
### Covariance of Iris dataset Features
```python
numpy.cov(X.T)
```
array([[ 0.68569351, -0.042434 , 1.27431544, 0.51627069],
[-0.042434 , 0.18997942, -0.32965638, -0.12163937],
[ 1.27431544, -0.32965638, 3.11627785, 1.2956094 ],
[ 0.51627069, -0.12163937, 1.2956094 , 0.58100626]])
```python
covariance_matrix = pd.DataFrame(
numpy.cov(X.T), columns=iris_data["feature_names"], index=iris_data["feature_names"]
)
```
```python
covariance_matrix
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>sepal length (cm)</th>
<th>sepal width (cm)</th>
<th>petal length (cm)</th>
<th>petal width (cm)</th>
</tr>
</thead>
<tbody>
<tr>
<td>sepal length (cm)</td>
<td>0.685694</td>
<td>-0.042434</td>
<td>1.274315</td>
<td>0.516271</td>
</tr>
<tr>
<td>sepal width (cm)</td>
<td>-0.042434</td>
<td>0.189979</td>
<td>-0.329656</td>
<td>-0.121639</td>
</tr>
<tr>
<td>petal length (cm)</td>
<td>1.274315</td>
<td>-0.329656</td>
<td>3.116278</td>
<td>1.295609</td>
</tr>
<tr>
<td>petal width (cm)</td>
<td>0.516271</td>
<td>-0.121639</td>
<td>1.295609</td>
<td>0.581006</td>
</tr>
</tbody>
</table>
</div>
## Heatmap of Covariance Matrix
```python
f, ax = plt.subplots(figsize=(11, 15))
heatmap = sns.heatmap(
covariance_matrix,
square=True,
linewidths=0.5,
cmap="coolwarm",
cbar_kws={"shrink": 0.4, "ticks": [-1, -0.5, 0, 0.5, 1]},
vmin=-1,
vmax=1,
annot=True,
annot_kws={"size": 12},
)
# add the column names as labels
ax.set_yticklabels(covariance_matrix.columns, rotation=0)
ax.set_xticklabels(covariance_matrix.columns)
sns.set_style({"xtick.bottom": True}, {"ytick.left": True})
```
| c61be359af189cd7275eb68d3a001457a0d77497 | 219,689 | ipynb | Jupyter Notebook | content/week-06/samples/Expectation Variance and Covariance Using NumPy.ipynb | GiorgiBeriashvili/school-of-ai | abd033fecf32c1222da097aa8420db6c69b357e6 | [
"Apache-2.0",
"MIT"
]
| null | null | null | content/week-06/samples/Expectation Variance and Covariance Using NumPy.ipynb | GiorgiBeriashvili/school-of-ai | abd033fecf32c1222da097aa8420db6c69b357e6 | [
"Apache-2.0",
"MIT"
]
| null | null | null | content/week-06/samples/Expectation Variance and Covariance Using NumPy.ipynb | GiorgiBeriashvili/school-of-ai | abd033fecf32c1222da097aa8420db6c69b357e6 | [
"Apache-2.0",
"MIT"
]
| null | null | null | 244.098889 | 143,352 | 0.9065 | true | 4,287 | Qwen/Qwen-72B | 1. YES
2. YES | 0.931463 | 0.879147 | 0.818892 | __label__eng_Latn | 0.7673 | 0.740894 |
```python
import matplotlib.pyplot as plt
import matplotlib as mpl
import numpy as np
import sympy as sy
sy.init_printing()
```
```python
def round_expr(expr, num_digits):
return expr.xreplace({n : round(n, num_digits) for n in expr.atoms(sy.Number)})
```
# <font face="gotham" color="purple"> Systems of First-Order Equations</font>
In time series analysis, we study difference equations by writing them into a linear system. For instance,
$$
3y_{t+3} - 2y_{t+2} + 4y_{t+1} - y_t = 0
$$
We define
$$
\mathbf{x}_t =
\left[
\begin{matrix}
y_t\\
y_{t+1}\\
y_{t+2}
\end{matrix}
\right], \qquad
\mathbf{x}_{t+1} =
\left[
\begin{matrix}
y_{t+1}\\
y_{t+2}\\
y_{t+3}
\end{matrix}
\right]
$$
Rearranging the difference equation into a more convenient form,
$$
y_{t+3} = \frac{2}{3}y_{t+2} - \frac{4}{3}y_{t+1} + \frac{1}{3}y_{t}
$$
The difference equation can be rewritten as
$$
\mathbf{x}_{t+1} = A \mathbf{x}_{t}
$$
That is,
$$
\left[
\begin{matrix}
y_{t+1}\\
y_{t+2}\\
y_{t+3}
\end{matrix}
\right] =
\left[
\begin{matrix}
0 & 1 & 0\\
0 & 0 & 1\\
\frac{1}{3} & -\frac{4}{3} & \frac{2}{3}
\end{matrix}
\right]
\left[
\begin{matrix}
y_t\\
y_{t+1}\\
y_{t+2}
\end{matrix}
\right]
$$
In general, we make sure the difference equation looks like:
$$
y_{t+k} = a_1y_{t+k-1} + a_2y_{t+k-2} + ... + a_ky_{t}
$$
then rewrite as $\mathbf{x}_{t+1} = A \mathbf{x}_{t}$, where
$$
\mathbf{x}_t =
\left[
\begin{matrix}
y_{t}\\
y_{t+1}\\
\vdots\\
y_{t+k-1}
\end{matrix}
\right], \quad
\mathbf{x}_{t+1} =
\left[
\begin{matrix}
y_{t+1}\\
y_{t+2}\\
\vdots\\
y_{t+k}
\end{matrix}
\right]
$$
And also
$$A=\left[\begin{array}{ccccc}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & & 0 \\
\vdots & & & \ddots & \vdots \\
0 & 0 & 0 & & 1 \\
a_{k} & a_{k-1} & a_{k-2} & \cdots & a_{1}
\end{array}\right]$$
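As a small sketch, the companion matrix of the opening example can be built and iterated directly (the starting values below are arbitrary, chosen only for illustration):
```python
import numpy as np

A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1/3, -4/3, 2/3]])   # from y_{t+3} = (2/3) y_{t+2} - (4/3) y_{t+1} + (1/3) y_t

x_t = np.array([1.0, 0.5, 0.2])    # (y_0, y_1, y_2), arbitrary starting values
for _ in range(5):
    x_t = A @ x_t                  # advance the system one period
print(x_t)                         # (y_5, y_6, y_7)
```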
# <font face="gotham" color="purple"> Markov Chains</font>
A Markov chain is a type of stochastic process, commonly modeled by a difference equation; we will only touch the surface of this topic by walking through an example.
A Markov chain is described by the first-order difference equation $\mathbf{x}_{t+1} = P\mathbf{x}_t$, where $\mathbf{x}_t$ is called the <font face="gotham" color="red">state vector</font> and $P$ is called the <font face="gotham" color="red">stochastic matrix</font>.
Suppose there are 3 cities $A$, $B$ and $C$; the proportions of population migrating among the cities are given by the stochastic matrix below
$$
M =
\left[
\begin{matrix}
.89 & .07 & .10\\
.07 & .90 & .11\\
.04 & .03 & .79
\end{matrix}
\right]
$$
For instance, the first column means that $89\%$ of the population will stay in city $A$, $7\%$ will move to city $B$, and $4\%$ will move to city $C$. The first row means that $7\%$ of city $B$'s population and $10\%$ of city $C$'s population will migrate into $A$.
Suppose the initial populations of the 3 cities are $(593000, 230000, 709000)$; convert the entries into percentages of the total population.
```python
x = np.array([593000, 230000, 709000])
x = x/np.sum(x);x
```
array([0.38707572, 0.15013055, 0.46279373])
Input the stochastic matrix
```python
M = np.array([[.89, .07, .1], [.07, .9, .11], [.04, .03, .79]])
```
After the first period, the population proportions among the cities are
```python
x1 = M@x
x1
```
array([0.4012859 , 0.2131201 , 0.38559399])
The second period
```python
x2 = M@x1
x2
```
array([0.41062226, 0.26231345, 0.3270643 ])
The third period
```python
x3 = M@x2
x3
```
array([0.41652218, 0.30080273, 0.28267509])
We can construct a loop up to $\mathbf{x}_{100} = M\mathbf{x}_{99}$ and then plot the dynamic path. Notice that the curves flatten out after about 20 periods; we call this <font face="gotham" color="red">convergence to steady-state</font>.
```python
k = 100
X = np.zeros((k, 3))
X[0] = M@x
i = 0
while i+1 < 100:
X[i+1] = M@X[i]
i = i + 1
```
```python
fig, ax = plt.subplots(figsize = (12, 12))
la = ['City A', 'City B', 'City C']
s = '$%.3f$'
for i in [0, 1, 2]:
ax.plot(X[:, i], lw = 3, label = la[i] )
ax.text(x = 20, y = X[-1,i], s = s %X[-1,i], size = 16)
ax.axis([0, 20, 0, .6]) # No need to show more of x, it reaches steady-state around 20 periods
ax.legend(fontsize = 16)
ax.grid()
ax.set_title('Dynamics of Population Percentage')
plt.show()
```
# <font face="gotham" color="purple"> Eigenvalue and -vector in Markov Chain</font>
If the $M$ in the last example is diagonalizable, there are $n$ linearly independent eigenvectors and corresponding eigenvalues $\lambda_1,...,\lambda_n$. The eigenvalues can always be arranged so that $\left|\lambda_{1}\right| \geq\left|\lambda_{2}\right| \geq \cdots \geq\left|\lambda_{n}\right|$.
Also, because the $n$ eigenvectors form a basis of $\mathbb{R}^n$, any initial vector $\mathbf{x}_0 \in \mathbb{R}^n$ can be represented in this basis.
$$
\mathbf{x}_{0}=c_{1} \mathbf{v}_{1}+\cdots+c_{n} \mathbf{v}_{n}
$$
This is called <font face="gotham" color="red">eigenvector decomposition</font> of $\mathbf{x}_0$. Multiply by $A$
$$
\begin{aligned}
\mathbf{x}_{1}=A \mathbf{x}_{0} &=c_{1} A \mathbf{v}_{1}+\cdots+c_{n} A \mathbf{v}_{n} \\
&=c_{1} \lambda_{1} \mathbf{v}_{1}+\cdots+c_{n} \lambda_{n} \mathbf{v}_{n}
\end{aligned}
$$
In general, we have a formula for $\mathbf{x}_k$
$$
\mathbf{x}_{k}=c_{1}\left(\lambda_{1}\right)^{k} \mathbf{v}_{1}+\cdots+c_{n}\left(\lambda_{n}\right)^{k} \mathbf{v}_{n}
$$
Now we test whether $M$ has $n$ linearly independent eigenvectors.
```python
M = sy.Matrix([[.89, .07, .1], [.07, .9, .11], [.04, .03, .79]]);M
```
```python
M.is_diagonalizable()
```
True
$M$ is diagonalizable, which also means that $M$ has $n$ linearly independent eigenvectors.
```python
P, D = M.diagonalize()
P = round_expr(P,4); P # user-defined round function at the top of the notebook
```
```python
D = round_expr(D,4); D
```
First we find $\big[\mathbf{x}_0\big]_C$, i.e., the coordinates $c_1, c_2, c_3$.
```python
x0 = sy.Matrix([[.3870], [.1501], [0.4627]]);x0
```
```python
P_aug = P.row_join(x0)
P_aug_rref = round_expr(P_aug.rref()[0],4); P_aug_rref
```
```python
c = sy.zeros(3, 1)
for i in [0, 1, 2]:
c[i] = P_aug_rref[i, 3]
c = round_expr(c,4);c
```
Now we can use the formula to compute $\mathbf{x}_{100}$; it matches what we plotted in the graph above.
```python
x100 = c[0] * D[0, 0]**100 * P[:, 0]\
+ c[1] * D[1, 1]**100 * P[:, 1]\
+ c[2] * D[2, 2]**100 * P[:, 2]
x100 = round_expr(x100,4);x100
```
This is close enough to the steady-state.
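The steady state can also be read off directly: the columns of $M$ sum to one, so $1$ is an eigenvalue of $M$, and the steady state is the corresponding eigenvector rescaled so that its entries sum to one. A sketch with NumPy:
```python
M_np = np.array([[.89, .07, .10],
                 [.07, .90, .11],
                 [.04, .03, .79]])
eigvals, eigvecs = np.linalg.eig(M_np)
v = eigvecs[:, np.argmin(np.abs(eigvals - 1))].real   # eigenvector associated with eigenvalue 1
print(v / v.sum())                                    # steady-state population shares
```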
# <font face="gotham" color="purple"> Fractal Pictures</font>
Here is an example of fractal geometry, illustrating how a dynamic system built from affine transformations can create fractal pictures.
The algorithm repeatedly applies one of 4 affine transformations, chosen at random with probabilities $p_1, p_2, p_3, p_4$.
$$
\begin{array}{l}
T_{1}\left(\left[\begin{array}{l}
x \\
y
\end{array}\right]\right)=\left[\begin{array}{rr}
0.86 & 0.03 \\
-0.03 & 0.86
\end{array}\right]\left[\begin{array}{l}
x \\
y
\end{array}\right]+\left[\begin{array}{l}
0 \\
1.5
\end{array}\right], p_{1}=0.83 \\
T_{2}\left(\left[\begin{array}{l}
x \\
y
\end{array}\right]\right)=\left[\begin{array}{lr}
0.2 & -0.25 \\
0.21 & 0.23
\end{array}\right]\left[\begin{array}{l}
x \\
y
\end{array}\right]+\left[\begin{array}{l}
0 \\
1.5
\end{array}\right], p_{2}=0.09 \\
T_{3}\left(\left[\begin{array}{l}
x \\
y
\end{array}\right]\right)=\left[\begin{array}{rr}
-0.15 & 0.27 \\
0.25 & 0.26
\end{array}\right]\left[\begin{array}{l}
x \\
y
\end{array}\right]+\left[\begin{array}{l}
0 \\
0.45
\end{array}\right], p_{3}=0.07 \\
T_{4}\left(\left[\begin{array}{l}
x \\
y
\end{array}\right]\right)=\left[\begin{array}{ll}
0 & 0 \\
0 & 0.17
\end{array}\right]\left[\begin{array}{l}
x \\
y
\end{array}\right]+\left[\begin{array}{l}
0 \\
0
\end{array}\right], p_{4}=0.01
\end{array}
$$
The code below is self-explanatory.
```python
A = np.array([[[.86, .03],
[-.03, .86]],
[[.2, -.25],
[.21, .23]],
[[-.15, .27],
[.25, .26]],
[[0., 0.],
[0., .17]]])
a = np.array([[[0,1.5]],
[[0,1.5]],
[[0,0.45]],
[[0,0]]])
p1 = 1*np.ones(83)
p2 = 2*np.ones(9)
p3 = 3*np.ones(7)
p4 = 4*np.ones(1)
p = np.hstack((p1,p2,p3,p4))
k = 30000
fig, ax = plt.subplots(figsize = (5,8))
X = np.zeros((2,k))
for i in range(k-1):
n = np.random.randint(0, 100)
if p[n] == 1:
X[:,i+1] = A[0,:,:]@X[:,i]+a[0,:,:]
elif p[n] == 2:
X[:,i+1] = A[1,:,:]@X[:,i]+a[1,:,:]
elif p[n] == 3:
X[:,i+1] = A[2,:,:]@X[:,i]+a[2,:,:]
else:
X[:,i+1] = A[3,:,:]@X[:,i]+a[3,:,:]
ax.scatter(X[0,:],X[1,:], s = 1, color = 'g')
plt.show()
```
| a26e76aec44ef355161d91c39081c78b73c23238 | 241,574 | ipynb | Jupyter Notebook | Chapter 14 - Applications to Dynamic System.ipynb | testinggg-art/Linear_Algebra_With_Python | bd5c6bdac07e65b52e92960aee781f63489a0260 | [
"MIT"
]
| 1,719 | 2020-12-30T07:26:45.000Z | 2022-03-31T21:05:57.000Z | Chapter 14 - Applications to Dynamic System.ipynb | testinggg-art/Linear_Algebra_With_Python | bd5c6bdac07e65b52e92960aee781f63489a0260 | [
"MIT"
]
| 1 | 2021-01-13T00:02:03.000Z | 2021-01-13T00:02:03.000Z | Chapter 14 - Applications to Dynamic System.ipynb | testinggg-art/Linear_Algebra_With_Python | bd5c6bdac07e65b52e92960aee781f63489a0260 | [
"MIT"
]
| 421 | 2020-12-30T07:27:23.000Z | 2022-03-01T17:40:41.000Z | 272.349493 | 147,428 | 0.920923 | true | 3,335 | Qwen/Qwen-72B | 1. YES
2. YES | 0.808067 | 0.740174 | 0.598111 | __label__eng_Latn | 0.796541 | 0.227942 |
```python
import numpy as np
import pandas as pd
from sympy import pprint
import matplotlib.pyplot as plt
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import learning_curve
#combined_data
train = pd.read_csv('train.csv',header=None)
test = pd.read_csv('test.csv',header=None)
val = pd.read_csv('val.csv',header=None)
# use the validation set to compare different prediction models and pick the one with the lowest validation error
# reduce the raw columns to 20: nine 120-column block means, the following 14 columns as-is, and one pair sum
# (24 columns are built, the first 20 kept); columns 0-17 are features, columns 18-19 are the two targets
i=0;
data1=pd.DataFrame()
while(i<1080):
data1=pd.DataFrame.append(data1,val.iloc[:,i:(i+120)].mean(axis=1),ignore_index=True)
i=i+120;
i=1080;
while(i<1094):
data1=pd.DataFrame.append(data1,val.iloc[:,i:(i+2)].T,ignore_index=True)
i=i+2;
data1=pd.DataFrame.append(data1,val.iloc[:,i:(i+2)].sum(axis=1),ignore_index=True)
i=i+2;
data1=data1.T
data1=data1.iloc[:,0:20]
Xval=data1.iloc[:,0:18]
Yval=data1.iloc[:,18:20]
j=0;
test1=pd.DataFrame()
while(j<1080):
test1=pd.DataFrame.append(test1,test.iloc[:,j:(j+120)].mean(axis=1),ignore_index=True)
j=j+120;
j=1080;
while(j<1094):
test1=pd.DataFrame.append(test1,test.iloc[:,j:(j+2)].T,ignore_index=True)
j=j+2;
test1=pd.DataFrame.append(test1,test.iloc[:,j:(j+2)].sum(axis=1),ignore_index=True)
j=j+2;
test1=test1.T
test1=test1.iloc[:,0:20]
Xtest=test1.iloc[:,0:18]
Ytest=test1.iloc[:,18:20]
k=0;
train1=pd.DataFrame()
while(k<1080):
train1=pd.DataFrame.append(train1,train.iloc[:,k:(k+120)].mean(axis=1),ignore_index=True)
k=k+120;
k=1080;
while(k<1094):
train1=pd.DataFrame.append(train1,train.iloc[:,k:(k+2)].T,ignore_index=True)
k=k+2;
train1=pd.DataFrame.append(train1,train.iloc[:,k:(k+2)].sum(axis=1),ignore_index=True)
k=k+2;
train1=train1.T
train1=train1.iloc[:,0:20]
Xtrain=train1.iloc[:,0:18]
Ytrain=train1.iloc[:,18:20]
#LINEAR REGRESSION
from sklearn.linear_model import LinearRegression
model=LinearRegression()
model.fit(Xtrain,Ytrain)
weights=model.coef_
Yval_pred=model.predict(Xval)
Eval=mean_squared_error(Yval.iloc[:,0:2],Yval_pred[:,:], multioutput='raw_values')
print('Eval for LINEAR REGRESSION ',Eval)
#learning curve
training_sizes, training_scores,validation_scores = learning_curve(
estimator = model,
X = Xval,
y = Yval,
train_sizes = np.linspace(5, len(Xval) * 0.8, dtype = int)
)
line1 = plt.plot(
training_sizes, training_scores.mean(axis = 1), 'r')
```
```python
import tensorflow
from tensorflow import keras
from tensorflow.keras.models import Sequential  # use tf.keras consistently instead of standalone keras
from tensorflow.keras.layers import Dense
neural = Sequential([
Dense(18, activation='tanh'),
Dense(18, activation='tanh'),
Dense(2, activation='sigmoid')
])
neural.compile(optimizer='sgd',
loss='binary_crossentropy')
# fit on both targets (translational and rotational velocity)
hist = neural.fit(Xtrain, Ytrain, batch_size=1000, epochs=50,validation_data=(Xval, Yval))
Yval_pred1=neural.predict(Xval)
Eval1=mean_squared_error(Yval.iloc[:,0:1],Yval_pred1[:,0], multioutput='raw_values')
print('Eval of translational velocity for Neural Network',Eval1)
Eval2=mean_squared_error(Yval.iloc[:,1:2],Yval_pred1[:,1], multioutput='raw_values')
print('Eval of rotational velocity for Neural Network',Eval2)
#learning curve
plt.plot(hist.history['loss'])
plt.plot(hist.history['val_loss'])
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Val'], loc='upper right')
plt.show()
```
```python
from tensorflow.keras.layers import Dropout
from tensorflow.keras import regularizers
# the model is purposefully over-parameterized to show how regularization (L2 with λ = 0.01, plus dropout) works
neural1 = Sequential([
Dense(1000, activation='tanh'),
Dense(1000, activation='tanh'),
Dense(2, activation='sigmoid')])
neural1.compile(optimizer='sgd',
loss='binary_crossentropy')
hist1 = neural1.fit(Xtrain, Ytrain,batch_size=1000, epochs=50,validation_data=(Xval, Yval))
plt.plot(hist1.history['loss'])
plt.plot(hist1.history['val_loss'])
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Val'], loc='upper right')
plt.show()
neural2 = Sequential([
Dense(1000, activation='tanh', kernel_regularizer=regularizers.l2(0.01)),
Dropout(0.3),
Dense(1000, activation='tanh', kernel_regularizer=regularizers.l2(0.01)),
Dropout(0.3),
Dense(2, activation='sigmoid', kernel_regularizer=regularizers.l2(0.01))])
neural2.compile(optimizer='sgd',
loss='binary_crossentropy')
hist2 = neural2.fit(Xtrain, Ytrain,batch_size=1000, epochs=50,validation_data=(Xval, Yval))
plt.plot(hist2.history['loss'])
plt.plot(hist2.history['val_loss'])
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Val'], loc='upper right')
plt.show()
```
```python
#to_test
Xtest=test1.iloc[:,0:18]
Ytest=test1.iloc[:,18:20]
Ytest_pred=neural.predict(Xtest)
Etest_linvel=mean_squared_error(Ytest.iloc[:,0:1],Ytest_pred[:,0])
print('Etest for lin vel',Etest_linvel)
Etest_angvel=mean_squared_error(Ytest.iloc[:,1:2],Ytest_pred[:,1])
print('Etest for ang vel',Etest_angvel)
dvc=19
tol=0.1
E_out = Etest_linvel + np.sqrt((8/35861)*np.log(4*((2*35861)**(dvc+1))/tol))
print('Eout for translational velocity',E_out)
E_out1 = Etest_angvel + np.sqrt((8/35861)*np.log(4*((2*35861)**(dvc+1))/tol))
print('Eout for rotational velocity',E_out1)
```
Etest for lin vel 0.0843801184253638
Etest for ang vel 0.04773384888149761
Eout for translational velocity 0.30956200431383674
Eout for rotational velocity 0.27291573476997055
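A brief note on what the cell above computes (the constants `dvc = 19` and `tol = 0.1` are the notebook's own choices): it evaluates a VC-style generalization bound of the form

$$
E_{out} \;\le\; E_{test} + \sqrt{\frac{8}{N}\,\ln\!\left(\frac{4\,(2N)^{d_{VC}+1}}{\delta}\right)},
\qquad N = 35861,\; d_{VC} = 19,\; \delta = 0.1,
$$

so the printed $E_{out}$ values are pessimistic upper estimates of the out-of-sample error rather than point predictions.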
```python
```
| cfb2f56c3b4476b191126d5a5647e3af209f6d50 | 86,080 | ipynb | Jupyter Notebook | ml_final_project.ipynb | sharmithag/ML_Training-LiDaR-data | d4fd42f1a54a364ca38e1e9c482a970ed299b3fa | [
"MIT"
]
| null | null | null | ml_final_project.ipynb | sharmithag/ML_Training-LiDaR-data | d4fd42f1a54a364ca38e1e9c482a970ed299b3fa | [
"MIT"
]
| null | null | null | ml_final_project.ipynb | sharmithag/ML_Training-LiDaR-data | d4fd42f1a54a364ca38e1e9c482a970ed299b3fa | [
"MIT"
]
| null | null | null | 136.202532 | 17,112 | 0.794366 | true | 1,600 | Qwen/Qwen-72B | 1. YES
2. YES | 0.867036 | 0.70253 | 0.609119 | __label__eng_Latn | 0.157198 | 0.253517 |
```python
#Author-Vishal Burman
```
## Vectorization and mini-batch
```python
# H1=sigma(W1*X+b1)
# H2=sigma(W2*H1+b2)
# o=softmax(W3*H2+b3)
```
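A minimal sketch of these equations as one vectorized forward pass over a mini-batch (the batch size and layer widths below are arbitrary illustrative choices; inputs are stored as rows, so the products appear as $XW$ rather than $WX$):
```python
from mxnet import nd

X = nd.random.normal(shape=(4, 5))                    # mini-batch: 4 examples, 5 features
W1, b1 = nd.random.normal(shape=(5, 4)), nd.zeros(4)
W2, b2 = nd.random.normal(shape=(4, 3)), nd.zeros(3)
W3, b3 = nd.random.normal(shape=(3, 2)), nd.zeros(2)

H1 = (nd.dot(X, W1) + b1).sigmoid()    # H1 = sigma(X W1 + b1), whole batch at once
H2 = (nd.dot(H1, W2) + b2).sigmoid()   # H2 = sigma(H1 W2 + b2)
o = nd.softmax(nd.dot(H2, W3) + b3)    # o = softmax(H2 W3 + b3)
print(o.shape)                         # (4, 2): one row of class probabilities per example
```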
```python
%matplotlib inline
from mxnet import autograd, nd
```
```python
import matplotlib.pyplot as plt
```
## Relu Function
```python
# ReLU(z)= max(z, 0)
```
```python
# Retains only positive elements and discards negative elements
```
```python
x=nd.arange(-8.0, 8.0, 0.1)
x.attach_grad()
```
```python
with autograd.record():
y=x.relu()
# plt.figure(figsize=(4, 2.5))
# plt.plot(x, y, 'x', 'relu(x)')
plt.plot(x.asnumpy(), y.asnumpy())
```
```python
# When input is negative the derivative of the ReLU function is zero
# When the input is positive the derivative of the ReLU function is one
# When the input is zero the ReLU function is not differentiable and we take left derivative as 0
```
```python
y.backward()
```
```python
plt.plot(x.asnumpy(), x.grad.asnumpy(), 'x', 'grad of ReLU')
```
```python
# The reason for using the ReLU is that its derivatives are particularly well behaved
# They vanish or just let the argument through
# This makes argument better behaved and and reduces the vanishing gradient problem
```
## Sigmoid Function
```python
#sigmoid(x)=1/(1+exp(-x))
# The sigmoid function takes the value from R and transforms it into range(0, 1)
```
```python
with autograd.record():
y=x.sigmoid()
plt.plot(x.asnumpy(), y.asnumpy())
plt.xlabel('x')
plt.ylabel('sigmoid(x)')
```
```python
# The derivative of the sigmoid function is given by:
```
\begin{equation}
\frac{d}{dx} \mathrm{sigmoid}(x) = \frac{\exp(-x)}{(1 + \exp(-x))^2} = \mathrm{sigmoid}(x)\left(1-\mathrm{sigmoid}(x)\right).
\end{equation}
```python
y.backward()
```
```python
plt.plot(x.asnumpy(), x.grad.asnumpy())
plt.xlabel('x')
plt.ylabel('grad of sigmoid')
```
```python
# When the input is 0, the derivative of the sigmoid function reaches a max of 0.25
# As input diverges from 0 the derivative approaches 0
```
## Tanh function
```python
# Like the sigmoid function the tanh function squashes its input and transforms the element into the interval of
# (-1, 1)
```
\begin{equation}
\text{tanh}(x) = \frac{1 - \exp(-2x)}{1 + \exp(-2x)}.
\end{equation}
```python
with autograd.record():
y=x.tanh()
plt.plot(x.asnumpy(), y.asnumpy())
plt.xlabel('x')
plt.ylabel('tanh(x)')
```
```python
# When the input nears 0 the function approaches a linear transformation
# The shape is similar to sigmoid function but tanh function exhibits point symmetry
```
```python
# The derivative of the tanh function is:
```
\begin{equation}
\frac{d}{dx} \mathrm{tanh}(x) = 1 - \mathrm{tanh}^2(x).
\end{equation}
```python
# As the input approaches 0 the derivative reaches a max of 1
# As the input diverges from 0 the derivative approaches 0
```
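To mirror the ReLU and sigmoid sections, the gradient of tanh can be computed and plotted the same way (a short sketch reusing the `x` and `y` from the tanh cell above):
```python
y.backward()
plt.plot(x.asnumpy(), x.grad.asnumpy())
plt.xlabel('x')
plt.ylabel('grad of tanh')
```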
```python
```
| ed402076d000e04421b10695d5d98cdc1d56c569 | 53,029 | ipynb | Jupyter Notebook | Multilayer_NN/test1.ipynb | vishal-burman/MXNet_Architectures | d4e371226e814c1507974244c4642b906566f1d8 | [
"MIT"
]
| null | null | null | Multilayer_NN/test1.ipynb | vishal-burman/MXNet_Architectures | d4e371226e814c1507974244c4642b906566f1d8 | [
"MIT"
]
| 3 | 2020-03-24T17:14:05.000Z | 2021-02-02T22:01:48.000Z | Multilayer_NN/test1.ipynb | vishal-burman/MXNet_Architectures | d4e371226e814c1507974244c4642b906566f1d8 | [
"MIT"
]
| null | null | null | 121.905747 | 10,736 | 0.891757 | true | 823 | Qwen/Qwen-72B | 1. YES
2. YES | 0.897695 | 0.817574 | 0.733933 | __label__eng_Latn | 0.952767 | 0.543504 |
# Exercise: beam bending with the collocation method
The differential equation for beam bending
$$
(EI w'')'' - q = 0
$$
can be integrated analytically by specifying the linear loading as $q = q_0\frac{x}{L}$ and the boundary conditions
$$
w(0) = 0\\
w(L) = 0\\
w''(0)=0\\
w''(L) = 0
$$
to yield the deflection solution ("Biegelinie"):
$$
w(x) = \frac{q_0 L^4}{360 EI} \left[ 3\left(\frac{x}{L}\right)^5 - 10 \left(\frac{x}{L}\right)^3 + 7 \left(\frac{x}{L}\right)\right]
$$
We now seek to approximate this solution numerically using the collocation method.
```python
import numpy as np #numerical methods
import sympy as sp #symbolic operations
import matplotlib.pyplot as plt #plotting
sp.init_printing(use_latex='mathjax') #makes sympy output look nice
#Some plot settings
plt.style.use('seaborn-deep')
plt.rcParams['lines.linewidth']= 2.0
plt.rcParams['lines.color']= 'black'
plt.rcParams['legend.frameon']=True
plt.rcParams['font.family'] = 'serif'
plt.rcParams['legend.fontsize']=14
plt.rcParams['font.size'] = 14
plt.rcParams['axes.spines.right'] = False
plt.rcParams['axes.spines.top'] = False
plt.rcParams['axes.spines.left'] = True
plt.rcParams['axes.spines.bottom'] = True
plt.rcParams['axes.axisbelow'] = True
plt.rcParams['figure.figsize'] = (8, 6)
```
```python
#Defining the geometric an material characteristics as symbolic quantities
L,q,EI,x = sp.symbols('L q_0 EI x')
```
```python
#Analytical solution to deflection
def deflection_analytical():
a = x/L
f = q*L**4/(360 * EI)
return f*(3*a**5 - 10*a**3 + 7*a)
```
```python
deflection_analytical() #check definition
```
$$\frac{L^{4} q_{0}}{360 EI} \left(\frac{7 x}{L} - \frac{10 x^{3}}{L^{3}} + \frac{3 x^{5}}{L^{5}}\right)$$
Now, let's plot the analytical solution. For that purpose, we use some Python magic ("lambdify"). We sample the analytical solution for $x \in [0,L]$ at 100 points and plot the dimensionless deflection over the dimensionless length.
```python
lam_x = sp.lambdify(x, deflection_analytical(), modules=['numpy'])
#For the variable x the function deflection_analytical() will obtain something
x_vals = np.linspace(0, 1, 100)*L #This something is x from 0 to L
analytical = lam_x(x_vals) #We calculate the solution by passing x = [0,...,L] to deflection_analytical
plt.plot(x_vals/L, analytical/(L**4*q)*EI)
plt.xlabel('$x / L$')
plt.ylabel('$w / L^4 q_0 EI^{-1}$')
```
## Trigonometric Ansatz
Let's try the approximation
$$
\tilde{w} = a_1 \sin \left(\pi \frac{x}{L}\right) + a_2 \sin \left(2\pi\frac{x}{L}\right)
$$
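This Ansatz is admissible because each term satisfies all four boundary conditions identically: $\sin(k\pi x/L)$ and its second derivative $-(k\pi/L)^2\sin(k\pi x/L)$ both vanish at $x=0$ and $x=L$, so $\tilde{w}(0)=\tilde{w}(L)=\tilde{w}''(0)=\tilde{w}''(L)=0$ for any choice of $a_1$ and $a_2$.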
```python
a1, a2 = sp.symbols('a_1 a_2')#Defining the free values as new symbols
```
```python
def deflection_ansatz():
return a1*sp.sin(sp.pi/L*x) + a2*sp.sin(2*sp.pi/L*x) #defining the approximate solution with the unknown coefficients
```
Now we substitute this Ansatz into the fourth-order ODE for beam bending by differentiating it four times and subtracting the distributed loading:
$$
EI w^\text{IV} - q_0 \frac{x}{L} = 0 \text{ with } EI = \text{const.}
$$
```python
ODE = EI * deflection_ansatz().diff(x,4) - q * (x/L)
ODE
```
$$\frac{\pi^{4} EI}{L^{4}} \left(a_{1} \sin{\left (\frac{\pi x}{L} \right )} + 16 a_{2} \sin{\left (\frac{2 \pi}{L} x \right )}\right) - \frac{q_{0} x}{L}$$
Now we evaluate the Ansatz we made at two collocation points distributed along the beam: $x_1 = L/4$ and $x_2 = 3L/4$. This creates two equations for two unknowns:
```python
collocation_conditions = (ODE.subs(x,L/4), ODE.subs(x,3*L/4))
```
```python
collocation_conditions
```
$$\left ( \frac{\pi^{4} EI}{L^{4}} \left(\frac{\sqrt{2} a_{1}}{2} + 16 a_{2}\right) - \frac{q_{0}}{4}, \quad \frac{\pi^{4} EI}{L^{4}} \left(\frac{\sqrt{2} a_{1}}{2} - 16 a_{2}\right) - \frac{3 q_{0}}{4}\right )$$
Now we solve these equations for our values $a_1$ and $a_2$ by demanding that the residuals vanish at the collocation points:
```python
coefficients = sp.solve(collocation_conditions,a1,a2)
```
```python
coefficients
```
$$\left \{ a_{1} : \frac{\sqrt{2} L^{4} q_{0}}{2 \pi^{4} EI}, \quad a_{2} : - \frac{L^{4} q_{0}}{64 \pi^{4} EI}\right \}$$
Now we're ready to plot the result and compare it to the analytical solution.
```python
#We first substite the now known Ansatz free values into our Ansatz
z = sp.symbols('z')
w_numerical = deflection_ansatz().subs([(a1,coefficients[a1]),(a2,coefficients[a2]),(x,z*L)])
#We also made the coordinate dimensionless (x/L --> z) because of sympy problems
lam_x_num = sp.lambdify(z, w_numerical, modules=['numpy'])
#For the variable x the expression w_numerical will be given something
z_vals = np.linspace(0, 1,100) #This something is z from 0 to 1
numerical = lam_x_num(z_vals) #We calculate the solution by passing x = [0,...,L] to deflection_analytical
plt.plot(x_vals/L, analytical/(L**4*q)*EI)
plt.plot(z_vals, numerical/(L**4*q)*EI)
plt.xlabel('$x\ /\ L$')
plt.ylabel('$w\ /\ L^4 q_0 EI^{-1}$')
plt.tight_layout()
```
```python
print("Maximum absolute error in deflection: ", np.max(np.abs(analytical/(L**4*q)*EI - numerical/(L**4*q)*EI)))
```
Maximum absolute error in deflection: 0.000751049425655786
We can also plot and compare the bending moment. Let's first find the analytical expression by symbolically differentiating the deflection expression twice to obtain $M(x) = -EI w''(x)$:
```python
#analytical bending moment
moment_analytical = -deflection_analytical().diff(x,2)*EI
#numerical bending moment
moment_numerical = -deflection_ansatz().subs([(a1,coefficients[a1]),(a2,coefficients[a2])]).diff(x,2)*EI
#create lambdas for plotting along dimensionless length z
lam_x_analyt = sp.lambdify(z, moment_analytical.subs(x,z*L), modules=['numpy'])
lam_x_num = sp.lambdify(z, moment_numerical.subs(x,z*L), modules=['numpy'])
z_vals = np.linspace(0, 1,100)
analytical = lam_x_analyt(z_vals)
numerical = lam_x_num(z_vals)
#plot
plt.plot(x_vals/L, analytical/(L**2*q),label='analytical')
plt.plot(z_vals, numerical/(L**2*q),label='numerical')
plt.xlabel('$x\ /\ L$')
plt.ylabel('$M\ /\ L^2 q_0$')
plt.legend()
plt.tight_layout()
plt.savefig('beam_collocation_moment.pdf')
```
```python
print("Maximum absolute error in bending moment: ", np.max(np.abs(analytical/(L**2*q) - numerical/(L**2*q))))
```
Maximum absolute error in bending moment: 0.00915175157530091
*Tasks*:
- What happens if you choose other collocation points, e.g. the asymmetric pair $L/2$ and $3L/4$?
- What happens if you move them further outside or inside?
- How does the solution develop when you choose the Ansatz $\tilde{w} = a_1 \left(\frac{x}{L}\right)^4 + a_2 \left(\frac{x}{L}\right)^6$? Why is the result so bad? (Hint: Ansatz requirements)
- How does the solution develop, if you add another member of the same shape, e.g. $a_3 \sin\left(\pi\frac{x}{L}\right) $ but add another collocation point? (Hint: Ansatz requirements)
| 260060294d1d6cb52f5a791bbabd71198aac8eeb | 132,160 | ipynb | Jupyter Notebook | Beams/04_collocation.ipynb | dominik-kern/Numerical_Methods_Introduction | 09a0d6bd0ddbfc6e7f94b65516d9691766ed46ae | [
"MIT"
]
| null | null | null | Beams/04_collocation.ipynb | dominik-kern/Numerical_Methods_Introduction | 09a0d6bd0ddbfc6e7f94b65516d9691766ed46ae | [
"MIT"
]
| 1 | 2022-01-04T19:02:05.000Z | 2022-01-06T08:40:21.000Z | Beams/04_collocation.ipynb | dominik-kern/Numerical_Methods_Introduction | 09a0d6bd0ddbfc6e7f94b65516d9691766ed46ae | [
"MIT"
]
| 4 | 2020-12-03T13:01:55.000Z | 2022-03-16T14:07:04.000Z | 280.59448 | 43,656 | 0.920377 | true | 2,182 | Qwen/Qwen-72B | 1. YES
2. YES | 0.92079 | 0.847968 | 0.7808 | __label__eng_Latn | 0.890128 | 0.652392 |
# Section 7.2 $\quad$ Diagonalization and Similar Matrices (contd)
**Recall**
- We say $A$ and $B$ are similar if <br /><br />
- We say $A$ is diagonalizable if <br /><br />
- An $n\times n$ matrix $A$ is diagonalizable if and only if <br /><br />
- If $D = P^{-1}AP$, how to find $D$ and $P$? <br /><br />
### Example 1
Find a nonsingular matrix $P$ such that $P^{-1}AP$ is a diagonal matrix.
\begin{equation*}
A = \left[
\begin{array}{ccc}
1 & 2 & 3\\
0 & 1 & 0\\
2 & 1 & 2\\
\end{array}
\right]
\end{equation*}
```python
from sympy import *
A = Matrix([[1, 2, 3], [0, 1, 0], [2, 1, 2]]);
A.diagonalize()
```
(Matrix([
[-3, 1, 1],
[ 0, -6, 0],
[ 2, 4, 1]]), Matrix([
[-1, 0, 0],
[ 0, 1, 0],
[ 0, 0, 4]]))
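As a quick check (a sketch reusing the $P$ and $D$ returned above), the difference $P^{-1}AP - D$ should simplify to the zero matrix:
```python
P, D = A.diagonalize()
simplify(P.inv() * A * P - D)   # zero matrix, confirming P**-1 * A * P = D
```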
If an $n\times n$ matrix $A$ has $n$ distinct eigenvalues,<br /><br /><br /><br />
If all roots of the characteristic polynomial of $A$ are not all distinct,<br /><br /><br /><br />
### Example 2
Determine if the matrix $A$ is diagonalizable, where
\begin{equation*}
A = \left[
\begin{array}{ccc}
0 & 0 & 1\\
0 & 1 & 2\\
0 & 0 & 1\\
\end{array}
\right]
\end{equation*}
```python
from sympy import *
A = Matrix([[0, 0, 1], [0, 1, 2], [0, 0, 1]]);
A.is_diagonalizable()
```
False
### Example 3
Determine if the matrix $A$ is diagonalizable, where
\begin{equation*}
A = \left[
\begin{array}{ccc}
0 & 0 & 0\\
0 & 1 & 0\\
1 & 0 & 1\\
\end{array}
\right]
\end{equation*}
```python
from sympy import *
A = Matrix([[0, 0, 0], [0, 1, 0], [1, 0, 1]]);
A.is_diagonalizable()
```
True
| 6a479beadc769b2c958d50c678353119c83c8ff9 | 4,306 | ipynb | Jupyter Notebook | Jupyter_Notes/Lecture35_Sec7-2_DiagonalSimilarMat_Part2.ipynb | xiuquan0418/MAT341 | 2fb7ec4e5f0771f10719cb5e4a00a7ab07c49b59 | [
"MIT"
]
| null | null | null | Jupyter_Notes/Lecture35_Sec7-2_DiagonalSimilarMat_Part2.ipynb | xiuquan0418/MAT341 | 2fb7ec4e5f0771f10719cb5e4a00a7ab07c49b59 | [
"MIT"
]
| null | null | null | Jupyter_Notes/Lecture35_Sec7-2_DiagonalSimilarMat_Part2.ipynb | xiuquan0418/MAT341 | 2fb7ec4e5f0771f10719cb5e4a00a7ab07c49b59 | [
"MIT"
]
| null | null | null | 20.407583 | 104 | 0.419647 | true | 659 | Qwen/Qwen-72B | 1. YES
2. YES | 0.885631 | 0.91118 | 0.806969 | __label__eng_Latn | 0.878958 | 0.713193 |
# Chapter 4
______
## The greatest theorem never told
This chapter focuses on an idea that is always bouncing around our minds, but is rarely made explicit outside books devoted to statistics. In fact, we've been using this simple idea in every example thus far.
## The Law of Large Numbers
Let $Z_i$ be $N$ independent samples from some probability distribution. According to *the Law of Large numbers*, so long as the expected value $E[Z]$ is finite, the following holds,
$$\frac{1}{N} \sum_{i=1}^N Z_i \rightarrow E[ Z ], \;\;\; N \rightarrow \infty.$$
In words:
> The average of a sequence of random variables from the same distribution converges to the expected value of that distribution.
This may seem like a boring result, but it will be the most useful tool you use.
### Intuition
If the above Law is somewhat surprising, it can be made more clear by examining a simple example.
Consider a random variable $Z$ that can take only two values, $c_1$ and $c_2$. Suppose we have a large number of samples of $Z$, denoting a specific sample $Z_i$. The Law says that we can approximate the expected value of $Z$ by averaging over all samples. Consider the average:
$$ \frac{1}{N} \sum_{i=1}^N \;Z_i $$
By construction, $Z_i$ can only take on $c_1$ or $c_2$, hence we can partition the sum over these two values:
\begin{align}
\frac{1}{N} \sum_{i=1}^N \;Z_i
& =\frac{1}{N} \big( \sum_{ Z_i = c_1}c_1 + \sum_{Z_i=c_2}c_2 \big) \\\\[5pt]
& = c_1 \sum_{ Z_i = c_1}\frac{1}{N} + c_2 \sum_{ Z_i = c_2}\frac{1}{N} \\\\[5pt]
& = c_1 \times \text{ (approximate frequency of $c_1$) } \\\\
& \;\;\;\;\;\;\;\;\; + c_2 \times \text{ (approximate frequency of $c_2$) } \\\\[5pt]
& \approx c_1 \times P(Z = c_1) + c_2 \times P(Z = c_2 ) \\\\[5pt]
& = E[Z]
\end{align}
Equality holds in the limit, but we can get closer and closer by using more and more samples in the average. This Law holds for almost *any distribution*, minus some important cases we will encounter later.
### Example
Below is a diagram of the Law of Large numbers in action for three different sequences of Poisson random variables.
We sample `sample_size = 30000` Poisson random variables with parameter $\lambda = 4.5$. (Recall the expected value of a Poisson random variable is equal to its parameter.) We calculate the average for the first $n$ samples, for $n=1$ to `sample_size`.
```python
%matplotlib inline
import numpy as np
from IPython.core.pylabtools import figsize
import matplotlib.pyplot as plt
figsize( 12.5, 5 )
sample_size = 30000
expected_value = lambda_ = 4.5
poi = np.random.poisson
N_samples = range(1,sample_size,100)
for k in range(3):
samples = poi( lambda_, sample_size )
partial_average = [ samples[:i].mean() for i in N_samples ]
plt.plot( N_samples, partial_average, lw=1.5,label="average \
of $n$ samples; seq. %d"%k)
plt.plot( N_samples, expected_value*np.ones_like( partial_average),
ls = "--", label = "true expected value", c = "k" )
plt.ylim( 4.35, 4.65)
plt.title( "Convergence of the average of \n random variables to its \
expected value" )
plt.ylabel( "average of $n$ samples" )
plt.xlabel( "# of samples, $n$")
plt.legend();
```
Looking at the above plot, it is clear that when the sample size is small, there is greater variation in the average (compare how *jagged and jumpy* the average is initially, then *smooths* out). All three paths *approach* the value 4.5, but just flirt with it as $N$ gets large. Mathematicians and statisticians have another name for *flirting*: convergence.
Another very relevant question we can ask is *how quickly am I converging to the expected value?* Let's plot something new. For a specific $N$, let's do the above trials thousands of times and compute how far away we are from the true expected value, on average. But wait — *compute on average*? This is simply the law of large numbers again! For example, we are interested in, for a specific $N$, the quantity:
$$D(N) = \sqrt{ \;E\left[\;\; \left( \frac{1}{N}\sum_{i=1}^NZ_i - 4.5 \;\right)^2 \;\;\right] \;\;}$$
The above formula is interpretable as a distance away from the true value (on average), for some $N$. (We take the square root so the dimensions of the above quantity and our random variables are the same). As the above is an expected value, it can be approximated using the law of large numbers: instead of averaging $Z_i$, we calculate the following multiple times and average them:
$$ Y_k = \left( \;\frac{1}{N}\sum_{i=1}^NZ_i - 4.5 \; \right)^2 $$
By computing the above quantity $N_Y$ times (remember, it is random) and averaging them:
$$ \frac{1}{N_Y} \sum_{k=1}^{N_Y} Y_k \rightarrow E[ Y_k ] = E\;\left[\;\; \left( \frac{1}{N}\sum_{i=1}^NZ_i - 4.5 \;\right)^2 \right]$$
Finally, taking the square root:
$$ \sqrt{\frac{1}{N_Y} \sum_{k=1}^{N_Y} Y_k} \approx D(N) $$
```python
figsize( 12.5, 4)
N_Y = 250 #use this many to approximate D(N)
N_array = np.arange( 1000, 50000, 2500 ) # the sample sizes N at which to evaluate D(N)
D_N_results = np.zeros( len( N_array ) )
lambda_ = 4.5
expected_value = lambda_ #for X ~ Poi(lambda) , E[ X ] = lambda
def D_N( n ):
"""
    This function approximates D(n), the expected (root-mean-square) distance from the true value when averaging n samples.
"""
Z = poi( lambda_, (n, N_Y) )
average_Z = Z.mean(axis=0)
return np.sqrt( ( (average_Z - expected_value)**2 ).mean() )
for i,n in enumerate(N_array):
D_N_results[i] = D_N(n)
plt.xlabel( "$N$" )
plt.ylabel( "expected squared-distance from true value" )
plt.plot(N_array, D_N_results, lw = 3,
label="expected distance between\n\
expected value and \naverage of $N$ random variables.")
plt.plot( N_array, np.sqrt(expected_value)/np.sqrt(N_array), lw = 2, ls = "--",
label = r"$\frac{\sqrt{\lambda}}{\sqrt{N}}$" )
plt.legend()
plt.title( "How 'fast' is the sample average converging? " );
```
As expected, the expected distance between our sample average and the actual expected value shrinks as $N$ grows large. But also notice that the *rate* of convergence decreases, that is, we need only 10 000 additional samples to move from 0.020 to 0.015, a difference of 0.005, but *20 000* more samples to again decrease from 0.015 to 0.010, again only a 0.005 decrease.
It turns out we can measure this rate of convergence. Above I have plotted a second line, the function $\sqrt{\lambda}/\sqrt{N}$. This was not chosen arbitrarily. In most cases, given a sequence of random variables distributed like $Z$, the rate of convergence to $E[Z]$ of the Law of Large Numbers is
$$ \frac{ \sqrt{ \; Var(Z) \; } }{\sqrt{N} }$$
Kenny: For a Poisson distribution, $\lambda$ is the mean and variance.
This is useful to know: for a given large $N$, we know (on average) how far away we are from the estimate. On the other hand, in a Bayesian setting, this can seem like a useless result: Bayesian analysis is OK with uncertainty so what's the *statistical* point of adding extra precise digits? Though drawing samples can be so computationally cheap that having a *larger* $N$ is fine too.
### How do we compute $Var(Z)$ though?
The variance is simply another expected value that can be approximated! Consider the following, once we have the expected value (by using the Law of Large Numbers to estimate it, denote it $\mu$), we can estimate the variance:
$$ \frac{1}{N}\sum_{i=1}^N \;(Z_i - \mu)^2 \rightarrow E[ \;( Z - \mu)^2 \;] = Var( Z )$$
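Here is a minimal sketch of that plug-in estimate (not from the original text), reusing Poisson samples so that both the mean and the variance should come out near $\lambda = 4.5$:
```python
import numpy as np
lambda_ = 4.5
Z = np.random.poisson(lambda_, 100000)
mu_hat = Z.mean()                      # Law of Large Numbers estimate of E[Z]
var_hat = ((Z - mu_hat)**2).mean()     # plug-in estimate of Var(Z)
print(mu_hat, var_hat)                 # both should be close to 4.5
```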
### Computing probability as an expected value
There is an even less explicit relationship between expected value and estimating probabilities. Define the *indicator function*
$$\mathbb{1}_A(x) =
\begin{cases} 1 & x \in A \\\\
0 & else
\end{cases}
$$
Then, by the law of large numbers, if we have many samples $X_i$, we can estimate the probability of an event $A$, denoted $P(A)$, by:
$$ \frac{1}{N} \sum_{i=1}^N \mathbb{1}_A(X_i) \rightarrow E[\mathbb{1}_A(X)] = P(A) $$
Again, this is fairly obvious after a moment's thought: the indicator function is only 1 if the event occurs, so we are summing only the times the event occurs and dividing by the total number of trials (consider how we usually approximate probabilities using frequencies). For example, suppose we wish to estimate the probability that $Z \sim \text{Exp}(.5)$ is greater than 5, and we have many samples from an $\text{Exp}(.5)$ distribution.
$$ P( Z > 5 ) \approx \frac{1}{N}\sum_{i=1}^N \mathbb{1}_{z > 5 }(Z_i) $$
```python
N = 10000
# note: np.random.exponential takes the *scale* parameter, here 0.5
print( np.mean( [ np.random.exponential( 0.5 ) > 5 for i in range(N) ] ) )
```
0.0001
### What does this all have to do with Bayesian statistics?
*Point estimates*, to be introduced in the next chapter, are computed in Bayesian inference using expected values. In more analytical Bayesian inference, we would have been required to evaluate complicated expected values represented as multi-dimensional integrals. No longer. If we can sample from the posterior distribution directly, we simply need to evaluate averages. Much easier. If accuracy is a priority, plots like the ones above show how fast you are converging. And if further accuracy is desired, just take more samples from the posterior.
When is enough enough? When can you stop drawing samples from the posterior? That is the practitioner's decision, and it also depends on the variance of the samples (recall from above that a high variance means the average converges more slowly).
We also should understand when the Law of Large Numbers fails. As the name implies, and comparing the graphs above for small $N$, the Law is only true for large sample sizes. Without this, the asymptotic result is not reliable. Knowing in what situations the Law fails can give us *confidence in how unconfident we should be*. The next section deals with this issue.
## The Disorder of Small Numbers
The Law of Large Numbers is only valid as $N$ gets *infinitely* large: never truly attainable. While the law is a powerful tool, it is foolhardy to apply it liberally. Our next example illustrates this.
### Example: Aggregated geographic data
Often data comes in aggregated form. For instance, data may be grouped by state, county, or city level. Of course, the population numbers vary per geographic area. If the data is an average of some characteristic of each of the geographic areas, we must be conscious of the Law of Large Numbers and how it can *fail* for areas with small populations.
We will observe this on a toy dataset. Suppose there are five thousand counties in our dataset. Furthermore, population numbers in each county are uniformly distributed between 100 and 1500. The way the population numbers are generated is irrelevant to the discussion, so we do not justify this. We are interested in measuring the average height of individuals per county. Unbeknownst to us, height does **not** vary across counties, and each individual, regardless of the county he or she is currently living in, has the same height distribution:
$$ \text{height} \sim \text{Normal}(150, 15 ) $$
We aggregate the individuals at the county level, so we only have data for the *average in the county*. What might our dataset look like?
```python
figsize( 12.5, 4)
std_height = 15
mean_height = 150
n_counties = 5000
pop_generator = np.random.randint
norm = np.random.normal
#generate some artificial population numbers
population = pop_generator(100, 1500, n_counties )
average_across_county = np.zeros( n_counties )
for i in range( n_counties ):
    #generate some individuals and take the mean
    #(use std_height directly so that height ~ Normal(150, 15), as stated above)
    average_across_county[i] = norm(mean_height, std_height,
                                    population[i] ).mean()
#locate the counties with the apparently most extreme average heights.
i_min = np.argmin( average_across_county )
i_max = np.argmax( average_across_county )
#plot population size vs. recorded average
plt.scatter( population, average_across_county, alpha = 0.5, c="#7A68A6")
plt.scatter( [ population[i_min], population[i_max] ],
[average_across_county[i_min], average_across_county[i_max] ],
s = 60, marker = "o", facecolors = "none",
edgecolors = "#A60628", linewidths = 1.5,
label="extreme heights")
plt.xlim( 100, 1500 )
plt.title( "Average height vs. County Population")
plt.xlabel("County Population")
plt.ylabel("Average height in county")
plt.plot( [100, 1500], [150, 150], color = "k", label = "true expected \
height", ls="--" )
plt.legend(scatterpoints = 1);
```
What do we observe? *Without accounting for population sizes* we run the risk of making an enormous inference error: if we ignored population size, we would say that the counties with the shortest and tallest average heights have been correctly circled. But this inference is wrong for the following reason. These two counties do *not* necessarily have the most extreme heights. The error results from the calculated average of smaller populations not being a good reflection of the true expected value of the population (which in truth should be $\mu =150$). The sample size/population size/$N$, whatever you wish to call it, is simply too small to invoke the Law of Large Numbers effectively.
We provide more damning evidence against this inference. Recall the population numbers were uniformly distributed over 100 to 1500. Our intuition should tell us that the counties with the most extreme average heights should also be uniformly spread over 100 to 1500, and certainly independent of the county's population. Not so. Below are the population sizes of the counties with the most extreme heights.
```python
print("Population sizes of 10 'shortest' counties: ")
print(population[ np.argsort( average_across_county )[:10] ], '\n')
print("Population sizes of 10 'tallest' counties: ")
print(population[ np.argsort( -average_across_county )[:10] ])
```
Population sizes of 10 'shortest' counties:
[167 111 103 116 192 109 161 147 107 243]
Population sizes of 10 'tallest' counties:
[113 178 108 153 137 146 124 182 151 163]
Not at all uniform over 100 to 1500. This is an absolute failure of the Law of Large Numbers.
### Example: Kaggle's *U.S. Census Return Rate Challenge*
Below is data from the 2010 US census, which partitions populations beyond counties to the level of block groups (which are aggregates of city blocks or equivalents). The dataset is from a Kaggle machine learning competition some colleagues and I participated in. The objective was to predict the census letter mail-back rate of a block group, measured between 0 and 100, using census variables (median income, number of females in the block group, number of trailer parks, average number of children, etc.). Below we plot the census mail-back rate versus block group population:
```python
figsize( 12.5, 6.5 )
data = np.genfromtxt( "./data/census_data.csv", skip_header=1,
delimiter= ",")
plt.scatter( data[:,1], data[:,0], alpha = 0.5, c="#7A68A6")
plt.title("Census mail-back rate vs Population")
plt.ylabel("Mail-back rate")
plt.xlabel("population of block-group")
plt.xlim(-100, 15e3 )
plt.ylim( -5, 105)
i_min = np.argmin( data[:,0] )
i_max = np.argmax( data[:,0] )
plt.scatter( [ data[i_min,1], data[i_max, 1] ],
[ data[i_min,0], data[i_max,0] ],
s = 60, marker = "o", facecolors = "none",
edgecolors = "#A60628", linewidths = 1.5,
label="most extreme points")
plt.legend(scatterpoints = 1);
```
The above is a classic phenomenon in statistics. I say *classic* referring to the "shape" of the scatter plot above. It follows a classic triangular form that tightens as we increase the sample size (as the Law of Large Numbers becomes more exact).
I am perhaps overstressing the point and maybe I should have titled the book *"You don't have big data problems!"*, but here again is an example of the trouble with *small datasets*, not big ones. Simply, small datasets cannot be processed using the Law of Large Numbers. Compare with applying the Law without hassle to big datasets (e.g., big data). I mentioned earlier that paradoxically big data prediction problems are solved by relatively simple algorithms. The paradox is partially resolved by understanding that the Law of Large Numbers creates solutions that are *stable*, i.e. adding or subtracting a few data points will not affect the solution much. On the other hand, adding or removing data points from a small dataset can create very different results.
For further reading on the hidden dangers of the Law of Large Numbers, I would highly recommend the excellent manuscript [The Most Dangerous Equation](http://nsm.uh.edu/~dgraur/niv/TheMostDangerousEquation.pdf).
## Example: Ordering Reddit Posts by the Upvote Ratio Posterior
Kenny's Summary: This example is quite interesting.
Basically, we want to rank Reddit posts by the upvote ratio. However, it's not enough to do this with a frequentist point estimate over the votes received. For example, a post with 1 vote will have 100% upvote ratio.
Instead, we want to use a Bayesian method. Bayesian meaning that our estimate will incorporate uncertainty into its predicted upvote ratio. (We use a flat prior, so it's not Bayesian in that sense.)
A post with 90 upvotes out of 100 votes will have a more certain estimated ratio than a post with 9 out of 10. That is, the posterior of the former will be narrower around 0.9.
We then sort posts by the **lower bound** of the 95% credible interval. Super interesting.
---
You may have disagreed with the original statement that the Law of Large numbers is known to everyone, but only implicitly in our subconscious decision making. Consider ratings on online products: how often do you trust an average 5-star rating if there is only 1 reviewer? 2 reviewers? 3 reviewers? We implicitly understand that with such few reviewers that the average rating is **not** a good reflection of the true value of the product.
This has created flaws in how we sort items, and more generally, how we compare items. Many people have realized that sorting online search results by their rating, whether the objects be books, videos, or online comments, return poor results. Often the seemingly top videos or comments have perfect ratings only from a few enthusiastic fans, and truly more quality videos or comments are hidden in later pages with *falsely-substandard* ratings of around 4.8. How can we correct this?
Consider the popular site Reddit (I purposefully did not link to the website as you would never come back). The site hosts links to stories or images, called submissions, for people to comment on. Redditors can vote up or down on each submission (called upvotes and downvotes). Reddit, by default, will sort submissions to a given subreddit by Hot, that is, the submissions that have the most upvotes recently.
How would you determine which submissions are the best? There are a number of ways to achieve this:
1. *Popularity*: A submission is considered good if it has many upvotes. A problem with this model: consider a submission with hundreds of upvotes but thousands of downvotes. While being very *popular*, the submission is likely more controversial than best.
2. *Difference*: Using the *difference* of upvotes and downvotes. This solves the above problem, but fails when we consider the temporal nature of submissions. Depending on when a submission is posted, the website may be experiencing high or low traffic. The difference method will bias the *Top* submissions to be those made during high traffic periods, which have accumulated more upvotes than submissions that were not so graced, but are not necessarily the best.
3. *Time adjusted*: Consider using Difference divided by the age of the submission. This creates a *rate*, something like *difference per second*, or *per minute*. An immediate counter-example is, if we use per second, a 1-second-old submission with 1 upvote would be better than a 100-second-old submission with 99 upvotes. One can avoid this by only considering submissions that are at least $t$ seconds old. But what is a good $t$ value? Does this mean no submission younger than $t$ is good? We end up comparing unstable quantities with stable quantities (young vs. old submissions).
4. *Ratio*: Rank submissions by the ratio of upvotes to total number of votes (upvotes plus downvotes). This solves the temporal issue, such that new submissions that score well can be considered Top just as likely as older submissions, provided they have many upvotes relative to total votes. The problem here is that a submission with a single upvote (ratio = 1.0) will beat a submission with 999 upvotes and 1 downvote (ratio = 0.999), but clearly the latter submission is *more likely* to be better.
I used the phrase *more likely* for good reason. It is possible that the former submission, with a single upvote, is in fact a better submission than the latter with 999 upvotes. The hesitation to agree with this is because we have not seen the other 999 potential votes the former submission might get. Perhaps it will achieve an additional 999 upvotes and 0 downvotes and be considered better than the latter, though not likely.
What we really want is an estimate of the *true upvote ratio*. Note that the true upvote ratio is not the same as the observed upvote ratio: the true upvote ratio is hidden, and we only observe upvotes vs. downvotes (one can think of the true upvote ratio as "what is the underlying probability someone gives this submission an upvote, versus a downvote"). So the 999 upvote/1 downvote submission probably has a true upvote ratio close to 1, which we can assert with confidence thanks to the Law of Large Numbers, but on the other hand we are much less certain about the true upvote ratio of the submission with only a single upvote. Sounds like a Bayesian problem to me.
One way to determine a prior on the upvote ratio is to look at the historical distribution of upvote ratios. This can be accomplished by scraping Reddit's submissions and determining a distribution. There are a few problems with this technique though:
1. Skewed data: The vast majority of submissions have very few votes, hence there will be many submissions with ratios near the extremes (see the "triangular plot" in the above Kaggle dataset), effectively skewing our distribution to the extremes. One could try to only use submissions with votes greater than some threshold, but again problems are encountered: a higher threshold gives more precise ratios but leaves fewer submissions available to use.
2. Biased data: Reddit is composed of different subpages, called subreddits. Two examples are *r/aww*, which posts pics of cute animals, and *r/politics*. It is very likely that the user behaviour towards submissions of these two subreddits is very different: visitors are likely friendly and affectionate in the former, and would therefore upvote submissions more, compared to the latter, where submissions are likely to be controversial and disagreed upon. Therefore not all submissions are the same.
In light of these, I think it is better to use a `Uniform` prior.
With our prior in place, we can find the posterior of the true upvote ratio. The Python script `top_showerthoughts_submissions.py` will scrape the best posts from the `showerthoughts` community on Reddit. This is a text-only community so the title of each post *is* the post. Below is the top post as well as some other sample posts:
```python
#adding a number to the end of the %run call will get the ith top post.
%run top_showerthoughts_submissions.py 2
print("Post contents: \n")
print(top_post)
```
Post contents:
Movie characters always have a top school locker
```python
"""
contents: an array of the text from the last 100 top submissions to a subreddit
votes: a 2d numpy array of upvotes, downvotes for each submission.
"""
n_submissions = len(votes)
submissions = np.random.randint( n_submissions, size=4)
print("Some Submissions (out of %d total) \n-----------"%n_submissions)
for i in submissions:
print('"' + contents[i] + '"')
print("upvotes/downvotes: ",votes[i,:], "\n")
```
Some Submissions (out of 1 total)
-----------
"The Fact That We Can Tell When Someone Is Smiling Based Only Off Their Voice Makes It Really Apparent That We Have Spent Millions Of Years Evolving To Figure Out If Someone Is A Threat Or Not"
upvotes/downvotes: [28 2]
"The Fact That We Can Tell When Someone Is Smiling Based Only Off Their Voice Makes It Really Apparent That We Have Spent Millions Of Years Evolving To Figure Out If Someone Is A Threat Or Not"
upvotes/downvotes: [28 2]
"The Fact That We Can Tell When Someone Is Smiling Based Only Off Their Voice Makes It Really Apparent That We Have Spent Millions Of Years Evolving To Figure Out If Someone Is A Threat Or Not"
upvotes/downvotes: [28 2]
"The Fact That We Can Tell When Someone Is Smiling Based Only Off Their Voice Makes It Really Apparent That We Have Spent Millions Of Years Evolving To Figure Out If Someone Is A Threat Or Not"
upvotes/downvotes: [28 2]
For a given true upvote ratio $p$ and $N$ votes, the number of upvotes will look like a Binomial random variable with parameters $p$ and $N$. (This is because of the equivalence between upvote ratio and probability of upvoting versus downvoting, out of $N$ possible votes/trials). We create a function that performs Bayesian inference on $p$, for a particular submission's upvote/downvote pair.
```python
import pymc3 as pm
def posterior_upvote_ratio( upvotes, downvotes, samples = 20000):
"""
    This function accepts the number of upvotes and downvotes a particular submission received,
and the number of posterior samples to return to the user. Assumes a uniform prior.
"""
N = upvotes + downvotes
with pm.Model() as model:
upvote_ratio = pm.Uniform("upvote_ratio", 0, 1)
observations = pm.Binomial( "obs", N, upvote_ratio, observed=upvotes)
trace = pm.sample(samples, step=pm.Metropolis())
burned_trace = trace[int(samples/4):]
return burned_trace["upvote_ratio"]
```
Below are the resulting posterior distributions.
```python
figsize( 11., 8)
posteriors = []
colours = ["#348ABD", "#A60628", "#7A68A6", "#467821", "#CF4457"]
for i in range(len(submissions)):
j = submissions[i]
posteriors.append( posterior_upvote_ratio( votes[j, 0], votes[j,1] ) )
plt.hist( posteriors[i], bins = 10, normed = True, alpha = .9,
histtype="step",color = colours[i%5], lw = 3,
label = '(%d up:%d down)\n%s...'%(votes[j, 0], votes[j,1], contents[j][:50]) )
plt.hist( posteriors[i], bins = 10, normed = True, alpha = .2,
histtype="stepfilled",color = colours[i], lw = 3, )
plt.legend(loc="upper left")
plt.xlim( 0, 1)
plt.title("Posterior distributions of upvote ratios on different submissions");
```
Some distributions are very tight, others have very long tails (relatively speaking), expressing our uncertainty with what the true upvote ratio might be.
### Sorting!
We have been ignoring the goal of this exercise: how do we sort the submissions from *best to worst*? Of course, we cannot sort distributions, we must sort scalar numbers. There are many ways to distill a distribution down to a scalar: expressing the distribution through its expected value, or mean, is one way. Using the mean is a bad choice though, because it does not take into account the uncertainty of the distributions.
I suggest using the *95% least plausible value*, defined as the value such that there is only a 5% chance the true parameter is lower (think of the lower bound on the 95% credible region). Below are the posterior distributions with the 95% least-plausible value plotted:
```python
N = posteriors[0].shape[0]
lower_limits = []
for i in range(len(submissions)):
j = submissions[i]
plt.hist( posteriors[i], bins = 20, normed = True, alpha = .9,
histtype="step",color = colours[i], lw = 3,
label = '(%d up:%d down)\n%s...'%(votes[j, 0], votes[j,1], contents[j][:50]) )
plt.hist( posteriors[i], bins = 20, normed = True, alpha = .2,
histtype="stepfilled",color = colours[i], lw = 3, )
v = np.sort( posteriors[i] )[ int(0.05*N) ]
#plt.vlines( v, 0, 15 , color = "k", alpha = 1, linewidths=3 )
plt.vlines( v, 0, 10 , color = colours[i], linestyles = "--", linewidths=3 )
lower_limits.append(v)
plt.legend(loc="upper left")
plt.title("Posterior distributions of upvote ratios on different submissions");
order = np.argsort( -np.array( lower_limits ) )
print(order, lower_limits)
```
The best submissions, according to our procedure, are the submissions that are *most-likely* to score a high percentage of upvotes. Visually those are the submissions with the 95% least plausible value close to 1.
Why is sorting based on this quantity a good idea? By ordering by the 95% least plausible value, we are being the most conservative with what we think is best. When using the lower-bound of the 95% credible interval, we believe with high certainty that the 'true upvote ratio' is at the very least equal to this value (or greater), thereby ensuring that the best submissions are still on top. Under this ordering, we impose the following very natural properties:
1. given two submissions with the same observed upvote ratio, we will assign the submission with more votes as better (since we are more confident it has a higher ratio).
2. given two submissions with the same number of votes, we still assign the submission with more upvotes as *better*.
### But this is too slow for real-time!
I agree, computing the posterior of every submission takes a long time, and by the time you have computed it, likely the data has changed. I delay the mathematics to the appendix, but I suggest using the following formula to compute the lower bound very fast.
$$ \frac{a}{a + b} - 1.65\sqrt{ \frac{ab}{ (a+b)^2(a + b +1 ) } }$$
where
\begin{align}
& a = 1 + u \\\\
& b = 1 + d \\\\
\end{align}
$u$ is the number of upvotes, and $d$ is the number of downvotes. The formula is a shortcut in Bayesian inference, which will be further explained in Chapter 6 when we discuss priors in more detail.
```python
def intervals(u,d):
a = 1. + u
b = 1. + d
mu = a/(a+b)
std_err = 1.65*np.sqrt( (a*b)/( (a+b)**2*(a+b+1.) ) )
return ( mu, std_err )
print("Approximate lower bounds:")
posterior_mean, std_err = intervals(votes[:,0],votes[:,1])
lb = posterior_mean - std_err
print(lb)
print("\n")
print("Top 40 Sorted according to approximate lower bounds:")
print("\n")
order = np.argsort( -lb )
ordered_contents = []
for i in order[:40]:
ordered_contents.append( contents[i] )
print(votes[i,0], votes[i,1], contents[i])
print("-------------")
```
Approximate lower bounds:
[ 0.93349005 0.9532194 0.94149718 0.90859764 0.88705356 0.8558795
0.85644927 0.93752679 0.95767101 0.91131012 0.910073 0.915999
0.9140058 0.83276025 0.87593961 0.87436674 0.92830849 0.90642832
0.89187973 0.89950891 0.91295322 0.78607629 0.90250203 0.79950031
0.85219422 0.83703439 0.7619808 0.81301134 0.7313114 0.79137561
0.82701445 0.85542404 0.82309334 0.75211374 0.82934814 0.82674958
0.80933194 0.87448152 0.85350205 0.75460106 0.82934814 0.74417233
0.79924258 0.8189683 0.75460106 0.90744016 0.83838023 0.78802791
0.78400654 0.64638659 0.62047936 0.76137738 0.81365241 0.83838023
0.78457533 0.84980627 0.79249393 0.69020315 0.69593922 0.70758151
0.70268831 0.91620627 0.73346864 0.86382644 0.80877728 0.72708753
0.79822085 0.68333632 0.81699014 0.65100453 0.79809005 0.74702492
0.77318569 0.83221179 0.66500492 0.68134548 0.7249286 0.59412132
0.58191312 0.73142963 0.73142963 0.66251028 0.87152685 0.74107856
0.60935684 0.87152685 0.77484517 0.88783675 0.81814153 0.54569789
0.6122496 0.75613569 0.53511973 0.74556767 0.81814153 0.85773646
0.6122496 0.64814153]
Top 40 Sorted according to approximate lower bounds:
596 18 Someone should develop an AI specifically for reading Terms & Conditions and flagging dubious parts.
-------------
2360 98 Porn is the only industry where it is not only acceptable but standard to separate people based on race, sex and sexual preference.
-------------
1918 101 All polls are biased towards people who are willing to take polls
-------------
948 50 They should charge less for drinks in the drive-thru because you can't refill them.
-------------
3740 239 When I was in elementary school and going through the DARE program, I was positive a gang of older kids was going to corner me and force me to smoke pot. Then I became an adult and realized nobody is giving free drugs to somebody that doesn't want them.
-------------
166 7 "Noted" is the professional way of saying "K".
-------------
29 0 Rewatching Mr. Bean, I've realised that the character is an eccentric genius and not a blithering idiot.
-------------
289 18 You've been doing weird cameos in your friends' dreams since kindergarten.
-------------
269 17 At some point every parent has stopped wiping their child's butt and hoped for the best.
-------------
121 6 Is it really fair to say a person over 85 has heart failure? Technically, that heart has done exceptionally well.
-------------
535 40 It's surreal to think that the sun and moon and stars we gaze up at are the same objects that have been observed for millenia, by everyone in the history of humanity from cavemen to Aristotle to Jesus to George Washington.
-------------
527 40 I wonder if America's internet is censored in a similar way that North Korea's is, but we have no idea of it happening.
-------------
1510 131 Kenny's family is poor because they're always paying for his funeral.
-------------
43 1 If I was as careful with my whole paycheck as I am with my last $20 I'd be a whole lot better off
-------------
162 10 Black hair ties are probably the most popular bracelets in the world.
-------------
107 6 The best answer to the interview question "What is your greatest weakness?" is "interviews".
-------------
127 8 Surfing the internet without ads feels like a summer evening without mosquitoes
-------------
159 12 I wonder if Superman ever put a pair of glasses on Lois Lane's dog, and she was like "what's this Clark? Did you get me a new dog?"
-------------
21 0 Sitting on a cold toilet seat or a warm toilet seat both suck for different reasons.
-------------
1414 157 My life is really like Rihanna's song, "just work work work work work" and the rest of it I can't really understand.
-------------
222 22 I'm honestly slightly concerned how often Reddit commenters make me laugh compared to my real life friends.
-------------
52 3 The world must have been a spookier place altogether when candles and gas lamps were the only sources of light at night besides the moon and the stars.
-------------
194 19 I have not been thankful enough in the last few years that the Black Eyed Peas are no longer ever on the radio
-------------
18 0 Living on the coast is having the window seat of the land you live on.
-------------
18 0 Binoculars are like walkie talkies for the deaf.
-------------
28 1 Now that I am a parent of multiple children I have realized that my parents were lying through their teeth when they said they didn't have a favorite.
-------------
16 0 I sneer at people who read tabloids, but every time I look someone up on Wikipedia the first thing I look for is what controversies they've been involved in.
-------------
1559 233 Kid's menus at restaurants should be smaller portions of the same adult dishes at lower prices and not the junk food that they usually offer.
-------------
1426 213 Eventually once all phones are waterproof we'll be able to push people into pools again
-------------
61 5 Myspace is so outdated that jokes about it being outdated has become outdated
-------------
52 4 As a kid, seeing someone step on a banana peel and not slip was a disappointment.
-------------
90 9 Yahoo!® is the RadioShack® of the Internet.
-------------
34 2 People who "tell it like it is" rarely do so to say something nice
-------------
39 3 Closing your eyes after turning off your alarm is a very dangerous game.
-------------
39 3 Your known 'first word' is the first word your parents heard you speak. In reality, it may have been a completely different word you said when you were alone.
-------------
87 10 "Smells Like Teen Spirit" is as old to listeners of today as "Yellow Submarine" was to listeners of 1991.
-------------
239 36 if an ocean didnt stop immigrants from coming to America what makes us think a wall will?
-------------
22 1 The phonebook was the biggest invasion of privacy that everyone was oddly ok with.
-------------
57 6 I'm actually the most productive when I procrastinate because I'm doing everything I possibly can to avoid the main task at hand.
-------------
57 6 You will never feel how long time is until you have allergies and snot slowly dripping out of your nostrils, while sitting in a classroom with no tissues.
-------------
We can view the ordering visually by plotting the posterior mean and bounds, and sorting by the lower bound. In the plot below, notice that the left error-bar is sorted (as we suggested this is the best way to determine an ordering), so the means, indicated by dots, do not follow any strong pattern.
```python
r_order = order[::-1][-40:]
plt.errorbar( posterior_mean[r_order], np.arange( len(r_order) ),
xerr=std_err[r_order], capsize=0, fmt="o",
color = "#7A68A6")
plt.xlim( 0.3, 1)
plt.yticks( np.arange( len(r_order)-1,-1,-1 ), list(map( lambda x: x[:30].replace("\n",""), ordered_contents)) );
```
In the graphic above, you can see why sorting by mean would be sub-optimal.
The above procedure works well for upvote-downvote schemes, but what about systems that use star ratings, e.g., 5-star rating systems? Similar problems apply if we simply take the average: an item with two perfect ratings would beat an item with thousands of perfect ratings and a single sub-perfect rating.
We can consider the upvote-downvote problem above as binary: 0 is a downvote, 1 is an upvote. An $N$-star rating system can be seen as a more continuous version of the above, and we can treat an award of $n$ stars as equivalent to a reward of $\frac{n}{N}$. For example, in a 5-star system, a 2-star rating corresponds to 0.4. A perfect rating is a 1. We can use the same formula as before, but with $a,b$ defined differently:
$$ \frac{a}{a + b} - 1.65\sqrt{ \frac{ab}{ (a+b)^2(a + b +1 ) } }$$
where
\begin{align}
& a = 1 + S \\\\
& b = 1 + N - S \\\\
\end{align}
where $N$ is the number of users who rated, and $S$ is the sum of all the ratings, under the equivalence scheme mentioned above.
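A minimal sketch of that star-rating version of the lower bound follows (this helper function and the example ratings are illustrative, not from the original text):
```python
import numpy as np
def star_lower_bound(ratings, n_stars=5):
    """Approximate 95% lower bound on the 'true rating', rescaled to [0, 1].

    ratings : sequence of integer star ratings between 1 and n_stars.
    """
    ratings = np.asarray(ratings, dtype=float)
    N = len(ratings)
    S = np.sum(ratings / n_stars)      # sum of ratings rescaled to [0, 1]
    a = 1. + S
    b = 1. + N - S
    return a / (a + b) - 1.65 * np.sqrt(a * b / ((a + b)**2 * (a + b + 1.)))
# two perfect ratings vs. many near-perfect ratings
print(star_lower_bound([5, 5]))              # few ratings -> low lower bound
print(star_lower_bound([5]*100 + [4]))       # many ratings -> much higher bound
```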
## Conclusion
While the Law of Large Numbers is cool, it is only true so much as its name implies: with large sample sizes only. We have seen how our inference can be affected by not considering *how the data is shaped*.
1. By (cheaply) drawing many samples from the posterior distributions, we can ensure that the Law of Large Numbers applies as we approximate expected values (which we will do in the next chapter).
2. Bayesian inference understands that with small sample sizes, we can observe wild randomness. Our posterior distribution will reflect this by being more spread out rather than tightly concentrated. Thus, our inference should be correctable.
3. There are major implications of not considering the sample size, and trying to sort objects that are unstable leads to pathological orderings. The method provided above solves this problem.
## Appendix
**Derivation of sorting submissions formula**
Basically what we are doing is using a Beta prior (with parameters $a=1, b=1$, which is a uniform distribution), and using a Binomial likelihood with observations $u, N = u+d$. This means our posterior is a Beta distribution with parameters $a' = 1 + u, b' = 1 + (N - u) = 1+d$. We then need to find the value, $x$, such that 0.05 probability is less than $x$. This is usually done by inverting the CDF ([Cumulative Distribution Function](http://en.wikipedia.org/wiki/Cumulative_Distribution_Function)), but the CDF of the Beta distribution, while known in closed form for integer parameters, is a large sum [3].
We instead use a Normal approximation. The mean of the Beta is $\mu = a'/(a'+b')$ and the variance is
$$\sigma^2 = \frac{a'b'}{ (a' + b')^2(a'+b'+1) }$$
Hence we solve the following equation for $x$ and have an approximate lower bound.
$$ 0.05 = \Phi\left( \frac{(x - \mu)}{\sigma}\right) $$
$\Phi$ being the [cumulative distribution for the normal distribution](http://en.wikipedia.org/wiki/Normal_distribution#Cumulative_distribution)
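As a quick sanity check of this approximation (a sketch, not part of the original text), we can compare it against the exact 5% quantile of the Beta posterior computed with SciPy:
```python
import numpy as np
from scipy import stats
u, d = 999, 1                      # upvotes and downvotes
a, b = 1. + u, 1. + d              # Beta posterior parameters under a uniform prior
exact = stats.beta.ppf(0.05, a, b)                    # exact 5% quantile
mu = a / (a + b)
sigma = np.sqrt(a * b / ((a + b)**2 * (a + b + 1.)))
approx = mu + stats.norm.ppf(0.05) * sigma            # normal approximation
print(exact, approx)               # the two values should be very close
```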
## Exercises
See `Ch4 Exercises.ipynb`.
## References
1. Wainer, Howard. *The Most Dangerous Equation*. American Scientist, Volume 95.
2. Clark, Torin K., Aaron W. Johnson, and Alexander J. Stimpson. "Going for Three: Predicting the Likelihood of Field Goal Success with Logistic Regression." (2013). [Web](http://www.sloansportsconference.com/wp-content/uploads/2013/Going%20for%20Three%20Predicting%20the%20Likelihood%20of%20Field%20Goal%20Success%20with%20Logistic%20Regression.pdf). 20 Feb. 2013.
3. http://en.wikipedia.org/wiki/Beta_function#Incomplete_beta_function
<h1>Chapter 9</h1>
```python
%pylab inline
```
Populating the interactive namespace from numpy and matplotlib
<h2>9.5 Discriminative Classification</h2>
In discriminative classification, classification boundaries are modeled directly, without creating class density estimates. Nearest neighbor classification (9.4) is one example of this type of classification. Recall, for a set of two classes $y \in \{0,1\}$, the discriminant function is $g(x) = p(y=1|x)$. The rule for classification is
$$\hat{y} = \begin{cases} 1 & g(x) \gt 1/2, \\ 0 & \text{otherwise}, \end{cases}$$
which can be used regardless of how we get $g(x)$.
<h3>9.5.1 Logistic Regression</h3>
While logistic regression can be in binomial or multinomial cases, we will be looking at the binomial case for simplicity. We use the linear model
\begin{align}
p(y=1|x) &= \frac{\exp[\sum_j \theta_j x^j]}{1+\exp[\sum_j \theta_j x^j]},\\
&= p(\boldsymbol{\theta})
\end{align}
and define the logit function as
$$logit(p_i) = log \left(\frac{p_i}{1-p_i}\right) = \displaystyle\sum_j \theta_j x_i^j.$$
Since we only have two classes (y is binary), we can use a Binomial distribution as a model, specifically the Bernoulli distribution [see Eq (3.50) in $\S$ 3.3.3]. This has a likelihood function of
$$L(\beta) = \displaystyle\prod_{i=1}^N p_i(\beta)^{y_i} (1-p_i(\beta))^{1-y_i}.$$
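The following is a minimal NumPy sketch (not from the book) of evaluating that Bernoulli log-likelihood under the linear logistic model; this is the quantity that is maximized when the model parameters are fit. The toy data are made-up numbers for illustration:
```python
import numpy as np
def logistic_log_likelihood(theta, X, y):
    """Bernoulli log-likelihood of the logistic model.

    X : (N, m) feature matrix (include a column of ones for the intercept).
    y : (N,) array of 0/1 class labels.
    """
    p = 1. / (1. + np.exp(-X.dot(theta)))      # p(y=1|x) under the model
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
# toy usage with made-up data
rng = np.random.RandomState(0)
X = np.column_stack([np.ones(100), rng.randn(100)])
p_true = 1. / (1. + np.exp(-(0.5 + 2.0 * X[:, 1])))
y = (rng.rand(100) < p_true).astype(int)
print("log-likelihood: %.2f" % logistic_log_likelihood(np.array([0.5, 2.0]), X, y))
```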
Logistic regression is related to other linear models, including linear discriminant analysis ($\S$ 9.3.4). In LDA, the assumptions give rise to
\begin{align}
\log\left(\frac{p(y=1|x)}{p(y=0|x)}\right) &= -\frac{1}{2}(\mu_0+\mu_1)^T \Sigma^{-1}(\mu_1-\mu_0) \\
&\quad + \log\left(\frac{\pi_1}{\pi_0}\right) + x^T\Sigma^{-1}(\mu_1-\mu_0) \\
&= \alpha_0 + \alpha^Tx.
\end{align}
In logistic regression, this is the model by assumption:
$$\log\left(\frac{p(y=1|x)}{p(y=0|x)}\right) = \beta_0 + \beta^Tx.$$
The difference is in how the parameters are estimated: logistic regression chooses its parameters by maximizing the conditional likelihood of the class labels directly (i.e., minimizing classification error), rather than by fitting the class densities as LDA does.
```python
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
import numpy as np
from matplotlib import pyplot as plt
from matplotlib import colors
from sklearn.linear_model import LogisticRegression
from astroML.datasets import fetch_rrlyrae_combined
from astroML.utils import split_samples
from astroML.utils import completeness_contamination
#----------------------------------------------------------------------
# This function adjusts matplotlib settings for a uniform feel in the textbook.
# Note that with usetex=True, fonts are rendered with LaTeX. This may
# result in an error if LaTeX is not installed on your system. In that case,
# you can set usetex to False.
from astroML.plotting import setup_text_plots
setup_text_plots(fontsize=8, usetex=False)
#----------------------------------------------------------------------
# get data and split into training & testing sets
X, y = fetch_rrlyrae_combined()
X = X[:, [1, 0, 2, 3]] # rearrange columns for better 1-color results
(X_train, X_test), (y_train, y_test) = split_samples(X, y, [0.75, 0.25],
random_state=0)
N_tot = len(y)
N_st = np.sum(y == 0)
N_rr = N_tot - N_st
N_train = len(y_train)
N_test = len(y_test)
N_plot = 5000 + N_rr
#----------------------------------------------------------------------
# perform Classification
classifiers = []
predictions = []
Ncolors = np.arange(1, X.shape[1] + 1)
for nc in Ncolors:
clf = LogisticRegression(class_weight='auto')
clf.fit(X_train[:, :nc], y_train)
y_pred = clf.predict(X_test[:, :nc])
classifiers.append(clf)
predictions.append(y_pred)
completeness, contamination = completeness_contamination(predictions, y_test)
print "completeness", completeness
print "contamination", contamination
#------------------------------------------------------------
# Compute the decision boundary
clf = classifiers[1]
xlim = (0.7, 1.35)
ylim = (-0.15, 0.4)
xx, yy = np.meshgrid(np.linspace(xlim[0], xlim[1], 71),
np.linspace(ylim[0], ylim[1], 81))
print clf.intercept_
print clf.raw_coef_
Z = clf.predict_proba(np.c_[yy.ravel(), xx.ravel()])[:, 1]
Z = Z.reshape(xx.shape)
#----------------------------------------------------------------------
# plot the results
fig = plt.figure(figsize=(10, 5))
fig.subplots_adjust(bottom=0.15, top=0.95, hspace=0.0,
left=0.1, right=0.95, wspace=0.2)
# left plot: data and decision boundary
ax = fig.add_subplot(121)
im = ax.scatter(X[-N_plot:, 1], X[-N_plot:, 0], c=y[-N_plot:],
s=4, lw=0, cmap=plt.cm.binary, zorder=2)
im.set_clim(-0.5, 1)
im = ax.imshow(Z, origin='lower', aspect='auto',
cmap=plt.cm.binary, zorder=1,
extent=xlim + ylim)
im.set_clim(0, 2)
ax.contour(xx, yy, Z, [0.5], colors='k')
ax.set_xlim(xlim)
ax.set_ylim(ylim)
ax.set_xlabel('$u-g$')
ax.set_ylabel('$g-r$')
# plot completeness vs Ncolors
ax = fig.add_subplot(222)
ax.plot(Ncolors, completeness, 'o-k', ms=6)
ax.xaxis.set_major_locator(plt.MultipleLocator(1))
ax.yaxis.set_major_locator(plt.MultipleLocator(0.2))
ax.xaxis.set_major_formatter(plt.NullFormatter())
ax.set_ylabel('completeness')
ax.set_xlim(0.5, 4.5)
ax.set_ylim(-0.1, 1.1)
ax.grid(True)
# plot contamination vs Ncolors
ax = fig.add_subplot(224)
ax.plot(Ncolors, contamination, 'o-k', ms=6)
ax.xaxis.set_major_locator(plt.MultipleLocator(1))
ax.yaxis.set_major_locator(plt.MultipleLocator(0.2))
ax.xaxis.set_major_formatter(plt.FormatStrFormatter('%i'))
ax.set_xlabel('N colors')
ax.set_ylabel('contamination')
ax.set_xlim(0.5, 4.5)
ax.set_ylim(-0.1, 1.1)
ax.grid(True)
plt.show()
```
<h2>9.6 Support Vector Machines</h2>
The idea behind SVM is to find a hyperplane that maximizes the distance between the plane and the closest points of either class. This distance is referred to as the *margin* and data points on the margin are called *support vectors*. We begin with the assumption that the classes $y \in \{-1,1\}$ are linearly separable.
To find the hyperplane that maximizes the margin one must maximize
$$\max_{\beta_0,\beta}(m)\quad \text{subject to} \quad \frac{1}{\|\beta\|}y_i(\beta_0 + \beta^T x_i) \geq m \quad \forall i.$$
For any $\beta_0$ and $\beta$ that satisfy the above inequality, any positive scaled multiple works as well, so we can set $\|\beta\|=1/m$ and rewrite the problem as minimizing
$$\frac{1}{2}\|\beta\|\quad \text{subject to} \quad y_i(\beta_0 + \beta^T x_i) \geq 1 \quad \forall i.$$
In realistic problems, the assumption that classes are linearly separable must be relaxed. We add in *slack variables* $\xi_i$ and minimize
$$\frac{1}{2}\|\beta\|\quad \text{subject to} \quad y_i(\beta_0 + \beta^T x_i) \geq 1-\xi_i \quad \forall i.$$
We limit the slack by imposing constraints to effectively bound the number of misclassifications:
$$\xi_i \geq 0 \quad \text{and} \quad \displaystyle\sum_i \xi_i \leq C.$$
SVM optimization is equivalent to minimizing
$$\displaystyle\sum_{i=1}^N \max(0,1-y_ig(x_i)) + \lambda \|\beta\|^2,$$
where $\lambda$ is related to the tuning parameter C.
There is an equivalent way to write this problem using the inner products $\langle x_i,x_{i'}\rangle$ of $x_i$ and $x_{i'}$, which can be replaced with kernel functions $K(x_i,x_{i'})$ to make this method nonlinear [See Eq (9.36), (9.37), (9.38), and (9.42)].
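As an illustration of that kernel substitution (a sketch, not taken from the book's figure code), a Gaussian (RBF) kernel replaces each inner product $\langle x_i,x_{i'}\rangle$ with $\exp(-\gamma\|x_i - x_{i'}\|^2)$:
```python
import numpy as np
def rbf_kernel(X1, X2, gamma=1.0):
    """Gaussian (RBF) kernel matrix, K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    # squared Euclidean distances between every pair of rows
    d2 = (np.sum(X1**2, axis=1)[:, None]
          + np.sum(X2**2, axis=1)[None, :]
          - 2 * np.dot(X1, X2.T))
    return np.exp(-gamma * np.maximum(d2, 0))
X = np.random.RandomState(0).randn(5, 2)
K = rbf_kernel(X, X, gamma=20.0)   # gamma=20 matches the kernel SVM figure further below
print(K.shape)                     # (5, 5) Gram matrix with ones on the diagonal
```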
```python
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
import numpy as np
from matplotlib import pyplot as plt
from sklearn.svm import SVC
from astroML.decorators import pickle_results
from astroML.datasets import fetch_rrlyrae_combined
from astroML.utils import split_samples
from astroML.utils import completeness_contamination
#----------------------------------------------------------------------
# This function adjusts matplotlib settings for a uniform feel in the textbook.
# Note that with usetex=True, fonts are rendered with LaTeX. This may
# result in an error if LaTeX is not installed on your system. In that case,
# you can set usetex to False.
from astroML.plotting import setup_text_plots
setup_text_plots(fontsize=8, usetex=False)
#----------------------------------------------------------------------
# get data and split into training & testing sets
X, y = fetch_rrlyrae_combined()
X = X[:, [1, 0, 2, 3]] # rearrange columns for better 1-color results
# SVM takes several minutes to run, and is order[N^2]
# truncating the dataset can be useful for experimentation.
#X = X[::5]
#y = y[::5]
(X_train, X_test), (y_train, y_test) = split_samples(X, y, [0.75, 0.25],
random_state=0)
N_tot = len(y)
N_st = np.sum(y == 0)
N_rr = N_tot - N_st
N_train = len(y_train)
N_test = len(y_test)
N_plot = 5000 + N_rr
#----------------------------------------------------------------------
# Fit SVM
Ncolors = np.arange(1, X.shape[1] + 1)
@pickle_results('SVM_rrlyrae.pkl')
def compute_SVM(Ncolors):
classifiers = []
predictions = []
for nc in Ncolors:
# perform support vector classification
clf = SVC(kernel='linear', class_weight='auto')
clf.fit(X_train[:, :nc], y_train)
y_pred = clf.predict(X_test[:, :nc])
classifiers.append(clf)
predictions.append(y_pred)
return classifiers, predictions
classifiers, predictions = compute_SVM(Ncolors)
completeness, contamination = completeness_contamination(predictions, y_test)
print "completeness", completeness
print "contamination", contamination
#------------------------------------------------------------
# compute the decision boundary
clf = classifiers[1]
w = clf.coef_[0]
a = -w[0] / w[1]
yy = np.linspace(-0.1, 0.4)
xx = a * yy - clf.intercept_[0] / w[1]
#----------------------------------------------------------------------
# plot the results
fig = plt.figure(figsize=(10, 5))
fig.subplots_adjust(bottom=0.15, top=0.95, hspace=0.0,
left=0.1, right=0.95, wspace=0.2)
# left plot: data and decision boundary
ax = fig.add_subplot(121)
ax.plot(xx, yy, '-k')
im = ax.scatter(X[-N_plot:, 1], X[-N_plot:, 0], c=y[-N_plot:],
s=4, lw=0, cmap=plt.cm.binary, zorder=2)
im.set_clim(-0.5, 1)
ax.set_xlim(0.7, 1.35)
ax.set_ylim(-0.15, 0.4)
ax.set_xlabel('$u-g$')
ax.set_ylabel('$g-r$')
# plot completeness vs Ncolors
ax = fig.add_subplot(222)
ax.plot(Ncolors, completeness, 'o-k', ms=6)
ax.xaxis.set_major_locator(plt.MultipleLocator(1))
ax.yaxis.set_major_locator(plt.MultipleLocator(0.2))
ax.xaxis.set_major_formatter(plt.NullFormatter())
ax.set_ylabel('completeness')
ax.set_xlim(0.5, 4.5)
ax.set_ylim(-0.1, 1.1)
ax.grid(True)
# plot contamination vs Ncolors
ax = fig.add_subplot(224)
ax.plot(Ncolors, contamination, 'o-k', ms=6)
ax.xaxis.set_major_locator(plt.MultipleLocator(1))
ax.yaxis.set_major_locator(plt.MultipleLocator(0.2))
ax.xaxis.set_major_formatter(plt.FormatStrFormatter('%i'))
ax.set_xlabel('N colors')
ax.set_ylabel('contamination')
ax.set_xlim(0.5, 4.5)
ax.set_ylim(-0.1, 1.1)
ax.grid(True)
plt.show()
```
```python
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
import numpy as np
from matplotlib import pyplot as plt
from sklearn.svm import SVC
from sklearn import metrics
from astroML.datasets import fetch_rrlyrae_mags
from astroML.decorators import pickle_results
from astroML.datasets import fetch_rrlyrae_combined
from astroML.utils import split_samples
from astroML.utils import completeness_contamination
#----------------------------------------------------------------------
# This function adjusts matplotlib settings for a uniform feel in the textbook.
# Note that with usetex=True, fonts are rendered with LaTeX. This may
# result in an error if LaTeX is not installed on your system. In that case,
# you can set usetex to False.
from astroML.plotting import setup_text_plots
setup_text_plots(fontsize=8, usetex=False)
#----------------------------------------------------------------------
# get data and split into training & testing sets
X, y = fetch_rrlyrae_combined()
X = X[:, [1, 0, 2, 3]] # re-order the colors for better 1-color results
# SVM takes several minutes to run, and is order[N^2]
# truncating the dataset can be useful for experimentation.
#X = X[::5]
#y = y[::5]
(X_train, X_test), (y_train, y_test) = split_samples(X, y, [0.75, 0.25],
random_state=0)
N_tot = len(y)
N_st = np.sum(y == 0)
N_rr = N_tot - N_st
N_train = len(y_train)
N_test = len(y_test)
N_plot = 5000 + N_rr
#----------------------------------------------------------------------
# Fit Kernel SVM
Ncolors = np.arange(1, X.shape[1] + 1)
@pickle_results('kernelSVM_rrlyrae.pkl')
def compute_SVM(Ncolors):
classifiers = []
predictions = []
for nc in Ncolors:
# perform support vector classification
clf = SVC(kernel='rbf', gamma=20.0, class_weight='auto')
clf.fit(X_train[:, :nc], y_train)
y_pred = clf.predict(X_test[:, :nc])
classifiers.append(clf)
predictions.append(y_pred)
return classifiers, predictions
classifiers, predictions = compute_SVM(Ncolors)
completeness, contamination = completeness_contamination(predictions, y_test)
print "completeness", completeness
print "contamination", contamination
#------------------------------------------------------------
# compute the decision boundary
clf = classifiers[1]
xlim = (0.7, 1.35)
ylim = (-0.15, 0.4)
xx, yy = np.meshgrid(np.linspace(xlim[0], xlim[1], 101),
np.linspace(ylim[0], ylim[1], 101))
Z = clf.predict(np.c_[yy.ravel(), xx.ravel()])
Z = Z.reshape(xx.shape)
# smooth the boundary
from scipy.ndimage import gaussian_filter
Z = gaussian_filter(Z, 2)
#----------------------------------------------------------------------
# plot the results
fig = plt.figure(figsize=(10, 5))
fig.subplots_adjust(bottom=0.15, top=0.95, hspace=0.0,
left=0.1, right=0.95, wspace=0.2)
# left plot: data and decision boundary
ax = fig.add_subplot(121)
im = ax.scatter(X[-N_plot:, 1], X[-N_plot:, 0], c=y[-N_plot:],
s=4, lw=0, cmap=plt.cm.binary, zorder=2)
im.set_clim(-0.5, 1)
ax.contour(xx, yy, Z, [0.5], colors='k')
ax.set_xlim(xlim)
ax.set_ylim(ylim)
ax.set_xlabel('$u-g$')
ax.set_ylabel('$g-r$')
# plot completeness vs Ncolors
ax = fig.add_subplot(222)
ax.plot(Ncolors, completeness, 'o-k', ms=6)
ax.xaxis.set_major_locator(plt.MultipleLocator(1))
ax.yaxis.set_major_locator(plt.MultipleLocator(0.2))
ax.xaxis.set_major_formatter(plt.NullFormatter())
ax.set_ylabel('completeness')
ax.set_xlim(0.5, 4.5)
ax.set_ylim(-0.1, 1.1)
ax.grid(True)
ax = fig.add_subplot(224)
ax.plot(Ncolors, contamination, 'o-k', ms=6)
ax.xaxis.set_major_locator(plt.MultipleLocator(1))
ax.yaxis.set_major_locator(plt.MultipleLocator(0.2))
ax.xaxis.set_major_formatter(plt.FormatStrFormatter('%i'))
ax.set_xlabel('N colors')
ax.set_ylabel('contamination')
ax.set_xlim(0.5, 4.5)
ax.set_ylim(-0.1, 1.1)
ax.grid(True)
plt.show()
```
<h2>9.7 Decision Trees</h2>
Decision trees set up a hierarchical set of decision boundaries to classify data. Each node splits the data points it includes into two subsets, and this branching continues until predetermined stopping criteria are met. Branching decision boundaries are often based on one feature (axis aligned). Terminal (or leaf) nodes record the fraction of points they contain of each class, with the majority class determining the classification assigned to that node.
```python
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
import numpy as np
from matplotlib import pyplot as plt
from sklearn.tree import DecisionTreeRegressor
from astroML.datasets import fetch_sdss_specgals
#----------------------------------------------------------------------
# This function adjusts matplotlib settings for a uniform feel in the textbook.
# Note that with usetex=True, fonts are rendered with LaTeX. This may
# result in an error if LaTeX is not installed on your system. In that case,
# you can set usetex to False.
from astroML.plotting import setup_text_plots
setup_text_plots(fontsize=8, usetex=False)
#------------------------------------------------------------
# Fetch data and prepare it for the computation
data = fetch_sdss_specgals()
# put magnitudes in a matrix
mag = np.vstack([data['modelMag_%s' % f] for f in 'ugriz']).T
z = data['z']
# train on ~60,000 points
mag_train = mag[::10]
z_train = z[::10]
# test on ~6,000 separate points
mag_test = mag[1::100]
z_test = z[1::100]
#------------------------------------------------------------
# Compute the cross-validation scores for several tree depths
depth = np.arange(1, 21)
rms_test = np.zeros(len(depth))
rms_train = np.zeros(len(depth))
i_best = 0
z_fit_best = None
for i, d in enumerate(depth):
clf = DecisionTreeRegressor(max_depth=d, random_state=0)
clf.fit(mag_train, z_train)
z_fit_train = clf.predict(mag_train)
z_fit = clf.predict(mag_test)
rms_train[i] = np.mean(np.sqrt((z_fit_train - z_train) ** 2))
rms_test[i] = np.mean(np.sqrt((z_fit - z_test) ** 2))
if rms_test[i] <= rms_test[i_best]:
i_best = i
z_fit_best = z_fit
best_depth = depth[i_best]
#------------------------------------------------------------
# Plot the results
fig = plt.figure(figsize=(10, 5))
fig.subplots_adjust(wspace=0.25,
left=0.1, right=0.95,
bottom=0.15, top=0.9)
# first panel: cross-validation
ax = fig.add_subplot(121)
ax.plot(depth, rms_test, '-k', label='cross-validation')
ax.plot(depth, rms_train, '--k', label='training set')
ax.set_xlabel('depth of tree')
ax.set_ylabel('rms error')
ax.yaxis.set_major_locator(plt.MultipleLocator(0.01))
ax.set_xlim(0, 21)
ax.set_ylim(0.009, 0.04)
ax.legend(loc=1)
# second panel: best-fit results
ax = fig.add_subplot(122)
ax.scatter(z_test, z_fit_best, s=1, lw=0, c='k')
ax.plot([-0.1, 0.4], [-0.1, 0.4], ':k')
ax.text(0.04, 0.96, "depth = %i\nrms = %.3f" % (best_depth, rms_test[i_best]),
ha='left', va='top', transform=ax.transAxes)
ax.set_xlabel(r'$z_{\rm true}$')
ax.set_ylabel(r'$z_{\rm fit}$')
ax.set_xlim(-0.02, 0.4001)
ax.set_ylim(-0.02, 0.4001)
ax.xaxis.set_major_locator(plt.MultipleLocator(0.1))
ax.yaxis.set_major_locator(plt.MultipleLocator(0.1))
plt.show()
```
<h3>9.7.1 Defining the Split Criteria</h3>
We can consider a simple split criterion based on the entropy of the data defined (in $\S$5.2.2) as
$$E(x)=-\displaystyle\sum_i p_i(x)\ln(p_i(x)).$$
Here $p_i$ is the probability of class $i$ given the training data. We can define the reduction of entropy due to branching as the information gain (also known as the Kullback-Leibler divergence). With a binary split where $i=0$ for points below the decision boundary and $i=1$ for those above it, the information gain is
$$IG(x|x_i)=E(x)-\displaystyle\sum_{i=0}^1\frac{N_i}{N}E(x_i),$$
where $N_i$ is the number of points $x_i$ on the $i$th side of the split, and $E(x_i)$ is the entropy associated with that subset.
Other commonly used loss functions are the Gini coefficient ($\S4.7.2$) and the misclassification error. The Gini coefficient for a $k$-class sample is
$$G=\displaystyle\sum_i^k p_i(1-p_i).$$
Here $p_i$ is the probability of finding a point with class $i$ within a data set. The misclassification error is
$$MC=1-\displaystyle\max_i(p_i).$$
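These split criteria are easy to compute directly. The following is a minimal sketch (not from the book's figure code) that evaluates the entropy, Gini coefficient, misclassification error, and the information gain of a candidate binary split on a toy two-class sample; the function names and the synthetic data are illustrative only.
```python
import numpy as np

def entropy(labels):
    """E(x) = -sum_i p_i ln(p_i), with p_i the class fractions."""
    if len(labels) == 0:
        return 0.0
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

def gini(labels):
    """G = sum_i p_i (1 - p_i)."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return np.sum(p * (1 - p))

def misclassification(labels):
    """MC = 1 - max_i p_i."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1 - p.max()

def information_gain(x, labels, threshold):
    """Reduction in entropy from a binary split of feature x at `threshold`."""
    below, above = labels[x < threshold], labels[x >= threshold]
    N = len(labels)
    return entropy(labels) - (len(below) / N * entropy(below)
                              + len(above) / N * entropy(above))

# toy two-class sample with one feature
rng = np.random.RandomState(0)
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 100)])
y = np.concatenate([np.zeros(100), np.ones(100)])
print(entropy(y), gini(y), misclassification(y))
print(information_gain(x, y, threshold=1.5))
```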
<h3>9.7.2 Building the Tree</h3>
Where do you stop? There are several ways to find stopping criteria so as to capture trends while avoiding capturing the noise of the training set. Common options include the following:
- If a node contains only one class of object
- If splitting does not supply information gain or reduce misclassifications
- If a node contains a predefined number of points
Overall tree depth can be chosen with a cross-validation set, by finding the depth at which the model complexity starts to overfit the data.
Another option is pruning: a method in which the tree is grown until nodes contain a small predefined number of points, and cross-validation is then used to decide which branching nodes to remove.
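As a concrete illustration of these stopping rules, the following is a minimal scikit-learn sketch (separate from the SDSS example elsewhere in this chapter) that fixes a leaf-size stopping criterion and selects the overall depth by cross-validation; the synthetic data set and parameter values are illustrative only.
```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# min_samples_leaf acts as the "predefined number of points" stopping rule;
# the depth is chosen where the cross-validated score stops improving
depths = np.arange(1, 16)
scores = [cross_val_score(DecisionTreeClassifier(max_depth=d,
                                                 min_samples_leaf=5,
                                                 random_state=0),
                          X, y, cv=5).mean()
          for d in depths]
print("best depth:", depths[np.argmax(scores)])
```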
<h3>9.7.3 Bagging and Random Forests</h3>
Ensemble learning (the idea of taking outputs of multiple models and combining them) gives us two methods of improving our classification. In bagging, the predictive results of a series of bootstrap samples (see $\S4.5$) are averaged. For a sample of $N$ points in a training set, bagging generates $K$ bootstrap samples of equal size to estimate $f_i(x)$. The final bagging estimator is
$$f(x)=\frac{1}{K} \displaystyle\sum_i^K f_i(x).$$
Random forests generate a decision tree for each of a series of bootstrap samples. To generate a forest we need to define the number of trees $n$ to use and the number of features $m$ on which to decide a boundary at each node. The features on which to split are chosen randomly at each node. Random forests address both the overfitting of deep trees and the complicated, nonlinear boundaries in data sets that a single decision tree handles poorly.
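Before the random forest example below, here is a minimal sketch of the bagging average itself, written out by hand on a synthetic 1-D regression problem (not the SDSS photometric data); the sample size, number of bootstrap samples, and tree depth are illustrative only.
```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
x = np.linspace(0, 10, 200)[:, None]
y = np.sin(x).ravel() + 0.3 * rng.randn(200)

K = 50                                     # number of bootstrap samples
predictions = []
for k in range(K):
    idx = rng.randint(0, len(x), len(x))   # bootstrap resample with replacement
    tree = DecisionTreeRegressor(max_depth=8, random_state=k).fit(x[idx], y[idx])
    predictions.append(tree.predict(x))

f_bagged = np.mean(predictions, axis=0)    # f(x) = (1/K) sum_i f_i(x)
```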
```python
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
import numpy as np
from matplotlib import pyplot as plt
from sklearn.ensemble import RandomForestRegressor
from astroML.datasets import fetch_sdss_specgals
from astroML.decorators import pickle_results
#----------------------------------------------------------------------
# This function adjusts matplotlib settings for a uniform feel in the textbook.
# Note that with usetex=True, fonts are rendered with LaTeX. This may
# result in an error if LaTeX is not installed on your system. In that case,
# you can set usetex to False.
from astroML.plotting import setup_text_plots
setup_text_plots(fontsize=8, usetex=False)
#------------------------------------------------------------
# Fetch and prepare the data
data = fetch_sdss_specgals()
# put magnitudes in a matrix
mag = np.vstack([data['modelMag_%s' % f] for f in 'ugriz']).T
z = data['z']
# train on ~60,000 points
mag_train = mag[::10]
z_train = z[::10]
# test on ~6,000 distinct points
mag_test = mag[1::100]
z_test = z[1::100]
#------------------------------------------------------------
# Compute the results
# This is a long computation, so we'll save the results to a pickle.
@pickle_results('photoz_forest.pkl')
def compute_photoz_forest(depth):
rms_test = np.zeros(len(depth))
rms_train = np.zeros(len(depth))
i_best = 0
z_fit_best = None
for i, d in enumerate(depth):
clf = RandomForestRegressor(n_estimators=10,
max_depth=d, random_state=0)
clf.fit(mag_train, z_train)
z_fit_train = clf.predict(mag_train)
z_fit = clf.predict(mag_test)
rms_train[i] = np.mean(np.sqrt((z_fit_train - z_train) ** 2))
rms_test[i] = np.mean(np.sqrt((z_fit - z_test) ** 2))
if rms_test[i] <= rms_test[i_best]:
i_best = i
z_fit_best = z_fit
return rms_test, rms_train, i_best, z_fit_best
depth = np.arange(1, 21)
rms_test, rms_train, i_best, z_fit_best = compute_photoz_forest(depth)
best_depth = depth[i_best]
#------------------------------------------------------------
# Plot the results
fig = plt.figure(figsize=(10, 5))
fig.subplots_adjust(wspace=0.25,
left=0.1, right=0.95,
bottom=0.15, top=0.9)
# left panel: plot cross-validation results
ax = fig.add_subplot(121)
ax.plot(depth, rms_test, '-k', label='cross-validation')
ax.plot(depth, rms_train, '--k', label='training set')
ax.legend(loc=1)
ax.set_xlabel('depth of tree')
ax.set_ylabel('rms error')
ax.set_xlim(0, 21)
ax.set_ylim(0.009, 0.04)
ax.yaxis.set_major_locator(plt.MultipleLocator(0.01))
# right panel: plot best fit
ax = fig.add_subplot(122)
ax.scatter(z_test, z_fit_best, s=1, lw=0, c='k')
ax.plot([-0.1, 0.4], [-0.1, 0.4], ':k')
ax.text(0.03, 0.97, "depth = %i\nrms = %.3f" % (best_depth, rms_test[i_best]),
ha='left', va='top', transform=ax.transAxes)
ax.set_xlabel(r'$z_{\rm true}$')
ax.set_ylabel(r'$z_{\rm fit}$')
ax.set_xlim(-0.02, 0.4001)
ax.set_ylim(-0.02, 0.4001)
ax.xaxis.set_major_locator(plt.MultipleLocator(0.1))
ax.yaxis.set_major_locator(plt.MultipleLocator(0.1))
plt.show()
```
<h3>9.7.4 Boosting Classification</h3>
Boosting is another ensemble method that iterates over a sample, changing the weights of the data points after each iteration in an attempt to correct the errors made in the previous iteration. If we have a "weak" classifier $h(x)$ we can make a "strong" classifier $f(x)$, such that
$$f(x)=\displaystyle\sum_{m=1}^K \theta_m h_m(x),$$
where $m$ indicates the iteration and $\theta_m$ is the weight of that iteration of the classifier.
For a data set of $N$ points with known classifications $y$, we can assign a weight $w_m(x)$ to each point (with the initial weights uniform, $1/N$). After application of the weak classifier, we estimate the classification error as
$$e_m = \displaystyle\sum_{i=1}^N w_m(x_i) \times \begin{cases} 1 & h_m(x_i) \neq y_i, \\ 0 & \text{otherwise.}\end{cases}$$
We then define the weight of that iteration as
$$\theta_m = \frac{1}{2}\log{\left(\frac{1-e_m}{e_m}\right)},$$
and update the weight of each point:
$$w_{m+1}(x_i) = w_m(x_i) \times \begin{cases} e^{-\theta_m} & h_m(x_i) = y_i \\ e^{\theta_m} & h_m(x_i) \neq y_i\end{cases}
= \frac{w_m(x_i)e^{-\theta_m y_i h_m(x_i)}}{\sum_{i=1}^N w_m(x_i) e^{-\theta_m y_i h_m(x_i)}}.$$
Since each iteration depends on the previous one, parallelization is not possible here. This can lead to computation-time problems with large data sets. An alternative to adaptive boosting is gradient boosting, which may scale better to larger data.
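The weight-update equations above can be written out directly. Below is a minimal sketch of adaptive boosting with decision stumps on a synthetic data set (this is not the astroML figure code, and scikit-learn's AdaBoostClassifier provides an off-the-shelf implementation); the data set and number of iterations are illustrative only.
```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
y = 2 * y - 1                          # labels in {-1, +1}

N, K = len(y), 20
w = np.full(N, 1.0 / N)                # uniform initial weights w_1(x_i) = 1/N
stumps, thetas = [], []

for m in range(K):
    h = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
    pred = h.predict(X)
    e = np.sum(w * (pred != y))                    # weighted error e_m
    theta = 0.5 * np.log((1 - e) / e)              # iteration weight theta_m
    w = w * np.exp(-theta * y * pred)              # up-weight misclassified points
    w /= w.sum()                                   # renormalize
    stumps.append(h)
    thetas.append(theta)

# strong classifier f(x) = sign( sum_m theta_m h_m(x) )
f = np.sign(sum(t * h.predict(X) for t, h in zip(thetas, stumps)))
print("training accuracy:", np.mean(f == y))
```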
<h2>9.8 Evaluating Classifiers: ROC Curves</h2>
Receiver operating characteristic (ROC) curves are useful for visualizing how well classification methods work, and they conveniently show the trade-off between completeness and efficiency. ROC curves plot the true-positive rate against the false-positive rate, though this may not be the most useful view when the data include a large number of background points; in that case a completeness-efficiency plot (the right panel in the figure below) can be more informative.
```python
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
import numpy as np
from matplotlib import pyplot as plt
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA   # was sklearn.lda.LDA
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis as QDA  # was sklearn.qda.QDA
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from astroML.classification import GMMBayes
from sklearn.metrics import precision_recall_curve, roc_curve
from astroML.utils import split_samples, completeness_contamination
from astroML.datasets import fetch_rrlyrae_combined
#----------------------------------------------------------------------
# This function adjusts matplotlib settings for a uniform feel in the textbook.
# Note that with usetex=True, fonts are rendered with LaTeX. This may
# result in an error if LaTeX is not installed on your system. In that case,
# you can set usetex to False.
from astroML.plotting import setup_text_plots
setup_text_plots(fontsize=8, usetex=False)
#----------------------------------------------------------------------
# get data and split into training & testing sets
X, y = fetch_rrlyrae_combined()
y = y.astype(int)
(X_train, X_test), (y_train, y_test) = split_samples(X, y, [0.75, 0.25],
random_state=0)
#------------------------------------------------------------
# Fit all the models to the training data
def compute_models(*args):
names = []
probs = []
for classifier, kwargs in args:
        print(classifier.__name__)
clf = classifier(**kwargs)
clf.fit(X_train, y_train)
y_probs = clf.predict_proba(X_test)[:, 1]
names.append(classifier.__name__)
probs.append(y_probs)
return names, probs
names, probs = compute_models((GaussianNB, {}),
(LDA, {}),
(QDA, {}),
(LogisticRegression,
                               dict(class_weight='balanced')),  # 'auto' was renamed 'balanced' in newer scikit-learn
(KNeighborsClassifier,
dict(n_neighbors=10)),
(DecisionTreeClassifier,
dict(random_state=0, max_depth=12,
criterion='entropy')),
(GMMBayes, dict(n_components=3, min_covar=1E-5,
covariance_type='full')))
#------------------------------------------------------------
# Plot ROC curves and completeness/efficiency
fig = plt.figure(figsize=(10, 5))
fig.subplots_adjust(left=0.1, right=0.95, bottom=0.15, top=0.9, wspace=0.25)
# ax2 will show roc curves
ax1 = plt.subplot(121)
# ax1 will show completeness/efficiency
ax2 = plt.subplot(122)
labels = dict(GaussianNB='GNB',
              LinearDiscriminantAnalysis='LDA',
              QuadraticDiscriminantAnalysis='QDA',
              KNeighborsClassifier='KNN',
              DecisionTreeClassifier='DT',
              GMMBayes='GMMB',
              LogisticRegression='LR')
thresholds = np.linspace(0, 1, 1001)[:-1]
# iterate through and show results
for name, y_prob in zip(names, probs):
fpr, tpr, thresh = roc_curve(y_test, y_prob)
# add (0, 0) as first point
fpr = np.concatenate([[0], fpr])
tpr = np.concatenate([[0], tpr])
ax1.plot(fpr, tpr, label=labels[name])
comp = np.zeros_like(thresholds)
cont = np.zeros_like(thresholds)
for i, t in enumerate(thresholds):
y_pred = (y_prob >= t)
comp[i], cont[i] = completeness_contamination(y_pred, y_test)
ax2.plot(1 - cont, comp, label=labels[name])
ax1.set_xlim(0, 0.04)
ax1.set_ylim(0, 1.02)
ax1.xaxis.set_major_locator(plt.MaxNLocator(5))
ax1.set_xlabel('false positive rate')
ax1.set_ylabel('true positive rate')
ax1.legend(loc=4)
ax2.set_xlabel('efficiency')
ax2.set_ylabel('completeness')
ax2.set_xlim(0, 1.0)
ax2.set_ylim(0.2, 1.02)
plt.show()
```
<h2>9.9 Which Classifier Should I Use?</h2>
Here's how the different methods perform in different areas.
<h4>Accuracy:</h4>
In general, it can't be stated what will be most accurate, but there are a few rules of thumb.
- More parameters -> potentially more accurate
- Ensemble methods are often more accurate
- Nonparametric methods are usually more accurate for large data sets
- Parametric methods can be more accurate for small or high-dimensional data sets
<h4>Interpretability:</h4>
Parametric models are often the most interpretable methods, though some nonparametric models (KNN, decision trees) are still interpretable.
<h4>Scalability:</h4>
Naive Bayes (and its variants) is by far the cheapest to compute, with logistic regression and linear support vector machines next on the list. The most scalable methods are usually those that produce simpler classification models.
<h4>Simplicity:</h4>
Naive Bayes takes the cake here. Other simple methods include logistic regression, decision trees, and K nearest neighbors.
*Table 9.1 in the book has a nice comparison of the classification methods*.
*Source notebook: Chapter9/chapter9.5-9.ipynb (dkirkby/astroml-study, MIT license).*
# Simplified Arm Model
## Introduction
This notebook presents the analytical derivations of the equations of motion for
three degrees of freedom and nine muscles arm model, some of them being
bi-articular, appropriately constructed to demonstrate both kinematic and
dynamic redundancy (e.g. $d < n < m$). The model is inspired from [1] with some
minor modifications and improvements.
## Model Constants
Abbreviations:
- DoFs: Degrees of Freedom
- EoMs: Equations of Motion
- KE: Kinetic Energy
- PE: Potential Energy
- CoM: center of mass
The following constants are used in the model:
- $m$ mass of a segment
- $I_{z_i}$ inertia around $z$-axis
- $L_i$ length of a segment
- $L_{c_i}$ length of the CoM as defined in local frame of a body
- $a_i$ muscle origin point as defined in the local frame of a body
- $b_i$ muscle insertion point as defined in the local frame of a body
- $g$ gravity
- $q_i$ are the generalized coordinates
- $u_i$ are the generalized speeds
- $\tau$ are the generalized forces
Please note that there are some differences from [1]: 1) $L_{g_i} \rightarrow
L_{c_i}$, 2) $a_i$ is always the muscle origin, 3) $b_i$ is always the muscle
insertion and 4) we don't use double indexing for the bi-articular muscles.
```python
# notebook general configuration
%load_ext autoreload
%autoreload 2
# imports and utilities
import sympy as sp
from IPython.display import display, Image
sp.interactive.printing.init_printing()
import logging
logging.basicConfig(level=logging.INFO)
# plot
%matplotlib inline
from matplotlib.pyplot import *
rcParams['figure.figsize'] = (10.0, 6.0)
# utility for displaying intermediate results
enable_display = True
def disp(*statement):
if (enable_display):
display(*statement)
```
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
```python
# construct model
from model import ArmModel
model = ArmModel(use_gravity=1, use_coordinate_limits=0, use_viscosity=0)
disp(model.constants)
```
## Dynamics
The simplified arm model has three DoFs and nine muscles, some of them being
bi-articular. The analytical expressions of the EoMs form is given by
\begin{equation}\label{equ:eom-standard-form}
M(q) \ddot{q} + C(q, \dot{q})\dot{q} + \tau_g(q) = \tau
\end{equation}
where $M \in \Re^{n \times n}$ represents the inertia mass matrix, $n$ the DoFs
of the model, $q, \dot{q}, \ddot{q} \in \Re^{n}$ the generalized coordinates and
their derivatives, $C \in \Re^{n \times n}$ the Coriolis and centrifugal matrix,
$\tau_g \in \Re^{n}$ the gravity contribution and $\tau$ the specified
generalized forces.
As the model is an open kinematic chain a simple procedure to derive the EoMs
can be followed. Assuming that the spatial velocity (translational, rotational)
of each body segment is given by $u_b = [v, \omega]^T \in \Re^{6 \times 1}$, the
KE of the system in body local coordinates is defined as
\begin{equation}\label{equ:spatial-ke}
K = \frac{1}{2} \sum\limits_{i=1}^{n_b} (m_i v_i^2 + I_i \omega_i^2) =
\frac{1}{2} \sum\limits_{i=1}^{n_b} u_i^T M_i u_i
\end{equation}
where $M_i = diag(m_i, m_i, m_i, [I_i]_{3 \times 3}) \in \Re^{6 \times 6}$
denotes the spatial inertia mass matrix, $m_i$ the mass and $I_i \in \Re^{3
\times 3}$ the inertia matrix of body $i$. The spatial quantities are related
to the generalized coordinates by the body Jacobian $u_b = J_b \dot{q}, \; J_b
\in \Re^{6 \times n}$. The total KE is coordinate invariant, thus it can be
expressed in different coordinate system
\begin{equation}\label{equ:ke-transformation}
K = \frac{1}{2} \sum\limits_{i=1}^{n_b} q^T J_i^T M_i J_i q
\end{equation}
Following the above definition, the inertia mass matrix of the system can be
written as
\begin{equation}\label{equ:mass-matrix}
M(q) = \sum\limits_{i=1}^{n_b} J_i^T M_i J_i
\end{equation}
Furthermore, the Coriolis and centrifugal forces $C(q, \dot{q}) \dot{q}$ can be
determined directly from the inertia mass matrix
\begin{equation}\label{equ:coriolis-matrix}
C_{ij}(q, \dot{q}) = \sum\limits_{k=1}^{n} \Gamma_{ijk} \; \dot{q}_k, \; i, j
\in [1, \dots n], \;
\Gamma_{ijk} = \frac{1}{2} (
\frac{\partial M_{ij}(q)}{\partial q_k} +
\frac{\partial M_{ik}(q)}{\partial q_j} -
\frac{\partial M_{kj}(q)}{\partial q_i})
\end{equation}
where the functions $\Gamma_{ijk}$ are called the Christoffel symbols. The
gravity contribution can be determined from the PE function
\begin{equation}\label{equ:gravity-pe}
\begin{gathered}
g(q) = \frac{\partial V(q)}{\partial q}, \; V(q) = \sum\limits_{i=1}^{n_b} m_i g h_i(q)
\end{gathered}
\end{equation}
where $h_i(q)$ denotes the vertical displacement of body $i$ with respect to the
ground. In this derivation we chose to collect all forces that act on the system
in the term $f(q, \dot{q})$.
```python
# define the spatial coordinates for the CoM in terms of Lcs' and q's
disp(model.xc[1:])
# define CoM spatial velocities
disp(model.vc[1:])
#define CoM Jacobian
disp(model.Jc[1:])
```
```python
# generate the inertial mass matrix
M = model.M
for i in range(0, M.shape[0]):
for j in range(0, M.shape[1]):
disp('M_{' + str(i + 1) + ',' + str(j + 1) + '} = ', M[i, j])
```
```python
# total forces from Coriolis, centrifugal and gravity terms
f = model.f
for i in range(0, f.shape[0]):
disp('f_' + str(i + 1) + ' = ', f[i])
```
## Muscle Moment Arm
The muscle forces $f_m$ are transformed into joint space generalized forces
($\tau$) by the moment arm matrix ($\tau = -R^T f_m$). For a n-lateral polygon
it can be shown that the derivative of the side length with respect to the
opposite angle is the moment arm component. As a consequence, when expressing
the muscle length as a function of the generalized coordinates of the model, the
moment arm matrix is evaluated by $R = \frac{\partial l_{mt}}{\partial q}$. The
analytical expressions of the EoMs following our convention are provided below
\begin{equation}\label{equ:eom-notation}
\begin{gathered}
M(q) \ddot{q} + f(q, \dot{q}) = \tau \\
\tau = -R^T(q) f_m
\end{gathered}
\end{equation}
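The moment arm therefore follows from differentiating the muscle length with respect to the generalized coordinates. The standalone SymPy sketch below is not the model's actual muscle geometry (the origin and insertion points are made up) and only illustrates how $R = \partial l_{mt} / \partial q$ can be obtained for a single bi-articular muscle path.
```python
import sympy as sp

q1, q2 = sp.symbols('q1 q2')
L1, a, b = sp.symbols('L1 a b', positive=True)

# hypothetical muscle path from a fixed origin to an insertion on the second link
origin = sp.Matrix([-a, 0])
insertion = sp.Matrix([L1 * sp.cos(q1) + b * sp.cos(q1 + q2),
                       L1 * sp.sin(q1) + b * sp.sin(q1 + q2)])
lmt = (insertion - origin).norm()                    # muscle length l_mt(q)

# moment arm: one row of R per muscle, one column per coordinate
R = sp.Matrix([lmt]).jacobian(sp.Matrix([q1, q2]))
```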
```python
# assert that moment arm is correctly evaluated
# model.test_muscle_geometry() # slow
# muscle length
disp('l_m = ', model.lm)
# moment arm
disp('R = ', model.R)
```
```python
# draw model
fig, ax = subplots(1, 1, figsize=(10, 10), frameon=False)
model.draw_model([60, 70, 50], True, ax, 1, False)
fig.tight_layout()
fig.savefig('results/arm_model.pdf', dpi=600, format='pdf',
transparent=True, pad_inches=0, bbox_inches='tight')
```
[1] K. Tahara, Z. W. Luo, and S. Arimoto, “On Control Mechanism of Human-Like
Reaching Movements with Musculo-Skeletal Redundancy,” in International
Conference on Intelligent Robots and Systems, 2006, pp. 1402–1409.
*Source notebook: arm_model/model.ipynb (mitkof6/musculoskeletal-redundancy, CC-BY-4.0 license).*
# Get Unique Lambdas from H (plus other restrictions)
The only objective of this notebook is to get a set of unique lambdas from H plus some extra assumptions.
Numerical methods would be fine, though riskier (since a random start could give different lambdas, bringing instability to the GMM maximization procedure)
+ $Z$ = firm data
+ $\theta$ = deep parameters
+ $H = g(Z'\theta)$, $Eβ = m(Z'\theta)$
Thus, for a given $\theta$ guess (GMM trial), we have a fixed $(H, Eβ)$. This is our starting point, and from it we should obtain the $\lambda$s (hopefully just one set!)
```python
from scipy.stats import entropy
from scipy import optimize
import numpy as np
import sympy as sp
sp.init_printing()
```
```python
p1, p2, p3 = sp.symbols('p1 p2 p3')
h_sp = -(p1*sp.log(p1) + p2*sp.log(p2) + (1 - p1 - p2)*sp.log(1 - p1 - p2))
sp.simplify(sp.diff(h_sp, p2))
```
```python
b1, b2, b3 = sp.symbols('b1 b2 b3')
eb_sp = b1 *p1 + b2*p2 + b3*(1-p1-p2)
sp.simplify(sp.diff(eb_sp, p2))
```
b2 - b3
```python
m = np.array([[2, 3, 4], [2, 1, 2]])
m
```
array([[2, 3, 4],
[2, 1, 2]])
```python
m.shape
```
(2, 3)
```python
Eβ = 1.2
βs = [0.7, 1.1, 1.5] # Corresponding to each lambda
H = 0.95
#θ = 0.1
def my_entropy(p):
return -np.sum(p * np.log(p))
def x_to_lambdas(x):
return [x[0], x[1], 1 - x[0] - x[1]]
# Set of lambdas that solve two equations
def fun(x):
lambdas = x_to_lambdas(x)
return [my_entropy(lambdas) - H,
np.dot(βs, lambdas) - Eβ]
#sol = optimize.root(fun, [0.2, 0.1]) --> [0.311, 0.12, 0.56]
sol = optimize.root(fun, [0.1, 0.4])# --> [0.111, 0.52, 0.36]
print(sol.message)
lambdas_sol = x_to_lambdas(sol.x)
print(lambdas_sol)
print("Values: ", H, Eβ)
entropy(lambdas_sol), np.dot(βs, lambdas_sol)
```
The solution converged.
[0.11150149216228898, 0.5269970156754217, 0.36150149216228933]
Values: 0.95 1.2
(0.9500000000000002, 1.2000000000000002)
```python
#With a jacobian
def jac(x):
dh_dx = np.array([-np.log(x[0]), -np.log(x[1])]) + np.log(1-x[0]-x[1])
deb_dx = np.array([βs[0], βs[1]]) - βs[2]
return np.array([dh_dx, deb_dx])
sol = optimize.root(fun, [0.8, 0.1], jac=jac)# --> [0.111, 0.52, 0.36]
print(sol.message)
lambdas_sol = x_to_lambdas(sol.x)
print(lambdas_sol)
print("Values: ", H, Eβ)
entropy(lambdas_sol), np.dot(βs, lambdas_sol)
```
```python
# Set of lambdas that solve just the H equation
def fun(x):
lambdas = x_to_lambdas(x)
return [entropy(lambdas) - H, 0.]
sol = optimize.root(fun, [0.1, 0.05])
lambdas_sol = x_to_lambdas(sol.x)
print(sol.message)
print(lambdas_sol)
print("True H: ", H, " . Obtained: ", my_entropy(lambdas_sol))
```
The solution converged.
[0.4493011313143122, 0.10072317846500357, 0.4499756902206842]
True H: 0.95 . Obtained: 0.95
## Reparametrise probabilities so they are between 0 and 1
$$ p = {\exp(x) \over 1 + \exp(x)}$$
```python
x1, x2, x3 = sp.symbols('x1 x2 x3')
#p1 = sp.exp(x1) / (1 + sp.exp(x1))
#p2 = sp.exp(x2) / (1 + sp.exp(x2))
p1 = 1 / (1 + sp.exp(-x1))
p2 = 1 / (1 + sp.exp(-x2))
h_sp = sp.simplify(-(p1*sp.log(p1) + p2*sp.log(p2) + (1 - p1 - p2)*sp.log(1 - p1 - p2)))
sp.simplify(sp.diff(h_sp, x1))
```
```python
sp.simplify(sp.diff(h_sp, x2))
```
```python
b1, b2, b3 = sp.symbols('b1 b2 b3')
eb_sp = b1 *p1 + b2*p2 + b3*(1-p1-p2)
sp.diff(eb_sp, x1)
```
```python
sp.diff(eb_sp, x2)
```
```python
def logit(p):
return np.log(p / (1 - p))
def x_to_p(x):
""" inverse logit"""
return np.e**(x) / (1 + np.e**(x))
def fun(x):
lambdas = x_to_lambdas(x_to_p(x))
return [my_entropy(lambdas) - H,
np.dot(βs, lambdas) - Eβ]
def jac(x):
block = np.log( (1 - np.e**(x[0] + x[1]) ) / (np.e**(x[0])+np.e**(x[1])+np.e**(x[0]+x[1]) + 1))
num0 = (-np.log( np.e**x[0] / ( np.e**x[0] + 1)) + block )*np.e**x[0]
den0 = np.e**(2*x[0]) + 2*np.e**(x[0]) + 1
num1 =(-np.log( np.e**x[1] / ( np.e**x[1] + 1)) + block )*np.e**x[1]
den1 =np.e**(2*x[1]) + 2*np.e**(x[1]) + 1
dh_dx = np.array([num0/den0, num1/den1])
deb_0 = ((βs[0] - βs[2])*np.e**(-x[0])) / (1 + np.e**(-x[0]))**2
deb_1 = ((βs[1] - βs[2])*np.e**(-x[1])) / (1 + np.e**(-x[1]))**2
deb_dx = np.array([deb_0, deb_1])
return np.array([dh_dx, deb_dx])
sol = optimize.root(fun, logit(np.array([0.1, 0.01])), jac=jac)
print(sol.message)
lambdas_sol = x_to_lambdas(x_to_p(sol.x))
print(lambdas_sol)
print("Values: ", H, Eβ)
my_entropy(lambdas_sol), np.dot(βs, lambdas_sol)
```
```python
```
```python
%matplotlib inline
import matplotlib.pyplot as plt
x = np.linspace(-100, 100, 1000)
y = np.e**x / ( 1 + np.e**x)
y2 = y = 1 / ( 1 + np.e**(-x))
plt.plot(x, y)
plt.plot(x, y2)
plt.plot(x, y -y2, label="hola")
plt.legend()
```
```python
%matplotlib inline
import matplotlib.pyplot as plt
y = 1 / ( 1 + np.e**(-x))
plt.plot(x, y)
```
```python
```
*Source notebook: Notebooks/estimation/unique_lambdas_from_H.ipynb (cdagnino/LearningModels, Apache-2.0 license).*
# 1-D Diffusion equation
$$\frac{\partial u}{\partial t}= \nu \frac{\partial^2 u}{\partial x^2}$$
```python
# needed imports
from numpy import zeros, ones, linspace, zeros_like
from matplotlib.pyplot import plot, show
%matplotlib inline
```
```python
# Initial condition
import numpy as np
u0 = lambda x: np.exp(-(x-.5)**2/.05**2)
grid = linspace(0., 1., 401)
u = u0(grid)
plot(grid, u) ; show()
```
### Time scheme
$$\frac{u^{n+1}-u^n}{\Delta t} - \nu \partial_{xx} u^{n+1} = 0 $$
$$ \left(I - \nu \Delta t \partial_{xx} \right) u^{n+1} = u^n $$
### Weak formulation
$$
\langle v, u^{n+1} \rangle + \nu \Delta t ~ \langle \partial_x v, \partial_x u^{n+1} \rangle = \langle v, u^n \rangle
$$
expending $u^n$ over the fem basis, we get the linear system
$$A U^{n+1} = M U^n$$
where
$$
M_{ij} = \langle b_i, b_j \rangle
$$
$$
A_{ij} = \langle b_i, b_j \rangle + \nu \Delta t ~ \langle \partial_x b_i, \partial_x b_j \rangle
$$
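Once the matrices are assembled, each time step amounts to solving the linear system $A U^{n+1} = M U^n$. The sketch below is a minimal version of that loop, assuming `M` and `A` are already-assembled SciPy sparse matrices and `u0` holds the initial spline coefficients (the sections below assemble them with SymPDE/Psydac and use an iterative solver instead of a direct one).
```python
import numpy as np
from scipy.sparse.linalg import spsolve

def evolve(A, M, u0, niter):
    """March the scheme: solve A u^{n+1} = M u^n at every time step."""
    un = np.asarray(u0).copy()
    for _ in range(niter):
        un = spsolve(A.tocsr(), M.dot(un))
    return un
```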
## Abstract Model using SymPDE
```python
from sympde.core import Constant
from sympde.expr import BilinearForm, LinearForm, integral
from sympde.topology import ScalarFunctionSpace, Line, element_of, dx
from sympde.topology import dx1 # TODO: this is a bug right now
```
```python
# ... abstract model
domain = Line()
V = ScalarFunctionSpace('V', domain)
x = domain.coordinates
u,v = [element_of(V, name=i) for i in ['u', 'v']]
nu = Constant('nu')
dt = Constant('dt')
# bilinear form
# expr = v*u - c*dt*dx(v)*u # TODO BUG not working
expr = v*u + nu*dt*dx1(v)*dx1(u)
a = BilinearForm((u,v), integral(domain , expr))
# bilinear form for the mass matrix
expr = u*v
m = BilinearForm((u,v), integral(domain , expr))
# linear form for initial condition
from sympy import exp
expr = exp(-(x-.5)**2/.05**2)*v
l = LinearForm(v, integral(domain, expr))
```
## Discretization using Psydac
```python
from psydac.api.discretization import discretize
```
```python
nu = 0.3 # viscosity
T = 0.02 # T final time
dt = 0.001
niter = int(T / dt)
degree = [3] # spline degree
ncells = [64] # number of elements
```
```python
# Create computational domain from topological domain
domain_h = discretize(domain, ncells=ncells, comm=None)
# Discrete spaces
Vh = discretize(V, domain_h, degree=degree)
# Discretize the bilinear forms
ah = discretize(a, domain_h, [Vh, Vh])
mh = discretize(m, domain_h, [Vh, Vh])
# Discretize the linear form for the initial condition
lh = discretize(l, domain_h, Vh)
```
```python
# assemble matrices and convert them to scipy
M = mh.assemble().tosparse()
A = ah.assemble(nu=nu, dt=dt).tosparse()
# assemble the rhs and convert it to numpy array
rhs = lh.assemble().toarray()
```
```python
from scipy.sparse.linalg import cg, gmres
```
```python
# L2 projection of the initial condition
un, status = cg(M, rhs, tol=1.e-8, maxiter=5000)
```
```python
from simplines import plot_field_1d
plot_field_1d(Vh.knots[0], Vh.degree[0], un, nx=401)
```
```python
for i in range(0, niter):
b = M.dot(un)
un, status = gmres(A, b, tol=1.e-8, maxiter=5000)
```
```python
plot_field_1d(Vh.knots[0], Vh.degree[0], un, nx=401)
```
```python
```
*Source notebook: lessons/Chapter2/02_diffusion_1d.ipynb (ratnania/IGA-Python, MIT license).*
## autodiff32 Documentation
## Introduction
Differentiation, or the process of finding a derivative, is an extremely important mathematical operation with a wide range of applications. The discovery of extrema or zeros of functions is essential in any optimization problem, and the solving of differential equations is fundamental to modern science and engineering. Differentiation is essential in nearly all quantitative disciplines: physicists may take the derivative of the displacement of a moving object with respect to time in order to find the velocity of that object, and data scientists may use derivatives when optimizing weights in a neural network.
Naturally, we would like to compute the derivative as accurately and efficiently as possible. Two classical methods of calculating the derivative have clear shortcomings. Symbolic differentiation (finding the derivative of a given formula with respect to a specified variable, producing a new formula as its output) will be accurate, but can be quite expensive computationally. The finite difference method ($\frac{\partial f}{\partial x} = \frac{f(x+\epsilon)-f(x)}{\epsilon}$ for some small $\epsilon$) does not have this issue, but will be less precise as different values of epsilon will give different results. This brings us to automatic differentiation, a less costly and more precise approach.
__Extension:__ For this project, we have implemented forward-mode automatic differentiation as well as reverse-mode automatic differentiation. It is important to have both methods accessible to have accuracy as well as optimal efficiency.
Automatic differentiation can be used to compute derivatives to machine precision of functions $f:\mathbb{R}^{m} \to \mathbb{R}^{n}$
The forward mode is more efficient when $n\gg m$.
- This corresponds to the case where the number of functions to evaluate is much greater than the number of inputs.
- Actually computes the Jacobian-vector product $Jp$.
The reverse mode is more efficient when $n\ll m$.
- This corresponds to the case where the number of inputs is much greater than the number of functions.
- Actually computes the Jacobian-transpose-vector product $J^{T}p$.
## Background
Automatic differentiation breaks down the main function into elementary functions, evaluated upon one another. It then uses the chain rule to update the derivative at each step and ends in the derivative of the entire function.
To better understand this process, let's look at an example. Consider the example function
\begin{equation}
f(x) = x + 4\sin\left(\frac{x}{4}\right)
\end{equation}
We would like to compute the derivative of this function at a particular value of x. Let's say that in this case, we are interested in evaluating the derivative at $x=\pi$. In other words, we want to find $f'(\pi)$ where $f'(x) = \frac{\partial f}{\partial x}$
We know how to solve this _symbolically_ using methods that we learned in calculus, but remember, we want to compute this answer as accurately and efficiently as possible, which is why we want to solve it using automatic differentiation.
### The Chain Rule
To solve this using automatic differentiation, we need to find the decomposition of the differentials provied by the chain rule. Remember, the chain rule is a formula for computing the derivative of the composition of two or more functions. So if we have a function $h\left(u\left(t\right)\right)$ and we want the derivative of $h$ with respect to $t$, then we know by the chain rule that the derivative is $\dfrac{\partial h}{\partial t} = \dfrac{\partial h}{\partial u}\dfrac{\partial u}{\partial t}.$ The chain rule can also be expanded to deal with multiple arguments and vector inputs (in which case we would be calculating the _gradient)_.
Our function $f(x)$ is composed of elemental functions for which we know the derivatives. We will separate out each of these elemental functions, evaluating the derivative at each step using the chain rule.
### Forward-mode differentiation
Using forward-mode differentiation, the evaluation trace for this problem looks like:
| Trace | Elementary Operation | Derivative | $f'(a)$ |
| :------: | :----------------------: | :------------------------------: | :------------------------------: |
|  $x_{3}$ | $\pi$                    | $1$                              | $1$                              |
| $x_{0}$ | $\frac{x_{3}}{4}$ | $\frac{\dot{x}_{3}}{4}$ | $\frac{1}{4}$ |
| $x_{1}$ | $\sin\left(x_{0}\right)$ | $\cos\left(x_{0}\right)\dot{x}_{0}$ | $\frac{\sqrt{2}}{8}$|
|  $x_{2}$ | $4x_{1}$                 | $4\dot{x}_{1}$                   | $\frac{\sqrt{2}}{2}$             |
| $x_{4}$ | $x_{2} + x_{3}$| $\dot{x}_{2} + \dot{x}_{3}$ | $1 + \frac{\sqrt{2}}{2}$ |
This evaluation trace provides some intuition for how forward-mode automatic differentiation is used to calculate the derivative of a function evaluated at a certain value ($f'(\pi) = 1 + \frac{\sqrt{2}}{2}$).
*Figure 1: Forward-mode computational graph for example above*
### Reverse-mode automatic differentiation
Using reverse-mode differentiation, sometimes called backpropagation, the derivative column of the trace is instead accumulated backwards, starting from the output.
| Trace | Elementary Operation | Derivative | $f'(a)$ |
| :------: | :----------------------: | :------------------------------: | :------------------------------: |
| $x_{3}$ | $\pi$ | $\frac{\partial{x_0}}{\partial{x_3}}\bar{x_0} + \bar{x_4} = \cos(\frac{\pi}{4}) + 1$ | $1 + \frac{\sqrt{2}}{2}$ |
| $x_{0}$ | $\frac{x_{3}}{4}$ | $\frac{\partial{x_1}}{\partial{x_0}}\bar{x_1} = 4\cos(\frac{\pi}{4})$ | $2\sqrt{2}$ |
| $x_{1}$ | $\sin\left(x_{0}\right)$ | $\frac{\partial{x_2}}{\partial{x_1}}\bar{x_2}$| $4$|
| $x_{2}$ | $4x_{1}$ | $\frac{\partial{x_4}}{\partial{x_2}}\bar{x_4}$ | $1$ |
|  $x_{4}$ | $x_{2} + x_{3}$          | $1$                              | $1$                              |
*Figure 2: Reverse-mode computational graph for example above*
You may notice that when we computed the derivative above, we "seeded" the derivative with a value of 1. This seed vector doesn't have to be 1, but the utility of using a unit vector becomes apparent when we consider a problem involving directional derivatives.
Consider, for illustration, a generic two-input function $x_{3} = x_{3}(x_{1}, x_{2})$ (for example $x_{3} = x_{1}x_{2}$). The definition of the directional derivative (where $p$ is the seed vector)
$$D_{p}x_{3} = \sum_{j=1}^{2}{\dfrac{\partial x_{3}}{\partial x_{j}}p_{j}}$$
can be expanded to
\begin{align}
D_{p}x_{3} &= \dfrac{\partial x_{3}}{\partial x_{1}}p_{1} + \dfrac{\partial x_{3}}{\partial x_{2}}p_{2} \\
&= x_{2}p_{1} + x_{1}p_{2}
\end{align}
If we choose $p$ to be a the unit vector, we can see how this is beneficial:
$p = \left(1,0\right)$ gives $\dfrac{\partial f}{\partial x}$
$p = \left(0,1\right)$ gives $\dfrac{\partial f}{\partial y}$
So to summarize, the forward mode of automatic differentiation is really computing the _product of the gradient of our function with the seed vector:_
$$D_{p}x_{3} = \nabla x_{3}\cdot p.$$
If our function is a vector, then the forward mode actually computes $Jp$ where $J = \dfrac{\partial f_{i}}{\partial x_{j}}, \quad i = 1,\ldots, n, \quad j = 1,\ldots,m$ is the Jacobian matrix. Often we will really only want the "action" of the Jacobian on a vector, so we will just want to compute the matrix-vector product $Jp$ for some vector $p$. Using the same logic, the reverse mode actually computes the Jacobian-transpose-vector product $J^{T}p$.
## How to Use autodiff32
### Installation
**1) Create a virtual environment (optional)**
From the terminal, create a virtual environment:
_(The command below will create the virtual environment in your present working directory, so consider moving to a project folder or a known location before creating the environment)_
```virtualenv env```
activate the virtual environment:
```source env/bin/activate```
if you plan to launch a jupyter notebook using this virtual environment, run the following to install and set up jupyter in your virtual environment:
```python -m pip install jupyter```
```python -m ipykernel install --user --name=env```
**2) Install the autodiff32 package**
In the terminal, type:
```pip install autodiff32```
Package dependencies will be taken care of automatically!
_(Alternatively, it is also possible to install the autodiff32 package by downloading this GitHub repository. If you choose that method, use the requirements.txt file to ensure you have installed all necessary dependencies.)_
## Tutorial
It is easy to use the autodiff32 package in a Jupyter notebook, as we will demonstrate here:
_(Alternatively, you can start a Python interpreter by typing ```Python``` into the terminal, or work from your favorite Python IDE.)_
_(Remember, if you are using a virtual environment, follow steps 1 through 3 above and then type ```jupyter notebook``` into your terminal to launch a notebook. Inside the notebook, switch the kernel to that of your virtual environment.)_
```python
pip install autodiff32
```
Collecting autodiff32
Downloading https://files.pythonhosted.org/packages/b6/8c/88fccbf0cc72968be09e75c7bd9f6811d6c675bba90d825e2fc5dfdd145e/autodiff32-0.1.2-py3-none-any.whl
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from autodiff32) (1.17.4)
Installing collected packages: autodiff32
Successfully installed autodiff32-0.1.2
```python
import autodiff32 as ad
import math # only necessary for this particular example
"""
Initialize an AutoDiff object
with the number you would like to pass into your function
"""
X = ad.AutoDiff(math.pi)
"""
Define your function of interest
"""
func = X + 4*ad.sin(X/4)
"""
Look at the derivative of your function
evaluated at the number you gave above
"""
print("derivative:",func.der)
"""
Look at the value of your function
evaluated at the number you gave:
"""
print("value:", func.val)
```
derivative: 1.7071067811865475
value: 5.970019778335983
Notice that this is the same equation used in our example above: $f(x) = x + 4\sin(\frac{x}{4})$. Just for fun, let's see if the derivative that we calculated in the evaluation trace is the same as the result using autodiff32:
```python
print("autodiff32 derivative:", func.der)
print("evolution trace derivative:", 1+math.sqrt(2)/2)
```
autodiff32 derivative: 1.7071067811865475
evaluation trace derivative: 1.7071067811865475
We can see that the derivative calculated using autodiff32 is the same as the derivative calculated by walking manually through the evaluation trace!
Now what if your function if interest has a **vector input** (X has more than one value)? In that case, use the following workflow:
```python
import numpy as np
X = ad.AutoDiff(np.array([1,2,3]))
func = X**2
print("value:", func.val)
print("derivative:",func.der)
```
value: [1 4 9]
derivative: [2. 4. 6.]
Notice that there are three values and three derivatives. This is because your function and its derivative have been evaluated at the three values you provided.
Now what if your function if interest is a **multivariate function** (has more than just an X variable)? In that case, use the following workflow:
```python
X,Y = ad.Multi_AutoDiff_Creator(X = 2, Y = 4).Vars
func = X**2 + 3*Y
print("value:", func.val)
print("derivative:",func.der)
```
value: 16
derivative: [4. 3.]
Notice that the derivative has two values. This is the derivative of your function with respect to X and Y, evaluated at the values of X and Y you provided.
Now what if you actually have **multiple functions of interest**? In that case, use the following workflow:
```python
X,Y = ad.Multi_AutoDiff_Creator(X = 2, Y = 4).Vars
func = np.array([X+Y, 2*X*Y]) # two functions!
# get value and derivatives of function separately
print("value of first function:", func[0].val)
print("derivative of first function:",func[0].der)
print("\nvalue of second function:", func[1].val)
print("derivative of second function:",func[1].der)
# return the Jacobian matrix
J = ad.Jacobian(func)
print("\nJacobian matrix:\n",J.value())
```
value of first function: 6
derivative of first function: [1. 1.]
value of second function: 16
derivative of second function: [8. 4.]
Jacobian matrix:
[[1. 1.]
[8. 4.]]
Notice that you have an additional option here! You can return the values and derivatives of the functions as you would normally (except that you indicate the index of the function when asking for the value or derivative), _or_ you can return the **Jacobian matrix** which contains the derivatives with respect to X and Y for each of the functions.
Now what if your function if interest is a **multivariate function AND it has vector inputs**? In that case, use the following workflow:
_Please note that the workflow here is significantly different from the rest of the package!_
_This is due to the complexities of handling the derivatives of vector inputs for multivariate functions. We wanted to give you, the user, as much functionality as possible, even if that meant sacrificing a bit in user-friendliness._
```python
# For a single multivariate function evaluated at vector value inputs
# define your variables and their vector values
X = [1,2,3]
Y = [2,3,3]
Z = [3,5,3]
W = [3,5,3]
# put them together in a list, in the order they will be used in your function!
VarValues = [X, Y, Z, W]
# define your function
# Vars[0] represents X, Vars[1] represents Y, etc.
func = lambda Vars:3*Vars[0] + 4*Vars[1] + 4*Vars[2]**2 + 3*Vars[3]
# find the values and derivatives
Values, Derivatives = ad.MultiVarVector_AutoDiff_Evaluate(VarValues,func)
print("values:\n", Values)
print("\nderivatives:\n", Derivatives)
```
values:
[ 56 133 66]
derivatives:
[[ 3. 4. 24. 3.]
[ 3. 4. 40. 3.]
[ 3. 4. 24. 3.]]
Now what if you have **multiple multivariate functions of interest with vector inputs**? In that case, use the following workflow:
_Please note that the workflow here is significantly different from the rest of the package!_
_This is due to the complexities of handling the derivatives of vector inputs for multivariate functions. We wanted to give you, the user, as much functionality as possible, even if that meant sacrificing a bit in user-friendliness._
```python
# For a single multivariate function evaluated at vector value inputs
# define your variables and their vector values
X = [1,2,3]
Y = [2,3,3]
Z = [3,5,3]
W = [3,5,3]
# put them together in a list, in the order they will be used in your function!
VarValues = [X, Y, Z, W]
# define your functions
# Vars[0] represents X, Vars[1] represents Y, etc.
func = lambda Vars:np.array([3*Vars[0] + 4*Vars[1] + 4*Vars[2]**2 + 3*Vars[3], # first function
5*Vars[0] + 6*Vars[1] + 7*Vars[2]**2 + 1*Vars[3]]) # second function
# find the values and derivatives
Values, Derivatives = ad.MultiVarVector_AutoDiff_Evaluate(VarValues,func)
print("values:\n", Values)
print("\nderivatives:\n", Derivatives)
```
values:
[[ 56 83]
[133 208]
[ 66 99]]
derivatives:
[[[ 3. 4. 24. 3.]
[ 5. 6. 42. 1.]]
[[ 3. 4. 40. 3.]
[ 5. 6. 70. 1.]]
[[ 3. 4. 24. 3.]
[ 5. 6. 42. 1.]]]
**Extension Usage Demo**
```python
# For univariate /Multivariate Scalar Function
Graph = ad.ComputationalGraph()
X = ad.Node(value = 3, Graph = Graph)
Y = ad.Node(value = 4, Graph = Graph)
Z = ad.Node(value = 1, Graph = Graph)
G = 2*X + 3*Y*Z + 2*Z
Graph.ComputeValue()
Graph.ComputeGradient(-1)
print("Derivative for X is: ",X.deri)
print("Derivative for Y is: ",Y.deri)
print("Derivative for Z is: ",Z.deri)
print("Value of Function is: ",G.value)
```
Derivative for X is: 2.0
Derivative for Y is: 3.0
Derivative for Z is: 14.0
Value of Function is: 20
```python
#For univariate or Multivariate Vector Functions with single value
import numpy as np
Graph = ad.ComputationalGraph()
X = ad.Node(value = 3, Graph = Graph)
Y = ad.Node(value = 4, Graph = Graph)
Z = ad.Node(value = 1, Graph = Graph)
G = np.array([-2*ad.sinr(X), #please use sinr for sin operation on the node
2*Y + Z*Y,
3*X+3*Y*X+2*Z])
Func = ad.ReverseVecFunc(G,X =X,Y= Y,Z= Z)
Value ,Derivative = Func.value(Graph)
print("The value for the vector function is: ")
print(Value)
print("The derivative for the vector function is: ")
print(Derivative)
```
The value for the vector function is:
[-0.28224002 12. 47. ]
The derivative for the vector function is:
[[ 1.97998499 0. 0. ]
[ 0. 3. 4. ]
[15. 9. 2. ]]
```python
#SERIES OF VALUES For vector functions
D = 3 # number of variables
x = [1,2,3]
y = [6,7,4]
z = [3,8,1]
Values = np.array([x,y,z])
G = np.array([-2*X, #please use sinr for sin operation on the node
2*Y + Z*Y,
3*X+3*Y*X+2*Z])
Func = ad.ReverseVecFunc(G,X =X,Y= Y,Z= Z)
Vals,Deris=Func.Seriesvalue(Values,D,Graph)
print("The value for the vector function is: ")
print(Vals)
print("The derivative for the vector function is: ")
print(np.array(Deris))
```
The value for the vector function is:
[[-2 30 27]
[-4 70 64]
[-6 12 47]]
The derivative for the vector function is:
[[[-2. 0. 0.]
[ 0. 5. 6.]
[21. 3. 2.]]
[[-2. 0. 0.]
[ 0. 10. 7.]
[24. 6. 2.]]
[[-2. 0. 0.]
[ 0. 3. 4.]
[15. 9. 2.]]]
```python
import numpy as np
Graph = ad.ComputationalGraph()
X = ad.Node(value = 3, Graph = Graph)
F = 3*X**2
Xvals= np.array([[3,2,4]]) #Please input Xvals as a 2 dimensional array
Vals, deri = Graph.SeriesValues(Xvals,1,Graph)
print(Vals)
print(deri)
```
[27, 12, 48]
[[18.0], [12.0], [24.0]]
## Software Organization
### Directory Structure
Our structure is as follows:
/cs207-FinalProject
README.md
LICENSE
.gitignore
.travis.yml
setup.py
requirements.txt
docs/
milestone1.ipynb
milestone2.ipynb
documentation.ipynb
autodif32/
__init__.py
AutoDiffObj.py
Elementary.py
ElementaryReverse.py
Graph.py
JacobianVectorFunc.py
MultivariateVarCreator.py
MultivariateVectorAutoDiffEvaluate.py
MultivariateVectorVarCreator.py
ReverseJacobVectorFunc.py
ReverseMode.py
test/
__init__.py
autodiffobj_test.py
elementary_test.py
Reverse_test.py
JacobianVectorFunc_test.py
### Modules
The ```AutoDiffObj``` module creates an AutoDiff object based on the scalar value you would like to evaluate a function and its derivative at. It overloads the basic operations including multiply, add, negative, subtract, powers, division, and equality. It also includes a Jacobian method, which returns the Jacobian of the function. If the function is univariate, then the Jacobian is just the derivative. If the function is multivariate, then the Jacobian will be an array.
The ```Elementary``` module implements some of the elementary functions for use in the forward mode of autodifferentiation, including exp, log, sqrt, sin, cos, tan, asin, acos, atan, sinh, cosh and tanh.
The ```ElementaryReverse``` module implements some of the elementary functions for use in the reverse mode of autodifferentiation. These functions are made distinct from those in the Elementary module by an r at the end of the function name (expr, logr, sqrtr, sinr, cosr, tanr, asinr, acosr, atanr, sinhr, coshr and tanhr).
The ```Graph``` module implements the ComputationalGraph class which allows each node to be recorded in sequence in a graph for later use in value and derivative computation. This module also implements the ComputeValue and ComputeGradient functions, which compute the value and gradient using reverse mode and leveraging the computational graph.
The ```JacobianVectorFunc``` module implements the Jacobian method for vector inputs.
The ```MultivariateVarCreator``` module takes in the values of each variable in a user-defined multivariate function, and returns len(kwargs) AutoDiff class variables with derivatives (seeds) as np.arrays. The MultivariateVarCreator class acts as a helper for the user to create multiple AutoDiff objects conveniently (instead of manually creating many AutoDiff objects) for use in the evaluation of the multivariate function.
The ```MultivariateVectorAutoDiffEvaluate``` module implements the MultiVarVector_AutoDiff_Evaluate function, which autodifferentiates and evaluates a multivariate function using forward mode at user defined vector input values.
The ```MultivariateVectorVarCreator``` module implements the Multi_Vector_AutoDiff_Creator class, which instantiates multiple AutoDiff objects (for use in multivariate functions).
The ```ReverseJacobVectorFunc``` implements the ReverseVecFunc class, which takes the vector function and variables as inputs, and computes the value and the Jacobian matrix using the reverse mode of automatic differentiation.
The ```ReverseMode``` module implements the node class, which is a single AutoDiff Object used for reverse mode.
### Testing Suite
Our testing files live in the `test/` directory. The tests are run using pytest.
### Installation procedure
1) For general users, install the autodiff32 package using pip (see 'How to Use autodiff32' above for complete instructions)
```pip install autodiff32```
2) For developers, feel free to clone this repository, and use the requirements.txt file to ensure you have installed all necessary dependencies.
## Implementation details
The current implementation of AutoDiff32 supports scalar and vector inputs to both univariate and multivariate functions. Core classes include the AutoDiff class, the Multi_AutoDiff_Creator helper, the Jacobian class, and the reverse-mode Node and ComputationalGraph classes. AutoDiff32 is externally dependent on numpy, and this dependency has been automatically taken care of in the released version of the package (as well as in the requirements.txt file if the user chooses to manually download the package). AutoDiff32 has implemented a number of elementary functions, as listed above in the description of the basic modules.
Vector inputs and multivariate functions are handled through the Jacobian matrix functionality described below.
In addition to the forward mode, the reverse mode of automatic differentiation is implemented as an advanced feature (see the Extension section).
### Forward mode Details:
The core data structure (and also external dependencies) for our implementation will be numpy arrays, and the core classes we have implemented are described below:
1) An ***AutoDiff class*** which stores the current value and derivative for the current node. The class contains overloaded operators such as plus, minus, multiply, etc.
```python
"""
Initialize an Automatic Differentiation object which stores its current value and derivative
Note that the derivative needs to not be a scalar value
For multivariable differentiation problems of a scalar function, the derivative will be a vector
"""
def __init__(self,value,der=1) :
#Store the value of the Autodiff object
#Store derivative of autodiff object (default value is 1)
#overloading operators to enable basic operations between classes and numbers (not exhaustive):
"""
These methods will differentiate cases in which other is a scalar, a vector, a class, or any child class
All of these operators will return AutoDiff classes
"""
def __mul__(self,other):
def __rmul__(self,other):
def __radd__(self,other):
def __add__(self,other):
```
Multivariate functions (both scalar and vector) can also be evaluated using the AutoDiff class as below, and the resulting Jacobian will be a vector array:
```python
X = AutoDiff(1,np.array([1,0]))
Y = AutoDiff(1,np.array([0,1]))
func = X + 2*Y
```
While this way of defining and evaluating multivariate functions is feasible, it is very inconvenient for the user to have to keep track of the dimensionality of the derivatives. For example, the above func definition will raise an error if Y's derivative is defined as np.array([0,0,1]). This potential dimensionality problem also causes difficulties in the error handling within the code.
The way we tackle this problem is to create a helper class called **Multi_AutoDiff_Creator** as described below:
2) A ***Multi_AutoDiff_Creator class*** which helps the user create variable objects from a multivariable function
```python
"""
This class helps users initialize different variables from a multivariable function without explicitly specifying them using separate AutoDiff classes
It will need to import AutoDiff Class
"""
def __init__(self,*args,**kwargs):
'''
INPUT : variables as kwargs such as (X=3,Y =4)
RETURN : len(kwargs) AutoDiff objects, each with its 'vector' derivative (seed) as an np.array
'''
#Demo and comparison
'''initiate a multivariable function using Multi_AutoDiff_Creator class'''
X,Y= Multi_AutoDiff_Creator(X = 1., Y=3.).Vars #X,Y are autodiff object with derivative [1,0] and [0,1]
func = X + 2*Y*X
```
Notice that this class only serves as a **helper class** to ensure that every variable created has the correct derivative dimensionality. The class itself has no influence on the forward-mode differentiation process.
To handle the derivatives of elementary functions, we also introduce our elementary function methods.
3) An ***Elementary function*** file which calculates derivatives for elementary functions as described previously
```python
#Elementary Function Derivative (not exhaustive):
def exp(ADobj):
def tan(ADobj):
def sin(ADobj):
def cos(ADobj):
'''
RETURN an AutoDiff object if ADobj is an AutoDiff object, with the value and derivative for the particular elementary function
Return np.exp/tan/sin(ADobj) if ADobj is a number
'''
```
4) A ***Jacobian*** class which helps the user compute the Jacobian matrix for vector functions evaluated at a single point
```python
class Jacobian:
def __init__(self,vector_func):
#The Jacobian class is initiated by a vector function
def value(self):
# Return the Jacobian value of the vector functions evaluated at a single point by looping through the vector function
'''
x, y = ad.Multi_AutoDiff_Creator(x=2, y=3).Vars
#Define a vector function
func = np.array([x+y, 2*x*y])
Jacob = ad.Jacobian(func)
print(Jacob.value()) # this class method output the full jacobian matrix as an np array
'''
```
## Extension
**Description**
The implementation of reverse mode also lies in the same autodiff32 package with no additional extension requirements.
We implemented the reverse mode of automatic differentiation, in which users can evaluate univariate and multivariate scalar/vector functions at single values or series of values. The algorithm is based on the computational flow chart (graph) for reverse mode discussed in class, in which each node is recorded in sequence in a Graph for later use in value and derivative computation. In particular, each node stores its value, the graph it connects to, and the index it has in the graph. In order to compute the derivative for our root node backwards, each node records the parent nodes which produced it through some operation, and also the operation type ("plus", "sub", etc.) itself.
We find this more intuitive and pedagogical than using recursion, and it can potentially be more computationally efficient since each node only has to remember its direct children, not the indirect ones.
The high level mathematical ideas will be similar to that of the forward mode, the only additional thing that will be helpful to keep in mind is the chain rule, which is:
\begin{aligned}
\frac{dz}{dx} = \frac{dz}{dy}\times\frac{dy}{dx}
\end{aligned}
This will help us understand how we can calculate the derivatives by starting from a seed value of 1 for our function, which is:
\begin{aligned}
\frac{dz}{df} = 1
\end{aligned}
For example, if we would like to calculate the derivative for $f = 2x$ backwards, by letting $z=f$ we will have:
\begin{aligned}
\frac{dz}{dx}=\frac{dz}{df}\times\frac{df}{dx} = 1 \times 2 = 2
\end{aligned}
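The package's reverse-mode API demonstrated in the tutorial above reproduces this result directly; the short check below seeds the output with $dz/df = 1$ and reads off $dz/dx = 2$.
```python
import autodiff32 as ad

# verify df/dx = 2 for f = 2x at x = 3 with the reverse-mode Node/Graph API
Graph = ad.ComputationalGraph()
X = ad.Node(value=3, Graph=Graph)
F = 2 * X
Graph.ComputeValue()
Graph.ComputeGradient(-1)   # seed dz/df = 1 at the last node in the graph
print(F.value)              # 6
print(X.deri)               # 2.0
```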
### Implementation Details for Reverse Mode
The reverse mode implementation will have the following classes:
1) A **Node** class which serves as a single automatic differentiation object. All the nodes for a given function connect to the same graph.
```python
class Node:
#INITIATOR
'''
    The Node class is initialized with the parameters mentioned below. The node
will connect to the graph as soon as it is created.
'''
    def __init__(self,values,Graph,leftparentnode,rightparentnode,derivative = 0):
    def CheckConstant(x):
        #Check if x is a Node or not. If it is, return it;
        #if not, return a new node with Node.value = x and connect it to the Graph.
#OVERLOADING OPERATORS
    '''
    Perform arithmetic operations between nodes
    Return
    ======
    A new node which stores the value, the graph it connects to, self as its left node and other as its right node.
    '''
def __mul__(self,other):
    def __sub__(self,other):
def __truediv__(self,other):
```
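The sketch below shows how one such overloaded operator could build and register a node on a shared graph; the field names, and the use of a plain list as the graph, are simplifying assumptions rather than the package's exact implementation.
```python
class Node:
    """Toy reverse-mode node that registers itself on a shared graph."""
    def __init__(self, value, graph, left=None, right=None, op=None):
        self.value = value
        self.graph = graph
        self.left, self.right = left, right   # parent nodes that produced this node
        self.op = op                          # operation type ("mul", "add", ...)
        self.derivative = 0.0
        graph.append(self)                    # record the node in creation order

    def _as_node(self, x):
        # Wrap plain numbers so constants also live on the graph.
        return x if isinstance(x, Node) else Node(x, self.graph)

    def __mul__(self, other):
        other = self._as_node(other)
        return Node(self.value * other.value, self.graph, left=self, right=other, op="mul")

graph = []                  # a plain list stands in for the ComputationalGraph here
x = Node(3.0, graph)
f = x * 2.0                 # creates a constant node and a "mul" node
print(f.value, len(graph))  # 6.0 3
```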
2) A **ComputationalGraph** class which stores the nodes of a given function in sequence and computes the value of the function and the gradient with respect to the root variables.
```python
class ComputationalGraph:
def __init__(self):
# the graph is initialized by an empty list
    def append(self,node):
        #Append the node to the graph in sequence and record its index for later computation
def ValidOp(self):
        # structurally store every valid operator used in this reverse mode computation
'''
RETURN
======
Valid operator code if valid otherwise raise error
'''
def ComputeValue(self):
        #Compute the value of the function by a forward pass over the graph
def ComputeGradient(self,lastindex = -1):
'''
        Backward propagate through the graph to calculate the derivative of each node, storing it in the node, by looping through the list in reverse order and updating the parent nodes using the child nodes.
        INPUT : lastindex is the index of the seed node, for which dz/df = 1
RETURN
======
NONE
        All the values after computation are stored in the nodes of the list
'''
    def SeriesValues(self,args):
        #Compute the values and derivatives of a function
        #for a series of input values (illustrated in detail in the How to Use section)
'''
RETURN
        ======
        A two-dimensional array for the derivatives and a one-dimensional array for the values
        Values of the function evaluated at different points
Derivatives of root variables evaluated at different points
'''
```
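To illustrate the backward pass concretely, here is a small self-contained sketch of the core loop inside `ComputeGradient`; the node fields and operator codes are simplifying assumptions, and only two operations are handled.
```python
class Node:
    """Minimal reverse-mode node (illustrative fields only)."""
    def __init__(self, value, graph, left=None, right=None, op=None):
        self.value, self.graph = value, graph
        self.left, self.right, self.op = left, right, op
        self.derivative = 0.0
        graph.append(self)

def compute_gradient(graph, lastindex=-1):
    """Seed the output node with dz/df = 1, then push derivatives back to the parents."""
    graph[lastindex].derivative = 1.0
    for node in reversed(graph):
        if node.op == "mul":
            node.left.derivative  += node.derivative * node.right.value
            node.right.derivative += node.derivative * node.left.value
        elif node.op == "add":
            node.left.derivative  += node.derivative
            node.right.derivative += node.derivative

graph = []
x = Node(3.0, graph)
y = Node(4.0, graph)
f = Node(x.value * y.value, graph, left=x, right=y, op="mul")   # f = x * y
compute_gradient(graph)
print(x.derivative, y.derivative)   # 4.0 3.0
```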
3) A class **ReverseVecFunc** which calculates the values and derivatives of vector functions evaluated at different points
```python
class ReverseVecFunc:
    def __init__(self,variables,functions):
        #Store the variables and the functions when the class is initialized.
    def value(self,Graph):
        #Compute the Jacobian and the value of the vector function for a single value of the variables
def SeriesValues(self,values,dimension,Graph):
'''
INPUT
=====
        values : the input values of the variables
dimension : the number of variables
Graph : the graph to be connected
RETURN
======
The value and the jacobian (both in 2D nparrays) of the function at a series of values
'''
        #Pseudocode:
        #  initialize a valuelist = []
        #  for each function in the vector function:
        #      calculate its gradients and values using Wrapper(args)
        #      append them to the valuelist
def Wrapper(args):
        # A helper function to calculate the values and derivatives for a series of values
'''
INPUT
=====
        values : the input values of the variables
        dimension : the number of variables
        Graph : the graph to be connected
        Returns the derivatives and values of a single function evaluated at different values
'''
```
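Reusing the toy `Node` and `compute_gradient` helpers from the previous sketch, the looping strategy behind `SeriesValues`/`Wrapper` could look roughly as follows; this is only a hedged illustration of the idea, not the package's exact code.
```python
import numpy as np

def series_values(func, points):
    """Evaluate a scalar function and its derivative at a series of points.

    `func` maps a Node to a Node; a fresh graph is built for every point so
    the nodes recorded for one evaluation never leak into the next.
    """
    values, derivatives = [], []
    for p in points:
        graph = []
        x = Node(p, graph)        # Node / compute_gradient as sketched above
        f = func(x)
        compute_gradient(graph)
        values.append(f.value)
        derivatives.append(x.derivative)
    return np.array(values), np.array(derivatives)

# f(x) = x * x, written directly in terms of the toy nodes:
vals, ders = series_values(
    lambda x: Node(x.value * x.value, x.graph, left=x, right=x, op="mul"),
    [1.0, 2.0, 3.0])
print(vals)   # [1. 4. 9.]
print(ders)   # [2. 4. 6.]
```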
4) An additional **ElementaryReverse.py** module which defines the elementary function operations for Node objects
```python
'''
NOTE
=====
In order to distinguish reverse mode elementary functions from forward mode ones, all the elementary functions in reverse mode have an "r" appended as the last letter.
If the user would like to apply these functions to a constant, please use the built-in NumPy functions instead, such as np.exp(2), np.sin(3), etc.
RETURN
=====
A new Node object that stores the value after the related computation and records the input node as its left parent. The node returned won't have any right parent node.
'''
def expr(x):
def logr(x):
def sqrtr(x):
...
```
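As an illustrative sketch (assuming the toy `Node` fields used above), a reverse-mode elementary function such as `expr` simply creates a node whose only parent is its input, together with a matching rule in the backward pass:
```python
import numpy as np

def expr(x):
    # New node whose only (left) parent is x; no right parent is needed.
    return Node(np.exp(x.value), x.graph, left=x, right=None, op="exp")

# The backward pass would then need a matching branch, e.g.:
#   if node.op == "exp":
#       node.left.derivative += node.derivative * np.exp(node.left.value)
```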
## Future Features
As we have seen, automatic differentiation is an efficient and accurate way to compute derivatives, so it makes sense to apply it wherever we can. One of the most popular methods that uses derivatives is gradient descent. Gradient descent is an optimization method in which we try to minimize some function: we take some starting point and repeatedly move in the direction of steepest descent, i.e., along the negative gradient. To locate the minimum accurately, we want the gradient to be precise so that each step is precise, which makes automatic differentiation a natural choice.
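For instance, a toy gradient-descent loop driven by forward-mode derivatives could look like the sketch below; the small `AutoDiff` class is an illustrative stand-in for the package's forward-mode objects, and the function and learning rate are arbitrary.
```python
class AutoDiff:
    """Toy dual number supporting just what this example needs."""
    def __init__(self, val, der=1.0):
        self.val, self.der = val, der
    def __sub__(self, other):
        o_val = other.val if isinstance(other, AutoDiff) else other
        o_der = other.der if isinstance(other, AutoDiff) else 0.0
        return AutoDiff(self.val - o_val, self.der - o_der)
    def __mul__(self, other):
        return AutoDiff(self.val * other.val, self.der * other.val + self.val * other.der)

def f(x):
    return (x - 3.0) * (x - 3.0)        # minimum at x = 3

x, lr = 0.0, 0.1
for _ in range(100):
    out = f(AutoDiff(x))                # one forward-mode pass gives f(x) and f'(x)
    x -= lr * out.der                   # step against the gradient
print(round(x, 4))                      # ~3.0
```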
Building on that idea, another use could be in neural networks. Training a neural network is centered around optimizing its weights, and each time the weights are updated we need to calculate many partial derivatives, so once again it makes sense to use automatic differentiation. Similarly, deep learning is essentially built from large, multi-layer neural networks, so the same benefit from automatic differentiation carries over.
Another interesting application can be found in statistics, namely in a Markov chain Monte Carlo sampling method called the Hamiltonian Monte Carlo algorithm. Without diving into deep statistical explanations, this is an algorithm for drawing a random sample from a probability distribution that is difficult to sample from directly. Most Markov chain Monte Carlo algorithms converge quite slowly, and as a result explore the sample space slowly as well. The Hamiltonian Monte Carlo algorithm does a better job with convergence, but at the cost of having to evaluate complicated gradients of probability models along the way. However, with automatic differentiation, the user no longer has to derive the gradients manually, and this cost can be reduced significantly.
Reference: http://jmlr.org/papers/volume18/17-468/17-468.pdf